Image-based lighting is a practical way to enhance the visual quality of computer graphics. I used to be confused by it until I read the book "High Dynamic Range Imaging", which provides a very clear explanation of IBL. I had actually implemented the algorithm in my offline renderer before; I just didn't know it was IBL. The book PBR (Physically Based Rendering) also covers some of the material without explicitly mentioning the term. The following images were generated with IBL in my renderer, except for the last one, which uses a single directional light.

As we can see from the images above, the IBL-generated results look far more convincing than the one lit by a single directional light. The real beauty of it is that, with physically based shading, everything looks great in different lighting environments without the need to change material parameters at all.
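For the curious, here is a minimal sketch of what an IBL estimate can look like inside a renderer: importance sample a direction from the environment map, test visibility, and apply the usual Monte Carlo estimator. All names here (Envmap::Sample, Visible, the Bsdf interface, and so on) are hypothetical placeholders, not the actual code of my renderer.

// Direct lighting from an environment map (image-based lighting), one sample.
Spectrum EstimateIBL(const Intersection& it, const Bsdf& bsdf,
                     const Envmap& envmap, Sampler& sampler) {
    Vec3 wi;
    float pdf = 0.0f;
    // Importance sample a direction from the environment map (e.g. by luminance).
    Spectrum Li = envmap.Sample(sampler.Next2D(), &wi, &pdf);
    if (pdf == 0.0f || Li.IsBlack())
        return Spectrum(0.0f);
    // Shadow ray toward the environment, then the standard estimator f * Li * cos / pdf.
    if (!Visible(it.p, wi))
        return Spectrum(0.0f);
    return bsdf.f(it.wo, wi) * Li * AbsDot(wi, it.n) / pdf;
}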


This was an internal presentation that I delivered at NVIDIA Shanghai a couple of months ago. Since it involves nothing confidential, I decided to share it as a public resource on my website. Anyone interested in offline rendering can take a look at the slides, which introduce Whitted ray tracing, path tracing, light tracing and instant radiosity.

This is not a basic introduction to those classical methods; it requires a bit of background knowledge in computer graphics and mathematics. It does not cover how to write a ray tracer program; most of the content in the slides is about the mathematical derivations behind the rendering methods.

I hope to find time to add more material about bidirectional path tracing to it in the foreseeable future.


I have been reading about the instant radiosity algorithm in Physically Based Rendering, 3rd edition, where it is referred to as instant global illumination; they are actually the same thing. I thought it would be a good algorithm until I implemented it in my renderer; I'm afraid it is not a very efficient one. Although it is also unbiased, like path tracing and bidirectional path tracing, its convergence speed is terribly low compared with the others. It can barely handle objects with pure specular materials; it definitely needs special handling for delta BSDFs. Since it is already implemented, I'll put down some notes on it.
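To make the notes easier to follow, here is a rough sketch of the idea: trace a number of particles from the lights, store every surface hit as a virtual point light (VPL), and then shade each camera hit by looping over the VPLs. The types and method names below are hypothetical placeholders; a real implementation also needs care around the geometry term and, as mentioned above, special handling for delta BSDFs.

#include <vector>

struct VPL { Point3 p; Vec3 n; Spectrum flux; };

std::vector<VPL> GenerateVPLs(const Scene& scene, int numPaths, Sampler& sampler) {
    const int kMaxBounces = 8;
    std::vector<VPL> vpls;
    for (int i = 0; i < numPaths; ++i) {
        Ray ray;
        Spectrum beta;
        scene.SampleLightRay(sampler, &ray, &beta);   // emit a particle from a light
        for (int bounce = 0; bounce < kMaxBounces; ++bounce) {
            Intersection it;
            if (!scene.Intersect(ray, &it))
                break;
            // Each hit becomes a virtual point light carrying the particle throughput.
            vpls.push_back({ it.p, it.n, beta / (float)numPaths });
            // Continue the random walk; a delta BSDF cannot be stored as a regular VPL.
            Vec3 wi;
            float pdf = 0.0f;
            Spectrum f = it.bsdf->Sample(it.wo, sampler.Next2D(), &wi, &pdf);
            if (pdf == 0.0f || f.IsBlack())
                break;
            beta *= f * AbsDot(wi, it.n) / pdf;
            ray = Ray(it.p, wi);
        }
    }
    return vpls;
}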


I always wondered why we don't take the PDF of the primary ray into account in a path tracer. Sadly, there aren't many resources explaining it. I guess Physically Based Rendering, 3rd edition will provide some explanation, but it hasn't been released yet. After some searching on the internet, I finally found an explanation: the PDF actually cancels with terms in the importance function and the light transport equation. It cancels in such an elegant way that we don't need to spend any effort on it at all, which is why many open-source ray tracers don't consider it in the first place. In this post, I'm going to explain the detailed math behind it.
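Before the full derivation, here is a quick sketch of the cancellation for an idealized pinhole camera, assuming the film plane sits at unit distance from the aperture and has visible area A. The pixel value is the measurement integral

    I = \int_\Omega W_e(\omega) \, L(\omega) \, \cos\theta \, d\omega .

Sampling the primary ray by uniformly picking a point on the film gives

    p(\omega) = \frac{1}{A \cos^3\theta}, \qquad W_e(\omega) = \frac{1}{A \cos^4\theta},

so each primary-ray sample contributes

    \frac{W_e(\omega) \, L(\omega) \, \cos\theta}{p(\omega)} = L(\omega),

and the primary ray PDF cancels exactly with the importance function and the cosine term, leaving just the incoming radiance.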


In my previous post, I covered some basics of naive bidirectional path tracing. However, it is hard to show its real value because there is always too much noise compared with the best solution for a given scene. That is because the contribution of each path is not properly weighted, and multiple importance sampling is the key to the issue. The following comparison shows the big differences between the methods; all of the images are generated with my open-source renderer.


Those images are generated in roughly the same amount of time. No doubt about it, MIS BDPT dominates all of the results: it is less noisy and shows good caustics. Although light tracing can also show good caustics, it is far from a practical algorithm due to the noise in the rest of the scene, not to mention that it almost fails to show any radiance on the glass monkey head. The traditional path tracing algorithm shows no caustics at all, not because it is biased; it is unbiased for sure, it just converges to the correct caustics at an unreasonably slow rate. Naive bidirectional path tracing has roughly the same amount of noise, but with a dimmer monkey head because light tracing doesn't do a good job on it. In other words, bidirectional path tracing barely shows its value without MIS.

I searched for material on MIS in BDPT, but there is quite limited material available on the internet. Although some open-source renderers, like LuxRender, provide a detailed implementation, most of them don't give any insight into the math behind it, without which one can quickly get confused by the code. SmallVCM extends the algorithm further, offering a better solution than MIS BDPT, and it has a detailed paper on the math; however, it is a little complex for someone who just wants to figure out how to do MIS in BDPT. Eric Veach's thesis gives the best explanation of MIS in BDPT; sadly, it doesn't go further into the implementation. In this post, I'm going to talk about MIS in bidirectional path tracing. Most of the theory comes from this paper.
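Before diving into the details, it helps to keep Veach's power heuristic in mind, since the MIS weights in BDPT are built out of pdf ratios of this form. A minimal sketch of the standard formula, not code copied from any particular renderer:

// Power heuristic with beta = 2: weight for a sample drawn from strategy f,
// given that strategy g could have produced the same sample.
float PowerHeuristic(int nf, float fPdf, int ng, float gPdf) {
    float f = nf * fPdf;
    float g = ng * gPdf;
    return (f * f) / (f * f + g * g);
}

In BDPT the two pdfs generalize to the full set of connection strategies that could have generated a path of the given length, which is exactly where the bookkeeping gets tricky.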


I posted a blog about path tracing some time ago, but I didn't regard it as a simple algorithm until I got my hands dirty with bidirectional path tracing. It really took me quite a while to get everything hooked up. Getting BDPT (short for bidirectional path tracing) to converge to the same result as path tracing is far from a trivial task; any tiny bug hidden in the renderer will drag you into a nightmare. These kinds of bugs are not the same as the ones that usually appear in real-time rendering, which can easily be exposed with tools like Nsight; it can cost much more time when only a small factor is missing from the target equations, which involve some pretty crazy math.
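For context, the heart of naive BDPT is connecting a vertex on the eye subpath to a vertex on the light subpath with a visibility test and a geometry term, and most of those nightmare bugs hide in exactly this kind of code. A rough sketch, with hypothetical Vertex/Spectrum types and a simplified direction convention:

// Contribution of connecting one eye-subpath vertex to one light-subpath vertex.
Spectrum Connect(const Scene& scene, const Vertex& eyeV, const Vertex& lightV) {
    Vec3 d = lightV.p - eyeV.p;
    float dist2 = LengthSquared(d);
    d = Normalize(d);
    if (!scene.Visible(eyeV.p, lightV.p))
        return Spectrum(0.0f);
    // Geometry term between the two connected vertices.
    float G = AbsDot(eyeV.n, d) * AbsDot(lightV.n, d) / dist2;
    // Throughput accumulated along each subpath times the two BSDFs at the connection.
    return eyeV.beta * eyeV.bsdf->f(eyeV.wo, d) * G *
           lightV.bsdf->f(-d, lightV.wo) * lightV.beta;
}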

Since the traditional Cornell box setup is pretty friendly to path tracing, I made a little change to it: the light source is a spot light facing straight up. That means most of the scene is lit by the small directly illuminated area instead of by the light itself, which makes it a very unfriendly scene for path tracing.

The following images are generated by BDPT (left), light tracing (top right) and path tracing (bottom right) in roughly the same amount of time.

Path tracing generates the noisiest image even though it is scaled down by a factor of four; the bidirectional path tracing result is better, but light tracing definitely gets the best result, with the least noise.


Physically based shading has been around for years. It not only eases the workflow for artists, but also delivers high-quality shading with negligible overhead; I see no reason to avoid it in today's games. Here is an image taken from the UE4 documentation.


When the term first came out, I had no idea what this new stuff was, and it took me quite a while to get a basic grasp of it because there is so much material out there and some of it is a little confusing. I can't say that I fully understand all of the theory, simply because I don't; however, I would still like to write down what I know and list some useful resources I found in this post. Hopefully it can be helpful to someone.


The microfacet model can be used not only for rough metal, but also to simulate rough glass materials. This post is about rendering glass with a microfacet model; basically all of the theory comes from this paper. Different from the pure refraction model mentioned in my previous post, the BxDF described here refracts a single incident ray into multiple directions instead of just one.
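To give a flavor of what the evaluation looks like, here is a sketch of the transmission term from that paper (equation 21 in Walter et al. 2007), written with hypothetical helpers D(), G() and Fresnel(); both directions point away from the surface, wi on the etaI side and wo on the etaT side.

#include <cmath>

// Rough-glass BTDF for one (wi, wo) pair; reflection is handled by a separate lobe.
float MicrofacetBtdf(const Vec3& wi, const Vec3& wo, const Vec3& n,
                     float etaI, float etaT, float roughness) {
    // Half vector for refraction: ht = -(etaI * wi + etaT * wo), normalized.
    Vec3 ht = Normalize(-(wi * etaI + wo * etaT));
    float denom = etaI * Dot(wi, ht) + etaT * Dot(wo, ht);
    float F = Fresnel(Dot(wi, ht), etaI, etaT);    // Fresnel reflectance
    float d = D(ht, n, roughness);                 // microfacet distribution, e.g. GGX
    float g = G(wi, wo, ht, n, roughness);         // shadowing-masking term
    return std::abs(Dot(wi, ht)) * std::abs(Dot(wo, ht)) /
           (std::abs(Dot(wi, n)) * std::abs(Dot(wo, n))) *
           etaT * etaT * (1.0f - F) * g * d / (denom * denom);
}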



I'm working on the microfacet BRDF model for my renderer these days, and I've noticed that it is absolutely necessary to provide a separate sampling method for the microfacet BRDF instead of using the default one, which is intended for diffuse-like surfaces and is highly inefficient for BRDFs with spiky shapes, such as mirror-like surfaces. The following image is generated by the default sampling method:


The left monkey has the pure reflection BRDF mentioned in my previous blog post; the right one uses the microfacet model with a roughness value of zero. I was expecting similar results for both monkeys, but things turned out differently: we can barely see any reflection on the right monkey. Actually, nothing is wrong; the fact is that the convergence rate of the default sampling for a microfacet model with zero roughness is extremely low. Given enough samples, it will eventually reach an appearance similar to the left one; however, "enough" can be arbitrarily high depending on how spiky your BRDF is.

The right way of sampling the microfacet BRDF is described in this paper. What I want to record in this post is how those conclusions are derived from the original microfacet model. Using the better sampling method, we eventually get similar results for the two monkeys.
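As a concrete example of what such a sampling routine ends up looking like, here is a sketch for an isotropic GGX distribution in the local shading frame (n = (0, 0, 1)); GGX_D() and the vector helpers are hypothetical placeholders.

#include <cmath>

// Sample a half vector h proportional to D(h)*cos(theta_h), reflect wo about it,
// and convert the pdf from the half-vector measure to the solid-angle measure of wi.
void SampleGGX(const Vec3& wo, float alpha, float u1, float u2,
               Vec3* wi, float* pdf) {
    // Invert the GGX cdf: tan^2(theta_h) = alpha^2 * u1 / (1 - u1).
    float tan2Theta = alpha * alpha * u1 / (1.0f - u1);
    float cosTheta  = 1.0f / std::sqrt(1.0f + tan2Theta);
    float sinTheta  = std::sqrt(std::fmax(0.0f, 1.0f - cosTheta * cosTheta));
    float phi       = 2.0f * 3.14159265f * u2;
    Vec3 h(sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta);
    *wi  = h * (2.0f * Dot(wo, h)) - wo;              // mirror wo about h
    // pdf(h) = D(h) * cos(theta_h); the change of variables to wi divides by 4|wo.h|.
    *pdf = GGX_D(h, alpha) * cosTheta / (4.0f * std::abs(Dot(wo, h)));
}

The important detail, and the part that is easy to get wrong, is the 1/(4|wo.h|) Jacobian when converting the half-vector pdf into the pdf of the reflected direction.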


The book "Physically Based Rendering" already explains it; however, I found it a little confusing the first time I read the relevant sections, 8.2.2 and 8.2.3. I also saw that an error in this chapter is mentioned by Jérémy Riviere on the errata page. Although he provides a correct fix for the equation, it is not clearly connected with the surrounding context.
