Basic Color Science for Graphics Engineers

For more than a decade, we have been doing HDR rendering in our game engines, which means the intermediate render targets are no longer limited by the precision of LDR color formats. The concept has become even more important since the emergence of physically based rendering, which is what almost everyone does these days. However, after putting so much effort into rendering everything in linear color space, it is quite wasteful that we can only display colors within the limited chromaticity and luminance defined by the sRGB space, due to the limitations of LDR monitors and TVs.
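To make the linear-versus-sRGB point concrete, here is a minimal sketch (illustrative code, not part of the original post) of the two transforms involved: the sRGB transfer function applied when a linear value is written to an LDR target, and, looking ahead to a wide-gamut pipeline, a conversion from linear Rec.709 primaries to Rec.2020 using the commonly published matrix, rounded to four decimals.

#include <cmath>

struct RGB { float r, g, b; };

// sRGB opto-electronic transfer function, applied per channel to linear data.
float LinearToSRGB(float c)
{
    return c <= 0.0031308f ? 12.92f * c
                           : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}

// Re-express linear Rec.709 RGB in Rec.2020 primaries (both assume a D65 white point).
RGB Rec709ToRec2020(const RGB& c)
{
    return { 0.6274f * c.r + 0.3293f * c.g + 0.0433f * c.b,
             0.0691f * c.r + 0.9195f * c.g + 0.0114f * c.b,
             0.0164f * c.r + 0.0880f * c.g + 0.8956f * c.b };
}

Working in Rec.2020 end to end would mean keeping render targets linear exactly as before and only applying the transform for the actual output device at the very end.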

With HDR monitors and TVs becoming more and more affordable, the game industry has never been as serious about color as it is now; HDR output is almost a standard feature for modern AAA games. Luckily, I got the chance to work on HDR monitor/TV support for our game Skull & Bones recently. With limited knowledge about HDR display support, I spent a lot of my time this week learning some basic color science, which was almost mandatory for understanding the whole thing, and I learned lots of interesting stuff by doing some really basic research in this field. Most importantly, I strongly believe that it won't be long before game studios move their whole development pipeline into the Rec.2020 color space for a wider gamut and better HDR once HDR monitors become even more affordable. It is better to blog about it before I forget everything, so that it will be easy for me to pick up again in the future.

Continue reading

Sampling Anisotropic Microfacet BRDF

I have been working on the material system in my renderer recently. My old implementation of microfacet models only covered isotropic BRDFs, and as a result I couldn't render materials like brushed metal. After spending three days of my spare time extending the system to support anisotropic microfacet BRDFs, I quickly noticed how much mathematics it takes to understand the importance sampling methods. The fact that \theta and \phi are correlated makes importance sampling quite a bit more complex than in the isotropic model. For a detailed derivation of isotropic microfacet importance sampling, please check out my previous blog post; I strongly suggest reading it if the math in this post confuses you, because it covers a lot of the basics that I won't repeat here.
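Before getting into the derivation, here is a hedged sketch (illustrative code, not my renderer's actual implementation) of drawing a half vector from the anisotropic GGX / Trowbridge-Reitz distribution, essentially the scheme PBRT uses when it is not sampling the visible normal distribution. Notice how the value drawn for \phi feeds into the distribution that \theta is drawn from, which is exactly the correlation mentioned above.

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// u0, u1 are uniform random numbers in [0, 1); ax, ay are the roughness values
// along the two tangent directions. The result is a half vector in local
// shading space, where z is the surface normal.
Vec3 SampleAnisoGGXHalfVector(float u0, float u1, float ax, float ay)
{
    const float Pi = 3.14159265358979f;

    // Draw phi: tan(phi) is stretched by the roughness ratio ay/ax,
    // with a branch to land in the correct half of the circle.
    float phi = std::atan(ay / ax * std::tan(2.0f * Pi * u1 + 0.5f * Pi));
    if (u1 > 0.5f) phi += Pi;
    const float sinPhi = std::sin(phi), cosPhi = std::cos(phi);

    // Draw theta: the effective alpha^2 depends on the phi drawn above,
    // which is where the theta/phi coupling shows up.
    const float alpha2 = 1.0f / (cosPhi * cosPhi / (ax * ax) +
                                 sinPhi * sinPhi / (ay * ay));
    const float tanTheta2 = alpha2 * u0 / (1.0f - u0);
    const float cosTheta  = 1.0f / std::sqrt(1.0f + tanTheta2);
    const float sinTheta  = std::sqrt(std::max(0.0f, 1.0f - cosTheta * cosTheta));

    return { sinTheta * cosPhi, sinTheta * sinPhi, cosTheta };
}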

Going through the whole post may be a bit dry, so in case you are already getting bored, here is what you can achieve with this BRDF model. Hopefully the image can convince you to finish reading.


Continue reading

How does PBRT verify BXDF

Unit testing BXDFs in offline rendering turned out to be way more important than I thought it would be. I still remember that it took me quite a long time debugging my bidirectional path tracing algorithm before I noticed a little BXDF bug, which easily led to divergence between BDPT and path tracing. Life would be much easier if we could catch any potential BXDF problem at the very beginning.

Unlike most simple real-time rendering applications, offline rendering requires a lot more than just evaluating a BXDF in a pixel shader. Usually, we need to support the following features for each BXDF (a minimal interface sketch follows the list):

  • Evaluating the BXDF for a given incoming and outgoing direction.
  • Given an incoming direction, sampling an outgoing direction according to some PDF.
  • Given an incoming direction and the same outgoing direction generated above, returning exactly the PDF value that the sampling routine used. Keeping this consistent with the sampling feature is very important; otherwise bias will most likely be introduced into the renderer.
  • Which PDF to use for a specific BXDF is usually hard-coded. Technically speaking, unlike the features above, one could pick almost any valid PDF, but the closer its shape matches the BXDF, the faster the estimate converges to the correct result.
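To make the list above concrete, here is a minimal interface sketch (illustrative names only, not PBRT's actual classes) of the three entry points and the consistency requirement between them.

struct Spectrum { float r, g, b; };
struct Vec3 { float x, y, z; };
struct Sample2D { float u, v; };

class Bxdf {
public:
    virtual ~Bxdf() = default;

    // Evaluate the BXDF value for an outgoing/incoming direction pair.
    virtual Spectrum Evaluate(const Vec3& wo, const Vec3& wi) const = 0;

    // Sample an incoming direction wi for the given wo, returning the BXDF
    // value and writing out the PDF that was used to draw wi.
    virtual Spectrum Sample(const Vec3& wo, const Sample2D& u,
                            Vec3* wi, float* pdf) const = 0;

    // Return the PDF of sampling wi given wo. This must agree exactly with
    // the value Sample() reports; any mismatch between the two is the kind
    // of inconsistency a BXDF test program is meant to catch.
    virtual float Pdf(const Vec3& wo, const Vec3& wi) const = 0;
};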

In this post, I will talk about how PBRT verifies its BXDFs in its ‘bsdftest’ program.

Continue reading

Volume Rendering in Offline Renderer

Finally, I have some time to read books. After spending several days digesting the volume rendering part of PBRT, I found loads of things that interest me. Instead of repeating the theory from the book, I decided to put some key points in this post with a brief introduction, and then provide some derivations that are not mentioned in the book.

In case you are not familiar with what volume rendering is, the images above are attached for reference. One can easily notice that the fog in the right image enhances the fidelity of the scene to a whole new level. For a detailed explanation of volume rendering, it is suggested to check out this wiki page. Volume rendering is commonly used for fog, water and similar effects, and mastering the basics behind it is important because it is also the underlying theory of sub-surface scattering, which is common even in real-time 3D applications like video games.
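For readers who want one equation to anchor the rest, the central quantity in that part of the book is the transmittance between two points along a ray; this is standard material, restated here rather than quoted from the book:

T_r(p \to p') = \exp\left( -\int_0^d \sigma_t(p + t\,\omega) \, \mathrm{d}t \right)

where \sigma_t is the extinction coefficient, \omega the ray direction and d the distance between the two points. For a homogeneous medium this reduces to Beer's law, T_r = e^{-\sigma_t d}.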

Continue reading

Image Based Lighting in Offline and Real-time Rendering

Image-based lighting is a practical way to enhance the visual quality of computer graphics. I used to be confused by it until I read the book “High Dynamic Range Imaging”, which provides a very clear explanation of IBL. I had actually already implemented the algorithm in my offline renderer; I just didn't know it was called IBL. The PBR book also has some material covering it without explicitly mentioning the term. The following images were generated with IBL in my renderer, except for the last one, which uses a single directional light.

As we can see from the images above, the IBL-generated results look far more convincing than the one lit by a single directional light. The real beauty of it is that, with physically based shading, everything looks great in different lighting environments without having to change material parameters at all.
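For reference, in an offline renderer the algorithm essentially treats the environment image as a light source in the rendering equation and estimates the resulting integral with Monte Carlo; this is the standard formulation, not a quote from the post:

L_o(\omega_o) = \int_{\Omega} f_r(\omega_o, \omega_i) \, L_{env}(\omega_i) \, \cos\theta_i \, \mathrm{d}\omega_i \approx \frac{1}{N} \sum_{k=1}^{N} \frac{f_r(\omega_o, \omega_k) \, L_{env}(\omega_k) \, \cos\theta_k}{p(\omega_k)}

where L_{env} is looked up in the HDR environment image and p is the PDF used to pick the sample directions, typically built from the image's luminance so that bright regions are sampled more often.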

Continue reading

Classical Methods in Offline Rendering

This was an internal presentation that I delivered at NVIDIA Shanghai a couple of months ago. Since it involves nothing confidential, I decided to share it as a public resource on my website. Anyone interested in offline rendering can take a look at the slides, which introduce Whitted ray tracing, path tracing, light tracing and instant radiosity.

This is not a basic introduction to those classical methods; it requires a little background knowledge of computer graphics and mathematics. It won't cover how to write a ray tracer program; most of the content in the slides is about the mathematical derivations behind the rendering methods.

I hope to find more time to add material about bidirectional path tracing to it in the foreseeable future.


Instant Radiosity in my Renderer

I recently read about the instant radiosity algorithm in the book Physically Based Rendering, 3rd edition, where it is referred to as instant global illumination; they are actually the same thing. I thought it would be a good algorithm until I implemented it in my renderer; I'm afraid it is not a very efficient one. Although it is unbiased like path tracing and bidirectional path tracing, its convergence is terribly slow compared with the others. It can barely handle purely specular materials, and it definitely needs special handling for delta BSDFs. Since it is already implemented, I'll put down some notes on it.
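To summarize the core of the method before the notes (a standard formulation, not my renderer's exact code): the first pass traces particles from the light and deposits N virtual point lights at positions y_j, each carrying some contribution \Phi_j; the second pass shades every visible point x against all of them:

L(x, \omega_o) \approx \sum_{j=1}^{N} f_r(x, \omega_o, \omega_j) \, \frac{\cos\theta_x \, \cos\theta_{y_j}}{\lVert x - y_j \rVert^2} \, V(x, y_j) \, \Phi_j

where \omega_j points from x towards y_j, V is the binary visibility term, and conventions differ on whether the BSDF response at the VPL itself is folded into \Phi_j. The delta-BSDF problem is visible right in the formula: on a mirror or glass surface, f_r is zero for any fixed VPL direction \omega_j, so purely specular materials receive nothing from the VPLs unless they are handled separately.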

Continue reading

The Missing Primary Ray PDF in Path Tracing

I always wondered why we don't take the PDF of the primary ray into account in a path tracer. Sadly, there aren't many resources available explaining it. I guess the book Physically Based Rendering, 3rd edition, will provide some explanation, but it isn't released yet. After some searching on the internet, I finally found an explanation: the PDF actually cancels against terms in the importance function and the light transport equation. It cancels in such an elegant way that we don't need to spend any effort on it at all, which is why many open-source ray tracers don't consider it in the first place. In this post, I'm going to explain the detailed math behind the whole theory.
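A condensed version of the result, using the common convention where A is the area the film maps to on the plane one unit in front of a pinhole camera: sampling the film uniformly gives a directional density p(\omega) = 1 / (A \cos^3\theta), the matching importance function is W_e(\omega) = 1 / (A \cos^4\theta), and the one-sample estimator of the measurement equation therefore contains

\frac{W_e(\omega) \, \cos\theta}{p(\omega)} = \frac{\cos\theta / (A \cos^4\theta)}{1 / (A \cos^3\theta)} = 1

which is exactly why the primary ray's PDF never needs to appear explicitly in a path tracer.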

Continue reading

Practical implementation of MIS in Bidirectional Path Tracing

In my previous post, I covered the basics of naive bidirectional path tracing. However, it is hard to show its real value, since there is always too much noise compared with the best-suited method for the scene being rendered. That is because the contribution of each specific path is not properly weighted. Multiple importance sampling is the key to this issue, and the following comparison shows the big difference between the methods. All of the images were generated with my open-source renderer.


Those images were generated in roughly the same amount of time. Without a doubt, MIS BDPT dominates the results: it is the least noisy and shows good caustics. Although light tracing can also show good caustics, it is far from a practical algorithm due to the noise in the rest of the scene, not to mention that it almost fails to show any radiance on the glass monkey head. The traditional path tracing algorithm shows no caustics at all, not because it is biased, it is certainly unbiased, but because it converges to the correct caustics at an unreasonably slow rate. Naive bidirectional path tracing has roughly the same amount of noise, but it also renders a dimmer monkey head because light tracing doesn't do a good job on it. In other words, bidirectional path tracing barely shows any benefit without MIS.

I searched for material about MIS in BDPT, but there is only a limited amount available on the internet. Although some open-source renderers, like LuxRender, provide detailed implementations, most of them don't give much insight into the math behind it, without which one can quickly get confused by the code. SmallVCM extends the algorithm further, offering a better solution than MIS BDPT, and there is a detailed paper on the math; however, it is a little complex for someone who just wants to figure out how to do MIS BDPT. Eric Veach's thesis gives the best explanation of MIS in BDPT; sadly, it doesn't go much further into the implementation. In this post, I'm going to talk about MIS in bidirectional path tracing. Most of the theory comes from this paper.
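For readers who just want the headline formula, the weighting scheme in question is the one from Veach's thesis: a path \bar{x} that can be generated by several connection strategies is weighted, under the balance heuristic, by

w_s(\bar{x}) = \frac{p_s(\bar{x})}{\sum_i p_i(\bar{x})}

where p_i is the probability density of producing that same path with i of its vertices coming from the light subpath and the rest from the eye subpath; the power heuristic simply replaces each p_i with p_i^\beta, usually with \beta = 2. The practical difficulty is evaluating all the p_i for every connection efficiently, which is where most implementations get confusing.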

Continue reading

Naive Bidirectional Path Tracing

I posted a blog about path tracing some time ago; I didn't regard it as a simple algorithm until I got my hands dirty with bidirectional path tracing. It really took me quite a while to get everything hooked up. Getting BDPT (short for bidirectional path tracing) to converge to the same result as path tracing is far from a trivial task; any tiny bug hidden in the renderer will drag you into a nightmare. These kinds of bugs are not like the ones that usually appear in real-time rendering, which can easily be exposed with tools like Nsight; it can cost much more time when only a small component is missing from the target equations, which are pretty crazy math.
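As a schematic reminder of what those equations compute (standard BDPT form, not a quote from the post): BDPT traces a subpath from the camera and a subpath from the light, then connects their endpoints, so the unweighted contribution of joining the light-subpath endpoint y with the eye-subpath endpoint z looks like

C = \beta_L(y) \, f_r(y) \, G(y \leftrightarrow z) \, f_r(z) \, \beta_E(z), \qquad G(y \leftrightarrow z) = V(y, z) \, \frac{|\cos\theta_y| \, |\cos\theta_z|}{\lVert y - z \rVert^2}

where \beta_L and \beta_E are the throughputs accumulated along the light and eye subpaths, the f_r terms are the BSDFs at the two endpoints evaluated for the connecting direction, and V is the visibility between them. A missing cosine or PDF in any of these factors is exactly the kind of small component that silently breaks the match with path tracing.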

Since the traditional Cornell box setup is pretty friendly to path tracing, I made a small change to it: the light source is a spot light facing towards the upper right. As a result, most of the scene is lit by the small directly illuminated area instead of by the light itself, which makes it a very unfriendly scene for path tracing.

The following images were generated by BDPT (left), light tracing (top right) and path tracing (bottom right), in roughly the same amount of time.

Path tracing generates the noisiest image even though it is scaled down by a factor of four; the bidirectional path tracing result is better, but light tracing definitely produces the best result, with the least noise, in this scene.

Continue reading