
localized IBL and relighting with light-stage style data set


This project was inspired by the great lighting work done by Digital Domain for Benjamin Button. That year I flew to New Orleans for Siggraph and talked to some of the key people to dig deeper and learn from them. The first person I talked to was Tadao Mihashi. He was a shader writer at DD at the time, and if my memory serves, he gave a presentation at SXSW in Austin. Then at Siggraph I asked Jonathan Litt and Paul Lambert questions after their presentations.

I wanted to be able to light/relight the head in Nuke as a quick preview, driven dynamically by an input HDR. Because I don't have access to a light-stage data set, I decided to build a virtual light stage to render a set of head images, each one lit by a single light-stage light at a time.

The basic idea is very simple. I read several light stage papers; I think at the time it was Light Stage 1 through 6. The Benjamin Button maquette was scanned in Light Stage 5, but what I could achieve and implement was based on the Light Stage 3 paper.

I built a virtual light stage in Maya based on the description in the paper. The virtual light stage is a twice-subdivided icosahedron, which tessellates the sphere into triangles with 162 vertices in total. From the world position of each vertex, a directional light is set up aiming toward the world origin, so each of the 162 directional lights corresponds to one vertex. I rendered the head one light at a time to generate 162 basis images. The basis images are calibrated as if the head were lit by a full white dome.
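
For reference, here is a minimal sketch of how those 162 directions can be generated. It is plain Python with NumPy rather than the original Maya setup, and the function name icosphere() is my own: start from an icosahedron, subdivide each face twice, and project the vertices onto the unit sphere. Each resulting unit vector is the aim direction of one directional light.

```python
import numpy as np

def icosphere(subdivisions=2):
    """Return unit vertex directions and triangle indices of a subdivided icosahedron."""
    t = (1.0 + 5.0 ** 0.5) / 2.0
    verts = [(-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
             (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
             (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    cache = {}

    def midpoint(i, j):
        # Reuse the midpoint if this edge has already been split.
        key = (min(i, j), max(i, j))
        if key not in cache:
            verts.append(tuple((a + b) / 2.0 for a, b in zip(verts[i], verts[j])))
            cache[key] = len(verts) - 1
        return cache[key]

    for _ in range(subdivisions):
        new_faces = []
        for a, b, c in faces:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            # Split each triangle into four smaller triangles.
            new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
        faces = new_faces

    dirs = np.array(verts, dtype=float)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # project onto the unit sphere
    return dirs, faces

directions, triangles = icosphere(2)
print(len(directions), len(triangles))  # 162 vertices, 320 triangles
```

In Maya, each of these 162 directions simply becomes a directional light aimed at the origin, rendered one at a time.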

Each basis image can be weighted based on the color of the input HDR and then summed to generate the relit result. The weighting coefficients come from unwrapping the twice-subdivided icosahedron onto a lat-long map: the lat-long map is divided into the projected triangles of the icosahedron, whose 162 vertices correspond to the lights.
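
To relate the lat-long pixels to the triangles on the sphere, each pixel has to be mapped back to a direction. A small sketch of that mapping is below; the helper name latlong_to_dir() and the axis convention are assumptions on my part and have to match however the HDR was unwrapped.

```python
import numpy as np

def latlong_to_dir(u, v):
    """Map normalized lat-long coordinates (u, v in [0, 1)) to a unit direction.

    u runs left to right (azimuth), v runs top to bottom (pole to pole).
    The axis orientation and where u = 0 points are assumptions; they only
    need to be consistent with the convention used to unwrap the HDR.
    """
    phi = 2.0 * np.pi * u                 # azimuth
    theta = np.pi * v                     # polar angle measured from +Y (up)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.cos(theta),
                     np.sin(theta) * np.sin(phi)])
```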

Now the mapping is one directional light to one vertex to one point on the lat-long map. I calculated the weights by referencing the approach described in one of Paul Debevec's papers: for each pixel, I computed barycentric weights within its triangle, in the form of an RGB triplet, so that a pixel closer to a vertex contributes more to that vertex's weight.
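
Here is a sketch of that weight calculation, again my own unoptimized reconstruction (the function vertex_weights() is hypothetical, not the original Nuke script) building on the icosphere() and latlong_to_dir() helpers above. For every pixel, it finds the icosahedron triangle containing the pixel's direction, computes barycentric weights, and accumulates the pixel's RGB value, scaled by those weights and by a sin(theta) term (my own addition to compensate for the lat-long map's oversampling near the poles), into the three corner vertices.

```python
import numpy as np

def vertex_weights(hdr, directions, triangles):
    """Accumulate one RGB weight per icosphere vertex from a lat-long HDR.

    hdr: linear lat-long image as a float array of shape (height, width, 3).
    directions, triangles: output of icosphere() above.
    Returns an array of shape (162, 3): the RGB weight of each vertex/light.
    """
    h, w, _ = hdr.shape
    weights = np.zeros((len(directions), 3))
    # Precompute the inverse vertex basis of each triangle so that solving
    # d = a*v0 + b*v1 + c*v2 is a single matrix multiply per face.
    inv_bases = [np.linalg.inv(np.column_stack([directions[i] for i in tri]))
                 for tri in triangles]

    for y in range(h):
        sin_t = np.sin(np.pi * (y + 0.5) / h)     # lat-long area compensation (assumption)
        for x in range(w):
            d = latlong_to_dir((x + 0.5) / w, (y + 0.5) / h)
            for tri, inv_b in zip(triangles, inv_bases):
                bary = inv_b @ d
                if np.all(bary >= 0.0):           # the direction falls inside this face
                    bary /= bary.sum()            # barycentric weights, summing to 1
                    for corner, b in zip(tri, bary):
                        weights[corner] += b * sin_t * hdr[y, x]
                    break
    return weights
```

Depending on how the basis images were calibrated, these weights may also need a global normalization so that a constant white HDR reproduces the full-white-dome result.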

Once I have the RGB weight of each vertex, I simply multiply each basis image by its weight and then linearly add up the weighted basis images to approximate the final result.
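
In NumPy terms the final combination is just a weighted sum over the 162 basis images, something like the sketch below (relight() is my own name for it). In Nuke the same thing can be built as a Multiply per basis image driven by its RGB weight, merged together with plus operations.

```python
import numpy as np

def relight(basis_images, weights):
    """Weighted sum of the basis images.

    basis_images: array of shape (162, height, width, 3), one render per light.
    weights: array of shape (162, 3) as returned by vertex_weights().
    """
    # Multiply each basis image by its per-channel RGB weight, then sum over the 162 lights.
    return np.einsum('nhwc,nc->hwc', basis_images, weights)
```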

Let me know if you are interested in knowing more, and I'll add more details about the process and the Python scripts I made for Nuke to calculate the weights and modulate the basis images.

cheers!
