During the second semester of my master’s program, I had the chance to continue the work from my bachelor’s thesis in the form of a guided research project. The topic of this extension was “Light Source Estimation for Photorealistic Object Rendering”. The final paper can be found in the Papers section; this is a shorter summary focusing on the implementation details.
Motivation
The generator pipeline that I implemented during my thesis project worked well in most instances, but was lacking in a few areas. The lighting and exposure were static and identical for every scene and frame, objects sometimes clipped into scene geometry, and some of the real images used for blending were blurred. Since the goal was photorealism, we wanted to fix the lighting problem by adding light source estimation to the pipeline.
Scene Relighting
The first step in our light source estimator is relighting the scene mesh. All images used to reconstruct the scene are LDR, but we needed an HDR representation both to detect lights easily and to get good initial estimates. The relighting is implemented entirely in compute shaders and is based on the assumption that every vertex of the scene mesh must have the same radiance, no matter which image it originates from. This means we can solve for per-frame exposure to obtain an exposure-corrected, relit, HDR version of the vertex colors. The entire pipeline runs in Python, using PlotOptiX for raytracing, PyCeres for all optimizations, and a modified version of Python Compute Shader to run compute shaders from Python.
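To make the idea concrete, here is a minimal CPU sketch of the per-frame exposure solve under the constant-radiance assumption. All function and variable names are hypothetical, and the real pipeline runs on the GPU in compute shaders with PyCeres handling the optimization; this version just alternates closed-form least-squares updates in the log domain.

```python
import numpy as np

# Hypothetical observation arrays: for each observation i, frame_idx[i]
# is the frame it came from, vert_idx[i] the vertex it hit, and obs[i]
# its (linearized) LDR intensity.
def solve_exposures(frame_idx, vert_idx, obs, n_frames, n_verts, iters=50):
    """Alternating least squares in the log domain for the model
    log(obs) = log(exposure[frame]) + log(radiance[vertex])."""
    log_obs = np.log(np.clip(obs, 1e-6, None))
    log_e = np.zeros(n_frames)   # per-frame log exposure
    log_r = np.zeros(n_verts)    # per-vertex log radiance
    for _ in range(iters):
        # Update radiances given exposures (mean residual per vertex).
        resid = log_obs - log_e[frame_idx]
        log_r = np.bincount(vert_idx, weights=resid, minlength=n_verts) / \
                np.maximum(np.bincount(vert_idx, minlength=n_verts), 1)
        # Update exposures given radiances (mean residual per frame).
        resid = log_obs - log_r[vert_idx]
        log_e = np.bincount(frame_idx, weights=resid, minlength=n_frames) / \
                np.maximum(np.bincount(frame_idx, minlength=n_frames), 1)
        log_e -= log_e.mean()    # fix the global scale ambiguity
    return np.exp(log_e), np.exp(log_r)
```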
The relighting smooths out the colors and ensures that they stay as faithful to the original lighting conditions as possible. In the comparison above the difference in light intensity cannot be seen, but it is just as important for getting good light estimation results.
Illuminance Calculation
After relighting the scene, it is rendered as an equirectangular panorama from a plausible point at the center of the scene. The rendering is then converted to illuminance, as we are interested in the light arriving from the light sources, not the appearance of the scene itself. See the comparison below for what the results look like.
The illuminance map has two advantages over raw HDR: light sources are accentuated (the intensity of the reflection on the ceiling is reduced), and the intensities at the top and bottom of the panorama are scaled according to their solid angle.
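The solid-angle scaling can be illustrated with a short sketch. This is not the exact photometric conversion from the paper, just the per-pixel solid-angle weighting of an equirectangular panorama that such a conversion relies on; the names here are hypothetical.

```python
import numpy as np

def solid_angle_weights(height, width):
    """Per-pixel solid angle of an equirectangular panorama, in
    steradians. Rows near the poles cover less area on the sphere,
    so their weight shrinks; the weights sum to roughly 4*pi."""
    d_theta = np.pi / height
    d_phi = 2.0 * np.pi / width
    theta = (np.arange(height) + 0.5) * d_theta   # polar angle per row
    return np.sin(theta)[:, None] * d_theta * d_phi * np.ones((1, width))

# Hypothetical usage: weight an HDR radiance panorama by solid angle,
# so summing the result approximates total incoming light.
# pano = load_hdr("scene_pano.exr")               # hypothetical loader
# weighted = pano * solid_angle_weights(*pano.shape[:2])[..., None]
```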
Light Source Estimation
Directional light
Finally, we can estimate light sources. We assume that there is exactly one directional light source in the scene: the sun. Building on this assumption, we reproject all pixels that have no vertex associated with them, as these are likely to be where windows are. During mesh reconstruction it is usually not possible to reconstruct windows, which results in holes in the mesh. Using all this information, we can determine the brightest spot in the directional map and extract its direction in world space, as well as its parameters.
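For illustration, here is a plausible sketch of how an equirectangular pixel maps to a world-space direction, assuming a y-up coordinate frame (the actual convention depends on the scene reconstruction); all names are hypothetical.

```python
import numpy as np

def pixel_to_direction(row, col, height, width):
    """Map an equirectangular pixel to a unit direction in world space
    (y-up convention assumed here, with row 0 at the zenith)."""
    theta = (row + 0.5) / height * np.pi        # polar angle from +y
    phi = (col + 0.5) / width * 2.0 * np.pi     # azimuth
    return np.array([np.sin(theta) * np.cos(phi),
                     np.cos(theta),
                     np.sin(theta) * np.sin(phi)])

# Hypothetical usage: direction of the brightest hole (window) pixel.
# row, col = np.unravel_index(np.argmax(hole_mask * illum), illum.shape)
# sun_dir = -pixel_to_direction(row, col, *illum.shape)  # light points inward
```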
The resulting direction and parameters are mostly very good and correspond to where you would expect sunlight to come in from.
Point lights
After estimating the sun direction, we estimate point lights from the illuminance map. The algorithm is as follows: take the brightest spot, use flood fill to get its extent, estimate the light, and mask it out. This repeats until an energy threshold is met and all important light sources are assumed to have been found.
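A minimal sketch of this greedy loop might look as follows; the thresholds and the flood-fill criterion are hypothetical stand-ins for the paper's actual parameters.

```python
import numpy as np
from scipy import ndimage

def extract_point_lights(illum, energy_ratio=0.9,
                         intensity_ratio=0.5, max_lights=16):
    """Greedy extraction: repeatedly take the brightest pixel,
    flood-fill the connected bright region around it, record it as a
    light, and mask it out, until most of the total energy is
    accounted for."""
    remaining = illum.copy()
    total = remaining.sum()
    lights = []
    while (len(lights) < max_lights
           and remaining.sum() > (1.0 - energy_ratio) * total):
        peak = np.unravel_index(np.argmax(remaining), remaining.shape)
        # Flood fill: pixels above a fraction of the peak intensity,
        # connected to the peak itself.
        bright = remaining > intensity_ratio * remaining[peak]
        labels, _ = ndimage.label(bright)
        region = labels == labels[peak]
        lights.append({"pixels": np.argwhere(region),
                       "energy": remaining[region].sum()})
        remaining[region] = 0.0   # mask out and continue
    return lights
```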
The greedy extraction works well and usually leaves no light sources undetected, although bright reflections are sometimes falsely classified as point lights as well. An ellipse is then fitted around the detected pixels, and from it we can determine the mean solid angle of the light source. The position comes from the depth map and the color from the HDR panorama. The intensity is calculated from the illuminance map and then refined via rendering-based minimization. The minimization uses spherical Gaussians and brings the intensity close to what you would expect it to be.
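As a rough illustration of the spherical Gaussian model, here is a sketch that evaluates a single lobe and fits its amplitude to target panorama values in closed form. The actual pipeline instead minimizes a rendering-based loss with PyCeres; all names here are hypothetical.

```python
import numpy as np

def sg_eval(dirs, axis, sharpness, amplitude):
    """Spherical Gaussian G(v) = a * exp(lambda * (v . mu - 1)),
    evaluated for an (N, 3) array of unit directions."""
    return amplitude * np.exp(sharpness * (dirs @ axis - 1.0))

def fit_sg_amplitude(dirs, target, axis, sharpness):
    """Closed-form least-squares amplitude for a single SG lobe
    against target panorama values: a stand-in for the
    rendering-based intensity optimization described above."""
    basis = np.exp(sharpness * (dirs @ axis - 1.0))
    return float(basis @ target / np.maximum(basis @ basis, 1e-12))
```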
The resulting light sources, together with the corrected exposure values, are used during rendering. See below for a few final images with different lighting conditions and exposures.
Final Words
Overall, the guided research was a definite success: most scenes now look much more realistic and believable. While some issues persist (as they were not addressed), the pipeline produces much better results than before. I also learned a lot about photometry and computer vision, as well as deep learning. If you wish to take a look at the updated project, it is available on GitHub. Although there is no precompiled build of the latest version, the light estimation pipeline can be run regardless: it is a separate Python module that does not require the rest of the rendering pipeline to function. For a more in-depth explanation and more results, please refer to the paper.