During the second semester of my master’s program, I continued the work of my bachelor’s thesis as part of a guided research project. The topic of this extension was Light Source Estimation for Photorealistic Object Rendering. The full paper is available in the Papers section; this page provides a shorter summary focused on the implementation.
Motivation
The rendering pipeline from my bachelor’s thesis produced promising results, but still had several visible limitations. In particular, lighting and exposure were static across scenes, some inserted objects clipped into scene geometry, and some of the underlying real images were affected by motion blur. Since the goal of the project was photorealistic data generation, the guided research focused on improving the realism of the rendered objects by introducing light source estimation into the pipeline.

Scene Relighting
The first step of the light estimation pipeline is scene relighting. The reconstructed scenes were available only in LDR (low dynamic range), but the light estimation stage required an HDR (high dynamic range) representation in order to recover plausible light intensities and identify dominant light sources more reliably.
The relighting pipeline was implemented entirely with compute shaders and was based on the assumption that each scene vertex should have consistent radiance regardless of which input image it originated from. This made it possible to solve for per-frame exposure values and reconstruct an exposure-corrected HDR version of the scene mesh.
The overall implementation was written in Python and combined several tools: PlotOptiX for ray tracing, PyCeres for optimization, and a modified version of Python Compute Shader for GPU compute execution from Python.
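The core idea behind the exposure solve can be illustrated with a small sketch. Assuming each vertex's radiance is constant across frames, the observed LDR intensity factors (in log space) into a per-vertex log-radiance plus a per-frame log-exposure, which can be solved by simple alternating least squares. This is a minimal illustration under that assumption, not the project's actual solver (which used PyCeres); all function and variable names here are hypothetical.

```python
import numpy as np

def solve_exposures(obs, n_frames, n_verts, n_iters=50):
    """Alternately solve for per-frame log-exposures and per-vertex
    log-radiances so that log(I) ~ log(radiance) + log(exposure).

    obs: list of (vertex_id, frame_id, intensity) observations.
    Returns (exposures, radiances) in linear units, with frame 0's
    exposure fixed to 1 to remove the global scale ambiguity.
    """
    v_ids = np.array([o[0] for o in obs])
    f_ids = np.array([o[1] for o in obs])
    log_I = np.log(np.array([o[2] for o in obs]))

    log_e = np.zeros(n_frames)   # per-frame log exposure
    log_r = np.zeros(n_verts)    # per-vertex log radiance

    for _ in range(n_iters):
        # Given exposures, each vertex's log-radiance is the mean residual
        # of its observations.
        resid = log_I - log_e[f_ids]
        log_r = np.bincount(v_ids, resid, minlength=n_verts) / \
                np.maximum(np.bincount(v_ids, minlength=n_verts), 1)
        # Given radiances, each frame's log-exposure is the mean residual.
        resid = log_I - log_r[v_ids]
        log_e = np.bincount(f_ids, resid, minlength=n_frames) / \
                np.maximum(np.bincount(f_ids, minlength=n_frames), 1)
        log_e -= log_e[0]  # gauge fixing: frame 0 has unit exposure

    return np.exp(log_e), np.exp(log_r)
```

With dense observations this converges almost immediately; a real pipeline would additionally need robust losses to handle saturated or blurred pixels.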


The relighting step produced smoother and more consistent scene colors while, more importantly, recovering the relative light intensities needed for later estimation stages.
Illuminance Calculation
After relighting, the scene was rendered as an equirectangular panorama from a plausible point near the center of the scene. This panorama was then converted into an illuminance map, since the goal was to reason about incoming light rather than the direct appearance of the scene geometry itself.


The illuminance representation offered two main advantages: it emphasized actual light sources over indirect reflections, and it correctly scaled intensity near the top and bottom of the panorama according to the associated solid angle.
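The solid-angle scaling mentioned above can be sketched as follows. In an equirectangular panorama each pixel row corresponds to a fixed polar-angle band, so a pixel's solid angle shrinks by sin(θ) toward the poles; weighting luminance by this factor yields an illuminance-style map. This is a minimal sketch, not the project's actual code, and the Rec. 709 luminance weights are an assumption.

```python
import numpy as np

def illuminance_map(panorama):
    """Weight an equirectangular HDR panorama (H x W x 3) by per-pixel
    solid angle. Rows near the poles cover a smaller solid angle and
    therefore contribute less; rows at the equator contribute most.
    """
    H, W, _ = panorama.shape
    # Polar angle theta at each row centre: 0 at the top, pi at the bottom.
    theta = (np.arange(H) + 0.5) * np.pi / H
    # Solid angle of one pixel: dtheta * dphi * sin(theta).
    omega = (np.pi / H) * (2.0 * np.pi / W) * np.sin(theta)
    # Luminance from linear RGB (Rec. 709 weights), scaled per pixel.
    lum = panorama @ np.array([0.2126, 0.7152, 0.0722])
    return lum * omega[:, None]
```

A useful sanity check: for a uniform panorama the weighted map sums to the full sphere's solid angle of 4π times the luminance.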
Light Source Estimation
Directional Light
The first estimated source was a single directional light, corresponding to the sun. The method assumed exactly one such source per scene. To recover it, pixels without associated mesh vertices were reprojected, as these regions often corresponded to windows or reconstruction holes. Combined with the directional illuminance map, this made it possible to identify the brightest candidate region and recover a plausible sun direction in world space together with its main parameters.
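The selection of the sun direction can be sketched as follows: restrict the search to panorama pixels with no mesh coverage, take the brightest one, and map its row and column back to a world-space direction. This is an illustrative sketch only; the function name and the y-up coordinate convention are assumptions, and the real method operated on a candidate region rather than a single pixel.

```python
import numpy as np

def estimate_sun_direction(illum, hole_mask):
    """Pick the brightest pixel among panorama regions not covered by
    mesh geometry (e.g. windows or reconstruction holes) and convert
    it to a unit world-space direction (hypothetical y-up convention).

    illum:     H x W illuminance map.
    hole_mask: H x W boolean, True where no mesh vertex was hit.
    """
    masked = np.where(hole_mask, illum, -np.inf)
    row, col = np.unravel_index(np.argmax(masked), masked.shape)
    H, W = illum.shape
    theta = (row + 0.5) * np.pi / H        # polar angle from the zenith
    phi = (col + 0.5) * 2.0 * np.pi / W    # azimuth
    return np.array([np.sin(theta) * np.cos(phi),
                     np.cos(theta),
                     np.sin(theta) * np.sin(phi)])
```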
The recovered directional light was generally consistent with the visible structure of the scene and with the expected position of incoming sunlight.
Point Lights
After estimating the sun direction, the pipeline estimated point lights from the illuminance map. The procedure was iterative: locate the brightest region, determine its spatial extent using a flood-fill step, estimate a light source from that region, mask it out, and continue until a predefined energy threshold was reached.
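The iterative extraction loop above can be sketched in a few lines: take the brightest pixel, flood-fill the connected region above a fraction of its peak value, record and mask the region, and stop once the remaining energy falls below a threshold. This is a minimal sketch under assumed parameter names (`rel_thresh`, `energy_stop`), not the project's actual implementation.

```python
import numpy as np

def extract_light_regions(illum, rel_thresh=0.3, energy_stop=0.1):
    """Greedily extract bright regions from an illuminance map until
    the remaining energy drops below `energy_stop` of the original total.
    Returns a list of regions, each a set of (row, col) pixels.
    """
    img = illum.astype(float).copy()
    total = img.sum()
    regions = []
    while img.sum() > energy_stop * total:
        peak = np.unravel_index(np.argmax(img), img.shape)
        cutoff = rel_thresh * img[peak]
        # 4-connected flood fill outward from the peak pixel.
        stack, region = [peak], set()
        while stack:
            r, c = stack.pop()
            if (r, c) in region:
                continue
            if (0 <= r < img.shape[0] and 0 <= c < img.shape[1]
                    and img[r, c] >= cutoff):
                region.add((r, c))
                stack += [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        regions.append(region)
        for r, c in region:   # mask the region out and continue
            img[r, c] = 0.0
    return regions
```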


This approach usually recovered the dominant point lights reliably, although strong reflections could occasionally be misclassified as light sources. For each detected region, an ellipse was fitted to estimate its mean solid angle. The position and color were obtained from the depth map and HDR panorama, while the intensity was initialized from the illuminance map and then refined through rendering-based optimization.
The optimization step used spherical Gaussians to fit light intensity more robustly and brought the estimated values closer to physically plausible lighting conditions.
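The spherical Gaussian model can be illustrated with a small sketch: an isotropic lobe G(d) = a · exp(s · (d·axis − 1)), where fitting the amplitude for a fixed axis and sharpness reduces to a one-dimensional least-squares problem. This is a minimal illustration of the representation, not the project's rendering-based optimizer; the function names are hypothetical.

```python
import numpy as np

def spherical_gaussian(dirs, axis, sharpness, amplitude):
    """Evaluate an isotropic spherical Gaussian
    G(d) = amplitude * exp(sharpness * (d . axis - 1))
    for unit directions `dirs` (N x 3) and a unit lobe `axis`."""
    return amplitude * np.exp(sharpness * (dirs @ axis - 1.0))

def fit_amplitude(dirs, targets, axis, sharpness):
    """Closed-form least-squares amplitude for a fixed lobe axis and
    sharpness: a = <g, t> / <g, g>, with g the unit-amplitude lobe
    evaluated at the sample directions."""
    basis = np.exp(sharpness * (dirs @ axis - 1.0))
    return float(basis @ targets / (basis @ basis))
```

In the full pipeline the fit would additionally be driven by rendered-versus-observed differences, which is what pulls the intensities toward physically plausible values.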


The final estimated light sources, together with the corrected exposure values, were then fed back into the rendering pipeline. This resulted in noticeably more coherent lighting across scenes and more believable synthetic composites.

Conclusion
This guided research project significantly improved the realism of the original rendering pipeline. While some limitations from the earlier work remained outside the scope of this extension, the addition of light source estimation produced results that were visibly more coherent and believable across many scenes.
The project also gave me deeper experience with photometry, computer vision, optimization-based estimation, and supporting tooling around synthetic data generation. The updated project is available on GitHub. While no precompiled build of the latest full pipeline is available, the light estimation component can be run independently as a separate Python module. For a more complete discussion of the method and additional results, please refer to the paper.

