GlossyGS: Inverse Rendering of Glossy Objects with 3D Gaussian Splatting

TVCG 2025
(IEEE Transactions on Visualization and Computer Graphics)

Shuichang Lai * 1       Letian Huang * 2       Jie Guo ✝ 2       Kai Cheng 3       Bowen Pan 1       Xiaoxiao Long 4       Jiangjing Lyu 1       Chengfei Lv 1       Yanwen Guo 2      
* Denotes equal contribution       ✝ Corresponding author
1Tao Technology Department, Alibaba Group       2State Key Lab for Novel Software Technology, Nanjing University       3School of Artificial Intelligence and Data Science, University of Science and Technology of China       4Department of Computer Science, The University of Hong Kong

Abstract

Reconstructing objects from posed images is a crucial and complex task in computer graphics and computer vision. While NeRF-based neural reconstruction methods have exhibited impressive reconstruction ability, they tend to be time-consuming. Recent strategies have adopted 3D Gaussian Splatting (3D-GS) for inverse rendering, which has led to quick and effective outcomes. However, these techniques generally have difficulty producing believable geometries and materials for glossy objects, a challenge that stems from the inherent ambiguities of inverse rendering. To address this, we introduce GlossyGS, an innovative 3D-GS-based inverse rendering framework that aims to precisely reconstruct the geometry and materials of glossy objects by integrating material priors. The key idea is the use of a micro-facet geometry segmentation prior, which helps to reduce the intrinsic ambiguities and improve the decomposition of geometries and materials. Additionally, we introduce a normal map prefiltering strategy to more accurately simulate the normal distribution of reflective surfaces. These strategies are integrated into a hybrid geometry and material representation that employs both explicit and implicit methods to depict glossy objects. We demonstrate through quantitative analysis and qualitative visualization that the proposed method effectively reconstructs high-fidelity geometries and materials of glossy objects, and performs favorably against state-of-the-art methods.

Video

Pipeline

Illustration of the inverse rendering pipeline of our GlossyGS. The initial points are used to generate anchors, which carry the macroscopic features. These features are fed to a GS decoder and a material encoder, producing neural 3D Gaussians and neural materials (BRDF). Normal map prefiltering is applied to these neural Gaussians and materials to obtain the corresponding material maps. The final rendering is obtained by shading these maps under differentiable environment lighting, supervised by ground-truth images and the micro-facet geometry segmentation prior.
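The data flow above can be sketched as a minimal numpy skeleton. Everything here is illustrative: the anchor count, feature width, per-anchor Gaussian count, and parameter channel layouts (offset/scale/opacity for Gaussians; albedo/roughness/metallic for the BRDF) are placeholder assumptions, and the single linear maps stand in for the paper's learned decoder and encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only -- none of these values come from the paper.
N_ANCHORS, FEAT, K = 16, 32, 8   # anchors, feature width, Gaussians per anchor

# Anchors are generated from the initial points and carry macroscopic features.
anchor_xyz  = rng.normal(size=(N_ANCHORS, 3))
anchor_feat = rng.normal(size=(N_ANCHORS, FEAT))

# Stand-in "GS decoder" and "material encoder": one linear map each,
# where the actual pipeline would use small learned networks.
W_gs  = rng.normal(size=(FEAT, K * 7)) * 0.1   # per Gaussian: 3 offset + 3 scale + 1 opacity (assumed layout)
W_mat = rng.normal(size=(FEAT, K * 5)) * 0.1   # per Gaussian: 3 albedo + 1 roughness + 1 metallic (assumed layout)

gauss_params = (anchor_feat @ W_gs ).reshape(N_ANCHORS, K, 7)   # neural 3D Gaussians
brdf_params  = (anchor_feat @ W_mat).reshape(N_ANCHORS, K, 5)   # neural materials (BRDF)

# Splatting these parameters (with normal map prefiltering) would yield the
# material maps that are then shaded under differentiable environment lighting.
print(gauss_params.shape, brdf_params.shape)   # (16, 8, 7) (16, 8, 5)
```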

Architecture




Motivation

Normal Map Filtering

The diffuse color is linear in the normal, and alpha-blending in 3D-GS is also a linear operation, so for diffuse objects the order of alpha-blending and shading has little effect on the final color. Specular reflection, however, is not linear in the normal, so the two orders (blending normals and then shading, versus shading each Gaussian and then blending the colors) produce different results whenever the shading function is nonlinear in the normal.
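A toy numerical check makes this concrete. This is not the paper's implementation: the normals, light direction, blending weight, and the Blinn-Phong-style power used as the "specular" term are all arbitrary illustrative choices.

```python
import numpy as np

light = np.array([0.0, 0.0, 1.0])   # light direction (unit)
n1 = np.array([0.0, 0.0, 1.0])      # normal of Gaussian 1 (unit)
n2 = np.array([0.6, 0.0, 0.8])      # normal of Gaussian 2 (unit)
w = 0.5                             # alpha-blending weight

def diffuse(n):                     # linear in n
    return max(float(n @ light), 0.0)

def specular(n):                    # nonlinear in n (Blinn-Phong-style power)
    return max(float(n @ light), 0.0) ** 64

# Order A: alpha-blend the normals first, then shade. (The blend is left
# unnormalized so the linear case stays exactly linear; renormalizing would
# itself add a second source of nonlinearity.)
n_blend = w * n1 + (1 - w) * n2

# Order B: shade each Gaussian, then alpha-blend the colors.
print(diffuse(n_blend), w * diffuse(n1) + (1 - w) * diffuse(n2))     # identical: 0.9 vs 0.9
print(specular(n_blend), w * specular(n1) + (1 - w) * specular(n2))  # differ: ~0.0012 vs ~0.5
```

For the linear diffuse term the two orders agree exactly; for the specular power they disagree by orders of magnitude, which is precisely why a prefiltering step is needed when shading after blending.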

Micro-facet Geometry Segmentation Prior

The success of the micro-facet model relies on the ability of the microscopic normal distribution to produce visual effects similar to those of the macroscopic normal. However, this very similarity creates a challenge for inverse rendering: a low-frequency macroscopic normal distribution with a large roughness (a) and a high-frequency macroscopic normal distribution with a small roughness (b) can model the same glossy appearance, so the same object admits multiple valid roughness-normal decompositions.
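This equivalence can be demonstrated numerically with a simple Blinn-Phong-style lobe (an illustrative stand-in for the micro-facet NDF, not the paper's model): averaging a sharp lobe over high-frequency normal jitter lowers its peak and lifts its tails, i.e. it matches what a single smooth normal with a larger roughness would produce.

```python
import numpy as np

n = 100  # sharp lobe exponent, i.e. a small roughness
deltas = np.linspace(-0.2, 0.2, 2001)  # high-frequency normal jitter (radians)

def lobe(theta):
    """Sharp specular lobe around a single smooth normal."""
    return np.cos(theta) ** n

def effective(theta):
    """Effective lobe of a bumpy surface: the sharp lobe averaged over jittered normals."""
    return float(np.mean(np.cos(theta + deltas) ** n))

# The averaged lobe is broader than the sharp one: lower peak, fatter tails --
# indistinguishable from a flat surface rendered with a larger roughness.
assert effective(0.0) < float(lobe(0.0))   # peak is lowered
assert effective(0.3) > float(lobe(0.3))   # tail is lifted
```

Because the renderer only observes the resulting appearance, both explanations fit the images equally well, which is the ambiguity that the segmentation prior is designed to resolve.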

Comparisons of normal reconstruction

Comparisons of the estimated environment maps

Comparisons of relighting results

Comparisons of novel view synthesis

BibTeX

@article{glossygs,
      author={Lai, Shuichang and Huang, Letian and Guo, Jie and Cheng, Kai and Pan, Bowen and Long, Xiaoxiao and Lyu, Jiangjing and Lv, Chengfei and Guo, Yanwen},
      journal={IEEE Transactions on Visualization and Computer Graphics}, 
      title={GlossyGS: Inverse Rendering of Glossy Objects With 3D Gaussian Splatting}, 
      year={2025},
      volume={},
      number={},
      pages={1-14},
      keywords={Rendering (computer graphics);Geometry;Three-dimensional displays;Image reconstruction;Gaussian distribution;Image color analysis;Microscopy;Image segmentation;Surface reconstruction;Material properties;Inverse rendering;neural rendering},
      doi={10.1109/TVCG.2025.3547063}
}

References

[Kerbl et al. 2023] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, George Drettakis. 3D Gaussian Splatting for Real-Time Radiance Field Rendering. ACM Transactions on Graphics (SIGGRAPH Conference Proceedings) 42, 4 (July 2023).

[Jiang et al. 2024] Yingwenqi Jiang, Jiadong Tu, Yuan Liu, Xifeng Gao, Xiaoxiao Long, Wenping Wang, and Yuexin Ma. GaussianShader: 3D Gaussian Splatting with Shading Functions for Reflective Surfaces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 5322-5332.

[Liang et al. 2024] Zhihao Liang, Qi Zhang, Ying Feng, Ying Shan, Kui Jia. GS-IR: 3D Gaussian Splatting for Inverse Rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 21644-21653.

[Mai et al. 2023] Alexander Mai, Dor Verbin, Falko Kuester, Sara Fridovich-Keil. Neural Microfacet Fields for Inverse Rendering. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 408-418.

[Li et al. 2024] Jia Li, Lu Wang, Lei Zhang, Beibei Wang. TensoSDF: Roughness-aware Tensorial Representation for Robust Geometry and Material Reconstruction. ACM Transactions on Graphics (SIGGRAPH Conference Proceedings) 43, 4 (July 2024).

[Liu et al. 2023] Yuan Liu, Peng Wang, Cheng Lin, Xiaoxiao Long, Jiepeng Wang, Lingjie Liu, Taku Komura, Wenping Wang. NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from Multiview Images. ACM Transactions on Graphics (SIGGRAPH Conference Proceedings) 42, 4 (July 2023).