TransparentGS: Fast Inverse Rendering of
Transparent Objects with Gaussians

SIGGRAPH 2025
(ACM Transactions on Graphics)


Letian Huang1      Dongwei Ye1      Jialin Dan1      Chengzhi Tao1      Huiwen Liu2     
Kun Zhou3,4      Bo Ren2      Yuanqi Li1      Yanwen Guo1      Jie Guo✝ 1     
✝Corresponding author
1State Key Lab for Novel Software Technology, Nanjing University
2TMCC, College of Computer Science, Nankai University
3State Key Lab of CAD & CG, Zhejiang University
4Institute of Hangzhou Holographic Intelligent Technology



Abstract

The emergence of neural and Gaussian-based radiance field methods has led to considerable advances in novel view synthesis and 3D object reconstruction. Nonetheless, specular reflection and refraction remain significant challenges due to the instability and incorrect overfitting of radiance fields to high-frequency light variations. Even 3D Gaussian Splatting (3D-GS), a powerful and efficient tool, falls short in recovering transparent objects with nearby contents owing to apparent secondary-ray effects. To address this issue, we propose TransparentGS, a fast inverse rendering pipeline for transparent objects based on 3D-GS. The main contributions are three-fold. First, we design transparent Gaussian primitives, an efficient representation of transparent objects that enables specular refraction through a deferred refraction strategy. Second, we leverage Gaussian light field probes (GaussProbe) to encode both ambient light and nearby contents in a unified framework. Third, we propose a depth-based iterative probe query (IterQuery) algorithm to reduce parallax errors in our probe-based framework. Experiments demonstrate the speed and accuracy of our approach in recovering transparent objects from complex environments, as well as several applications in computer graphics and vision.
Comparison of transparent object reconstruction methods in terms of training time (A), rendering time (B), and support for ambient light (C), nearby contents (indirect light) (D), high-frequency refraction details (E), accurate reflection-refraction decoupling (F), colored refraction (G), and re-rendering (e.g., relighting or material editing [Khan et al. 2006]) (H).

Method

The overview of our TransparentGS pipeline. Each 3D scene is first separated into transparent objects and the opaque environment using SAM2 [Ravi et al. 2024] guided by GroundingDINO [Liu et al. 2024]. For transparent objects, we propose transparent Gaussian primitives, which explicitly encode both geometric and material properties within 3D Gaussians; these properties are rasterized into maps for subsequent deferred shading. For the opaque environment, we recover it with the original 3D-GS and bake it into GaussProbe surrounding the transparent object. The GaussProbe are then queried through our IterQuery algorithm to compute reflection and refraction.
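At its core, the deferred refraction step evaluates Snell's law per pixel from the rasterized normal and material maps. The following is a hedged, illustrative sketch (not the paper's actual implementation) of a refraction-direction routine in the GLSL convention, where `eta` is the ratio of indices of refraction:

```python
import numpy as np

def refract(incident, normal, eta):
    """Refract a unit incident direction about a unit normal (GLSL convention).

    eta is the ratio of indices of refraction (n_incident / n_transmitted).
    Returns None on total internal reflection.
    """
    cos_i = -np.dot(normal, incident)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    return eta * incident + (eta * cos_i - np.sqrt(k)) * normal
```

In a deferred pass, a routine like this would run per pixel over the rasterized normal and IOR maps; the resulting direction is then used to look up radiance in the baked probes.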

GaussProbe

Illustration of our baking pipeline for Gaussian light field probes. Given a set of environmental images with the transparent object removed, we can reconstruct the 3D scene using the original 3D-GS [Kerbl et al. 2023]. We voxelize the scene and place virtual cameras around the bounding box of the transparent object. For each virtual camera, we project the Gaussian primitives onto the tangent plane of the unit sphere [Huang et al. 2024], generating tangent-plane Gaussians. Finally, an 𝛼-blending pass bakes the 360° panoramic color and depth maps at each point, which are subsequently stored in the voxels.
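The baking layout above (voxel-grid probe placement plus 360° panoramas per probe) can be sketched as follows. This is a minimal illustration under assumed conventions; `probe_grid` and `panorama_dirs` are hypothetical helpers, and the paper's exact placement and panorama parameterization may differ:

```python
import numpy as np

def probe_grid(bbox_min, bbox_max, res):
    """Voxel-center probe positions inside an axis-aligned bounding box."""
    bbox_min = np.asarray(bbox_min, dtype=float)
    bbox_max = np.asarray(bbox_max, dtype=float)
    ticks = [(np.arange(res) + 0.5) / res for _ in range(3)]
    grid = np.stack(np.meshgrid(*ticks, indexing="ij"), axis=-1).reshape(-1, 3)
    return bbox_min + grid * (bbox_max - bbox_min)

def panorama_dirs(height, width):
    """Unit directions for an equirectangular (360-degree) panorama."""
    theta = (np.arange(height) + 0.5) / height * np.pi        # polar angle
    phi = (np.arange(width) + 0.5) / width * 2.0 * np.pi      # azimuth
    sin_t = np.sin(theta)[:, None]
    return np.stack([sin_t * np.cos(phi)[None, :],
                     sin_t * np.sin(phi)[None, :],
                     np.repeat(np.cos(theta)[:, None], width, axis=1)],
                    axis=-1)
```

Baking would then alpha-blend the tangent-plane Gaussians along `panorama_dirs` at each position from `probe_grid`, storing one color and one depth panorama per voxel.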

IterQuery

To address the parallax issue inherent to light field probes and to enhance the details of refraction and inter-reflection, we design a depth-based iterative probe query algorithm (IterQuery) that achieves plausible results after only a few iterations.
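To make the idea of a depth-based iterative query concrete, here is a hedged sketch of parallax correction against a probe's baked depth panorama. The refinement loop (look up depth toward the current estimate, then re-project the sampled surface point onto the query ray) is illustrative and is not the paper's exact formulation:

```python
import numpy as np

def iter_query(probe_pos, probe_depth, origin, direction, n_iters=8):
    """Iteratively refine the hit point of a query ray using probe depth.

    probe_depth(d) returns the baked panoramic depth seen from probe_pos
    along unit direction d.  The ray is (origin, direction), unit direction.
    """
    # initial guess: march the probe's depth along the query ray itself
    t = probe_depth(direction)
    for _ in range(n_iters):
        hit = origin + t * direction
        to_hit = hit - probe_pos
        d = to_hit / np.linalg.norm(to_hit)        # probe-to-estimate direction
        surface = probe_pos + probe_depth(d) * d   # surface point the probe stores
        t = np.dot(surface - origin, direction)    # re-project onto the query ray
    return origin + t * direction
```

With a perfect depth oracle the fixed point of this loop is the true ray-environment intersection; in practice a handful of iterations suffices, which matches the "few iterations" behavior described above.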

Results

Inverse Rendering

3D scene segmentation results on the Glass scene. Top left: Image segmentation results. Bottom left: Segmented scene represented by the original 3D-GS [Kerbl et al. 2023]. Right: Probes baked from the segmented scene.


Surface mesh reconstruction results of our method on the real-captured and synthetic datasets.


Detailed intermediate results of our method. Left: the environment and the corresponding GaussProbe. Right: maps of additional parameters.


Applications

Our method supports the rendering and navigation of scenes that integrate triangle meshes, traditional 3D-GS and transparent Gaussian primitives, as well as non-pinhole cameras.

Visual Comparisons

Reflection: Ours vs. NU-NeRF [Sun et al. 2024]
Refraction: Ours vs. NU-NeRF [Sun et al. 2024]
Normal: Ours vs. GShader [Jiang et al. 2024]
Rendering: Ours vs. NU-NeRF [Sun et al. 2024]

BibTeX

@article{transparentgs,
  author={Huang, Letian and Ye, Dongwei and Dan, Jialin and Tao, Chengzhi and Liu, Huiwen and Zhou, Kun and Ren, Bo and Li, Yuanqi and Guo, Yanwen and Guo, Jie},
  journal={ACM Transactions on Graphics}, 
  title={TransparentGS: Fast Inverse Rendering of Transparent Objects with Gaussians}, 
  year={2025}
}

Acknowledgments and Funding

The authors would like to thank the anonymous reviewers for their valuable feedback. This work was supported by the National Natural Science Foundation of China (No. 61972194 and No. 62032011) and the Natural Science Foundation of Jiangsu Province (No. BK20211147).