
Reflection

Refraction
Overview of our TransparentGS pipeline. Each 3D scene is first separated into transparent objects and the opaque environment using SAM2 [Ravi et al. 2024] guided by GroundingDINO [Liu et al. 2024]. For transparent objects, we propose transparent Gaussian primitives, which explicitly encode both geometric and material properties within 3D Gaussians. These properties are rasterized into maps for subsequent deferred shading. The opaque environment is recovered with the original 3D-GS and baked into GaussProbe surrounding the transparent object. The GaussProbe is then queried through our IterQuery algorithm to compute reflection and refraction.
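For the deferred-shading step, the per-pixel reflection and refraction directions follow the standard mirror and Snell's-law formulas. A minimal NumPy sketch (the function names and conventions are ours, not taken from the paper's implementation):

```python
import numpy as np

def reflect(d, n):
    """Mirror direction d about unit normal n (d points toward the surface)."""
    return d - 2.0 * np.dot(d, n) * n

def refract(d, n, eta):
    """Snell refraction of unit direction d through unit normal n, with
    eta = IOR ratio (incident medium / transmitted medium).
    Returns None on total internal reflection."""
    cos_i = -np.dot(d, n)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                      # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n
```

At normal incidence the refracted ray passes straight through, and for a grazing ray exiting a dense medium (eta > 1) the query correctly reports total internal reflection.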
Illustration of our baking pipeline for Gaussian light field probes. Given a set of environmental images with the transparent object removed, we reconstruct the 3D scene using the original 3D-GS [Kerbl et al. 2023]. We voxelize the scene and place virtual cameras around the bounding box of the transparent object. For each virtual camera, we project the Gaussian primitives onto the tangent plane of the unit sphere [Huang et al. 2024], generating tangent-plane Gaussians. Finally, an 𝛼-blending pass bakes the 360° panoramic color and depth maps at each probe location, which are then stored in the voxels.
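The voxelization step that positions the probes can be sketched as follows. This is a minimal illustration under our own assumptions (grid resolution, a fixed margin around the bounding box, and one probe per voxel center; the names are illustrative, not from the paper's code):

```python
import numpy as np

def place_probes(bbox_min, bbox_max, res=4, margin=0.1):
    """Voxelize an enlarged bounding box of the transparent object and
    return one probe center per voxel, shape (res**3, 3)."""
    lo = np.asarray(bbox_min, dtype=float) - margin
    hi = np.asarray(bbox_max, dtype=float) + margin
    edges = [np.linspace(lo[i], hi[i], res + 1) for i in range(3)]
    centers = [(e[:-1] + e[1:]) * 0.5 for e in edges]   # voxel midpoints
    gx, gy, gz = np.meshgrid(*centers, indexing="ij")
    return np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
```

Each returned center is where a virtual camera would bake a 360° color-and-depth panorama in the pipeline described above.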
To address the parallax issue inherent to the probes and to enhance the details of refraction and inter-reflection, we design a depth-based iterative probe query algorithm (IterQuery), which achieves plausible results after only a few iterations.
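In spirit, such a depth-based iterative query resembles parallax-corrected probe lookups: the baked depth panorama implies a hit point, which is reprojected onto the query ray to refine the lookup direction. A minimal NumPy sketch; the equirectangular parametrization, nearest-pixel sampling, and initialization below are our assumptions, not the paper's exact algorithm:

```python
import numpy as np

def dir_to_pixel(d, h, w):
    """Map a unit direction to integer equirectangular pixel coords (py, px)."""
    theta = np.arccos(np.clip(d[1], -1.0, 1.0))          # polar angle from +Y
    phi = np.arctan2(d[2], d[0])                         # azimuth around Y
    px = min(int((phi / (2.0 * np.pi) + 0.5) * w), w - 1)
    py = min(int((theta / np.pi) * h), h - 1)
    return py, px

def iter_query(ray_o, ray_d, probe_c, depth_pano, color_pano, n_iter=3):
    """Depth-based iterative probe query: alternate probe-depth lookups
    with reprojection of the implied hit point onto the query ray."""
    h, w = depth_pano.shape
    x = ray_o + ray_d                                    # initial hit guess
    for _ in range(n_iter):
        d_p = x - probe_c
        d_p = d_p / max(np.linalg.norm(d_p), 1e-8)
        py, px = dir_to_pixel(d_p, h, w)
        hit = probe_c + depth_pano[py, px] * d_p         # point implied by depth
        s = max(float(np.dot(hit - ray_o, ray_d)), 0.0)
        x = ray_o + s * ray_d                            # reproject onto the ray
    d_p = x - probe_c
    d_p = d_p / max(np.linalg.norm(d_p), 1e-8)
    py, px = dir_to_pixel(d_p, h, w)
    return color_pano[py, px]
```

When the ray origin coincides with the probe center there is no parallax and a single lookup suffices; the iterations matter precisely when the shading point is offset from the probe, as in refraction and inter-reflection.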
3D scene segmentation results on the Glass scene. Top left: Image segmentation results. Bottom left: Segmented scene represented by the original 3D-GS [Kerbl et al. 2023]. Right: Probes baked from the segmented scene.
Surface mesh reconstruction results of our method on the real-captured and synthetic datasets.
Detailed intermediate results of our method. Left: the environment and the corresponding GaussProbe. Right: maps of additional parameters.
Our method supports the rendering and navigation of scenes that integrate triangle meshes, traditional 3D-GS and transparent Gaussian primitives, as well as non-pinhole cameras.
@article{transparentgs,
  author  = {Huang, Letian and Ye, Dongwei and Dan, Jialin and Tao, Chengzhi and Liu, Huiwen and Zhou, Kun and Ren, Bo and Li, Yuanqi and Guo, Yanwen and Guo, Jie},
  journal = {ACM Transactions on Graphics},
  title   = {TransparentGS: Fast Inverse Rendering of Transparent Objects with Gaussians},
  year    = {2025}
}
The authors would like to thank the anonymous reviewers for their valuable feedback. This work was supported by the National Natural Science Foundation of China (No. 61972194 and No. 62032011) and the Natural Science Foundation of Jiangsu Province (No. BK20211147).