A quantized-diffusion model for rendering translucent materials

Abstract

We present a new BSSRDF for rendering images of translucent materials. Previous diffusion BSSRDFs are limited by the accuracy of classical diffusion theory. We introduce a modified diffusion theory that is more accurate for highly absorbing materials and near the point of illumination. The new diffusion solution accurately decouples single and multiple scattering. We then derive a novel, analytic, extended-source solution to the multilayer searchlight problem by quantizing the diffusion Green’s function. This allows the application of the diffusion multipole model to material layers several orders of magnitude thinner than previously possible and creates accurate results under high-frequency illumination. Quantized diffusion provides both a new physical foundation and a variable-accuracy construction method for sum-of-Gaussians BSSRDFs, which have many useful properties for efficient rendering and appearance capture. Our BSSRDF maps directly to previous real-time rendering algorithms. For film production rendering, we propose several improvements to previous hierarchical point cloud algorithms by introducing a new radial-binning data structure and a doubly-adaptive traversal strategy.
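The following Python sketch is only an illustration of the sum-of-Gaussians profile representation mentioned in the abstract, not the paper's quantized-diffusion construction: it approximates a radially symmetric diffusion profile R(r) as a weighted sum of normalized 2D Gaussians. The target profile, the geometric spacing of the variances, and the least-squares fit used here are placeholder assumptions standing in for the paper's quantization of the diffusion Green's function.

    # Illustrative sketch: R(r) ~= sum_k w_k * G(v_k, r), with
    # G(v, r) = exp(-r^2 / (2 v)) / (2 pi v) a normalized 2D Gaussian.
    # Target profile, variance spacing, and fitting method are assumptions,
    # not the quantized-diffusion weights from the paper.
    import numpy as np

    def gaussian_2d(v, r):
        """Normalized 2D Gaussian of variance v evaluated at radius r."""
        return np.exp(-r * r / (2.0 * v)) / (2.0 * np.pi * v)

    def fit_sum_of_gaussians(target, radii, variances):
        """Least-squares weights so sum_k w_k G(v_k, r) matches target(r)."""
        A = np.stack([gaussian_2d(v, radii) for v in variances], axis=1)
        w, *_ = np.linalg.lstsq(A, target(radii), rcond=None)
        return w

    def eval_profile(weights, variances, r):
        """Evaluate the fitted sum-of-Gaussians profile at radius r."""
        return sum(w * gaussian_2d(v, r) for w, v in zip(weights, variances))

    if __name__ == "__main__":
        # Placeholder target: a simple exponential falloff standing in for a
        # physically based diffusion profile (sigma_tr is an assumed value).
        sigma_tr = 10.0
        target = lambda r: np.exp(-sigma_tr * r) / np.maximum(r, 1e-3)

        radii = np.linspace(1e-3, 1.0, 512)        # sample radii (scene units)
        variances = np.geomspace(1e-5, 1e-1, 12)   # geometrically spaced variances
        w = fit_sum_of_gaussians(target, radii, variances)
        print("R(0.05) ~=", eval_profile(w, variances, 0.05))

A sum-of-Gaussians form is convenient for rendering because each Gaussian term can be evaluated or convolved separately, which is one of the properties the abstract refers to.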

Publication
SIGGRAPH 2011