Copyright © 2011 by the Association for Computing Machinery, Inc.
Permission to make digital or hard copies of part or all of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for commercial advantage and that copies bear this notice and the full citation on the
first page. Copyrights for components of this work owned by others than ACM must be
honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on
servers, or to redistribute to lists, requires prior specific permission and/or a fee.
Request permissions from Permissions Dept, ACM Inc., fax +1 (212) 869-0481, or e-mail permissions@acm.org.
I3D 2011, San Francisco, CA, February 18 – 20, 2011.
© 2011 ACM 978-1-4503-0565-5/11/0002 $10.00
Real-Time Volume Caustics with Adaptive Beam Tracing
Gábor Liktor∗   Carsten Dachsbacher†
Computer Graphics Group / Karlsruhe Institute of Technology
∗e-mail: gabor.liktor@kit.edu   †e-mail: dachsbacher@kit.edu
Figure 1: Our method renders surface and volume caustics using approximate beam tracing. These results demonstrate two-sided refractions, inhomogeneous participating media, as well as multi-bounce light-surface interactions rendered at real-time frame rates (27, 23, and 26 FPS for the three panels shown).
Abstract
Caustics are detailed patterns of light reflected or refracted by specular surfaces into participating media or onto other surfaces. In this paper
we present a novel adaptive and scalable algorithm for rendering
surface and volume caustics in single-scattering participating me-
dia at real-time frame rates. Motivated by both caustic mapping
and triangle-based volumetric methods, our technique captures the
specular surfaces in light-space, but traces beams of light instead
of single photons. The beams are adaptively generated from a grid
projected from the light source onto the scene’s surfaces, which is
iteratively refined according to discontinuities in the geometry and
photon distribution. This allows us to reconstruct sharp volume
caustic patterns while reducing sampling resolution and fill-rate at
defocused regions. We demonstrate our technique combined with
approximate ray tracing techniques to render surfaces with two-
sided refractions as well as multiple caustic light bounces.
CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional
Graphics and Realism
Keywords: volume caustics, beam tracing, specular effects, level-
of-detail
1 Introduction
Reflection, refraction, and the scattering of light as it interacts with
surfaces and different materials can result in interesting visual ef-
fects and stunning light patterns. However, simulating these ef-
fects is often computationally expensive and interactive rendering
of dynamic scenes with changing lighting, materials, and geometry
is challenging. In particular this holds for caustics and volumet-
ric effects, and of course for the combination of both. Methods
such as (bidirectional) path tracing [Veach 1997] and photon map-
ping [Jensen 2001] are widely used to compute accurate results,
however, not at interactive speed. Radiosity-based methods, such
as [Dachsbacher et al. 2007], and precomputed radiance transfer,
e.g. [Sloan et al. 2002], are typically not well suited to render caus-
tics, as the high frequency light transport forming the caustics needs
to be represented in the respective basis functions.
Two different approaches, however, allow for interactive rendering
of caustics: particle tracing, e.g. [Wyman and Nichols 2009; Hu
et al. 2010] (discussed in Sect. 2), and beam tracing methods [Ernst
et al. 2005]. Methods that fall within the latter category compute
beams of light reflected or refracted at a part of a surface, e.g. a
triangle in a mesh. Every beam is bounded by a base triangle and
three bilinear patches, and its contribution to the image is computed
by intersecting the beams with surfaces (surface caustics), and by
intersecting eye rays with the beam (volume caustics). Computing
the inscattered light for each ray-beam intersection is non-trivial, and the radiance along these “warped volumes” has been analyzed in depth by Ernst et al. [2005]. Liktor and Dachsbacher [2010]
demonstrated a variant of this method generating the warped vol-
umes and accumulating inscattered light entirely on the GPU, how-
ever, in a brute force manner with a huge number of beams and
high fill-rate consumption. Recently, Hu et al. [2010] described a
fast technique resorting to accumulating the inscattered light using
many lines (light rays) instead of beams, which, however, requires rendering a large number of lines to avoid undersampling problems.
We present a novel scalable method for rendering volume caustics
in single-scattering participating media which is based on adap-
tive beam tracing [Heckbert and Hanrahan 1984]. We render the
scene from the light source’s view to capture the locations of the
first light-surface interaction. We then generate the beams from this
projected grid and use approximate ray tracing algorithms to han-
dle two-sided refraction and multiple bounces. The beam genera-
tion adapts to geometric discontinuities of the specular objects, the
light’s intensity in the case of spot or textured light sources, and to
the beams’ contribution in the camera image. Our method does not
require any precomputation and handles fully dynamic scenes. We
also present a tight bounding volume computation that saves fill-
rate when accumulating the inscattered light from caustic beams.
2 Previous Work
In principle, rendering surface and volume caustics can be easily
accomplished using path tracing, but capturing the high frequency
caustics adequately requires bidirectional methods such as bidirec-
tional path tracing, metropolis light transport [Veach 1997], or pho-
ton mapping [Jensen 2001]. However, the prohibitively high ren-
der cost of these methods (for converged images) is typically only
affordable for off-line rendering. Recently, Wang et al. [2009] pre-
sented a method for interactive global illumination, including sur-
face caustic rendering, based on a fast GPU-based photon mapping
algorithm of Zhou et al. [2008].
Surface Caustics Most methods that render surface caustics at interactive speed work similarly to photon mapping and trace photons from the light sources until they hit a diffuse surface. Typ-
ically they avoid costly ray-triangle intersections by using image-
based approximations of the geometry, and the nearest neighbor
search by accumulation or splatting of the photons. The latter can take place in texture space [Szirmay-Kalos et al. 2005], image space [Dachsbacher and Stamminger 2006; Wyman and Dachsbacher 2006; Shah et al. 2007; Wyman and Nichols 2009], or the
objects’ coordinate system [Umenhoffer et al. 2008]. Photon splat-
ting typically suffers from the varying photon density resulting in
noise in regions that receive only a few photons. Wyman and Dachsbacher [2006] address this problem by adapting splat sizes. Recent work of Wyman et al. [2008; 2009] renders high-quality caustics by
hierarchical sampling of the caustic map. Umenhoffer et al. [2008]
reconstruct smooth caustics from multiple reflections and refrac-
tions (computed using layered distance maps [Szirmay-Kalos et al.
2005]) by rendering caustic triangles instead of splatting.
Ernst et al. [2005] render caustic beams by rasterizing a convex
bounding prism and using a pixel shader to accumulate the inscat-
tered light. Their algorithm yields high quality for surface and vol-
ume caustics, however, it does not handle occlusions and requires
CPU assistance. We improve on this method in several aspects by
computing tighter bounding volumes, supporting multiple bounces
of light, and adaptively generating caustic beams.
Yu et al. [2007] also present a geometric method based on caustic
surfaces. These surfaces are swept by the foci of light rays and thus
can be used to visualize caustic phenomena. Instead of using beams
to discretize rays, their approach uses the two-plane parameteriza-
tion of the General Linear Camera model [Yu and McMillan 2004]
(cross-slit camera) to estimate caustic surfaces by finite sets of rays.
Their characteristic equation for finding caustic surfaces can be also
interpreted as the area of the triangles formed by triplets of neigh-
boring rays, closely related to the area function used in this paper.
Figure 2: Caustic beams, formed by a base triangle (yellow) and three bilinear patches (blue), can represent warped, diverging, or converging bundles of rays (left). To compute the inscattered light from a beam one intersects the warped volumes with camera rays (right).

Volume Caustics Rendering is much more intricate if a participating medium is present. Sun et al. [2008] render caustics from refractions in materials with varying scattering properties at almost interactive speed. They trace photons and accumulate the out-scattered
radiance along a line segment (in between two refractions) into a 3D
grid. Photon tracing is accelerated by representing the refractive object with an octree. The final image is generated by ray march-
ing along primary rays through the grid. Eikonal rendering [Ihrke
et al. 2007] also achieves interactive rendering using wavefront
tracing, but requires precomputation and thus does not allow fully
dynamic scenes. Nisthita and Nakamae [1994] render underwater
volume caustics using a beam tracing variant. Iwasaki et al.[2002]
adapted this algorithm for GPUs, however, they used constant shad-
ing of the caustic beams resulting in blocky artifacts when using
too few beams. Recently, Hu et al. [2010] described a technique for
volume caustics which is based on tracing photons and rendering
line primitives in image space. Their method achieves interactive
to real-time performance, but can suffer from undersampling prob-
lems in regions where light rays are highly divergent. Most related
to our work, Liktor and Dachsbacher [2010] render volume caustics
using beams generated from a projected grid. However, the beam
generation is not adaptive as in our method, and the accumulation
of the beams consumes a vast amount of fill-rate due to the use of
suboptimal bounding prisms.
3 Overview
Generally speaking, the primary goal of all interactive caustics rendering methods is to decouple the caustic photon distribution starting at the light sources from the gathering of photons scattered towards the camera. The gathering of inscattered light is mostly
performed by rasterization, as it trivially solves the direct visibility
problem. Our work is motivated by the observation that accumulat-
ing caustic light contributions using bounding volumes or lines is
largely bound by the cost of the rasterization itself. Triangle-based
methods are usually more scalable in terms of fill-rate and screen
resolution, but less flexible due to their direct dependency on the
surface geometry.
We combine ideas of caustic mapping approaches with triangle-
based caustic rendering to provide adaptive sampling in the light’s
image space while reducing the rasterization cost as much as possi-
ble. Fig. 3 provides an overview of our algorithm. Akin to caustic
mapping, we first capture all directly visible surfaces from the light
source by rasterizing them into geometry buffers. Then we trans-
form this representation into a triangle-based one by resampling the
geometry buffer according to a 2D grid. Instead of using a regular
grid with a resolution fine enough to faithfully represent all surface
details, we start with a coarser resolution and adaptively refine the
grid in subsequent steps. Each triangle in the sampling grid is a
base of a caustic beam, which we trace across surfaces to accumu-
late focused light in the volume.
For every beam we then need to determine the radiance scattered
towards the camera depending on the phase function and density
of the participating media. For this we rasterize bounding volumes
of the caustic beams and perform the integration of the inscattered
light in a pixel shader.
When also considering occlusion and higher order caustics, we
store the primary beams after generation for later passes.
The
beams are then intersected with the surrounding scene geometry us-
ing approximate ray tracing [Szirmay-Kalos et al. 2005]. The chal-
lenging part of higher order beam tracing is the non-trivial splitting
of beams where a surface is partially intersected. We handle this
problem similarly to the primary adaptive grid refinement, however,
using scene geometry impostors instead of the light space geometry
buffer.
[Figure 3 diagram labels: caustic geometry buffers, caustic triangle buffer, screen-space G-buffers, vertex + geometry shader, pixel shader; stages A-D]
Figure 3: The outline of our algorithm. The caustic generator surfaces are rendered into a light G-buffer (A). A regular grid is projected
to the sampled geometry (B), then adaptively refined based on surface discontinuities and camera distance (C). Finally, the beam bounding
volumes are extruded in the geometry shader, and accumulated by rasterization to the screen (D). Screen-space G-buffers provide the data
for culling the beams and rendering surface caustics.
4 Beam Generation
The adaptive beam generation from rasterized surfaces in the light’s
G-buffer is the key component of our method. In this section we
detail the adaptive refinement process, and the formation of tight
bounding volumes.
4.1 Hierarchical Projected Grid
In the first pass of our algorithm, we rasterize the specular surfaces
as seen from the light source to off-screen geometry buffers storing
positions, normals, and material properties (including transparency,
index of refraction, etc.). This essentially provides a simple 2D
parameterization over the directly lit surfaces. Note that this step
is identical to the initial phase of standard caustic mapping algo-
rithms, but instead of originating caustic photons at the sampled
locations, we use this texture parameterization to project a planar
grid to the lit surfaces. Taking a coarse, regular grid with the same
texture parameterization, the projection step becomes trivial as each
vertex can be moved to the position sampled from the correspond-
ing G-buffer location.
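The projection step can be sketched as follows. This is a minimal CPU-side Python illustration (the paper's implementation runs on the GPU); `gbuffer(u, v)` is a hypothetical lookup that returns the sampled world-space position, or None where no specular surface was rasterized:

```python
def project_grid(gbuffer, n):
    """Project an (n x n)-cell regular grid onto the lit surfaces by moving
    each grid vertex to the position stored at the corresponding G-buffer
    texel.  gbuffer(u, v) returns a world-space position for covered texels
    and None elsewhere (both names are hypothetical)."""
    verts = {}
    for j in range(n + 1):
        for i in range(n + 1):
            verts[(i, j)] = gbuffer(i / n, j / n)
    # Each grid cell is treated as two triangles; triangles touching an
    # uncovered texel (outside the object's silhouette) are omitted.
    tris = []
    for j in range(n):
        for i in range(n):
            quad = [(i, j), (i + 1, j), (i + 1, j + 1), (i, j + 1)]
            for tri in (quad[:3], [quad[0], quad[2], quad[3]]):
                if all(verts[k] is not None for k in tri):
                    tris.append([verts[k] for k in tri])
    return tris
```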
We can regard this projected grid as a downsampled, piecewise
linear approximation of the lit surfaces. The main advantage of
capturing the surfaces in this way is the decoupling of geometric
complexity from the actual caustic generation process: the sampled
mesh can have densely triangulated as well as coarse regions which
do not necessarily coincide with the caustic sampling. Instead of re-
quiring surfaces to be modeled with uniform mesh density, we use
this new “light-space approximation” to generate the primary caus-
tic beams due to the first light-surface interaction. Each cell of the
initial regular grid is treated as two triangles to later form caustic
beams with triangular bases. Naturally, invalid triangles crossing
the silhouettes of specular objects or lying outside are omitted.
Of course, projected grids have limitations as well: the boundaries
of generated beams typically do not match the silhouettes (discon-
tinuities) of the surfaces. We can largely suppress these problems
using adaptive subdivision, however, at the expense of generating
more beams. The goal of our hierarchical projected grid method is
to generate caustic beams that adapt not only to discontinuities of
the specular object, but also to the light’s emission characteristics,
and the beam’s contribution to the final rendered image.
4.2 Split Heuristics and Refinement
After generating potential caustic triangles from the initial coarse
grid we perform adaptive refinement that first detects undersam-
pled regions in the grid, and splits the corresponding grid triangles
into smaller ones. At the core of this procedure is an oracle decid-
ing whether a beam should be kept, subdivided or discarded. The
decision is based on an empirically set-up heuristic depending on
three factors:
• Discontinuities of the Specular Surface To render detailed
caustics we refine beams if we detect that they span geomet-
ric discontinuities. For this we compare the sampled surface
parameters, i.e. normal and depth, at the location where the
beam edges start, and split the triangles if they differ signifi-
cantly (similar to [Nichols et al. 2009]).
• Beam Energy We also account for the beams’ energy and
their contribution to the final image. Divergent beams are typ-
ically unproblematic (unless their energy is very high) as their
contribution is distributed rather evenly across their projected
screen area. Convergent beams, however, focus the light and
are refined with higher priority. In case of non-isotropic light-
sources, e.g. spotlights or textured lights, we could account
for lighting discontinuities in this step as well.
• Camera Parameters Lastly, we consider the relative position of the beams to the camera. Beams closer to the viewer are refined further than those that are far away or point away from the camera. For this we would have to consider the spa-
tial extent of the entire beam, and not only the caustic triangle.
As the exact geometry of the beam is not yet known in this
step, we use a simple but well working estimate: we find the
closest point to the camera on the beam’s central ray (average
of the three beam edges), and we then base our decision on
the distance of this point to the camera.
The whole refinement process is implemented in a geometry shader, which allows outputting a variable number of vertices and streaming data back to GPU memory. In our experiments, 3 to 4 refinement steps proved to be sufficient for most cases (Fig. 4).
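The three criteria above can be combined into an oracle roughly like the following Python sketch. The thresholds (depth_eps, normal_eps, far_dist, min_energy) are illustrative placeholders, not the paper's tuned values:

```python
def beam_oracle(depths, normals, energy, converging, cam_dist,
                depth_eps=0.05, normal_eps=0.9, far_dist=50.0,
                min_energy=1e-4):
    """Decide whether a caustic beam is kept, subdivided, or discarded,
    from the depths and unit normals sampled at its three edge origins,
    its energy, whether it converges, and its distance to the camera."""
    if energy < min_energy:
        return "discard"                 # negligible contribution
    if cam_dist > far_dist:
        return "keep"                    # too far away to warrant refinement
    # Geometric discontinuities: depth or normal variation across the base.
    if max(depths) - min(depths) > depth_eps:
        return "subdivide"
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    if min(dot(normals[0], normals[1]),
           dot(normals[1], normals[2]),
           dot(normals[0], normals[2])) < normal_eps:
        return "subdivide"
    if converging:                       # focused light: refine with priority
        return "subdivide"
    return "keep"
```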
4.3 Bounding Volume Extrusion
Finally, we have to invoke the pixel shader on fragments influenced
by each caustic beam. The previously refined grid and the light source define incident beams on the caustic surface. These beams may be both reflected and refracted at the surface, originating the
caustic beams. For now, let us assume that we generate one caustic
beam for every base triangle, i.e. the surfaces are reflective, but not
transparent. We will extend this to surfaces that are both reflective
and refractive and to multiple bounces of beams in Sect. 6. If the
vertices of the base triangle are denoted as v
0
, v
1
, v
2
, and the
edges of the beam (the reflection or refraction direction of the light
impinging on the caustic triangle’s vertices) as d
0
, d
1
, d
2
, then the
beam is bounded by the edges v
i
+ t
i
d
i
, i ∈ {0..2}; where two
of these edges form a bilinear patch. Now we need to generate a
bounding volume for every beam that can actually be rasterized, as
bilinear patches cannot be rendered directly.
Extrusion Length In order to compute the bounding volumes we
need to first determine the required extrusion length of the beams. A
trivial maximum length can be computed by intersecting the beams
with the view frustum, but we also handle occlusion from other
objects in the scene. For this we use approximate ray tracing, as de-
scribed by Szirmay-Kalos et al. [2005], where the geometry shader
intersects the beam edges with surrounding geometry to determine
the extrusion length. Furthermore, we only want to rasterize caustic beams that have a significant contribution to the resulting image: if a beam is divergent, there is a distance beyond which we consider its contribution negligible, and we therefore cap the extrusion length using an energy approximation.
Area Function As we describe in Sect. 5, the caustic radiance de-
pends on the change of the cross section along the beam. Where the
cross section parallel to the caustic triangle is smaller than its area,
the rays get focused. The area of the caustic triangle can be easily
computed using the cross product of the edges (in fact the length of
the cross product is the double of the area). In order to approximate
the caustic radiance, we need a way to evaluate the cross-section of
the beam at any distance along the rays. Ernst et al.
[2005] ap-
plied a special coordinate system, which we will call beam space
in this paper, where the cross-section calculation can be performed
efficiently. Here we briefly overview the idea for completeness, for
details please refer to [Ernst et al. 2005]. The coordinate system
is chosen so that the origin coincides with one vertex of the caustic
triangle, the y-axis is the triangle normal, and the beam direction
vectors are scaled so that their y-coordinate is one. Using this coor-
dinate system, it is possible to express the area of the parallel cross-
section of the beam as a second order expression A(y) over the
local y-coordinate. We precompute the three coefficients of A(y)
during the beam-generation step, so later on we can evaluate it with
a single dot product.
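The coefficients of A(y) can be derived directly: in beam space the cross-section vertices at height y are v_i + y·d_i, so the signed triangle area built from their (x, z) components is quadratic in y. A Python sketch of the precomputation (the shader evaluates the equivalent polynomial; function names here are ours, not the paper's):

```python
def area_coeffs(v, d):
    """Coefficients (a, b, c) of the signed cross-section area
    A(y) = a*y^2 + b*y + c in beam space: the base triangle lies in the
    y = 0 plane and the directions d_i are scaled so their y-component
    is 1.  v and d hold the (x, z) pairs of the three beam edges."""
    e1 = (v[1][0] - v[0][0], v[1][1] - v[0][1])   # base edge 1
    e2 = (v[2][0] - v[0][0], v[2][1] - v[0][1])   # base edge 2
    f1 = (d[1][0] - d[0][0], d[1][1] - d[0][1])   # direction differences
    f2 = (d[2][0] - d[0][0], d[2][1] - d[0][1])
    cross = lambda p, q: p[0] * q[1] - p[1] * q[0]
    c = 0.5 * cross(e1, e2)                       # A(0): base triangle area
    b = 0.5 * (cross(e1, f2) + cross(f1, e2))
    a = 0.5 * cross(f1, f2)
    return a, b, c

def area(coeffs, y):
    a, b, c = coeffs
    return (a * y + b) * y + c
```

For a parallel beam (all d_i equal) the area stays constant; for a diverging beam it grows quadratically, as expected.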
Bounding Volumes Prior to rasterization, a geometry shader pro-
cesses each caustic triangle and emits new triangles forming the
bounding geometry of the caustic volume. However, the extrusion
is non-trivial, as the sides of the caustic beams are bilinear patches,
which cannot be represented by a finite set of triangles in the gen-
eral case. Ernst et al. [2005] proposed to set up touching planes for
each side of the beam: three touching planes are constructed, each
containing one edge of the base triangle, and one of the top vertices
such that the entire beam lies in the negative half space of the plane
(Fig. 5). The intersection of these planes then yields the edges of
the bounding volume.
Improved Bounding Volumes Accumulating the beams’ contri-
bution consumes a considerable amount of fill rate, and thus it is of
utmost importance to keep the number of rasterized fragments for
the individual beams as low as possible.

[Figure 4 panels - left column (static regular grids): 16 x 16, 32 x 32, 64 x 64; right column: 16 x 16 regular grid with 1, 2, and 3 adaptive steps; per-panel beam counts and timings: 90 beams / 4.2 ms, 314 beams / 8.16 ms, 383 beams / 9.7 ms, 1739 beams / 22 ms, 687 beams / 11.4 ms, 1374 beams / 16 ms]

Figure 4: Quality and performance of the caustics rendering method using different grid resolutions and refinement steps. The left side uses static projected grids (equivalent to Liktor and Dachsbacher [2010]), while the right side shows the progress of our adaptive method using the smallest regular grid as a basis. The small insets visualize the beam origins in the light's image space (please zoom in the electronic version). The caustic beams in the blank areas were discarded due to total internal reflection. The time values are for the volume and surface caustic effects only.

The bounding volume gen-
eration by Ernst et al. [2005] works well for slightly warped beams,
but generates overly large volumes in the worst case as depicted in
Fig. 5. It is easy to recognize that the bounding volumes could be
much tighter if one would perform the same procedure using the
top edges as base edges of the bounding prism, then intersect the
two prisms with each other. We use a simplification to make this
computationally intensive task practical: as each bounding plane is
defined by one triangle edge and one vertex of the opposite triangle,
we can take the formed triangle directly as one side of the bounding
volume. Thus the final bounding volume is defined by six triangles,
selected for each edge independently. This method greatly reduces
fill-rate, however, suffers from numerical problems: the bounding
volume might have multiple consistent triangulations (i.e. one side
of the beam is close to planar), but we select each triangle independently. Instead of performing a costly test to ensure a watertight bounding volume, we revert to the bounding prism method when we find multiple triangle candidates for any edge.
The bounding prism method yields the worst approximation to the beam boundaries when the beam is strongly warped. This is usu-
ally the case when the beam is focused so that the top triangle “flips
over” compared to the bottom one. Our method results in a tighter
convex hull with the same number of triangles, significantly reduc-
ing the fill-rate in such cases (Fig. 6).
Figure 5: Computing tight bounding volumes for warped beams is non-trivial. Left, middle: Ernst et al. propose to set up three touching planes, each containing one edge of the base triangle and one vertex of the top triangle (e.g., the right touching plane is defined by v_1, v_2, v_0′). Right: we select 6 triangles using the edges of the two triangles and consequently one vertex for each on the opposite triangle. In case of a strongly warped beam this can greatly reduce the pixel coverage of the volume.
5 Inscattering and Surface Caustics
In the last step of our method, the fragment shader computes the
amount of radiance scattered towards the viewer. In principle, every
beam is intersected (at every fragment) with a camera ray and the
contribution is (Fig. 7):

L(ω) = ∫_Δx e^(−σ_t(s_1+s_2)) σ_s(x) p(ω · ω′) L_in(ω′) ds

where L(ω) is the radiance scattered towards the viewer, Δx is the interval of the ray-beam intersection, σ_t the extinction coefficient, σ_s(x) the scattering coefficient at a point x, p(·) is the phase function, and s_1 + s_2 is the length of the light's path from the caustic triangle to the camera. Assuming that the beams are narrow enough in image space, a single sample in the integration domain is sufficient to approximate this integral with a low (and imperceptible) error:

L(ω) ≈ Δx e^(−σ_t(s_1+s_2)) σ_s p(ω · ω′) L_in(ω′)
For the evaluation, the shader has to perform the following steps for
every fragment:
• Get the intersection of the beam (not the bounding volume)
and the viewing ray.
• Compute the incident radiance at the sample on the ray and
the scattered fraction thereof towards the camera.
5.1 Beam Intersection
When the bounding volumes emitted by the geometry shader are
rasterized, we need to determine if the camera ray corresponding
to each fragment intersects the beam, and if so the location of the
intersection. Ramsey et al. [2004] describe an exact computation
of ray-bilinear patch intersection, however, it is too costly for a
large number of caustic beams (in our typical scenes the number of beams is in the thousands). Instead we resort to the scan plane-based
estimate [Ernst et al. 2005] (Fig. 7). The scan plane is an auxiliary
construction, in principle an arbitrary plane containing the camera
ray. The intersections of the scan plane and the edges of the caustic
beam form a triangle containing the caustic ray segment of length
∆x. The inaccuracy of this method stems from the fact that the
edges of the scan plane triangle are actually curved. This error can
be minimized by choosing the normal vector of the plane to be close
to the normal of the caustic triangle:
n_p = r_view × (r_view × n_Δ)
where r_view is the viewing ray and n_Δ the triangle normal. The desired length of the ray segment Δx can then be obtained by a cheap 2D ray-triangle edge intersection.

[Figure 6 panels: 1050 beams; tight volumes (38 FPS) vs. bounding prisms (32 FPS); fragments-drawn color scale 0-80]

Figure 6: As the rendering of caustics is highly pixel shader bound, it is crucial to minimize the number of fragments rasterized for each beam. Top: a glass sphere generates strongly focused caustic beams. This is a typical scenario where the bounding prism method of Ernst et al. [2005] generates looser volumes than our 6-triangle method. We have visualized the fill-rate of our volume generation (bottom left) and the bounding prism method (bottom right), rendering equivalent images.
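The scan-plane normal computation is a pair of cross products; a minimal Python sketch (the subsequent 2D edge intersection is omitted):

```python
def scan_plane_normal(r_view, n_tri):
    """Normal of the scan plane containing the camera ray:
    n_p = r_view x (r_view x n_tri).  The result is orthogonal to the
    viewing ray and aligned (up to sign) with the component of the
    caustic-triangle normal perpendicular to the ray, which minimizes
    the error from the curved scan-triangle edges."""
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    return cross(r_view, cross(r_view, n_tri))
```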
5.2 Radiance Estimation
After having determined an intersection of a view ray and a beam,
we need to compute the radiance scattered towards the viewer. The
amount depends on the reflected or refracted radiance at the spec-
ular surface and the cross section of the beam at the intersection
location. If we neglect attenuation in the participating media (here only for simplicity of the explanation), then the radiant flux due to the caustic triangle Δ_c,

Φ(Δω, Δ_c) = ∫_{Δ_c} ∫_{Δω} L(p, ω) cos θ dω dA,

would be constant along the beam (p is a point on the infinitesimal surface element, θ is the angle between n_Δ and ω). Caustics are formed due to the narrowing of the beams, and thus increased radiance. Note that we do account for attenuation in our implementation.
We compute the inscattered radiance as described by Ernst et al. [2005]. To perform the intersection, the fragment shader requires a considerable amount of data: v_i, d_i, the area of (v_0, v_1, v_2), and the radiance values at the base triangle's vertices. It is important to
tightly pack this information into vertex attributes in the preceding geometry shader.

Figure 7: Intersection of a caustic beam with a camera ray. The contribution of the beam to the image is computed using the intersection length Δx = ||p_near − p_far||, the cross section at the vertices of the scan triangle p_i, and the distance s_1 from the caustic triangle.
To intersect the beams we first transform the view ray to the beam’s
local space introduced in Sect. 4.3. The fragment shader takes a sin-
gle representative point on ∆x (in our implementation we simply
take the midpoint) and then evaluates the area function A(y). This
cross section area determines the radiance scaling according to the
(de)focus of the beam. Finally the shader computes the fraction of
light scattered towards the viewer according to the scattering in the
participating medium.
Caustic beams rendered in this manner suffer from clipping arti-
facts: when intersecting solid surfaces, the volumetric effect sud-
denly disappears as the rasterized fragments are occluded. We solve
this problem by applying a smoothing in the spirit of soft-particle methods [Lorach 2007]: using the difference between the screen
space depth of the caustic receiver and the depth value of the beam’s
faces we can smoothly attenuate the caustic effect to avoid discon-
tinuities.
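The attenuation can be sketched as a saturated depth difference in the spirit of soft particles; fade_dist is an illustrative tunable, not a value from the paper:

```python
def soft_clip(depth_receiver, depth_beam, fade_dist):
    """Soft-particle style factor: fade the beam's contribution as its
    fragment depth approaches the occluding receiver surface, so the
    volumetric effect does not end in a hard clip."""
    t = (depth_receiver - depth_beam) / fade_dist
    return max(0.0, min(1.0, t))   # saturate to [0, 1]
```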
5.3 Surface Caustics
Our method can also render surface caustics using the very same
beams as for volume caustics. For this we generate an additional
geometry buffer for the camera view prior to caustics rendering.
This buffer holds the diffuse surface colors and view-space depth
values (z-coordinate) of the caustics receivers. In order to add sur-
face caustics to our method we only need to extend the aforemen-
tioned caustic fragment shader: during rasterization we sample the
camera-view geometry buffer and perform point-in-volume tests on
each surface point covered by the beam, similarly to [Ernst et al.
2005]. First we intersect the beam with a plane parallel to the caus-
tic triangle. This is trivial, as the y coordinate of the surface point
defines its distance along the caustic triangle normal in the local beam space (refer back to Sect. 4.3). If the point is inside the resulting
triangle, we use the barycentric coordinates to interpolate between
the initial radiance values of the beam edges, and A(y) to determine
the amount of relative radiance change. Note that this screen-space
method is completely independent from the receiver surfaces and
automatically adapts to high frequency details (in contrast to splat-
ting methods, where the splatting resolution needs to be increased
at high frequency details).
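The point-in-volume test and interpolation can be illustrated as follows; a Python sketch assuming the receiver point has already been transformed into beam space and the beam edges are given as ((x, z) vertex, (x, z) direction) pairs, with the area function A(y) of Sect. 4.3 passed in:

```python
def surface_caustic(point_beam_space, edges, radiances, area_fn):
    """Surface-caustic contribution at a receiver point in beam space
    (y = distance along the caustic-triangle normal).  The cross-section
    at height y is a triangle; barycentric coordinates interpolate the
    vertex radiances, and A(0)/A(y) scales them by the beam's (de)focus.
    Returns None when the point lies outside the beam."""
    x, y, z = point_beam_space
    # Cross-section triangle at height y, as 2D (x, z) points.
    p = [(vx + y * dx, vz + y * dz) for (vx, vz), (dx, dz) in edges]
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    full = cross(p[0], p[1], p[2])
    if abs(full) < 1e-12:
        return None
    w0 = cross((x, z), p[1], p[2]) / full
    w1 = cross((x, z), p[2], p[0]) / full
    w2 = 1.0 - w0 - w1
    if w0 < 0 or w1 < 0 or w2 < 0:
        return None                        # outside the cross-section
    L = w0 * radiances[0] + w1 * radiances[1] + w2 * radiances[2]
    return L * area_fn(0.0) / area_fn(y)   # focus/defocus scaling
```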
6 Multi-Bounce Caustics
So far we have considered beams originating from the first light-surface interactions. Our method easily extends to multiple surface inter-
actions, including refractive and reflective surfaces, two-sided re-
fraction for transparent objects, and also multiple subsequent light
bounces across the scene.
Reflection and refraction Transparent objects typically both re-
flect and refract light. Note that it is sufficient to run the refinement
process once for both reflection and refraction, since the heuristic
only depends on the geometry, but not the material properties. Af-
ter refinement, we compute the reflection direction for every inci-
dent beam’s edges to create a new beam representing the reflection
caustics. Since transparent objects typically refract light on both
the front and back side, we handle two-sided refractions directly:
we first compute the refraction beam at the first intersection, and
immediately compute the second refraction. To intersect rays from
inside the objects, we use a distance impostor: a cube map storing
the surface normals and distance from the reference point (typically
the object’s center) [Szirmay-Kalos et al. 2005]. Computing both
refractions at once speeds up the rendering; however, caustics inside
the object cannot be rendered.
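For illustration, refracting a beam edge at each of the two surfaces uses the standard vector form of Snell's law; a minimal sketch (the distance-impostor lookup that yields the second hit point and its normal is omitted here):

```python
import math

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n
    (pointing against d), with eta = n_incident / n_transmitted.
    Returns None on total internal reflection."""
    cos_i = -sum(di * ni for di, ni in zip(d, n))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    cos_t = math.sqrt(k)
    return tuple(eta * di + (eta * cos_i - cos_t) * ni
                 for di, ni in zip(d, n))
```

Two-sided refraction then chains two calls: first with eta = 1/n at the front face, then with eta = n at the back face using the normal fetched from the distance impostor.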
Multiple surface interactions The input of the beam rasteriza-
tion step can also be used to take the simulation one step further
and compute higher-order light bounces. Having the scene raster-
ized into impostors (using a representative center of projection),
we can detect whether a caustic beam intersects other specular sur-
faces. Note that we can use a slightly modified version of the beam
splitting oracle we introduced in Sect. 4. This allows us to split
secondary beams at silhouettes (which is often intricate with beam
tracing). Instead of examining the contents of the light's geometry
buffer, we now compare the photon hits and surface normals at the
beam edges to detect depth discontinuities. Fig. 1 (right)
demonstrates multiple refractions of a beam, focused by a glass
sphere, on the water surface.
7 Implementation
We have implemented our method using Direct3D 11 and HLSL
(the algorithm only requires a Shader Model 4 architecture). In this
section we briefly summarize implementation-specific details of
the above procedures.
Beam generation The generation of the primary caustic beams
takes place in light-space and the adaptive subdivision is performed
globally for the first reflective and refractive bounce. For this we
use a geometry shader that is executed recursively (via stream-out
feedback) for each refinement step. Instead of processing triangle
primitives, we pack the vertices of each beam triangle into a single
point primitive. This decision has a positive effect on the performance
of the subdivision for two reasons. First, the vertex shader
can also do part of the triangle processing, taking some of the burden
off the geometry shader. Second, we expected that handling
the data at this granularity hints to the pipeline that the respective
attributes belong together, and the geometry shader outputs 0-4
vertices instead of 0-12.
In all our examples, 3-4 iterations produced pleasing results for a
coarse initial 16 × 16 grid. The beam extrusion is performed sep-
arately for reflection and refraction in the beam-rasterization pass,
using the same adaptive base grid. If multiple bounces are enabled,
an additional stream-out pass is necessary to determine the inter-
sections with the environment. After having computed these in-
tersections, the adaptive beam refinement is restarted as previously
described.
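On the CPU, this stream-out feedback loop can be pictured as follows (a schematic, not our HLSL code; the oracle and split functions stand in for the refinement heuristic of Sect. 4):

```python
def refine(beams, needs_split, split, iterations=3):
    """Simulates the recursive geometry shader passes: in each pass
    every beam primitive is either kept (one output) or replaced by
    its children (up to four outputs), mirroring the 0-4 vertex
    stream-out per input point primitive."""
    for _ in range(iterations):
        out = []
        for beam in beams:
            out.extend(split(beam) if needs_split(beam) else [beam])
        beams = out
    return beams
```

For example, starting from one coarse cell and splitting whenever the oracle fires yields the adaptively refined set after three passes.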
Surface caustics Ernst et al. [2005] smoothly interpolate between
neighboring caustic volumes using adjacent caustic triangles:
instead of a single area function, one area function is used per
triangle vertex and averaged over the adjacent caustic beams. One
limitation of our current implementation is that we use a single area
function per beam. This is due to the adaptive subdivision in the
geometry shader, where triangle adjacency information is lost on
the GPU after stream-out.

Figure 8: The final image (E) is composited of the direct illumination
of surfaces (A), the volumetric lighting with shadows (B), the
volume caustics (C) and the surface caustics (D). Note that we render
surface caustics separately to perform low-pass filtering.

As a result of using only one area function, small
discontinuities appear at beam boundaries when using
coarser grids. This is insignificant in the case of volume caustics,
but causes sharp edges on the generated surface caustics. As our
method primarily targets volume caustics rendering, we can still
use the implicit surface caustics, but we have to apply low-pass
filtering to remove the sharp edges from the light patterns. Note that
we only apply the filter to the surface caustics. The advantage of
this method is that the surface caustics come virtually "for free",
since they are directly evaluated during beam rasterization. When
high-quality surface caustics are required, we recommend combining
our method with caustic photon splatting methods, such as [Wyman
and Nichols 2009].
Volumetric Lighting As volume caustics are not plausible with-
out the presence of lit participating media in the image, we have
implemented additional volumetric illumination techniques. The
final image is composed of: the direct surface illumination term,
the lit participating media with volumetric shadows, and two caus-
tic buffers (Fig. 8). We render surface caustics to a separate render
target to apply the aforementioned low-pass filtering.
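As a sketch of this compositing step (the buffer names and the 1D box filter are placeholders for illustration; only the surface-caustics buffer is low-pass filtered):

```python
def box_blur_1d(buf, radius=1):
    """Minimal 1D box filter, standing in for the low-pass filter
    applied to the surface-caustics buffer."""
    n = len(buf)
    out = []
    for i in range(n):
        window = buf[max(0, i - radius):min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def composite(direct, volumetric, volume_caustics, surface_caustics):
    """Final image = direct lighting + lit media + volume caustics
    + low-pass filtered surface caustics (per Fig. 8)."""
    filtered = box_blur_1d(surface_caustics)
    return [a + b + c + d for a, b, c, d in
            zip(direct, volumetric, volume_caustics, filtered)]
```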
8 Results and Discussion
Table 1: Performance results on the test scenes. The table lists the
time spent on the generation of beam volumes (geometry processing)
and on their rasterization in milliseconds, the number of beams, and
the total frame rate.

                      Skull              Smoke              Water
                  Regular Adaptive   Regular Adaptive   Regular Adaptive
Geometry (ms)      11.26    6.04      14.4     7.7       21.8    14.0
Rasterization (ms)  7.94    6.16      26.6    12.1       19.0     9.7
Caustics (ms)      19.2    12.2       41.0    19.8       40.8    23.7
# Beams            2646    2045       1617     492       5300    3300
Total FPS            22      26         15      22         16      22

Figure 1 shows three of the test scenes we have used to evaluate our
method. All measurements were made on a test platform
equipped with an NVIDIA GeForce GTX 470 GPU and an Intel
Core i7 920 CPU (Table 1). The featured images were all rendered
at a resolution of 1024 × 768 pixels. The resolution of the initial
coarse grid is 16 × 16, and we performed three steps of adaptive
subdivision. To evaluate the performance gain of our method com-
pared to static beam generation [Liktor and Dachsbacher 2010], we
have also rendered the same images on a 64 × 64 regular grid. This
resolution is a subjective choice, which we have found to be the
closest match to the adaptively generated images. As the relatively
simple implementation of the finely sampled volumetric lighting
takes a considerable amount of rendering time, we measured the
timings of the caustics rendering separately, in addition to the total
frame rate.
• The glass skull scene contains a simple environment receiv-
ing caustics from the translucent model; only single-bounce
refraction caustics are rendered. This is the most frequent
scenario for real-time caustics, where the frame budget for
additional global illumination effects is low.
• The water scene demonstrates a complex, multi-bounce ef-
fect when refracted caustic beams are refracted once more
on the water surface. Underwater caustics are also challenging
in terms of performance, as hardly any caustic beams are
occluded and thus the fill-rate is very high.
• The smoke scene demonstrates volumetric caustics in inho-
mogeneous participating media. When evaluating the incident
radiance, the fragment shader performs ray-marching inside
the participating media to numerically approximate the vol-
ume rendering equation.
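Such a ray-marching loop can be sketched as follows (a hedged illustration; the step count, the midpoint sampling, and the sigma_t/source callables are our assumptions, not the actual fragment shader):

```python
import math

def ray_march(origin, direction, length, sigma_t, source, steps=64):
    """Numerically approximates single scattering along a ray:
    accumulates the in-scattered radiance 'source' weighted by the
    transmittance through the extinction field 'sigma_t'."""
    dt = length / steps
    transmittance = 1.0
    radiance = 0.0
    for i in range(steps):
        # Sample the medium at the midpoint of the current segment.
        t = (i + 0.5) * dt
        x = tuple(o + t * d for o, d in zip(origin, direction))
        radiance += transmittance * source(x) * dt
        transmittance *= math.exp(-sigma_t(x) * dt)
    return radiance, transmittance
```

In a homogeneous medium with no in-scattering this reduces to pure Beer-Lambert attenuation along the ray.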
One of the ideal examples for the advantages of the adaptive scheme
is the water scene, where both refractors are relatively smooth sur-
faces, but there is a sudden discontinuity at the silhouette of the
sphere as seen from the light source. While a regular grid can only
capture the surface borders at a high resolution, our method
increases the number of beams in the boundary regions while keeping
the resolution low in the remaining parts.
Obviously, if high detail is required everywhere, then the adaptive
subdivision does not significantly reduce the total number of beams.
This is the case in the skull example, where the refractive geometry
contains high-frequency details (teeth, eye sockets). As the gen-
eration of the beams is a geometry-intensive process, the render-
ing of very high-frequency refractors might perform better using
an approach similar to [Hu et al. 2010]. On the other hand, the
approximation of volume caustics with solid beam volumes always
provides a continuous effect and does not suffer from line aliasing
artifacts.
9 Conclusion and Future Work
In this paper we presented a real-time method for rendering volume
and surface caustics with multiple reflections and refractions. It is
based on adaptive beam tracing and can be entirely implemented on
programmable graphics hardware. We showed how to adaptively
generate and split the beams, and compute tight bounding volumes
for an efficient accumulation of the beams’ contribution.
Presumably the primary source of inaccuracy in our method is the
approximate ray tracing that is used for multiple bounces, as
impostors are generally not able to capture all surfaces in the scene.
Although this can be sidestepped, e.g. using depth peeling, the
emergence of fast methods for constructing spatial index structures
for triangle-based ray tracing suggests relying on those in the
future [Grün 2010].
An interesting direction for future work is the replacement of the
geometry shader stage in the adaptive grid refinement step with a
compute shader kernel. This would allow us to keep track of the
topology of the grid by maintaining a spatial indexing structure.
Using the adjacency information, we could generate smooth transitions
among the beams and thus significantly improve the quality of the
surface caustics.
Acknowledgements
The first author of this paper has been funded by Crytek GmbH.
References
DACHSBACHER, C., AND STAMMINGER, M. 2006. Splatting indirect illumination. In Proc. of the 2006 Symposium on Interactive 3D Graphics and Games, 93-100.

DACHSBACHER, C., STAMMINGER, M., DRETTAKIS, G., AND DURAND, F. 2007. Implicit visibility and antiradiance for interactive global illumination. In ACM Trans. on Graphics (Proc. of SIGGRAPH '07), vol. 26(3), 61.

ERNST, M., AKENINE-MÖLLER, T., AND JENSEN, H. W. 2005. Interactive Rendering of Caustics using Interpolated Warped Volumes. Proc. of Graphics Interface, 87-96.

GRÜN, H., 2010. Direct3D 11 indirect illumination. Presentations at Game Developers Conference.

HECKBERT, P. S., AND HANRAHAN, P. 1984. Beam Tracing Polygonal Objects. Computer Graphics (Proc. of SIGGRAPH '84), 119-127.

HU, W., DONG, Z., IHRKE, I., GROSCH, T., YUAN, G., AND SEIDEL, H.-P. 2010. Interactive Volume Caustics in Single-scattering Media. In Proc. of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 109-117.

IHRKE, I., ZIEGLER, G., TEVS, A., THEOBALT, C., MAGNOR, M., AND SEIDEL, H.-P. 2007. Eikonal Rendering: Efficient Light Transport in Refractive Objects. In ACM Trans. on Graphics (Proc. of SIGGRAPH '07), vol. 26(3), 59.

IWASAKI, K., DOBASHI, Y., AND NISHITA, T. 2002. An Efficient Method for Rendering Optical Effects Using Graphics Hardware. Computer Graphics Forum 21, 4, 701-712.

JENSEN, H. W. 2001. Realistic Image Synthesis using Photon Mapping. A. K. Peters, Ltd.

LIKTOR, G., AND DACHSBACHER, C. 2010. Real-Time Volumetric Caustics with Projected Light Beams. Proc. of the 5th Hungarian Conf. on Computer Graphics and Geometry, 12-18.

LORACH, T., 2007. Soft Particles. NVIDIA White Paper.

NICHOLS, G., SHOPF, J., AND WYMAN, C. 2009. Hierarchical Image-Space Radiosity for Interactive Global Illumination. Computer Graphics Forum 28, 4, 1141-1149.

NISHITA, T., AND NAKAMAE, E. 1994. Method of Displaying Optical Effects within Water using Accumulation Buffer. SIGGRAPH '94, 373-379.

RAMSEY, S. D., POTTER, K., AND HANSEN, C. 2004. Ray Bilinear Patch Intersections. Journal of Graphics, GPU, and Game Tools 9, 3, 41-47.

SHAH, M. A., KONTTINEN, J., AND PATTANAIK, S. 2007. Caustics Mapping: An Image-Space Technique for Real-Time Caustics. IEEE Trans. on Vis. and Computer Graphics 13, 272-280.

SLOAN, P.-P., KAUTZ, J., AND SNYDER, J. 2002. Precomputed Radiance Transfer for Real-time Rendering in Dynamic, Low-frequency Lighting Environments. In ACM Trans. on Graphics (Proc. of SIGGRAPH '02), vol. 21(3), 527-536.

SUN, X., ZHOU, K., STOLLNITZ, E., SHI, J., AND GUO, B. 2008. Interactive Relighting of Dynamic Refractive Objects. In ACM Trans. on Graphics (Proc. of SIGGRAPH '08), vol. 27(3), 1-9.

SZIRMAY-KALOS, L., ASZÓDI, B., LAZÁNYI, I., AND PREMECZ, M. 2005. Approximate Ray-Tracing on the GPU with Distance Impostors. Computer Graphics Forum, 695-704.

UMENHOFFER, T., PATOW, G., AND SZIRMAY-KALOS, L. 2008. Caustic Triangles on the GPU. Proc. of Computer Graphics International.

VEACH, E., 1997. Robust Monte Carlo Methods for Light Transport Simulation. Ph.D. dissertation, Stanford University.

WANG, R., WANG, R., ZHOU, K., PAN, M., AND BAO, H. 2009. An efficient GPU-based approach for interactive global illumination. In ACM Trans. on Graphics (Proc. of SIGGRAPH '09), vol. 28(3), 1-8.

WYMAN, C., AND DACHSBACHER, C. 2006. Improving Image-Space Caustics via Variable-Sized Splatting. Journal of Graphics Tools 13, 1.

WYMAN, C., AND NICHOLS, G. 2009. Adaptive Caustic Maps Using Deferred Shading. Computer Graphics Forum 28, 2, 309-318.

WYMAN, C. 2008. Hierarchical Caustic Maps. Proc. of the 2008 Symposium on Interactive 3D Graphics and Games, 163-171.

YU, J., AND MCMILLAN, L. 2004. General linear cameras. In ECCV (2), 14-27.

YU, X., LI, F., AND YU, J. 2007. Image-space caustics and curvatures. In Proc. of the Pacific Conference on Computer Graphics and Applications, 181-188.

ZHOU, K., HOU, Q., WANG, R., AND GUO, B. 2008. Real-Time KD-Tree Construction on Graphics Hardware. In ACM Trans. on Graphics (Proc. of SIGGRAPH Asia '08).