openexr-devel

[Openexr-devel] understanding deep pixels and volumetrics


From: Thomas Loockx
Subject: [Openexr-devel] understanding deep pixels and volumetrics
Date: Thu, 15 Oct 2015 11:34:34 +1300

Currently I'm trying to build deep pixel rendering into Octane (a
GPU-based render engine). So far it has been a smooth ride, until we
tried to tackle volumetrics...

I'll try to explain our current approach. Octane supports
varying-density volumes based on OpenVDB. We didn't want to break open
the volume ray-marching kernel itself, so what the render engine spits
out for each ray going through a volume is a colour plus a front and
back depth. The front depth is where the ray enters the volume and the
back depth is where the ray scatters or gets absorbed by the volume.
Note that this already breaks the assumption that samples have
constant optical density and colour, because each sample is the result
of integrating the varying-density volume (as explained in "The Theory
of OpenEXR Deep Samples" by Dr. Peter Hillman). We shoot thousands of
rays per pixel and hence have to deal with thousands of volume
samples. Because we don't have large amounts of memory on a GPU, we
try to compact these samples. To compact, we first collect a series of
"seed" samples (usually 32, but it depends on the amount of RAM) and
then build a bin distribution from these seed samples. Each bin has a
front and back depth and stores accumulated RGB values. After the bins
are created, each subsequent sample is accumulated into every bin it
fully overlaps. We then try to estimate the transmittance function
from the number of samples accumulated in each bin.
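In (hypothetical) Python, our compaction scheme looks roughly like the
sketch below. The `Sample`/`Bin` names and the uniform bin boundaries are
simplifications for illustration, not our actual GPU data layout; a bin's
count of fully-overlapping samples is what we use as the raw material for
the transmittance estimate (rays still "in flight" through that depth slab):

```python
from dataclasses import dataclass, field

@dataclass
class Sample:
    zfront: float          # depth where the ray enters the volume
    zback: float           # depth where the ray scatters or is absorbed
    rgb: tuple             # colour carried by the ray

@dataclass
class Bin:
    zfront: float
    zback: float
    rgb: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
    count: int = 0

def build_bins(seeds, num_bins):
    """Derive uniform bin boundaries from the depth range of the seed samples."""
    zmin = min(s.zfront for s in seeds)
    zmax = max(s.zback for s in seeds)
    step = (zmax - zmin) / num_bins
    return [Bin(zmin + i * step, zmin + (i + 1) * step) for i in range(num_bins)]

def accumulate(bins, sample):
    """Add a sample to every bin that lies fully inside [zfront, zback]."""
    for b in bins:
        if sample.zfront <= b.zfront and b.zback <= sample.zback:
            for c in range(3):
                b.rgb[c] += sample.rgb[c]
            b.count += 1

def estimate_transmittance(bins, total_rays):
    """Crude estimate: fraction of rays still travelling through each bin."""
    return [b.count / total_rays for b in bins]
```

Note that this per-bin count is not guaranteed to be monotonically
decreasing with depth, which is one of the corners we're cutting.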

If we inspect the deep images in Nuke, we get something that resembles
a puff of smoke or a cloud, but it doesn't look too great. I know
we're cutting some corners here, and I wonder if somebody can explain
the proper way of doing this, or point me to some example code. I
studied "Camera Space Volumetric Shadows" but it doesn't answer my
questions. This quote from Dr. Hillman in fxguide's "The Art of Deep
Compositing" hints that it's possible: "All we could get was the final
color of all the objects combined together." :)

My questions are:

1) Can somebody explain to me what the best approach is to integrate
deep samples into a ray tracer with volumetrics?
2) How do you manage the heaps of samples?
3) Can you still try to do something useful with samples that have
varying density values integrated into them?
4) Can you use deep images with volumes that have scattering or do I
have to limit myself to just absorption?
5) Are there any tools for dealing with deep images apart from Nuke?
Right now I'm inspecting our renders via the DeepSample node in Nuke,
but I'm looking for a tool that can open an EXR file and then, for
example, plot the deep pixel alpha as a function of depth.
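To illustrate (5): given per-pixel volumetric samples as sorted,
non-overlapping (zfront, zback, alpha) tuples, the curve I'd want plotted is
roughly the sketch below. It uses the exponential in-sample ramp
alpha(x) = 1 - (1 - A)^x from Dr. Hillman's paper; the function name and the
tuple layout are just for illustration:

```python
def alpha_at_depth(samples, z):
    """Combined alpha of sorted, non-overlapping volumetric deep samples
    at camera depth z, using alpha(x) = 1 - (1 - A)**x within a sample."""
    transmittance = 1.0
    for zfront, zback, a in samples:
        if z <= zfront:
            break                            # deeper samples can't contribute yet
        if z >= zback:
            transmittance *= (1.0 - a)       # fully behind this sample
        else:
            x = (z - zfront) / (zback - zfront)  # fraction of the sample traversed
            transmittance *= (1.0 - a) ** x
    return 1.0 - transmittance

# sweep depth to build the curve, e.g. for matplotlib
depths = [i * 0.1 for i in range(51)]
curve = [alpha_at_depth([(1.0, 3.0, 0.75)], z) for z in depths]
```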

thanks,
Thomas Loockx


