Very helpful document, Florian, thanks.
I have some questions about the meanings of deep images (mostly from the point
of view of writing software to manipulate them).
Is the One True Accepted definition of the color and alpha channels that they are always
the premultiplied, accumulated (that is, pre-composited?) values at the depth of each
sample? So we never have to worry about deep images that have channels whose values are
the "local" contributions (rather than the cumulative amounts)?
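To make the two readings concrete, here is a small sketch (sample values and function name are made up for illustration; this is not the IlmImf API). It converts per-sample "local" premultiplied contributions into the accumulated, pre-composited values at each sample's depth using the front-to-back "over" recurrence:

```python
# Hypothetical deep pixel: each sample is (z, premultiplied color, alpha),
# sorted front to back. Under the "local contribution" interpretation each
# sample stores only its own contribution.
local_samples = [
    (1.0, 0.25, 0.5),
    (2.0, 0.30, 0.5),
    (3.0, 0.10, 1.0),  # opaque sample
]

def accumulate(samples):
    """Convert local premultiplied samples into accumulated (pre-composited)
    values at each sample's depth, via the front-to-back 'over' recurrence:
        C_acc[i] = C_acc[i-1] + (1 - A_acc[i-1]) * C[i]
        A_acc[i] = A_acc[i-1] + (1 - A_acc[i-1]) * A[i]
    """
    acc = []
    c_acc, a_acc = 0.0, 0.0
    for z, c, a in samples:
        c_acc += (1.0 - a_acc) * c
        a_acc += (1.0 - a_acc) * a
        acc.append((z, c_acc, a_acc))
    return acc

accumulated = accumulate(local_samples)
# accumulated[-1] carries alpha == 1.0 once the opaque sample is reached
```

If files were allowed to store either form, software would need a way to tell them apart, since the two are only equal for the frontmost sample.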
If so, how can you represent samples "behind" an opaque object? Or can't you?
Does the spirit of deep OpenEXR allow samples with a Z that is greater than the point
where alpha == 1?
For the "flatten" operation, should the flattened Z be the depth at which alpha
became opaque? That kind of makes sense, in that it would keep the Z paired with its
alpha at that depth; but on the other hand, that means that a flattened deep file would
in general not end up with a Z channel that would match, say, a traditional render Z
output, which usually registers the closest hit regardless of opacity.
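The two candidate Z outputs can be sketched in one pass (again, names and the opacity threshold are illustrative, not from the OpenEXR library; samples hold local premultiplied contributions, sorted front to back):

```python
# Treat accumulated alpha this close to 1 as opaque (illustrative threshold).
OPAQUE = 1.0 - 1e-6

def flatten(samples):
    """Composite samples front to back with 'over'; return the flat color and
    alpha plus both candidate Z values: the closest hit (as a traditional
    render's Z output would record) and the depth at which the accumulated
    alpha became opaque."""
    c_acc, a_acc = 0.0, 0.0
    z_closest = samples[0][0] if samples else None
    z_opaque = None
    for z, c, a in samples:
        c_acc += (1.0 - a_acc) * c
        a_acc += (1.0 - a_acc) * a
        if z_opaque is None and a_acc >= OPAQUE:
            z_opaque = z
    return c_acc, a_acc, z_closest, z_opaque

samples = [(1.0, 0.25, 0.5), (2.0, 0.375, 0.75), (3.0, 0.125, 1.0)]
color, alpha, z_closest, z_opaque = flatten(samples)
# Here z_closest and z_opaque differ (first sample vs. the sample where
# accumulated alpha reached 1), which is exactly the mismatch described above.
```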
On p. 2 of Florian's document, it says "Every deep OpenEXR image must contain either a single alpha
channel, A, or three alpha channels RA, GA, BA." Are we to take this literally, that it is not
considered valid to have a deep OpenEXR that doesn't contain alpha, or whose channels are not given these
precise names? (For example, it will *always* be "RA", and *never* "opacity.R"?)
On Sep 24, 2013, at 3:10 PM, Florian Kainz wrote:
To the more mathematically inclined OpenEXR users and developers:
If you can spare the time, I would like you to read the attached
document and give me feedback on it.
The IlmImf library defines the file format for deep images and it
provides convenient methods for writing deep image files. However,
the library and the existing documentation (as of September 2013)
do not explain in detail how deep images are meant to be interpreted.
The attached document attempts to describe what the deep data in a file
mean, and how compositing of deep images works. The document also points
out numerical issues with the representation of volumetric samples, and
proposes an alternate volumetric sample representation that would address
those issues.
I believe that agreement on the interpretation of deep OpenEXR files
is desirable because it enables compatibility among different vendors'
image compositing applications. In addition, anyone developing algorithms
such as lossy compression would be able to rely on a standardized
interpretation of a file's contents.
Florian
<Deep Image Data 09-24-13.pdf>
_______________________________________________
Openexr-devel mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/openexr-devel
--
Larry Gritz
address@hidden