
Deep Shadow issues (or misunderstanding)

1.5.2.40413

Trying out the deep shadow outputs…

  1. The output doesn’t seem to be affected by the NN/BL/BC filtering choice. It looks like bilinear all the time.

  2. The naming of the sample layers seems odd to me; any chance we can change that? Also, the sample number isn’t padded, so it sorts funny for me.

  3. Is depth supposed to be saved out per layer? I only see one depth layer.

  4. Not sure what the order is, but in my test the light was 200 units from the first particle and was generating 50 samples at 10-unit spacing. Shouldn’t the first 20 samples be empty? Instead, they appear to show the first 10 units’ worth of particles (from 200 to 210).

To clarify, the shadows aren’t deep shadows in the usual sense; they are implemented from this paper: cemyuksel.com/research/deepopacity/ and are more like very limited deep shadow maps.

1.) The filtering is currently forced to bilinear, mostly out of spite. I can change this fairly easily.
2.) What do you propose for the naming? It doesn’t really matter to me. The only part that comes from OpenEXR’s conventions is using AR, AG, AB, since the stored values are alphas. I can easily pad the number to 4 digits or so.
3.) The depth is stored to the first particle along a pixel/ray. After that, the layers are sampled at a regular Z-distance, which can be read off the EXR via a float property called “PFDeepAttenuation:SampleSpacing” (there’s a quick read-back sketch after this list). Check out my sweet diagram below.
4.) The samples are stored only after the Z-distance specified in the Z image of the EXR. In your test that means z0 is 200, so sample0 sits at 210 and holds the attenuation accumulated over that first 10 units of particles, which is exactly what you’re seeing rather than 20 empty layers.
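
If you want to poke at these files from a script, here’s a minimal sketch of reading the spacing property and the Z image with the Python OpenEXR bindings. This isn’t Krakatoa code: the file name is a placeholder, the property name is the one quoted above (which may change, see later in the thread), and I’m assuming the depth channel is simply called “Z”.

```python
import array

import Imath
import OpenEXR

# Placeholder path; point this at one of your deep attenuation outputs.
exr = OpenEXR.InputFile("deep_attenuation.exr")
header = exr.header()

# Regular spacing 'd' between attenuation layers, stored as a float attribute.
spacing = header["PFDeepAttenuation:SampleSpacing"]

# Depth to the first particle along each pixel/ray.
# Assumes the depth channel is named "Z".
dw = header["dataWindow"]
width = dw.max.x - dw.min.x + 1
height = dw.max.y - dw.min.y + 1
z_raw = exr.channel("Z", Imath.PixelType(Imath.PixelType.FLOAT))
z_values = array.array("f", z_raw)  # width * height floats, scanline order

print("resolution:", width, "x", height)
print("sample spacing:", spacing)
print("z0 at pixel (0, 0):", z_values[0])
```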

I plan to get around to deep shadow maps eventually, but I needed to pound out an implementation pretty quickly so this is what we ended up going with.

Deep Opacity Maps:

                 z0           z0+d           z0+2d          z0+3d          z0+4d          z0+5d
Camera ----------|--------------|--------------|--------------|--------------|--------------|-----------------------Infinity
                             sample0        sample1        sample2        sample3        sample4

The Z pass marks the first particle along a ray (i.e. pixel), and attenuation values are stored at regular spacings (of ‘d’) along the ray after this depth. At z0 (the value read from the Z image) there is no attenuation. The attenuation is linearly interpolated between samples. Any evaluation beyond the final layer (z0+5d in this case) will return the attenuation value stored at sample4.
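
In code, that lookup is just a clamp-and-lerp over the stored layer values. Here’s a minimal sketch of the scheme in the diagram (not the actual Krakatoa code; the function name is made up):

```python
def evaluate_attenuation(z, z0, d, samples):
    """Attenuation at depth z along one ray of a deep opacity map.

    z0      -- depth of the first particle (from the Z image)
    d       -- regular layer spacing (the SampleSpacing property)
    samples -- attenuation values at z0+d, z0+2d, ... (sample0, sample1, ...)
    """
    if z <= z0:
        return 0.0                 # no attenuation at or before the first particle
    t = (z - z0) / d               # position in the layer stack, in units of 'd'
    if t >= len(samples):
        return samples[-1]         # beyond the final layer: clamp to its value
    k = int(t)                     # bracketing layers sit at k and k+1
    lower = 0.0 if k == 0 else samples[k - 1]
    upper = samples[k]
    return lower + (t - k) * (upper - lower)
```

For the test case earlier in the thread (z0 = 200, d = 10), evaluate_attenuation(205.0, 200.0, 10.0, samples) returns half of sample0’s value, since 205 sits halfway between z0 and the first layer.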

Neat! Ok, makes a lot more sense now.

Regarding the naming, I mostly didn’t like the missing padding and the seemingly verbose “sample”. Something like s0001.R or s0001.AR would seem more readable to me, but don’t sweat it unless someone else chimes in.

  • Chad

Now that you know how to write out channels and metadata to EXRs… there’s some wishlist stuff we can bring up again, right?

Yep. Though posting it in a different (appropriately named) thread will do a lot to prevent it from slipping through the cracks.

But boy don’t they look a hell of a lot better than the old shadows 🙂

Fun Fact: OpenEXR only stores the first 31 characters of a name in its internal data structure, but its error messages use the original string representation when it gets confused.

How this affects you: I’m going to change all the property names in the OpenEXR images for Deep Shadows. This will kill your existing Deep Attenuation maps.
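
If you’re building your own attribute or channel names, a quick length check against that limit can save some confusion. A hypothetical helper, not something Krakatoa ships; note that the SampleSpacing name quoted earlier squeaks in at exactly 31 characters:

```python
# Warn about EXR attribute/channel names longer than the classic 31-character
# limit of OpenEXR's internal Name type.
MAX_EXR_NAME_LENGTH = 31

def check_exr_names(names):
    for name in names:
        if len(name) > MAX_EXR_NAME_LENGTH:
            print("WARNING: %r is %d chars; only the first %d survive"
                  % (name, len(name), MAX_EXR_NAME_LENGTH))

check_exr_names([
    "PFDeepAttenuation:SampleSpacing",              # exactly 31 characters
    "PFDeepAttenuation:SomeVeryLongPropertyName",   # made-up name, would be truncated
])
```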

At least you discovered this now 🙂
