I know there was some talk about possibly changing how shadowing is handled in the future. Deep shadows or something else. But for now, my shadow workflow often involves mixing a large number of lights together to get smooth shadows.
So for a spotlight, I might make the map size 512. Unfortunately, this misses fine details and makes ugly moiré patterns. So I clone the light, give it a map size of 64, and set the output of each light to half. Now I get smoother shadows with less objectionable moiré patterns, but I still miss fine details. So I clone the light again, set each output to a third, and so on.
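The clone-and-dim workflow above can be sketched as two tiny helpers. This is just an illustration of the bookkeeping, not any real Krakatoa or 3ds Max API; in practice the clones are set up by hand in the scene.

```python
def clone_weights(num_clones):
    """Give each clone an equal share of the original output so the
    total illumination stays constant (half each for 2, a third each
    for 3, etc.)."""
    return [1.0 / num_clones] * num_clones

def clone_map_sizes(base_size, num_clones):
    """Halve the shadow map size for each successive clone."""
    return [max(base_size >> i, 1) for i in range(num_clones)]

print(clone_map_sizes(512, 4))  # [512, 256, 128, 64]
print(clone_weights(4))         # [0.25, 0.25, 0.25, 0.25]
```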
But I have to manage each of these lights as a separate object, and worse, Krakatoa has to re-sort the particles for each one, even though they share the exact same transform and cone.
Krakatoa is pretty crazy fast at drawing particles to raster, so what if the light rendered out multiple shadow maps at varying resolutions, and we just averaged them together?
The same idea would also be nice if applied to the actual camera render too. If we could render 256, 1k, and 4k at the same time, we could use the multiple resolutions in our composites later. In most renderers, this wouldn’t make any sense, as the render times would accumulate, and the 4k render would always look best. In Krak, the larger the render, the more likely it is to look like Nerf or sand, but the more likely it is to preserve fine details, and the actual painting to raster is so fast that it only adds a small amount to the render time.
That’s a cool idea. Mark came up with the name “Multi-resolution Shadow Maps”, which sounds pretty sexy. We can definitely give you a healthy speed boost by implementing this as a hierarchical map where all levels are written to and read from at once. We’ll need to spend some time thinking about how to choose the number of levels, the blending weights of each one, etc.
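For the blending side, a minimal sketch of averaging maps of different resolutions might look like this. It assumes square, power-of-two shadow maps stored as NumPy arrays, upsamples the coarse levels by nearest-neighbour pixel repetition, and normalizes the weights; `blend_shadow_maps` is a made-up name, and how levels and weights would actually be chosen is exactly the open question above.

```python
import numpy as np

def blend_shadow_maps(maps, weights):
    """Blend shadow maps of different resolutions into one map at the
    finest resolution. Coarser maps are upsampled by pixel repetition
    (nearest neighbour); weights are normalized so they sum to 1.
    Assumes square maps whose sizes divide the finest size."""
    full = max(m.shape[0] for m in maps)
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    out = np.zeros((full, full))
    for m, wi in zip(maps, w):
        factor = full // m.shape[0]
        out += wi * np.kron(m, np.ones((factor, factor)))
    return out

fine = np.full((4, 4), 1.0)    # e.g. the 512-ish level, scaled down here
coarse = np.full((2, 2), 0.5)  # e.g. the 64-ish level
blended = blend_shadow_maps([fine, coarse], [0.5, 0.5])
print(blended[0, 0])  # 0.75
```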
Yeah, I’m not sure if averaging is the way to go or not. It’s just the only thing I can do now. There might be better ideas. Looking up mipmapping for shadows reveals lots of papers that have no application to something like Krak.
Something to watch for… Krakatoa will, with different shadow map sizes, produce different shadow densities. I figure it’s because larger maps can find “holes” in the pointset and thus shine light deeper into the volume.
So in the case of this new feature, you might want to consider this effect and normalize the shadows. But for general purpose lighting, is there any way to compensate for this?
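One crude way to normalize would be to rescale each map so its mean density matches a chosen reference map. This is only a guess at a compensation scheme, not anything Krakatoa actually does, and matching the mean obviously can’t restore light that leaked through the holes.

```python
import numpy as np

def normalize_density(shadow_map, reference_map):
    """Rough compensation: scale a map so its mean density matches a
    reference map's, to counter larger maps letting light through
    'holes' in the point set."""
    cur_mean = shadow_map.mean()
    if cur_mean == 0:
        return shadow_map.copy()
    return shadow_map * (reference_map.mean() / cur_mean)

big = np.full((4, 4), 0.8)    # large map, lighter because of hole leakage
small = np.full((2, 2), 1.0)  # small reference map
fixed = normalize_density(big, small)
print(fixed[0, 0])  # 1.0
```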
For the multi-resolution shadow thingy, since you render depth-sorted points, could you render each “slice” at a different resolution?
I’m wondering if particles near the light shouldn’t be rendered with a smaller map, and ones far away with a larger one? So the first (far) slice might be 2048^2 pixels, while the last (near) slice might be 64^2 pixels.
That way points near the light will have fuzzy shadows, and ones far away will have sharp ones. This would model the behaviour of an area light inexpensively. It might also model the particles changing size as they get closer to the light.
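Picking a per-slice size from the 2048-far/64-near example above could be as simple as interpolating in log2 so each slice lands on a power of two. The function name and the linear-in-log2 falloff are assumptions; the endpoints come from the post.

```python
import math

def slice_resolution(slice_index, num_slices, far_size=2048, near_size=64):
    """Shadow-map size for a depth slice: large (sharp) for the first
    (far) slice, small (fuzzy) for the last (near) slice. Interpolates
    in log2 so sizes step through powers of two."""
    if num_slices == 1:
        return far_size
    t = slice_index / (num_slices - 1)  # 0 = far slice, 1 = near slice
    log_size = math.log2(far_size) + t * (math.log2(near_size) - math.log2(far_size))
    return 2 ** round(log_size)

print([slice_resolution(i, 6) for i in range(6)])
# [2048, 1024, 512, 256, 128, 64]
```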
You’d have to modify the shadow density per slice for reasons mentioned earlier.
And this could be done in addition to the multi-resolution map described earlier. So the far slice might be rendered at 16x, 12x, and 8x, while the near points would be rendered at 3x, 2x, and 1x.
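Combining the two ideas, each slice would get its own stack of resolution multipliers, interpolated between the far set (16x, 12x, 8x) and the near set (3x, 2x, 1x) quoted above. Again just a sketch; the multiplier sets are the example numbers from the post, and linear interpolation is an assumption.

```python
def slice_multipliers(slice_index, num_slices,
                      far=(16, 12, 8), near=(3, 2, 1)):
    """Per-slice multi-resolution stack: interpolate each multiplier
    between the far-slice set and the near-slice set."""
    t = slice_index / (num_slices - 1) if num_slices > 1 else 0.0
    return [round(f + t * (n - f)) for f, n in zip(far, near)]

print(slice_multipliers(0, 4))  # far slice: [16, 12, 8]
print(slice_multipliers(3, 4))  # near slice: [3, 2, 1]
```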