AWS Thinkbox Discussion Forums

Deep Compositing implementation

Hello.

I have not worked with the C++ SDK yet, but I am thinking about implementing a Deep Compositing feature, which is not available in Krakatoa yet.
Does anyone know whether this is possible in principle with the SDK? Or is the SDK not designed for it, so that it allows no deep intervention in the rendering process?

Regards.
Michael

My guess would be no; you would have to wait for us to implement deep image output (which is on the to-do list anyway). That being said, we would LOVE to hear your ideas about how you think it should work and what workflows you would envision in Nuke or elsewhere. What particle channels would you like to encode in the deep output?

For example, one of the possible workflows given deep information about Color and Density would be the ability to insert a 3D geometry object inside a volumetric rendering and change its Z depth position, producing correct coverage both in front of it and behind it (both for correct AA and especially if the object itself is semi-transparent). But in the case of Krakatoa, people would also expect an object inside the volume to cast volumetric shadows into the volume, which is unlikely to happen using just deep data, since lighting and attenuation / absorption are part of the rendering process.
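To make the coverage part of that workflow concrete, here is a minimal sketch. It is not Krakatoa SDK code: the DeepSample struct, the density-to-alpha conversion and the premultiplied-alpha convention are just assumptions for illustration. It merges an inserted fragment at a chosen Z into one pixel's list of deep volume samples with the standard front-to-back "over" operator, which is why coverage in front of and behind the object falls out of the deep data while volumetric shadowing does not (nothing here re-lights or re-attenuates the volume).

```cpp
// Minimal sketch, NOT Krakatoa SDK code. Deep samples are assumed to be
// premultiplied RGBA slices of the volume, with alpha already derived from Density.
#include <vector>
#include <algorithm>

struct RGBA { float r, g, b, a; };

struct DeepSample {
    float z;      // camera-space depth of this volume slice sample
    RGBA  color;  // premultiplied color of the slice
};

// Standard front-to-back "over": accumulate 'src' behind what is already in 'dst'.
static void overBehind( RGBA& dst, const RGBA& src ) {
    const float t = 1.0f - dst.a;
    dst.r += src.r * t;
    dst.g += src.g * t;
    dst.b += src.b * t;
    dst.a += src.a * t;
}

// Composite one pixel's deep samples with an extra fragment inserted at objectZ.
// Moving objectZ in the compositor changes how much of the volume covers the object.
RGBA compositeWithInsertedFragment( std::vector<DeepSample> samples,
                                    float objectZ, const RGBA& objectColor ) {
    std::sort( samples.begin(), samples.end(),
               []( const DeepSample& a, const DeepSample& b ) { return a.z < b.z; } );

    RGBA out = { 0.0f, 0.0f, 0.0f, 0.0f };
    bool inserted = false;
    for( const DeepSample& s : samples ) {
        if( !inserted && objectZ < s.z ) {   // the fragment sits in front of this sample
            overBehind( out, objectColor );
            inserted = true;
        }
        overBehind( out, s.color );          // volume sample, attenuated by what is in front
    }
    if( !inserted )
        overBehind( out, objectColor );      // the fragment is behind the whole volume
    return out;
}
```

Note that the inserted fragment only ever gets occluded and occludes; it never darkens the samples behind it, which is exactly the missing volumetric shadow described above.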

Last Siggraph we had a demo at our booth demonstrating Krakatoa rendering inside a compositing package (Eyeon Fusion in that case), which would solve a lot more problems than attempting volumetric compositing in post, since both lighting and rendering would be part of the compositing workflow.

Any feedback regarding what you hope to achieve using Krakatoa and deep compositing would be very welcome!

As Bobo said, the best bet might be to have Krakatoa inside your compositing application, but send very specific render calls to it (since you have to re-render for shadows or any other change to the volume, like the phase function, and a lot of things you want to tweak, like motion blur or depth of field, would totally invalidate your deep image anyway). So instead of saying “Rasterize all the particles”, you provide a pair of Z-depth masks and say “Rasterize all the particles between these two non-planar depth slices”. Any compositor can generate those masks trivially, so that’s not an issue. You could also do it with a pair of position passes, but it’s the same thing.
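Here is a rough sketch of what the culling side of such a render call could look like; the Particle and DepthMask structs are hypothetical, not the Krakatoa SDK API, and particles are assumed to already be projected into screen space. The idea is simply to keep the particles whose depth falls between the two masks at their pixel and hand only that subset to the rasterizer.

```cpp
// Hypothetical sketch, NOT the Krakatoa SDK API: cull particles to the region
// between two non-planar depth masks before rasterizing, i.e. "rasterize all the
// particles between these two depth slices".
#include <vector>

struct Particle {
    int   px, py;   // pixel the particle projects to
    float z;        // camera-space depth at that pixel
    // ...plus whatever channels the rasterizer needs (Color, Density, etc.)
};

struct DepthMask {
    int width, height;
    std::vector<float> depth;   // one depth value per pixel, generated by the compositor
    float at( int x, int y ) const { return depth[ y * width + x ]; }
};

// Keep only the particles whose depth lies between the near and far masks at their pixel.
std::vector<Particle> cullToDepthSlab( const std::vector<Particle>& particles,
                                       const DepthMask& nearMask,
                                       const DepthMask& farMask ) {
    std::vector<Particle> kept;
    kept.reserve( particles.size() );
    for( const Particle& p : particles ) {
        if( p.px < 0 || p.px >= nearMask.width || p.py < 0 || p.py >= nearMask.height )
            continue;                                              // off-screen, skip it
        if( p.z >= nearMask.at( p.px, p.py ) && p.z <= farMask.at( p.px, p.py ) )
            kept.push_back( p );                                   // inside the slab
    }
    return kept;
}
```

A pair of position passes would work the same way; you would just derive the per-pixel depth bounds from them first.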

Even without caching or RoI or other lower level requests to Krakatoa, we were still able to get relatively interactive rendering of decently sized PRT sets from within Fusion.

Is loading PRT files an I/O bummer for the compositor? Sure. But loading in a bunch of deep images is too, and those had to be generated from PRT I/O at some point anyway. Having final look control over the volume rendering on the compositing side is very nice and probably worth the effort.

  • Chad