Render slice control? 1.2ness?

Something for the 1.2 wishlist…



I’d love to have control over slicing of the image during rasterization, such that we could define which points to draw after the lighting is done. This would mostly be to assist in compositing, where we need to be able to insert objects or footage inside the Krakatoa points, or to modify the Krakatoa renderings based on depth or intersection with other objects/footage.



This somewhat harks back to the wish for the ability to “bake” in lighting. If I could take the illumination and store it in a new PRT, then I could use clipping objects to do this sort of thing in a round-about way.


  • Chad






While most of our plans for render-pass and control-related features are slated for a later stage (possibly 1.3), getting the drawing of points multi-threaded will require tweaking the rendering process, so we might be able to look into it. You can also use camera clipping planes to render slices at arbitrary positions.



Saving the lighting into a channel is also on the list for 1.3, but might not be too difficult to push earlier. We will surely discuss these in the new year when we move on with 1.2…



Thanks for the suggestions!


Seems like if you can bake in the lighting, you could make a two-stage process out of the rendering. Stage 1 would calculate the lighting and either multiply it by the color and use the result as the color channel in a new PRT, or add a new “lighting color” channel to the PRT. Stage 2 would just render without lighting (since it’s baked into the color). We did this with Box#3 as a wacky test before 1.0 came out, and while slow, it was really fun to play with.
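The two-stage idea could be sketched roughly like this (illustrative Python, not the actual Krakatoa/PRT API; representing a particle as a dict of channels is an assumption for the sketch):

```python
def bake_lighting(particles):
    """Stage 1: multiply the computed lighting into the color channel,
    producing particles that Stage 2 can render with lighting disabled."""
    baked = []
    for p in particles:
        q = dict(p)
        # Fold per-particle lighting into Color, component by component.
        q["Color"] = tuple(c * l for c, l in zip(p["Color"], p["Lighting"]))
        del q["Lighting"]  # lighting is now baked into Color
        baked.append(q)
    return baked
```

Stage 2 would then just draw the returned particles as-is, which is what makes the slicing and tile-rendering tricks below possible.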



Anyway, if you did have the lighting baked in, then the clipping would be easy, as would tile rendering. They could even be split to multiple machines with Deadline.



Actually, correct me if I’m wrong, but if you don’t use lighting (or if you bake it into the color), wouldn’t you be able to alpha-composite the slices together in post with no change at all to the existing rendering? Like, if each slice was 100 units thick and the whole scene was 1000 units thick, wouldn’t those 10 renders just comp over one another to make the correct final image?
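That intuition matches the standard premultiplied-alpha “over” operator. A minimal per-pixel sketch in Python (a generic illustration, not Krakatoa code; the values are assumed to be premultiplied RGBA, nearest slice first):

```python
def over(front, back):
    """Premultiplied-alpha 'over': front occludes back."""
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    k = 1.0 - fa
    return (fr + br * k, fg + bg * k, fb + bb * k, fa + ba * k)

def comp_slices(slices):
    """Fold per-pixel slice values into one pixel, front-to-back."""
    result = (0.0, 0.0, 0.0, 0.0)
    for s in slices:  # slices[0] is closest to the camera
        result = over(result, s)
    return result
```

Since “over” is associative, the ten 100-unit slice renders comped in depth order should indeed reproduce the single 1000-unit render, as long as lighting does not couple the slices.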




  • Chad

With 1.2, will we still need to keep all the points in memory? We won’t be able to stream in points, right, since they will still need to be sorted? I remember someone saying something about streaming particles being a possible future feature, but not anytime soon?



I was thinking about this in the context of the slices, where you wouldn’t know what was in a slice unless you either loaded all the points up or had pre-sorted them, which would still require having previously loaded them all, unless you did some sorting-and-partitioning step and unioned the results.
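The partitioning step mentioned here only needs a single linear pass, not a full sort, so it could in principle stream. A hypothetical Python sketch (the `z` depth key and the uniform slice thickness are assumptions for illustration):

```python
def partition_by_depth(points, near, thickness, num_slices):
    """Bucket points into depth slices in one linear pass.
    Points outside [near, near + thickness * num_slices) are dropped."""
    slices = [[] for _ in range(num_slices)]
    for p in points:  # may be an iterator streaming from disk
        i = int((p["z"] - near) / thickness)
        if 0 <= i < num_slices:
            slices[i].append(p)
    return slices
```

Each bucket could then be written out as its own file and loaded independently, which is the union-of-partitions idea in the paragraph above.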



But we’re still likely going to need many gigabytes of memory for the foreseeable future, correct?


  • Chad

Chad,



The streaming used to be a feature of the pre-Krakatoa version of the renderer we used on the movie “Stay”, but it supported only Additive mode. When we add particle streaming, it will likewise be for additive mode with no lighting, because the order of drawing plays no role in additive mode: A + B = B + A, and the same applies to accumulating colors.
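The commutativity argument is easy to see in a sketch: an additive framebuffer accumulates each particle independently, so the particle source can be any iterator, including one streaming from disk, with only the framebuffer resident (generic Python illustration; the `project` callback is a stand-in for the real camera transform):

```python
def render_additive(particles, width, height, project):
    """Accumulate particle colors into a framebuffer one at a time.
    Because addition commutes, input order does not affect the result."""
    frame = [[(0.0, 0.0, 0.0) for _ in range(width)] for _ in range(height)]
    for p in particles:  # may be a generator streaming from disk
        x, y = project(p["Position"])
        if 0 <= x < width and 0 <= y < height:
            r, g, b = frame[y][x]
            cr, cg, cb = p["Color"]
            frame[y][x] = (r + cr, g + cg, b + cb)
    return frame
```

Sorting, by contrast, needs a view of all particles (or a pre-partitioned set), which is why the volumetric path cannot stream this way.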



I guess it would be a 1.3 feature when we plan to refactor the complete rendering workflow to allow for custom passes, unless it turns out it is easy to add to the current system without breaking too many things in 1.2.



For volumetric rendering, we will have to live with the high memory usage, unless some form of custom Krakatoa-specific swapping to disk could be introduced (which would be slower than rendering from memory, but still faster than waiting for Windows to swap to disk). With 64-bit multi-core systems becoming the standard, we can assume that typical RAM amounts will keep getting higher (just like in the past: my first PC in 1993 had 4 MB of RAM, and my workstation in 2008 has 4 GB, or 1024 times more after 15 years). Let’s see if the trend will continue… ;o)



Mark might have some more details to share.

I was hoping to get the streaming method for additive mode done in 1.2, because it’s not a major change. Doing the Krakatoa-specific swapping to disk (i.e. an out-of-core algorithm) would be a fairly major undertaking, and I don’t think we want to tackle that until we’ve got multithreading and more flexible scriptability.



-Mark