Render Elements and Passes Discussion

With all the talk about Render Elements and Passes lately, we would like to ask you for some feedback on how a potential system (outside of the 3ds Max Render Elements system) should work from a user’s point of view.
Keep in mind that some types of “Render Elements” don’t make much sense in Krakatoa, while some other data might not be considered directly a Render Element but could be saved easily.

Let’s first look at the data we already have there and how we deal with it (or not).

*We can obviously save the RGBA output of Krakatoa to an EXR or other supported format. The output goes wherever the 3ds Max Render Dialog tells it to.
*We optionally save a sub-folder of Attenuation Maps, one per light, for shadow casting in other renderers using light projection maps. We hope this system will be replaced with something better someday (an actual Krakatoa Shadows Generator on the Light’s list), but not in 1.5. Still, it shows a possible approach: storing data in a sub-folder with a consistent name inside the output path.

*Now that we added Matte Objects Rasterizing, we actually produce a rather nice Z-Depth pass from Matte Geometry, and even a Normal Map out of it. Since Krakatoa already supports loading 3rd party Z-Depth image sequences, it might be useful (and relatively trivial) to dump the Matte Depth Pass into images, possibly in its own sub-folder. These could be used for debugging (seeing what Krakatoa is “seeing” from the geometry), for reloading at a later point, or even for passing the sequence to others or to network rendering without the overhead of heavy Max files containing millions of polygons.

*Voxel Rendering processes the particles in light passes: for each light (plus one more for the environment map), a full pass is processed. We have already received requests to allow saving these passes as separate images so lighting could be tweaked in post using the individual components, in addition to getting the already composited result in the RGBA output. Where should these go? Into a sub-folder called something like VoxelLighting? Should the images just carry the name of the light as a suffix? Or should there be a sub-folder inside that sub-folder for each light, so the light passes don’t pile up in the same folder? (A small path-layout sketch of these two options follows this list.)

*If/when we add global KCM support, the data from any channel could be dumped into an RGB image. For example, take the Normal channel, multiply it by some value, and write the result to a custom-named channel that, being a vector, could be saved out as an EXR containing that data in its pixels. Or output the UV coordinates expressed as color for debugging purposes, or whatever you want. We could go the way of Gelato/RenderMan and allow a pass to save any internal channel, where a KCM is run first to populate that channel, or the channel could go out raw. Alternatively, the KCM could be run on the loaded particles while they are being drawn and write directly to the frame buffer without storing the data in a dedicated channel, thus reducing memory requirements. Either way, Krakatoa would have to load all particles once, then sort and re-render each pass using the desired data in place of color and save the output somewhere. As with Blended Camera Distance and other currently implemented options, some additional conditions like Post Divide By Alpha or turning off lighting might be needed when saving passes, so we would need ways to specify these per pass, too. (See the second sketch below for what such a channel dump could look like.)
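Here is a rough Python sketch of the two per-light layouts mentioned above (light name as a file suffix vs. one sub-folder per light); every folder and file name in it is hypothetical, not an existing Krakatoa convention.

```python
import os

output_dir = r"C:\renders\shotA"                       # wherever the RGBA output goes
passes     = ["KeyLight", "RimLight", "Environment"]   # one pass per light + environment
frame      = 42

for p in passes:
    # Option A: a single VoxelLighting sub-folder, with the light name as a file suffix
    path_a = os.path.join(output_dir, "VoxelLighting",
                          "shotA_%s_%04d.exr" % (p, frame))
    # Option B: one sub-folder per light, so the passes don't pile up in one directory
    path_b = os.path.join(output_dir, "VoxelLighting", p,
                          "shotA_%04d.exr" % frame)
    print(path_a, "|", path_b)
```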
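And a minimal sketch of the channel-to-image idea itself, assuming a simple dictionary-of-arrays particle layout; the channel names and the [-1,1]-to-[0,1] remapping for normals are illustration only, not how Krakatoa actually implements a KCM pass.

```python
import numpy as np

def channel_to_rgb(particles, channel, remap_signed=False):
    """Map a per-particle channel to displayable RGB values.

    particles    : dict of NumPy arrays keyed by channel name (assumed layout)
    channel      : e.g. "Normal" or "TextureCoord"
    remap_signed : remap [-1, 1] vectors (normals) into [0, 1] for viewing
    """
    data = np.asarray(particles[channel], dtype=np.float32)
    if data.ndim == 1:                       # scalar channel -> grey RGB
        data = np.repeat(data[:, None], 3, axis=1)
    if remap_signed:
        data = data * 0.5 + 0.5              # [-1, 1] -> [0, 1] for debugging views
    return data                              # one RGB triple per particle

# Fake particle set standing in for "load all particles once, then re-render per pass"
particles = {
    "Normal":       np.random.uniform(-1.0, 1.0, (1000, 3)),
    "TextureCoord": np.random.uniform( 0.0, 1.0, (1000, 3)),
}
normal_rgb = channel_to_rgb(particles, "Normal", remap_signed=True)
uv_rgb     = channel_to_rgb(particles, "TextureCoord")
```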

See below for more questions regarding this:

How much customization of these output paths do you feel is needed? We could just add checkboxes to enable whatever is supported and name the outputs automatically, or provide fields to specify the naming of the folders, or provide explicit full paths for each pass (but then you would have to specify each one every time, which might be more hassle)…

Many of these ideas are considered 2.0 material, but those that are trivial might make it into 1.5. Still, we don’t want to tackle this without knowing exactly what workflows, saving procedures, or user interface you would like to see implemented.

Let the discussion begin!

Interesting… It’s conceivable someone could render their own Matte Object passes out of, say, Houdini. And pre-processing the matte objects and just sending tiny max files to Deadline with images sounds potentially useful too, especially if the geometry itself is a memory hog. Sounds like the kind of feature you never want to need, but could get you out of a bind. Not a huge priority for me, but I could come crawling back in a few months asking for it. :slight_smile:

Separate folders, please, named after the light that generated each pass. With my whirlylight setup, I would make a lot of passes that would look very similar to each other. Separate folders make them easier to sort. And as you may know, Fusion is very slow when reading directories with a large number of files.

Global support? Interesting. For the time being, we could just instance the modifiers, right? Or let DL or RPM add them before submission.

RGB/A? As long as you’re saving out float, why not offload that to post?

The kicker on these passes is the blending… For many of the passes, blending in screen XY or in Z with density is going to be a no-no. And not blending is going to be a no-no. There’s a reason we’re working with a 3D particle system and not a 2D one. Now, if we could save layers (in the RPF sense) or depth slices… I know how deep shadow maps could be used to store density, but I wonder if you could store other data that way… Not that I have a way of ingesting it. Fusion doesn’t know how to deal with a 1D function per pixel.

Yeah, saving out depth slices or the voxel planes would be the most directly usable for most of us. You could set up a 3D comp from that pretty easily, but 2D ones would work as well. For particles it would be the same, while for voxels it would be a toggle for rendering out the voxel plane (my preference) or rendering out the projection of the plane to the camera.

I like “automatic w/override” personally. When I check “save depth”, have a preset folder and filename added, but let us override them if we want. But heck, the Save Particles paths would work fine; just put one of those for each pass in a separate rollout.

It’s trivial for us to manually do a composite to see what works. Like when I wanted to render out matte passes for the particles, that was easy to do manually. I’ve been able to render out normals, depth, specular, etc. all manually, so it’s not like we need this to be automatic for it to work at all. So it shouldn’t be that hard for someone to come up with workflow examples that show which passes are useful without you and Darcy having to do anything.

Except, of course, for saving the depth slices / voxel planes… That would be hard for us to do. Or saving out the shading per particle (which would be the workaround). Both of those are on AT’s wishlist for a reason, while saving normals, matte, depth, etc. are not.

EDIT: It’s also conceivable that the user will need to change a wide variety of things for the passes. Like the environment reflection might look better with a large voxel filter radius, while the alpha would look better with a small one. In the end, I suspect we’ll just need to get everything handled by scene states or RPM.

EDIT: The “Shading Channel” that would save the shading color to a data channel would be useful for compositing too. You could use that to mix various pre-rendered particle sets together in Krakatoa and save yourself a lot of memory and time, I should think.

A variation on this would be to feed it a set of Z depth images.

Simple case: one image. For each pixel of the image in screen space, if the particle is closer than the depth stored in that pixel’s intensity, it renders (or partitions to .prt’s!) to the layer 1 image, otherwise it renders to the layer 2 image. Then in Fusion, just do 1 over 2.

Slightly more interesting case: two images. For each pixel in screen space, find the min intensity between the two images and the difference between them (min + difference = max, so you could store it that way in this case). If the particle is closer than the min, render/save to layer 1. If the particle is further than the min but closer than the max, render/save to layer 2. If the particle is farther than the max, render/save to layer 3.

Even better: Buncha folders full of images (or IFLs). Rank the intensities at each pixel, test each particle’s depth, and render to the appropriately ranked layer.
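A rough Python/NumPy sketch of that ranking logic, generalizing the one- and two-image cases above; the array-based interface is purely illustrative, not a proposed Krakatoa feature.

```python
import numpy as np

def assign_layers(particle_depths, pixel_xy, depth_images):
    """Assign each particle to a layer by ranking the Z-depth images per pixel.

    particle_depths : (N,) camera-space depth of each particle
    pixel_xy        : (N, 2) integer screen coordinates (x, y) of each particle
    depth_images    : list of (H, W) arrays, one Z-depth image per boundary
    Returns an (N,) array of layer indices: 0 = in front of every boundary,
    len(depth_images) = behind all of them.
    """
    # Stack the boundary images and sort their depths per pixel (the "ranking" step)
    boundaries = np.sort(np.stack(depth_images, axis=0), axis=0)      # (L, H, W)
    x, y = pixel_xy[:, 0], pixel_xy[:, 1]
    per_particle = boundaries[:, y, x]                                # (L, N)
    # A particle's layer index = how many boundaries it lies behind
    return np.sum(particle_depths[None, :] > per_particle, axis=0)

# Two boundary images -> three layers, matching the "two images" case above
near = np.full((4, 4), 10.0)
far  = np.full((4, 4), 20.0)
depths = np.array([5.0, 15.0, 25.0])
pixels = np.array([[1, 1], [2, 2], [3, 3]])
print(assign_layers(depths, pixels, [near, far]))   # -> [0 1 2]
```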

So at this point, a Mr. Compositor could render out a set of image sequences of the matte objects they have in their comp, and pass them back to Mr. Krakatoa, who renders out the layers.

I was thinking about this in the context of Fusion, but at that point, you could alternatively export from Fusion an FBX file with the camera, lights, and the matte objects. If Krakatoa could then sort particles to layers by recording the ray intersections on that geometry (either directly or with the matte object rasterizing), then you could have another way of communicating the layers between the applications. It could go the other way, too, where you send Fusion the planes/cameras/render layers, but that’s not hard to do currently.

What would be swell, of course, is to do all the “sorting to layers and rasterizing to multiple image sequences” in one step: determine the number of layers needed, and then, as you sort the particles back-to-front, also sort them into the rasterizing buffers.

  • Chad

As for arbitrary pass support, something I have been longing for would be being able to output the passes to a single multilayer EXR instead of many separate sequences. Whilst this is cumbersome in other apps (so I hear, at least), it is plain great in Nuke. It would be great to be able to output arbitrary data from a KCM to a multilayer EXR via layer names.

Regards,
Thorsten

The question I have there is memory. How much does it take to store all those channels concurrently for all the particles? If your memory use were to quadruple (or more), you would only be able to render 1/4 of the particles.
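A back-of-the-envelope illustration of that scaling (the per-channel byte sizes here are assumptions based on typical PRT-style layouts, not measured Krakatoa numbers):

```python
# Rough estimate only; channel sizes are assumed, not measured.
base  = {"Position": 12, "Color": 6, "Density": 2}            # float32[3], half[3], half
extra = {"Normal": 12, "Velocity": 12, "TextureCoord": 12,
         "Lighting": 6, "Emission": 6}                         # hypothetical extra passes

per_particle_base = sum(base.values())                         # 20 bytes
per_particle_full = per_particle_base + sum(extra.values())    # 68 bytes

ram_gb = 8
print(int(ram_gb * 2**30 / per_particle_base))  # ~430 million particles, base channels only
print(int(ram_gb * 2**30 / per_particle_full))  # ~126 million if every extra channel is resident
```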

And the other issue is that you might not be able to (or want to) have the channels be concurrent. Like you MIGHT want to change the density so that it’s one setting for color, and another for normal and depth. So you’d have to decide which settings are used where. Likewise, you might want to change lighting or the filter or some such. That’s where scene states or RPM comes in handy.

Finally, you can always combine multiple separate passes into a single EXR as a post-process. This could be done in Deadline as a post-render task.
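As a hedged sketch of what such a post-process could look like, here is a merge of separate pass files into one multi-layer EXR using the OpenEXR Python bindings; the file names and layer names are made up, and all passes are assumed to share the same resolution and data window.

```python
import OpenEXR, Imath

# Hypothetical per-pass files produced by the render; the names are for illustration.
passes = {"beauty": "shotA_0042.exr",
          "normal": "shotA_normal_0042.exr",
          "zdepth": "shotA_zdepth_0042.exr"}

pt = Imath.PixelType(Imath.PixelType.FLOAT)
merged_channels, pixel_data = {}, {}

for layer, path in passes.items():
    src = OpenEXR.InputFile(path)
    header = src.header()                      # assumes all passes share one dataWindow
    for name in header["channels"]:
        out_name = "%s.%s" % (layer, name)     # e.g. "normal.R" shows up as a layer in Nuke
        merged_channels[out_name] = Imath.Channel(pt)
        pixel_data[out_name] = src.channel(name, pt)   # read converted to 32-bit float
    src.close()

dw = header["dataWindow"]
out_header = OpenEXR.Header(dw.max.x - dw.min.x + 1, dw.max.y - dw.min.y + 1)
out_header["channels"] = merged_channels

out = OpenEXR.OutputFile("shotA_multilayer_0042.exr", out_header)
out.writePixels(pixel_data)
out.close()
```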

It’s an interesting problem, of course: how do you reduce the computational repetition while maintaining a small memory footprint? Like you don’t want to sort over and over again, but depending on the KCMs or the lighting, you might have to.