With all the talk about Render Elements and Passes lately, we would like to ask you for some feedback on how a potential system (outside of the 3ds Max Render Elements system) should work from a user’s point of view.
Keep in mind that some types of “Render Elements” don’t make much sense in Krakatoa, while other data might not strictly count as a Render Element but could be saved easily.
Let’s first look at the data we already have and how we deal with it (or not).
*We can obviously save the RGBA output of Krakatoa to an EXR or other supported format. The output goes wherever the 3ds Max Render Dialog tells it to.
*We can optionally save a sub-folder of Attenuation Maps, one per light, for shadow casting in other renderers via light projection maps. We hope this system will be replaced with something better someday (an actual Krakatoa Shadows Generator on the Light’s list), but not in 1.5. Still, it demonstrates one possible approach: storing data in a consistently named sub-folder inside the output path.
*Now that we have added Matte Objects Rasterizing, we actually produce a rather nice Z-Depth pass from the matte geometry, and even a Normal Map from it. Since Krakatoa already supports loading 3rd party Z-Depth image sequences, it might be useful (and relatively trivial) to dump the Matte Depth Pass to images, possibly in its own sub-folder. These could be used for debugging (seeing what Krakatoa is “seeing” of the geometry), for reloading at a later point, or even for passing the sequence to others or to network rendering without the overhead of heavy Max files containing millions of polygons.
*Voxel Rendering processes the particles in lighting passes: for each light (plus one more for the environment map), a full pass is processed. We have already received requests to allow saving these passes as separate images, so lighting could be tweaked in post using the individual components in addition to the already composited result in the RGBA output. Where should these go? Into a sub-folder called something like VoxelLighting? Should the images just carry the name of the light as a suffix? Or should there be a sub-folder inside that sub-folder for each light, so the light passes don’t pile up in the same folder? (See the folder-layout sketch after this list for one possible scheme.)
*If/when we add global KCM support, the data from any channel could be dumped to an RGB image. For example, take the Normals channel, multiply it by some value, and write it to a custom-named channel that, being a vector, could be saved out as an EXR containing that data in its pixels. Or output the UV coordinates expressed as color for debugging purposes, or whatever you want. We could go the Gelato/RenderMan route and allow a pass to save any internal channel, where a KCM is either run first to populate that channel or the channel goes out raw. Alternatively, the KCM could be run on the loaded particles while they are being drawn and write directly to the frame buffer without storing the data in a dedicated channel, thus reducing memory requirements. Obviously, Krakatoa will have to load all particles once, then sort and re-render each pass using the desired data in place of color, and save the output somewhere. As with Blended Camera Distance and other currently implemented options, saving some passes might require additional settings such as Post Divide By Alpha or turning off lighting, so we would need a way to specify these per pass, too. (Sketches of the channel remapping and per-pass settings follow after this list.)
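To make the folder questions above more concrete, here is a minimal, purely illustrative sketch (in Python, not MAXScript or anything Krakatoa actually exposes) of how auxiliary pass paths could be derived from the main output path. The folder names (VoxelLighting, MatteDepth), the light name, and the function itself are hypothetical examples, not existing options:

```python
import os

def pass_output_path(base_output, pass_folder, frame,
                     light_name=None, per_light_subfolders=True, ext="exr"):
    """Derive a hypothetical auxiliary-pass path from the main render output path.

    base_output          -- the path set in the 3ds Max Render Dialog,
                            e.g. "C:/renders/shot01/shot01_.exr"
    pass_folder          -- sub-folder name for the pass type, e.g. "VoxelLighting"
    light_name           -- optional light name for per-light passes
    per_light_subfolders -- nest each light in its own folder instead of
                            appending the light name to the file name
    """
    out_dir, out_file = os.path.split(base_output)
    base_name = os.path.splitext(out_file)[0].rstrip("_")

    folder = os.path.join(out_dir, pass_folder)
    name = base_name
    if light_name is not None:
        if per_light_subfolders:
            folder = os.path.join(folder, light_name)   # .../VoxelLighting/KeyLight/
        else:
            name = "%s_%s" % (base_name, light_name)    # .../shot01_KeyLight_0042.exr
    return os.path.join(folder, "%s_%04d.%s" % (name, frame, ext))

# The two per-light layouts discussed above, plus a matte depth pass:
print(pass_output_path("C:/renders/shot01/shot01_.exr", "VoxelLighting", 42, "KeyLight"))
print(pass_output_path("C:/renders/shot01/shot01_.exr", "VoxelLighting", 42, "KeyLight",
                       per_light_subfolders=False))
print(pass_output_path("C:/renders/shot01/shot01_.exr", "MatteDepth", 42))
```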
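And here is a minimal sketch of the channel-to-color remapping such a pass would boil down to, again in Python with made-up stand-in data rather than actual Krakatoa channels. Packing a unit vector into the [0, 1] color range is the usual convention; the function names are hypothetical:

```python
import numpy as np

def normals_to_color(normals):
    """Remap unit normals from [-1, 1] per component into the [0, 1] color range,
    the usual convention for storing a vector channel as RGB pixels."""
    return np.asarray(normals, dtype=np.float32) * 0.5 + 0.5

def uvs_to_color(uvs):
    """Express UV coordinates as color for debugging: U -> red, V -> green, blue = 0."""
    uvs = np.asarray(uvs, dtype=np.float32)
    blue = np.zeros(len(uvs), dtype=np.float32)
    return np.stack([uvs[:, 0], uvs[:, 1], blue], axis=1)

# Stand-in per-particle data (two particles):
normals = [[0.0, 0.0, 1.0], [0.7071, 0.7071, 0.0]]
uvs     = [[0.25, 0.75], [1.0, 0.0]]

print(normals_to_color(normals))  # each row is an RGB color ready to be written out
print(uvs_to_color(uvs))
```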
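Finally, the per-pass overrides mentioned above (Post Divide By Alpha, lighting on/off, etc.) could be expressed as simple per-pass records. This is just one hypothetical shape for that data, with illustrative channel and option names, not a proposed UI:

```python
# Hypothetical per-pass settings; channel and option names are illustrative only.
custom_passes = [
    {"channel": "Normal",       "folder": "NormalPass",
     "post_divide_by_alpha": True,  "use_lighting": False},
    {"channel": "TextureCoord", "folder": "UVPass",
     "post_divide_by_alpha": False, "use_lighting": False},
]
```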
See below for more questions regarding this:
How much customization of these output paths do you feel is needed? We could just add checkboxes to enable whatever is supported and name the outputs automatically, or provide fields to specify the naming of the folders, or provide explicit full paths for each pass (but then you would have to specify each one every time, which might be more hassle)…
Many of these ideas are considered 2.0 material, but those that are trivial might make it into 1.5. Still, we don’t want to tackle this without knowing exactly what workflows, saving procedures, or user interface you would like to see implemented.
Let the discussion begin!