Oh, is it this?
ri.Option( "render", "DeepMatteLoadingPath", "holdout_matte_input.dtex" )
That is correct. To specify a holdout mask, use this:
ri.Option( "render", "DeepMatteLoadingPath", "holdout_matte_input.dtex" )
For holdout masks, this forum thread is very useful for understanding how they work (specifically the info on occluded/unoccluded particles): viewtopic.php?f=115&t=7561
To specify a per-light deep shadow, use this:
ri.Light( ..., "AttenMapLoadingPath", "deep_shadow_map_input.dtex" )
Keep in mind, you will need to specify a loading library for the dtex files, since it’s a closed format. Use this:
KrakatoaSR.SetDtexLibraryPath( "/some/path/libprman.so" ) #or other ".so" file
Hi Conrad,
will the deep data workflow be implemented in Krakatoa MX once it’s ready for SR?
Take care,
Dziga
It is part of Krakatoa’s core now. So, it would be relatively easy to add it to MX. It would just be a matter of adding it to the user interface. Are there any 3dsmax renderers able to output DTEX images? If there aren’t then it might not be that useful at the moment. We decided to do it for our SR and Maya versions because there are renderers such as RenderMan and 3Delight that are also capable of deep image outputs.
Maybe Bobo or Darcy (our Krakatoa MX guys) would be able to answer that question better.
Thanks Conrad. That’s a good point. I will ask the Chaos Group guys if they plan to use DTEX in the future (or if they are allowed to do so). For now, they are developing an OpenEXR 2.0 deep data implementation only and their own deep file format, as far as I know.
I might be thinking in the wrong direction, but wouldn’t it still be useful to write DTEX with MX for effects passes and render the rest of the scene geometry with RenderMan and Maya? Am I missing a major factor here that hinders deep compositing in Nuke with both outputs?
Best regards,
Dziga
From what I understand, DTEX requires licensing from Pixar because they hold the patent on it.
I’m not sure what OpenEXR does to avoid that patent. But if they do, then it’s a much better choice for anyone who is not interested in paying for a DTEX license.
Another option (since OpenEXR 2.0 is taking forever) would be to use another format (like OpenEXR 1.x) and just implement something yourself that avoids the patent claims and can still be “read” by any application. Would the data need interpretation in a specialized pipeline? Sure, but so do deep shadows/deep images anyway.
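For illustration, here is a rough sketch of what such a homebrew approach could look like, using the classic Python OpenEXR/Imath bindings: fixed depth slices stored as named alpha channels in an ordinary scanline EXR 1.x file. The channel naming, slice count, and pixel data are all invented for the example, and this is not Thinkbox's actual format:

# Hypothetical "deep-ish" EXR 1.x layout: one alpha channel per fixed depth slice.
import numpy as np
import OpenEXR
import Imath

width, height, num_slices = 640, 480, 8
float_chan = Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))

header = OpenEXR.Header(width, height)
header["channels"] = {"slice%02d.A" % i: float_chan for i in range(num_slices)}

pixels = {}
for i in range(num_slices):
    # per-slice accumulated alpha; a real exporter would splat particle coverage here
    alpha = np.zeros((height, width), dtype=np.float32)
    pixels["slice%02d.A" % i] = alpha.tobytes()

out = OpenEXR.OutputFile("homebrew_deep.exr", header)
out.writePixels(pixels)
out.close()

Any EXR 1.x reader can open a file like this, but a compositor still has to know the slice convention to treat it as deep data, which is exactly the "interpretation in a specialized pipeline" point above.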
dzisign,
Yes, DTEX intput and output out of 3dsmax probably would be useful for some pipelines. We haven’t seen a lot of interest in it just yet, but if VRay started outputting DTEX files, then we might see more interest.
Chad,
We actually have our own EXR deep image format. It’s used in Krakatoa MX for saving and loading shadow attenuation (“Krakatoa Shadows”). This format can also be used in place of DTEX in Krakatoa SR, but it’s not very good because our software is currently the only software that can read or generate these files.
Conrad, is your EXR 1.x deep format supported in SR? I didn’t see it anywhere. Everything just says dtex.
Yes, it is supported; you can pass in .exr files as parameters to the deep holdout mask and deep shadow map options. It’s not in the documentation because I didn’t think anyone would use it, and I didn’t want to overcomplicate things.
These two parameters can take dtex (RenderMan), dsm (3Delight), and exr (our own homebrew) files:
ri.Option( "render", "DeepMatteLoadingPath", "deep_image.exr" )
ri.LightSource( ..., "AttenMapLoadingPath", "deep_image.exr" )
Oh look, OpenEXR 2.0 is on github. Maybe things will move along soon.
So it is… Sweet! And I also see a DTEX to OpenEXR 2.0 converter.
This sounds like a pretty fantastic feature. Looking forward to seeing the implementation!
Where are we on this?
We have a need to output a “deep image” in a generic sense. We don’t care what format it is in; we could just take itty bitty Z slices as we have done in the past, but it’s obviously very inefficient to keep re-sorting just to cull everything outside the slice. But if you have something that is more efficient for doing voxel or deep image rasterization, we’d love to use it.
Deep image output is something we’ve talked about for a long time. We’d like to be early arrivals in the game in that regard. It’s not on the immediate to-do list, but it is a longer-term to-do. That is, after the Krakatoa MY and SR public releases.
We’re working on an SR implementation inside of a compositing application. We can convert from voxels to deep images and back, so even if we got the existing rasterization setup going that would help.
Currently you render out N buffers to do multithreading, right? If we had control over how many threads (and thus how many bins/slices you sort the particles into) to use, we could render out, for instance, a 256x256 image and make 256 slices. You wouldn’t have to have all of them in memory at the same time; you could do, say, 4 or 8 or however many cores you have at a time, then just return the image and move on. We would then convert the 256^3 array into the suitable output format.
The big benefit over the current workaround (prelighting in one pass, then culling into 256 bins ourselves for rendering) is that you’d be able to do it all in one go, only sorting the particles for rendering once and not having to do the prelighting pass.
Obviously splatting the particles to a voxel array or directly rendering deep images with raymarching would be awesome, but I figure the existing rendering could be adapted quickly and easily and give good results.
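To make the idea concrete, here is a small numpy-only sketch (not Krakatoa SR code; the particle data, resolution, and additive splat are invented for the example) of producing the sliced output described above, where each slice is an ordinary 2D splat of only the particles whose depth falls inside that slice:

import numpy as np

width = height = depth = 256
near, far = 0.0, 100.0
edges = np.linspace(near, far, depth + 1)

# fake particle data standing in for a lit particle set
rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(0, width, n)    # screen-space x
y = rng.uniform(0, height, n)   # screen-space y
z = rng.uniform(near, far, n)   # camera-space depth
alpha = np.full(n, 0.05, dtype=np.float32)

volume = np.zeros((depth, height, width), dtype=np.float32)
for i in range(depth):
    # cull to this slice; rescanning every particle per slice is the
    # particle-count-times-bin-count cost discussed later in the thread
    mask = (z >= edges[i]) & (z < edges[i + 1])
    np.add.at(volume[i], (y[mask].astype(int), x[mask].astype(int)), alpha[mask])

# volume is the 256^3 array that would then be converted to a deep/voxel format

In practice each finished slice could be written out and dropped instead of kept; the sketch holds the full array only to show the end result.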
Actually! The latest version of Krakatoa SR has the ability to set the number of threads (as of yesterday). I only released the Python version, and not the C++ version so far, but it is in both. If you need the C++ API, I can post that also.
Might solve the problem in the short term, yes. Can this build of Krakatoa return each of the resulting images?
We don’t plan on exposing the data passed internally between various threads. I can see how you might find that useful, but ideally I would like users to be separated from the implementation details. Our threading algorithm does not divide up slices of particles identically (or evenly) between runs, so exposing the output of individual threads seems troublesome to me.
I have been putting a lot of thought into exposing an interface in the API to write your own deep image data. That might be more useful for you.
Even if you had a “depth binning” mode that we could specify the bins for, that would be useful.
So Krakatoa would get the particles and light them. Then for rasterization you would allow us to specify the depth bins we would like the particles sorted to. Then you would sort, bin, and rasterize each bin and return an image.
We can do all of this now, but since we sort all of the particles for each bin, it’s a huge performance issue: the cost is particle count multiplied by bin count. If the binning and sorting happened at the same time, it would scale linearly with the particle count and be effectively constant in the bin count.
The advantage is that you don’t need to change anything about how the rasterization happens, you only have to change how the binning happens.
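For what it’s worth, here is a minimal numpy sketch of the single-pass binning idea, with caller-specified depth bin edges (the function name and the use of numpy are assumptions for illustration, not anything from the Krakatoa API):

import numpy as np

def bin_particles_by_depth(z, bin_edges):
    # Return, for each bin, the indices of the particles that fall into it.
    order = np.argsort(z)                        # one global depth sort
    bins = np.digitize(z[order], bin_edges) - 1  # one pass to assign bins
    # split the sorted indices at the boundaries between consecutive bins
    cuts = np.searchsorted(bins, np.arange(1, len(bin_edges) - 1))
    return np.split(order, cuts)

# usage: 256 caller-specified bins between the near and far planes
z = np.random.default_rng(1).uniform(0.0, 100.0, 1_000_000)
edges = np.linspace(0.0, 100.0, 257)
per_bin = bin_particles_by_depth(z, edges)

The sort and the bin assignment each touch every particle once, so the total cost stops multiplying by the bin count; each returned index group could then be rasterized to its own slice image exactly as before.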
Ah, I see what you mean. I will try to support this idea when we start making changes to support deep image outputs.