What would seriously rock would be multichannel EXR support, similar to Render Elements. Especially now that we have the KCE, it would be incredibly cool to be
able to output channels as layers in a multi-layer EXR.
Is that possible at all? (a.k.a. something I can at least dream of)
*Now that we have KCM support in PRT Loaders, we would like to add Global-level KCMs which operate on the complete particle cloud after loading.
*This means that Z-Depth, Velocity, Normals, Colors etc. could be manipulated globally, regardless of the particle source, directly from memory and without reloading (if PCache is used).
*This will also replace the current “Color Override” controls in the Main Controls rollout. If you want a single color or texture on all particles, you will be able to load a preset flow or create a new one to get exactly the color you want, or to dump Z-Depth, camera distance, a texture, or some other channel into the Color channel. This will be a poor man’s Render Elements system, since you would have to redirect your data into the Color channel and render again and again without pass management, but you could use the KCE’s power to get just about anything.
*There are some other things we want to do with KCMs, but we will talk about them when the time comes.
–Around Krakatoa 2.0 Timeframe:
*For Render Elements, you should be able to read any channel value from the already loaded particle cloud and output it to an image file without overriding the Color channel.
*Once this is possible, outputting multiple channels into a single EXR file might be an interesting option.
These are just preliminary ideas. Our plans and timeframes might change.
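The “poor man’s Render Elements” idea above — redirecting an arbitrary channel into Color and rendering a grayscale pass — can be sketched in a few lines. This is a minimal illustration only, assuming particles are represented as plain dicts; the function name and the normalization parameters are hypothetical, not Krakatoa API:

```python
def redirect_channel_to_color(particles, channel, lo, hi):
    """Copy an arbitrary scalar channel into Color, normalized to [0, 1].

    A stand-in for a global KCM that turns e.g. Z-Depth into a
    grayscale color pass: one such redirect + render per "element".
    """
    span = (hi - lo) or 1.0
    for p in particles:
        v = max(0.0, min(1.0, (p[channel] - lo) / span))
        p["Color"] = (v, v, v)  # grayscale encodes the channel value
    return particles
```

Each pass then requires rewiring Color and re-rendering, which is exactly the "render again and again without pass management" limitation described above.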
From a workflow standpoint, the multichannel EXR doesn’t make a lot of practical sense. It eats up memory, increases I/O, and adds complexity. Multichannel EXRs can easily be constructed as a post process from intermediate EXR files if it’s really needed.
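The post-process merge mentioned here boils down to renaming channels into the OpenEXR layered-channel convention, where channels in a layer are named "layer.channel" (e.g. "velocity.R"). A pure-data sketch, with hypothetical function and pass names; actually writing the file would additionally require an EXR library:

```python
def merge_passes(passes):
    """Merge separate render passes into one multi-layer channel dict.

    `passes` maps a pass name to its {channel: pixel_data} dict; the
    result is one flat channel map using OpenEXR's "layer.channel"
    naming, ready to be written into a single multi-layer EXR file.
    """
    merged = {}
    for layer, channels in passes.items():
        for chan, pixels in channels.items():
            merged[f"{layer}.{chan}"] = pixels
    return merged
```

The trade-off Chad describes still applies: the merged file carries every pass's pixel data, so memory and I/O grow with the number of layers.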
@chad: Well, currently, to achieve the same thing I have to re-render as many times as there are channels I need. If the channels could be in a multi-layer EXR, I could render once and have them all. This is what we do with VRay at the moment, and it’s a breeze. Using loads of MMREs is so much more convenient and faster than manually rendering matte passes. We do have all sorts of post-processing tools to change compression, split, join etc. with EXRs, so creating them isn’t really the problem. I was rather after making rendering a load of passes easier and quicker.
@Bobo: Sounds great. Especially the Global KCM sounds very, very cool!
But each one of those passes is storing data per-point. Your memory consumption could easily double, triple, or worse, depending on how many passes there were.
Krakatoa rasterizes crazy fast. The slow part is the loading and modifying and sorting. If we speed that up, or allow that to be cached more effectively, then you would see all of your renders go faster, not just the ones that require multiple passes.
And I’m not sure that passes make a lot of sense in Krakatoa. At least not when the renders are all “pre-occluded”. Without the ability to split the renders out by depth, what’s the point of having a depth buffer, or velocity? Unless your particles have very high density and nearest neighbor sampling… But even then, you’re going to be combining multiple particles per pixel…
I’m saying I’d rather see a proof of concept before the actual work of building the EXR I/O happens, to see if it’s even useful. There are a lot of things that need to be fixed first; those fixes would reduce the compositing we currently resort to because we can’t do occlusions or mix elements in a practical manner.
You’ve got some good points there I hadn’t really thought about so far. And yes, of course, filtering is a big issue. Velocities did work very nicely for vector blur though… hence the idea.
I suspect they would for cases where all the particles are moving in the same direction. But if the front particles are moving perpendicular or opposite from the back particles, then it would not. However if you could render the front particles separately from the back particles, then that wouldn’t be an issue.
But if the particles are moving randomly, then you’d be screwed no matter what, and would have to render at high temporal frequency or use the built-in motion blur.