Are there any plans to add a stereo rendering mode that would load the particles once and then render an image side by side (similar to V-Ray or finalRender)? It could work either by extrapolating left and right cameras from the main camera, or by selecting two cameras in the scene for left and right (great when we get two cameras exported from tracking).
The problem is that you’d need 2x more memory. I don’t see the benefit there. Unless you mean that you would render out the left, then the right eye and composite them together before saving out the image? Wouldn’t save you much, and would probably make more sense as a Deadline feature than a Krakatoa one.
- Chad
I could imagine saving two or more cameras in one go: load the particles once, light them, then sort and render the beauty pass and all render elements from one eye, then sort and render from the other. That would save the loading and lighting for the additional cameras.
That is not “native” stereo rendering, though. V-Ray’s “native” approach, which reuses a shading map and so on, does not produce results identical to brute-force rendering, and there is nothing in Krakatoa that would benefit from such an approach.
You could indeed create a Deadline MAXScript job which enables the PCache and LCache in network mode, loads and renders one camera, then renders the other camera out of the cache, then clears the cache and moves on to the next frame. I could look into writing a prototype like that in the coming weeks if you are interested…
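A rough MAXScript sketch of the per-frame task logic described above. Note that the cache property names and the FranticParticles interface calls used here are assumptions for illustration, not verified API; check the Krakatoa scripting reference for the exact spellings:

```maxscript
-- Hypothetical sketch: render one frame from two cameras,
-- reusing the particle and lighting caches for the second eye.
-- (Property names below are assumed, not verified.)
fn renderStereoFrame leftCam rightCam leftFile rightFile =
(
    -- turn on the caches so loading/lighting survives the first render
    FranticParticles.SetProperty "UseParticleCache" "true"
    FranticParticles.SetProperty "UseLightingCache" "true"

    -- first eye: full load + light + sort + draw
    render camera:leftCam outputFile:leftFile vfb:off

    -- second eye: particles come out of the caches,
    -- so only sorting and drawing are repeated
    render camera:rightCam outputFile:rightFile vfb:off

    -- disable the caches before moving on, so the next frame
    -- loads fresh particle data
    FranticParticles.SetProperty "UseParticleCache" "false"
    FranticParticles.SetProperty "UseLightingCache" "false"
)
```

A Deadline job would wrap this in a loop over the frame range, advancing the slider time before each call.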
That’s assuming that the shading/lighting is not view dependent. Which in many cases is a valid assumption.
Still seems more like a Deadline thing though, especially if you are planning on integrating other non-Krakatoa render passes, too. If Deadline could make rendering from multiple cameras to multiple outputs a single machine task, that would be useful for stuff beyond Krakatoa.
- Chad
I was only interested because in V-Ray and finalRender they can save quite a bit of time by sharing info like GI samples and shadow map creation. If there is no such gain, or if it is minimal, it might not be worth the effort. Also, for me, if it doesn’t work when using RPM to manage passes, it wouldn’t be too useful.
We used stereo rendering in Gelato years before V-Ray and FR added the feature, so we have some experience with the technique. For surface shading it saves time because the shading is performed relative to a single camera and the data is then reused for the left and right eyes. In Krakatoa, the KCMs and volumetric lighting could be processed once and the drawing could be done for the left and right eyes from the same data, so it WOULD cut down the time for the second eye by skipping the reloading and relighting.

Now the question is: should Krakatoa do this internally the way it does Render Elements (which are basically a similar technique, where Krakatoa draws into multiple frame buffers, but using the same camera), or should it be scripted in the UI with an unlimited number of cameras, leveraging the particle caches? I am not sure how it would interact with RPM, since the camera management itself would be in Krakatoa and not in RPM.
I did a simple test: created a box and filled it with 10MP using a PRT Volume, created two Spot Lights to illuminate it, and created two cameras. Rendering from the left camera took 13.3 seconds. Then I enabled the LCache and PCache and rendered the right camera in 2 seconds. So brute-force rendering of the left and right cameras would have taken 26.6 seconds; instead I got both rendered in just 15.3 seconds (1.73 times faster). Adding more lights and more processing overhead to the first camera (including maps, KCMs etc.) will increase the processing time of that pass, but the second eye will always process in the same time given the same particle count of 10 million, so the speed up can be even bigger with complex scenes.
I then increased the particle count to 100MP, with the rest of the setup unchanged. Brute-force loading/lighting/shading for the left camera took 99.187 seconds; the right camera from the cache took only 15.5 seconds. So brute-force rendering of both cameras would have taken 2*99.187 = 198.374 seconds, but with the cached particles the total time was only 99.187+15.5 = 114.687 seconds, which is 1.72 times faster. So you see the speed up stays about the same regardless of the particle count, as long as the rest of the scene is the same.
Obviously, the speed up will approach 2x without ever reaching it (unless the second camera takes 0 seconds to draw, which makes no sense), but you can cut a significant part of the render time. Note that camera-dependent effects like environment reflections are calculated at render time, so they still come out correct from cached particles. The only case I can think of where this would not work is a camera-dependent KCM applied to the particles: since we skip updating the particle data for the second eye, the KCM would not be re-evaluated with the correct camera info.
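To put a number on that bound: if the first eye takes t_full seconds (load + light + draw) and each extra eye takes t_cached seconds (draw only), the two-eye speed up is 2*t_full/(t_full + t_cached), which approaches 2x as t_cached shrinks. A quick sanity check against the two tests above:

```maxscript
-- speed up of cached two-camera rendering over brute force:
-- brute force = 2 * t_full, cached = t_full + t_cached
fn stereoSpeedup t_full t_cached = (2.0 * t_full) / (t_full + t_cached)

stereoSpeedup 13.3 2.0    -- 10MP test
stereoSpeedup 99.187 15.5 -- 100MP test
```

Both evaluate to roughly 1.7x, matching the measurements above.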
It is surely worth exploring. We can do it “the right way” (internal to Krakatoa with just the cameras provided by the scripted UI), or “the Bulgarian way” (script the whole thing). I will wait to see what Darcy thinks…