We’re trying to render 1,200,000,000 particles and we’re running into RAM issues, so we would like as much information as possible on Krakatoa render optimization.
1 - Is there a way to render Krakatoa in batch mode without opening 3ds Max?
2 - Is it possible to have something similar to a mental ray proxy (loaded at render time, and only for as long as it is needed)?
3 - We’ve noticed memory spikes, and we’re not 100% sure, but they seem to occur while computing the PRT Cloner. Does this make any sense?
4 - Krakatoa internally allocates one frame buffer for every CPU used to render (from what I’ve read). Can this value be measured per CPU? Or even better, if I knew the exact value per channel and per particle, I could do the math and know exactly how much RAM is needed to render x particles, right?
5 - Is there any converter from PRT to Alembic?
You can render in slave mode over the network using Deadline, though I’m not sure how much memory you would save from that. 3ds Max command line rendering is similar (it uses Backburner internally), so that should be possible, too.
It would not help. PRTs are kinda like that, but they always get loaded fully into memory, because skipping particles would affect the volumetric lighting. With meshes, you can skip a polygon if it is never seen; with volumetric rendering, you cannot skip a particle, because it could be shadowing any of the other particles and skipping it would change their look. So the PRT Loader is as close to a proxy as you can get.
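Here is a toy illustration of that point; it is a minimal sketch using a simple exponential-attenuation model with made-up densities, not Krakatoa's actual shading code:

```python
import math

def transmittance(densities):
    """Fraction of light surviving past a list of particle densities,
    using a simple exponential-attenuation model."""
    return math.exp(-sum(densities))

# Hypothetical densities of particles between the light and a shaded particle:
along_ray = [0.2, 0.5, 0.1]
print(transmittance(along_ray))       # ~0.45
print(transmittance(along_ray[1:]))   # "skip" the nearest particle -> ~0.55
```

Skipping even one particle changes the light reaching everything behind it, which is why a streaming proxy that drops unseen particles is not an option.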
There are several things that can look like spikes. Whenever more memory is needed, the particle data stream is copied internally to double its size, which can show up as a spike. The PRT Cloner’s Repopulation obviously has to allocate memory for a voxel grid to distribute the channels into and to create the new particles from. Various image buffers are also allocated during lighting and rendering.
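To make the doubling spike concrete, here is a minimal sketch; the growth-by-doubling behavior and the 26-bytes-per-particle figure are illustrative assumptions, not Krakatoa's actual allocator:

```python
# During a reallocation, both the old buffer (N bytes) and the new
# double-size buffer (2N bytes) exist at once, so the transient peak
# is roughly 3x the pre-copy size.

BYTES_PER_PARTICLE = 26  # hypothetical; depends on your channel layout

def doubling_peaks(target_particles, initial_capacity=1_000_000):
    """Yield (capacity, steady_bytes, peak_bytes_during_copy) per doubling."""
    capacity = initial_capacity
    while capacity < target_particles:
        steady = capacity * BYTES_PER_PARTICLE
        yield capacity, steady, steady + 2 * steady  # old + new buffer
        capacity *= 2

for cap, steady, peak in doubling_peaks(1_200_000_000):
    print(f"{cap:>13,} particles: {steady/2**30:7.2f} GiB steady, "
          f"{peak/2**30:7.2f} GiB transient peak during the copy")
```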
The frame buffers obviously need data for each RGB pixel, for the Alpha channel, and for any other Render Elements that might have been requested. I believe the data is actually stored as float32, so you get 4 bytes per channel. Just for RGBA, that is 16 bytes per pixel. At a high resolution of 10K x 10K, that is 10,000 x 10,000 x 16 bytes = 1,600,000,000 bytes, or 1.49 GB. If you have 32 threads, this will eat 47.68 GB of RAM just for the final pass image buffers. If you are using Matte Objects, we render internally a Z-Depth buffer, which adds to the memory consumption. Also, if the Matte Objects rendering is set to use a Sampling value of 2, this doubles the depth map resolution along each axis (4 times more data); a value of 3 requires 9 times more space, and so on.
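If you want to plug in your own numbers, here is a small back-of-the-envelope calculator reproducing the arithmetic above; the one-buffer-per-thread count follows this post, while the float32 depth buffer is an assumption:

```python
BYTES_PER_FLOAT32 = 4

def framebuffer_bytes(width, height, channels=4, threads=1):
    """Bytes for float32 image buffers: one buffer per rendering thread."""
    return width * height * channels * BYTES_PER_FLOAT32 * threads

def matte_depth_bytes(width, height, sampling=1):
    """Z-Depth buffer for Matte Objects; Sampling scales each axis.
    Assumes a single float32 depth value per sample."""
    return (width * sampling) * (height * sampling) * BYTES_PER_FLOAT32

one = framebuffer_bytes(10_000, 10_000)              # RGBA at 10K x 10K
print(f"one buffer:  {one/2**30:.2f} GiB")           # -> 1.49
print(f"32 threads:  {framebuffer_bytes(10_000, 10_000, threads=32)/2**30:.2f} GiB")  # -> 47.68
print(f"matte, s=3:  {matte_depth_bytes(10_000, 10_000, 3)/2**30:.2f} GiB")
```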
Not that I know of, but there is the PartIO project on GitHub, which supports nearly every particle format, including PRT. I don’t know if it supports Alembic at this point, but it just might…
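If you end up trying PartIO, a conversion could look roughly like this; this is a hedged sketch assuming you have built PartIO's Python bindings, the file names are placeholders, and whether an Alembic writer exists depends on your build:

```python
import partio

# Read a PRT file and inspect what channels it carries.
src = partio.read("particles.prt")  # placeholder path
print(f"{src.numParticles():,} particles, {src.numAttributes()} channels")
for i in range(src.numAttributes()):
    attr = src.attributeInfo(i)
    print(f"  {attr.name}: {attr.count} component(s)")

# Writing converts to whatever format the extension implies, as long as
# that writer was compiled into your PartIO build (BGEO shown here).
partio.write("particles.bgeo", src)
```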