Wishlist: Render memory estimate

If we know how many points we are loading (from PRTs, of course), what data we are loading, and what we are doing in the render (lighting or not, sorting method, frame buffer size, DOF sampling, etc.), would it be possible to get an estimate of the RAM required to make a rendering? I’m noticing that I sometimes accidentally do 9 GB renders, and that last 1 GB causes my machine to go really slow.



Right now I’m doing test renders with 10% of the points, looking at Task Manager to see what my VM size is, and multiplying by 10, but I figure the estimate would save me from having to make that 10% render.


  • Chad


Hi Chad,



Currently, the memory footprint of a particle is 38 bytes (this will hopefully change in 1.1.0, where you might be able to squeeze more into the same memory if some channels are not needed). The image buffers are probably insignificant in comparison, and if you are using FF Threaded sorting, the sorting should not eat much memory either.

The Particle Analyzer Utility that ships with Krakatoa already does this multiplication and shows the maximum RAM your particles would require. Since we don’t know how much memory Max and Windows themselves are using, you should treat this value as an estimate only, but in most cases it is rather precise.

Just multiply the particle count by 38, divide by 1024^3, and you will get the GB needed.



For example, 42 million particles need



42000000*38.0/1024^3 = 1.48639 GB



You might add 10% for safety, but that should be a good guideline.
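
For anyone who would rather script this than open the Particle Analyzer, here is a minimal sketch of the same arithmetic in Python. The 38-byte figure and the optional 10% safety margin are taken from the numbers above; the function name is just for illustration:

BYTES_PER_PARTICLE = 38   # fixed per-particle footprint in Krakatoa 1.0.x
SAFETY_MARGIN = 0.10      # optional 10% headroom, as suggested above

def estimated_render_gb(particle_count, margin=SAFETY_MARGIN):
    """Rough RAM estimate for holding the particles, in GB."""
    raw_bytes = particle_count * BYTES_PER_PARTICLE
    return raw_bytes * (1.0 + margin) / 1024 ** 3

# 42 million particles: ~1.486 GB raw, ~1.635 GB with the 10% margin
print(estimated_render_gb(42000000, margin=0.0))
print(estimated_render_gb(42000000))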



Cheers,

Bobo

Ah, so the memory is fixed per particle… Interesting.



So the number at the bottom of the Particle Loaders rollout should be an OK guide. Thanks.


  • Chad


http://www.franticfilms.com/software/support/krakatoa/requirements.php




This is the case in 1.0.x. It was done for simplicity. Right now, this is being reworked because the PRTs already support arbitrary named channels and the renderer should allocate channels only if they are actually used.



For example, the Velocity channel is used by the Motion Blur feature of the renderer. If no Motion Blur is required, we are wasting 6 bytes per particle. If no normal shading is required, that is 6 bytes more. If a cloud of particles with a single color and constant density were rendered, the Color and Density channels could be eliminated and the global override color and a global density assumed for each particle, reducing the memory footprint to just positions and lighting.



So in theory, a single-color / constant-density rendering with no motion blur and no specular highlights could be calculated with 18 bytes per particle instead of 38, allowing more than twice as many particles to be loaded…



Obviously, we would have to tweak our memory evaluation methods to multiply by the actual memory footprint instead of 38, should this change really make it into the product.
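
As a rough sketch of how such a channel-aware estimate might look, here is the idea in Python. The per-channel byte counts below are an assumption chosen only to be consistent with the figures in this thread (38 bytes with everything enabled, 6 bytes each for Velocity and Normal, 18 bytes for a positions-plus-lighting render); they are not the actual Krakatoa channel layout:

# Hypothetical per-channel costs that add up to the 38-byte and
# 18-byte figures quoted above -- not official numbers.
CHANNEL_BYTES = {
    "Position": 12,   # always needed
    "Lighting": 6,    # kept even in the minimal case described above
    "Velocity": 6,    # only needed for motion blur
    "Normal":   6,    # only needed for normal/specular shading
    "Color":    6,    # replaceable by a global override color
    "Density":  2,    # replaceable by a global density
}

def bytes_per_particle(motion_blur=True, normals=True,
                       per_particle_color=True, per_particle_density=True):
    size = CHANNEL_BYTES["Position"] + CHANNEL_BYTES["Lighting"]
    if motion_blur:
        size += CHANNEL_BYTES["Velocity"]
    if normals:
        size += CHANNEL_BYTES["Normal"]
    if per_particle_color:
        size += CHANNEL_BYTES["Color"]
    if per_particle_density:
        size += CHANNEL_BYTES["Density"]
    return size

print(bytes_per_particle())                               # 38
print(bytes_per_particle(False, False, False, False))     # 18

A memory estimate would then multiply the particle count by this per-render footprint instead of the fixed 38.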

That’s promising. Getting a 10% to 100% improvement in memory efficiency would be really welcome here. We’re banging our heads against 8 GB now, so we’re looking to move some of the render farm machines to 16 GB.


  • Chad