Hi, we use the current Python API to render scenes.
If we render a 2K (2475 x 1750) image, the renderer needs about 8 GB of RAM, which is okay.
But if we double the image output resolution (4950 x 3500), the renderer needs more than 64 GB of RAM. How much it really needs is impossible to say, because we do not have any machines with more than 64 GB of RAM.
Is there a reason why the memory usage is so high?
That is unexpected to me. I will run a test here and take a look at memory usage. It may be the case that we are being inefficient at allocating memory for output images. If this is the case, I’ll try fixing it. If I can’t find a reason for the memory usage, I’ll have to take a look at the scene you’re using.
Thanks for reporting this.
Okay, I’ll try to collect a somewhat smaller scene; the current one uses a several-hundred-MB particle file.
I have an idea of the possible cause.
Krakatoa is highly multi-threaded. During the render calculation, it has to allocate output frame memory for each thread that is launched. The number of threads created depends on the number of CPU cores available.
So in changing the resolution from 2K to 4K, you effectively change the memory allocated for the output image from 99 MB × numThreads to 396 MB × numThreads. This number also increases the same way if you have a “z depth” pass or “normals” pass, etc.
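To make the scaling concrete, here is a minimal back-of-the-envelope sketch in plain Python (nothing Krakatoa-specific). The 24 bytes-per-pixel value and the thread/pass counts are assumptions; 24 bytes per pixel is simply what reproduces the 99 MB and 396 MB figures above, and the real per-pixel size depends on which channels are stored.

# Rough estimate of per-thread output buffer memory.
# bytes_per_pixel=24 is an assumption chosen to match the 99 MB / 396 MB numbers above.

def frame_buffer_mb(width, height, bytes_per_pixel=24):
    # Memory for one copy of one output pass, in MB.
    return width * height * bytes_per_pixel / (1024.0 * 1024.0)

def total_buffer_mb(width, height, num_threads, num_passes=1, bytes_per_pixel=24):
    # Each worker thread holds its own copy of every output pass.
    return frame_buffer_mb(width, height, bytes_per_pixel) * num_threads * num_passes

print(total_buffer_mb(2475, 1750, num_threads=16))                # ~1.5 GB at 2K on 16 threads
print(total_buffer_mb(4950, 3500, num_threads=16))                # ~6.2 GB at 4K on 16 threads
print(total_buffer_mb(4950, 3500, num_threads=32, num_passes=3))  # ~37 GB with 32 threads and 3 passes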
So, basically… what I’m describing is a long-standing issue in Krakatoa. To get a better understanding of what your scene is doing, can I get some more info:
-How many CPU cores do you have (number of threads)?
-What render element passes are you rendering (z depth, normal, etc.)?
Other things that will affect memory usage (not as important in this case, though; see the rough estimate sketched after this list):
-How many particles are in your scene?
-How many bytes-per-particle (this can be determined by setting the logging level to “debug”, and viewing what it says for “Using particle layout”)
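For completeness, here is a similar rough sketch for the particle storage itself, separate from the frame buffers above, assuming it scales as particle count times bytes-per-particle. The example numbers are placeholders, not values from your scene; substitute your actual particle count and the bytes-per-particle reported on the “Using particle layout” line of the debug log.

def particle_storage_gb(num_particles, bytes_per_particle):
    # Storage for the particle data itself, in GB.
    return num_particles * bytes_per_particle / (1024.0 ** 3)

print(particle_storage_gb(200000000, 32))  # e.g. 200 million particles at 32 bytes each -> ~6 GB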
It does seem odd to me that it would need 64 GB of memory in your case, though. Hopefully we can get to the bottom of this.