Hello Everyone,
First, thank you, Bobo and David, for letting me be a part of the beta for Krakatoa.
All you guys here have been working with this for a while now, and I’m still playing catch-up. I’ve finally figured out the workflow for Krakatoa, which is different from the way I usually work, but it makes a lot of sense. I’ve read every posting here on this forum, but I haven’t found the answer to this question: what determines when to use partitioning? Obviously you would need it to increase the particle counts in your render to get to the tens and hundreds of millions of particles. But are there any other criteria to gauge when partitioning is useful, say for performance reasons?
I hope I’m making sense with this question.
Thanks in advance,
Stephen Lebed
Hi Stephen,
Welcome to the Eruption!
There are a couple of things to keep in mind (we have a FAQ on the Manuals page, which we will be updating constantly, that mentions some of them):
http://support.franticfilms.com/manuals/index.php?title=Krakatoa:Frequently_Asked_Questions#Partitioning
First of all, PFlow has both speed and memory limitations, like any other complex system. Obviously, when you want to render directly from PFlow, both Max and Krakatoa have to share the same memory, so 10 million particles would be in memory twice - once in PFlow and once in Krakatoa.
This is where saving to PRT files (not necessarily Partitions) comes in - when Krakatoa is saving particles, it is NOT loading them into memory, but streaming them from PFlow to disk. Thus you can calculate and save a lot more particles to disk than you could calculate AND render in Krakatoa using direct PFlow-to-Krakatoa rendering.
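To make the streaming idea a bit more concrete, here is a tiny Python sketch - purely illustrative, it has nothing to do with the real PRT format or Krakatoa's internals. The streaming writer only ever holds one particle in memory at a time, while a direct render has to keep the whole cloud resident:

def simulate_particles(count):
    """Stand-in for PFlow handing over particles one at a time."""
    for i in range(count):
        yield (float(i), 0.0, 0.0)  # x, y, z position

def save_stream(filename, particles):
    """Streaming save: only one particle is in memory at any moment."""
    with open(filename, "w") as f:
        for x, y, z in particles:
            f.write("%f %f %f\n" % (x, y, z))

def render_direct(particles):
    """Direct rendering: the whole cloud has to fit in RAM at once."""
    cloud = list(particles)
    return len(cloud)

save_stream("particles_0000.txt", simulate_particles(100_000))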
Once you start saving to disk, you will notice that updating the PFlow itself often takes as long as dumping the data to disk. If you are rendering with Volumetric shading, rendering 1 million particles will give you a similar density appearance to 20 million, but will calculate 20 times faster. So it is a good idea to quickly save one million particles to disk as 1 of 20 partitions, load the PRT sequence, and play with shading, transforms, deformations, culling and so on. If it turns out that the 1 million you saved works great, you can always dump 19 more partitions from the same PFlow, load them into the PRT Loader, and you have the desired density.
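Putting rough numbers on that workflow (all values below are made up for illustration, not measurements):

particles_per_partition = 1_000_000
partition_count = 20
minutes_per_partition = 5  # assumed time to simulate and dump one partition

preview_particles = particles_per_partition                    # 1,000,000 for look dev
final_particles = particles_per_partition * partition_count    # 20,000,000 for the final look

preview_minutes = minutes_per_partition                         # first previewable data in ~5 minutes
full_dump_minutes = minutes_per_partition * partition_count     # ~100 minutes on one machine

print(preview_particles, final_particles, preview_minutes, full_dump_minutes)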
For users of Deadline, this adds another bonus - having 20 partitions means you could calculate them on up to 20 CPUs. Depending on the number of slaves, memory and CPU count, you could use 5 to 20 network machines to calculate the same particle amount in as little as 1/20 of the time it would take a single workstation.
And then there is a limit to how many particles PFlow can process without crashing (especially on 32-bit systems with 2 GB of RAM). If calculating 50 million in a single go is not technically possible, calculating 25 partitions of 2 million each surely is.
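A back-of-the-envelope sketch of the last two points - the bytes-per-particle and minutes-per-partition figures below are pure assumptions, just to show the shape of the math:

import math

total_particles = 50_000_000
partition_count = 25
per_partition = total_particles // partition_count   # 2,000,000 each

# Memory: assuming roughly 64 bytes per particle in PFlow (an assumption),
# a single 50M run would need ~3 GB for PFlow alone - hopeless on a 2 GB box -
# while each 2M partition needs only ~120 MB.
bytes_per_particle = 64
single_run_mb = total_particles * bytes_per_particle / 2**20
per_partition_mb = per_partition * bytes_per_particle / 2**20

# Farm time: with N slaves, wall-clock time is roughly
# ceil(partitions / slaves) * time per partition.
minutes_per_partition = 10
for slaves in (1, 5, 25):
    wall_minutes = math.ceil(partition_count / slaves) * minutes_per_partition
    print(slaves, "slaves:", wall_minutes, "minutes")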
Another level of flexibility added by the PRT Loader is that you can load just one of N partitions to work with, while still getting the full spatial distribution. For example, if you want to render “only” 1 million particles, you could save one sequence of 1 million and the viewport will show 1% by default, or 10K particles. But if you switch to “Load Every Nth” mode, the PRT Loader would have to read all 1 million particles just to show 10K. If you had instead saved 10 partitions with 100K each, you could load one partition and potentially show 100% in the viewport - the PRT Loader would read 10x less data, so it would refresh much faster while still showing you a fully sampled cloud.
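In numbers (again purely illustrative):

# Option A: one sequence of 1,000,000 particles, "Load Every Nth" at 1%.
read_a = 1_000_000        # the loader has to walk the whole sequence
shown_a = read_a // 100   # 10,000 particles in the viewport

# Option B: 10 partitions of 100,000 each, load a single partition at 100%.
read_b = 100_000
shown_b = read_b          # 100,000 particles in the viewport

print("B reads %dx less data and shows %dx more particles"
      % (read_a // read_b, shown_b // shown_a))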
These are just some thoughts; there might be other reasons why you would want to use partitions.