I’m currently saving particles in 12 partitions of 5 million each, for 130 frames. But it’s taking way too long - around 4+ hours to generate a single partition.
Does anyone have any hints or tips about speeding up saving particles to a file sequence?
Thanks!
Adam
Are you saving from cache, or do you need to recalculate the particles before saving?
- Chad
Adam,
The saving itself should be fast. The processing of the particle system is usually the slow part. Are you using PFlow? What operators do you have in it?
The main way to speed things up is to use Deadline on a render farm with as many PCs as you need partitions. In that case, all 12 partitions would be done in 4+ hours. Everything else is outside of Krakatoa's control - if you cannot optimize your system to update faster, there is not much you can do.
Cheers,
Bobo
Hey guys,
You’re quite right Bobo - the saving itself is fast, the pflow particle system processing is super slow. The operators are just a bunch of turbulence forces though - nothing overly outrageous.
I’d thought Box #3 disk caching might help, but it’s crashing max right now - don’t know if it’s bashing into some internal max limit or something?
I do have 12+ machines I can work with here, so I’ll need to go the multiple machines route at some point. But I’d really like to speed this guy up somewhat first - got a bunch of extra particle passes that need to be “krak’d out” after this one…
I think the most effective method in this case will be to reduce the particle count of each partition, then possibly crank up the number of partitions to compensate. I’d give one million particles per partition a shot.
I suspect that the Box#3 disk cache may be slower or produce bigger cache files than the Krakatoa particle saving, because it stores more data about the particle system itself, whereas Krakatoa just stores what it needs to render.
Cheers,
Mark
If you have Box#3, you could set say 1,000,000 to move, and disk cache those in box#3, then move the others based on the cached particles.
Might not be faster. Depends on what your forces have to calculate.
Oh, and something else, you could do 120 partitions of 500,000 particles. Won’t be faster, but might FEEL faster in that you can get feedback really fast.
- Chad
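To put numbers on Chad's tradeoff: same total particle count, different chunk sizes. A quick back-of-the-envelope sketch in Python (the 4-hours-per-5M figure comes from this thread; the linear-scaling assumption is mine, and real PFlow update times won't scale this cleanly):

```python
# Same 60M-particle budget, carved into different partition sizes.
TOTAL = 60_000_000  # 12 partitions x 5M each, as in the original setup

def schedule(partition_size):
    """Number of partitions needed to cover the total budget."""
    return TOTAL // partition_size, partition_size

def first_feedback_hours(partition_size):
    """Hours until the first finished partition, assuming processing
    time scales linearly with particle count (4 hours per 5M)."""
    return 4.0 * partition_size / 5_000_000

print(schedule(5_000_000))              # 12 partitions of 5M
print(schedule(500_000))                # 120 partitions of 500k
print(first_feedback_hours(5_000_000))  # ~4 hours to first result
print(first_feedback_hours(500_000))    # ~0.4 hours (~24 min) to first result
```

Total wall time per machine is unchanged, but the first renderable PRT sequence shows up roughly 10x sooner - which is exactly the "FEEL faster" effect Chad describes.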
Okay, making some progress here.
I finally got hold of extra machines so set additional krak partitions going on them. One thing I’ve done is bake out a krak-specific version of my max file, stripping out all unnecessary pflow events, deleting anything turned off. Seems to be running about 20% faster already.
It just occurred to me that maybe using the box3 disk cache operator is not such a good thing, since wouldn’t it bake the random seeds into that file? That means each partition would have to be set off individually, each with its own particle cache file… might not be such a time saver after all.
Still can’t get that cache file to generate to test it - e-mail to Oleg methinks…
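On the seed question: partitioning relies on each partition getting different random seeds, so the combined passes stack into one denser cloud, while a baked disk cache replays a single fixed realization. A toy Python sketch of that distinction (the per-partition seeding scheme here is made up for illustration, not Krakatoa's actual internals):

```python
import random

def generate_partition(partition_index, count):
    """Re-seed the randomness per partition so each pass yields
    distinct particles that stack into one denser cloud."""
    rng = random.Random(partition_index)  # hypothetical per-partition seed
    return [(rng.random(), rng.random(), rng.random()) for _ in range(count)]

# Different partitions -> different particles; this is what makes
# partitioning add density rather than duplicates:
print(generate_partition(1, 2) != generate_partition(2, 2))  # True

# A baked cache is like always replaying partition 1: every "partition"
# sourced from the same cache would contain identical particles.
print(generate_partition(1, 2) == generate_partition(1, 2))  # True
```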
I’d only use the box3 cache if you needed an intermediate cache.
Like if you wanted to run 100k particles, cache them, then use those cached particles to spawn 1000 particles each and have the new particles get prevalent speed from the original 100k.
If all you are caching is the rendered particles, then skip box3 and just let krakatoa make the PRT cache.
- Chad
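Chad's intermediate-cache idea can be mocked up outside of max. A scaled-down, pure-Python toy (the particle tuples and the sim are invented for illustration; nothing here is actual Box#3 or Krakatoa API, and the counts are shrunk from Chad's 100k-leader example):

```python
import random

def simulate_leaders(n, seed=0):
    """Stand-in for the expensive PFlow pass: a small set of leader
    particles, each a (position, velocity) pair in 1D for simplicity."""
    rng = random.Random(seed)
    return [(rng.uniform(0.0, 100.0), rng.uniform(-1.0, 1.0)) for _ in range(n)]

def spawn_followers(leaders, per_leader, seed=1):
    """Cheap pass: each cached leader spawns followers that inherit its
    velocity (the 'get prevalent speed' step) with positional jitter."""
    rng = random.Random(seed)
    followers = []
    for pos, vel in leaders:
        for _ in range(per_leader):
            followers.append((pos + rng.gauss(0.0, 0.5), vel))
    return followers

leaders = simulate_leaders(100)          # what the Box#3 disk cache would hold
dense = spawn_followers(leaders, 1000)   # 100,000 cheap particles driven by it
print(len(dense))                        # 100000
```

The expensive forces only ever touch the small leader set; the dense spawn pass is pure bookkeeping, which is the whole point of caching an intermediate.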