Submitting Frames as Tasks on One Machine

Hi, can I submit saving out a single partition on multiple threads (frames as tasks), or can I only save multiple partitions, each partition on one thread? It would be nice if I could save ONE partition on multiple threads.

thanks

Hi,

You should be asking these questions on the Krakatoa side of the forum, but…
You can do both, but there are performance limitations that are unrelated to Deadline and Krakatoa.

If you send one partition out and each frame of the partition is processed by a different machine, the nature of Particle Flow (if you are using it) will cause each machine to preroll the whole animation. For example, if you have two slaves and one starts frame 0 while the other does frame 1, when the first slave finishes frame 0 and moves on to frame 2, it also has to calculate (but not save) frame 1. Since the PFlow calculation is often the slowest part of the process, you are wasting resources. Even worse, with 10 slaves, when slave 10 picks up task 9 it has to preroll frames 0 to 8, the very frames the other nine machines are already processing, and calculate them all again just to reach frame 9! So in that mode, it is a better idea to limit each job to a single machine and calculate, say, 10 partitions with one slave on each.
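
To put rough numbers on the redundancy, here is a hypothetical back-of-the-envelope calculation in MAXScript; the slave and frame counts are made up for illustration, and it assumes each slave's tasks end up spread across the whole range (which is what round-robin task assignment tends to do):

```
-- Hypothetical numbers: 10 slaves sharing a 100-frame partition,
-- one frame per task. Every slave must still step PFlow through
-- each frame it does NOT save, so each slave ends up performing
-- roughly one PFlow update per frame of the whole range.
slaves = 10
frameCount = 100
totalUpdates = slaves * frameCount  -- ~1000 PFlow updates across the farm
format "Farm performs % PFlow updates; a single sequential slave needs only %\n" totalUpdates frameCount
```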

If you submit each partition as a task, each partition will be a single MAXScript Job where the slave starts on the first frame and runs a MAXScript for loop, processing all frames in a row. The drawback is that if the job crashes for some reason, it has to start over from the beginning. But you won't get any duplication of effort, because each slave calculates unique particle data without the potential redundancy of the other method. Also, since PFlow is single-threaded, if you have 4 or 8 cores and enough RAM you can let the slave run up to 4 or 8 partitions on the same machine by starting that number of 3ds Max instances. So with two 8-core slaves, you could calculate 16 partitions on two machines in parallel.
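
The per-task work reduces to something like this minimal sketch, assuming Krakatoa is already assigned as the renderer and configured to save this partition's particle files; the actual script generated by the Krakatoa submitter is more involved:

```
-- Minimal sketch of a partition task: render every frame in order.
-- Assumes Krakatoa is the active renderer in particle-saving mode
-- with the partition's output path already set up.
for t = animationRange.start to animationRange.end do
(
    -- Stepping one frame at a time keeps PFlow's history valid,
    -- so no frame is ever calculated twice within this partition.
    render frame:t vfb:false
)
```

Since each 3ds Max instance runs such a loop independently, starting one instance per core (RAM permitting) is what makes the 16-partitions-on-two-machines scenario above possible.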

So we cannot do one partition on multiple slaves very efficiently because of the sequential nature of Particle Flow. A possible workaround is the Box #3 cache, which Krakatoa supports on Deadline: previously calculated frames are stored on disk as the jobs go, so no real preroll is needed for frames that have already been calculated. But this is expensive both because Box #3 is not cheap and because it requires double the disk space to store the cache.

Hope this helps.

Hi Bobo, sorry, I thought this would go to the Deadline side.

I understand the history-dependence double calculation. There are cases where I have history-independent PFlow systems, and animated/modified PRT Volumes (which can later be reused for faster access or fed back into PFlow for simulation) don't need the prior frame either. For those two cases this option would be nice. But yeah, I'm happy: with an i7 I'm 8x faster at partitioning now; wonder why I didn't try it earlier :)

The option IS there, unless you want some third option that I am not understanding correctly.
You can let multiple machines work on a single partition where each machine or core will do one frame.
You cannot have multiple cores working on the same frame because PFlow is not multithreaded, and saving particles isn’t either.