I’m rendering some Krakatoa partitions on Deadline, and my jobs are set to Enforce Sequential Rendering. However, if a frame errors for some reason, that frame is skipped and the next frame is rendered. Because these are particle partitioning jobs, this isn’t helpful behavior, and it doesn’t seem to be in the spirit of “enforce sequential”.
In most cases the current behavior makes sense, but it isn’t much different from what would happen without the option at all.
If I had set “first to last” rendering and a machine limit of 1, it would behave the same way it does now: rendering in order until it hits an error, then moving on.
That makes “enforce sequential” seem redundant.
With particle partitioning, though, I really do need the tasks to be done in order.
Sometimes I wish Krakatoa partitioning was its own job type, so I could handle it differently than normal 3ds Max jobs. For instance, I can’t get task progress from a particle partitioning job, so I need to set the task timeout extremely high, like 170000. But that doesn’t make sense for a Brazil render, where I need the task update timeout to be low, like 1000.
Unfortunately, there are no progress updates because Particle Flow doesn’t expose any way to report progress; otherwise Krakatoa would include it in its regular updates. I guess what you’re looking for is the ability to change the task timeout in the Krakatoa submission GUI in a way that is sticky for Krakatoa jobs only.
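In the meantime, here’s a rough sketch of working around it per submission: bypass the GUI and submit the partitioning job manually with deadlinecommand, baking a long task timeout into the job info file. The key names (TaskTimeoutMinutes, MachineLimit, etc.), the "3dsmax" plugin name, and the default values are my assumptions about the manual submission format, so double-check them against your Deadline version’s docs.

```python
# Minimal sketch (not the shipped Krakatoa submitter): write Deadline job info
# and plugin info files with a per-submission task timeout, then hand them to
# deadlinecommand. Key names are assumptions based on the manual job
# submission format; verify against your Deadline documentation.
import os
import subprocess
import tempfile

def submit_partition_job(max_file, frames, timeout_minutes=2880):
    """Submit a 3ds Max partitioning job with a long task timeout."""
    job_info = {
        "Plugin": "3dsmax",
        "Name": "Krakatoa partition - " + os.path.basename(max_file),
        "Frames": frames,                # e.g. "0-200"
        "ChunkSize": "1",                # one frame per task
        "MachineLimit": "1",             # keep the whole partition on one slave
        "TaskTimeoutMinutes": str(timeout_minutes),  # generous timeout for run-up frames
    }
    plugin_info = {"SceneFile": max_file}

    def write_info(info):
        # Job/plugin info files are simple key=value text files.
        f = tempfile.NamedTemporaryFile("w", suffix=".job", delete=False)
        for key, value in info.items():
            f.write("%s=%s\n" % (key, value))
        f.close()
        return f.name

    job_file = write_info(job_info)
    plugin_file = write_info(plugin_info)
    # deadlinecommand is assumed to be on the PATH.
    subprocess.call(["deadlinecommand", job_file, plugin_file])

# Example: submit_partition_job(r"\\server\scenes\particles.max", "0-200")
```

A regular Brazil submission through the normal submitter would keep its own low timeout, so the two job types wouldn’t step on each other’s settings.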
Yeah, I would want it per job category. But that’s not possible now, right?
I assumed it would just be a matter of making another plugin type.
Partitioning particles with Deadline is cool, but Deadline makes some assumptions that aren’t ideal for it.
For example, I submit a 200-frame partitioning job (one frame per task), a slave gets 180 frames into it, and then someone submits a regular render job with a higher priority. The slave drops the Krakatoa job as soon as it finishes the current frame and jumps onto the other one, abandoning many hours of particle run-up.
But running one partition per task is just as bad, since the task might fail 80% of the way through and the slave has to start over from the beginning anyway, including writing out all the PRTs.
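To make the trade-off concrete, here is roughly how the two layouts look in job info terms. The ChunkSize key follows the manual submission format as I understand it, so treat the exact names and values as assumptions rather than the official Krakatoa submitter’s behavior.

```python
# Two ways to lay out a 200-frame partition, both with the drawbacks described above.
# ChunkSize is assumed to follow Deadline's manual job submission format.

# Frame per task: a higher-priority job can steal the slave between frames,
# throwing away the particle run-up accumulated so far.
frame_per_task = {"Frames": "0-200", "ChunkSize": "1"}

# Partition per task: the slave can't be pre-empted mid-partition, but a failure
# at frame 180 means redoing the whole run-up and rewriting every PRT.
partition_per_task = {"Frames": "0-200", "ChunkSize": "201"}
```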