
Enforce Sequential and errors

Deadline 2.7.27948

Krakatoa Beta 18

3ds Max 8 SP2



I’m rendering some Krakatoa partitions on Deadline, and my jobs are set to Enforce Sequential Rendering. However, if a job hits an error for some reason, the errored frame is skipped and the next frame is rendered instead. Because these are particle partitioning jobs, this isn’t helpful behavior, and it doesn’t seem to be in the spirit of “Enforce Sequential”.



Is this a bug?


  • Chad

Hi Chad,



This was originally done by design, with the idea in mind that the slave would only go back to do the frames that were skipped after completing the last frame in the range. The problem that could occur if Deadline didn’t do this is that a slave could get stuck on a bad task indefinitely, until the job/task fails (assuming that failure detection is even enabled) or a user manually intervenes.
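
Roughly, the behavior described above amounts to the following task-picking logic (a loose Python sketch, not Deadline’s actual code):

```python
# Loose sketch of the behaviour described above; NOT Deadline's actual code.
# In "Enforce Sequential" mode the slave walks the frame range in order,
# skips tasks that have errored, and only revisits them once it has reached
# the end of the range.
def next_task(tasks):
    """tasks: list of dicts like {"frame": 10, "state": "queued"}, sorted by frame."""
    # First pass: take the next queued task in frame order, skipping errored ones.
    for task in tasks:
        if task["state"] == "queued":
            return task
    # Second pass: only after the last frame, go back for the errored tasks.
    for task in tasks:
        if task["state"] == "errored":
            return task
    return None  # nothing left to render
```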



Is there ever a case where you would want the slave to skip over “bad

tasks”, or does this not even make sense because the subsequent tasks

won’t have the data they need from that “bad task” anyways?



Cheers,

In most cases, the current behavior makes sense, but it’s not that much different from the behavior you would get normally.



If I had set “first to last” and a limit of 1 machine, it would behave the same way as it does now, rendering in order unless it hit an error.



The “Enforce Sequential” option would seem redundant.



With particle partitioning, though, I really do need the tasks to be done in order.



At some point, I wish that Krakatoa partitioning was its own job type, so I could handle it differently than I do normal 3ds Max jobs. For instance, I can’t get task progress from a particle partitioning job, so I need to set the timeout limit super high, like 170000. But that doesn’t make sense for a Brazil rendering, where I need the task update timeout to be low, like 1000.


  • Chad




Unfortunately, the reason there are no progress updates is that there isn’t a way to get progress updates from Particle Flow; otherwise Krakatoa would include this in its regular progress updates. I guess what you’re looking for is the ability to change the task timeout in the Krakatoa submission GUI in a way that is sticky for just Krakatoa jobs.



-Mark

Yeah, I would want it per job category. But that’s not possible now, right?



I assumed just making another plugin type would be possible.



Partitioning particles with Deadline is cool, but there are assumptions made by Deadline that aren’t ideal.



Say I submit a 200-frame partitioning job (one frame per task), a slave gets 180 frames into it, and someone submits a regular rendering job with a higher priority. The slave jumps off the Krakatoa job as soon as it finishes the current frame and onto the other job, abandoning many hours of particle run-up.



But running one partition per task is just as bad, since the task might fail 80% of the way through and the slave has to start over from the beginning anyway, including writing out the PRTs.






  • Chad

Hi Chad,



Yeah, you’re right that it doesn’t make sense for the slave to skip over frames like that when it errors on them. We’ve logged this as a bug. The fix will be that whenever Deadline errors out on a sequential job, it will start again at the first available task (whether that be the same frame that it errored on, or a previous frame that was requeued).
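
In other words, the fixed selection logic for a sequential job will look roughly like this (again, a loose Python sketch rather than the actual Deadline code):

```python
# Loose sketch of the fixed behaviour; NOT the actual Deadline code.
# After an error on a sequential job, the slave starts again at the first
# task that still needs rendering (the frame that just errored or an earlier
# requeued frame), so later frames are never rendered ahead of a failed one.
def next_sequential_task(tasks):
    """tasks: list of dicts like {"frame": 10, "state": "queued"}, sorted by frame."""
    for task in tasks:
        if task["state"] in ("queued", "errored"):
            return task
    return None  # job is complete
```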



For the priority issue, the temporary workaround would be to set the job’s priority at 100, so that it’s not interrupted (this is what we do here). If you don’t want your particle jobs to take over your farm, you can create a Krakatoa limit group with a limit of 10 for example, and submit your Krakatoa jobs with this Krakatoa limit group selected. This will ensure that no more than 10 particle jobs are rendering at a time.
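
If you script your submissions, a minimal sketch of what that could look like is below. It assumes a deadlinecommand-style submission with a job info file; the key names, plugin name, and limit group are placeholders rather than a confirmed recipe, so check the submission scripts that ship with your Deadline version.

```python
# Hypothetical sketch: submit a partitioning job at priority 100 with a
# "krakatoa" limit group (assumed to already exist, with a limit of 10) via
# deadlinecommand. Key names and values are placeholders; verify them against
# your Deadline version's documentation and submission scripts.
import subprocess
import tempfile

job_info = "\n".join([
    "Plugin=3dsmax",             # assumed plugin name for 3ds Max jobs
    "Name=Krakatoa partition 01",
    "Frames=0-200",
    "ChunkSize=1",               # one frame per task
    "Priority=100",              # workaround: keep the job from being interrupted
    "LimitGroups=krakatoa",      # limit group created in the Monitor beforehand
])

plugin_info = "Version=8"        # 3ds Max version; placeholder only

with tempfile.NamedTemporaryFile("w", suffix=".job", delete=False) as ji, \
     tempfile.NamedTemporaryFile("w", suffix=".job", delete=False) as pi:
    ji.write(job_info)
    pi.write(plugin_info)

# deadlinecommand <jobInfoFile> <pluginInfoFile> submits the job; it must be
# on the PATH of the machine running this script.
subprocess.run(["deadlinecommand", ji.name, pi.name], check=True)
```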



Cheers,

Just an FYI that we’ve fixed the sequential rendering bug mentioned earlier (where the slave skips over frames it errors on). This fix will be included in a maintenance release we’ll be doing shortly after Siggraph this year.



Cheers,
