
job timeout

Hi there,

So, in some cases, we have tasks that are slowly creeping closer and closer to the preallocated job timeout. Sadly, sometimes that's in the range of 20 hours (and it's legitimate processing).

If I quickly go and modify the job property to have a higher timeout value, will the slave that's about to time out 'double check' with the database before it cancels itself? If not, could it do that? I think we are losing days of processing because of this issue on certain tasks :(

cheers,
laszlo

To get rid of "hanging 32-bit MR frames", we use the Deadline auto-timeout feature based on two variables.

  1. Don’t start looking for hanging frames until the job is X% complete. (We use something like 90%)
  2. Take into account different-spec machines by using the slave render/load timeout multipliers (under slave settings)

(we don’t get the above much anymore as we moved to a 64-bit OS like everyone else)

Maybe you could use these two settings in combination with the standard job timeout setting to get your ideal setup?
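
For illustration, here's a rough Python sketch of how those two settings could combine with the standard job timeout. The function and parameter names are made up for the example, not Deadline's actual API:

```python
# Hypothetical sketch: how the auto-timeout threshold and per-slave multiplier
# could interact with a base job timeout. Names/values are illustrative only.

def effective_task_timeout(base_timeout_minutes, machine_multiplier,
                           job_percent_complete, auto_timeout_threshold=90.0):
    """Return the timeout (in minutes) to enforce for a task, or None if
    auto-timeout should not apply yet."""
    # 1. Don't look for hanging frames until the job is X% complete.
    if job_percent_complete < auto_timeout_threshold:
        return None  # too early to judge what "hanging" means for this job
    # 2. Scale the base timeout by the slave's render/load multiplier so
    #    slower machines get proportionally more time.
    return base_timeout_minutes * machine_multiplier


# Example: a 20-hour base timeout on a 1.5x machine, job 92% complete.
print(effective_task_timeout(20 * 60, 1.5, 92.0))  # -> 1800.0 minutes
```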

I still think there is room for improvement by making the above settings pool- or group-specific, so studios could send sim jobs to one timeout pool, animation sequences to another, and high-res single-frame stills to a third, with each pool or group applying its own timeout settings to every job that hits it.
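
As a purely hypothetical illustration of that idea (this is not an existing Deadline feature, just a sketch of how the lookup could behave):

```python
# Hypothetical pool-specific timeout mapping; pool names and values are made up.
POOL_TIMEOUT_MINUTES = {
    "sim":        48 * 60,  # long simulation jobs
    "animation":   4 * 60,  # per-frame animation sequences
    "stills":     24 * 60,  # high-res single-frame stills
}

DEFAULT_TIMEOUT_MINUTES = 8 * 60

def timeout_for_pool(pool_name):
    """Pick the timeout for a job based on the pool it was submitted to."""
    return POOL_TIMEOUT_MINUTES.get(pool_name, DEFAULT_TIMEOUT_MINUTES)

print(timeout_for_pool("sim"))   # 2880 minutes
print(timeout_for_pool("comp"))  # falls back to the default, 480 minutes
```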

We’ll make the change in the next beta so that the slave confirms the current timeout setting for the job before throwing a timeout error.
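
Conceptually (and only as a sketch, not Deadline's internal code), the confirm-before-timeout check would look something like this, where fetch_current_task_timeout stands in for a fresh read of the job's timeout from the Repository database:

```python
# Conceptual sketch of confirming the current timeout before raising an error.
import time

def should_raise_timeout(task_start_time, cached_timeout_seconds,
                         fetch_current_task_timeout):
    """Return True only if the task has exceeded the timeout as it exists
    in the database right now, not the value cached when the task started."""
    elapsed = time.time() - task_start_time
    if elapsed < cached_timeout_seconds:
        return False  # not over the old limit yet, nothing to confirm
    # Re-read the job's timeout in case it was raised while the task was running.
    current_timeout = fetch_current_task_timeout()
    return elapsed >= current_timeout


# Example: task started 21 hours ago, cached timeout was 20 hours, but the
# job's timeout has since been raised to 30 hours in the database.
start = time.time() - 21 * 3600
print(should_raise_timeout(start, 20 * 3600, lambda: 30 * 3600))  # False
```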

Cheers,

  • Ryan

Awesome, thanks Ryan!
