
Deadline Update + Python error + odd behavior

Above is the error we are getting across our entire farm. This error is new since we updated; no jobs had this issue before. The strange thing is, the job does actually render to completion after the error. Watching the animation job on the farm, it appears that when a machine starts a new frame, the task jumps to around 95% complete, fails, then starts over. The second time through, it renders in a normal time frame and the saved image is perfectly fine.

Any ideas as to what is going on? Prior to the Deadline update, we had zero issues with this job or any other.

I should also note that we have 21 farm machines, and the errors didn’t start showing up until frame 22. So it appears that the first round of frames across the farm rendered without error, but every frame after that is throwing the error.

This is the current version we are on:
Deadline Client Version: 10.1.9.2 Release (3d6a64d94)
FranticX Client Version: 2.4.0.0 Release (0b549a42a)

Repository Version: 10.1.17.4 (d3559fe75)
Integration Version: 10.1.17.4 (d3559fe75)

Update: I ran a test on a simple teapot scene and did not get the error. I also re-ran a previous job that had rendered successfully, and again got no errors or odd behavior. That scene is very similar in complexity to the one giving us issues, so complexity alone may not explain it. One difference: when the working job moves on to the next frame in the animation, it starts at 0%, whereas the problem job starts every follow-up frame at 95% or higher and then fails.

This is a 3ds Max issue, reported many times previously, e.g.:

It is related to the scene, or some MaxScript that is failing.

Yeah, I’ve narrowed it down to at least one of the XRef files, which is odd, as the other job that renders fine also uses XRefs.

I’ve isolated it to one specific XRef. The file itself is fine, but it is fairly complex, with a lot of RailClone and Forest Pack Pro objects. If I set Deadline to reload 3ds Max for each task, the error goes away. It’s almost as if the next task starts before the previous task has fully released the scene from memory.
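
In case it helps anyone else, this is roughly how we’re resubmitting the problem job with that workaround baked in. It’s a minimal sketch using deadlinecommand with manually written job info and plugin info files; the RestartRendererMode key is my guess at the plugin-info setting behind “reload Max each task,” and the scene path is hypothetical, so verify both against a job submitted from SMTD before relying on this:

    # submit_fresh_max.py - sketch: resubmit with a fresh 3ds Max session per task.
    # Assumes deadlinecommand is on PATH. The plugin-info keys below are
    # illustrative; "RestartRendererMode" in particular is an assumed key name.
    import subprocess
    import tempfile

    job_info = {
        "Plugin": "3dsmax",
        "Name": "Problem scene - fresh Max per task",
        "Frames": "1-100",
        "ChunkSize": "1",
    }
    plugin_info = {
        "Version": "2021",
        "SceneFile": r"\\server\projects\shot010\scene.max",  # hypothetical path
        "RestartRendererMode": "True",  # assumed key: clean Max session per task
    }

    def write_info(pairs):
        # Deadline info files are plain key=value lines.
        f = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
        for key, value in pairs.items():
            f.write("{}={}\n".format(key, value))
        f.close()
        return f.name

    # deadlinecommand <jobInfoFile> <pluginInfoFile> submits the job.
    subprocess.run(
        ["deadlinecommand", write_info(job_info), write_info(plugin_info)],
        check=True,
    )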

Does anyone know if there is a way to set the delay between tasks to be a per-job variable or is it something that needs to be set at the repository?
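
One per-job approach I’m considering, in case the repository-wide setting is too blunt: Deadline jobs can run a pre-task script (set in Job Properties, or via a PreTaskScript key in the job info file at submission), so a tiny Python script that just sleeps would delay each task for this one job only. A minimal sketch, assuming the standard __main__ entry point Deadline expects in job scripts:

    # pre_task_delay.py - sketch: per-job delay before every task.
    # Attach via Job Properties or PreTaskScript=<path> in the job info file;
    # Deadline calls the __main__ function when the script runs.
    import time

    def __main__(*args):
        # Pause before the task starts, giving the previous 3ds Max task
        # time to fully release the scene from memory. 15 seconds is an
        # arbitrary starting point; tune it for your farm.
        time.sleep(15)

Because the script is attached to the job at submission, other jobs on the farm are unaffected.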
