
One task renders all the frames

Hi,

I have a weird issue: by default, every frame is submitted from Maya as its own task, i.e. a task size of 1 frame per task. The problem is that each task, or at least some tasks, renders the entire frame range instead of only its designated frame, so the job appears to be about 1% complete and the first frames look really slow, while in reality lots of frames are already on disk. I’ve set my Maya scene to ignore existing frames, so once the first task eventually finishes, the remaining tasks go through quite quickly, but this is of course not the intended behavior.
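
For reference, here’s a rough Python sketch of how I check which frames are actually on disk while the job still shows ~1% complete. The output path, file pattern and frame range below are just placeholder examples, not my real setup:

```python
# Quick check: which frames of the expected range are already on disk?
# The output directory, file pattern and frame range are placeholders --
# adjust them to match your own render output settings.
import os
import re

OUTPUT_DIR = r"\\server\renders\shot_010"        # example path only
FRAME_PATTERN = re.compile(r"\.(\d{4})\.exr$")   # e.g. beauty.0001.exr
EXPECTED = set(range(1, 101))                    # example range: frames 1-100

on_disk = set()
for name in os.listdir(OUTPUT_DIR):
    match = FRAME_PATTERN.search(name)
    if match:
        on_disk.add(int(match.group(1)))

print("Frames on disk:", len(on_disk & EXPECTED))
print("Missing frames:", sorted(EXPECTED - on_disk))
```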

Any idea why this happens? Is there some setting I accidentally changed?

Cheers

Edit: In case it’s important, we’re running Maya 2018 and Redshift, Deadline 10.0.7.4, on Windows.

Maybe worth upgrading Deadline to the latest version, 10.0.9.4?

docs.thinkboxsoftware.com/produ … notes.html

Maya
When submitting a job with multiple render elements, the beauty pass is now always the first element when viewing output for the job.
Arnold elements are now supported properly when using Export jobs while Tile Rendering and Merge AOV are enabled.
The integrated submitter now properly respects back slashes \ in the output File Name Prefix.
Half frames or frame renumbering is now prevented when using V-Ray.
Fixed a bug in the integrated submitter that prevented Machine Lists or Limits from being set for Redshift export jobs.
Fixed a bug that could cause the first frame of a Redshift render (version 2.5.40 or later) to fail when using layer overrides.

Thanks, I’ll see if we can update it for sure. Not sure if that particular issue listed in the fixes is what causes this, but it’s worth a try!

We are also experiencing the same issue described by Arvid.
So far we’ve failed to find a common thread among the scenes causing this problem.

We were using Deadline 10.0.8.4 and updated to 10.0.9.4, but we still see the same issue.
It’s not on every job but enough to be a problem.
We are using Maya 2018.2 and Redshift v2.5.49.

So far the best thing to do is ensure that the “Ignore existing frames” option is turned on in the scene’s Maya render settings before sending the render to Deadline, as you can’t set that option once the job is in the queue.
But when you have 10-15 slave nodes all doing the same thing, it’s hard to know which slave rendered which frame if something goes wrong, and task timeouts are useless for catching failed or stalled slaves because there’s no way of knowing which slave is currently doing what.
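
As far as I can tell, that “Ignore existing frames” option maps to the skipExistingFrames attribute on defaultRenderGlobals (at least in our Maya 2018 scenes), so you can switch it on from script before submitting. A minimal sketch, assuming that attribute name is correct for your version:

```python
# Minimal sketch: turn on "Skip existing frames" before submitting to Deadline.
# Assumes the option maps to defaultRenderGlobals.skipExistingFrames, which is
# what we see in Maya 2018 -- verify the attribute name in your version.
import maya.cmds as cmds

attr = "defaultRenderGlobals.skipExistingFrames"
if cmds.objExists(attr):
    cmds.setAttr(attr, 1)
    print("Skip existing frames is now enabled.")
else:
    print("Attribute not found -- check the render settings attribute name.")
```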

No idea what is causing this, as we’ve been using Deadline for Maya-Redshift renders for almost 2 years now and never had this problem until very recently.

On the support side, we see this when Redshift renders within Maya and a render layer override is applied to the frame number.

I’ll sync with the team here and see if that’s all fixed up in SP9 or not.
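
In the meantime, if you want to check whether a scene carries a layer override on the frame range, here’s a rough maya.cmds sketch. It relies on the legacy editRenderLayerAdjustment query, so treat it as a starting point only; layers managed purely by the new render setup system store their overrides differently and may not show up here.

```python
# Rough sketch: list render layers that carry an adjustment on the frame range.
# Uses the legacy editRenderLayerAdjustment query, so layers managed purely by
# the new render setup system may not report their overrides this way.
import maya.cmds as cmds

FRAME_ATTRS = {"defaultRenderGlobals.startFrame", "defaultRenderGlobals.endFrame"}

for layer in cmds.ls(type="renderLayer"):
    adjustments = cmds.editRenderLayerAdjustment(layer=layer, query=True) or []
    overridden = [a for a in adjustments if a in FRAME_ATTRS]
    if overridden:
        print("{}: frame range adjustment on {}".format(layer, overridden))
```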

Yes, that matches my experience as well! I tick all three of those boxes in my scenes here.

Seems like a pretty big deal as I always try to optimize my renders by restricting the sequence start and end. Keep us posted!

You may be able to work around this by doing a local Redshift export job. I’m bumping the priority of this over here, as I think it has something to do with the new render layer system, or with how Redshift reads what its start and end frames are.
