I have a weird issue: by default, all frames are submitted from Maya as their own task, i.e. a task size of 1 frame/task. The problem is that each task, or some tasks, renders all the frames instead of only its designated frame. The job therefore appears to be about 1% complete and the first frames seem really slow, but in reality lots of frames are already on disk. I've set my Maya scene to ignore existing frames, so when the first task eventually finishes, it goes through the remaining tasks quite quickly, but this, of course, is not the intended behavior.
Any idea why this happens? Is there some setting I accidentally changed?
Cheers
Edit: In case it’s important, we’re running Maya 2018 and Redshift, Deadline 10.0.7.4, on Windows.
Maya
When submitting a job with multiple render elements, the beauty pass is now always the first element when viewing output for the job.
Arnold elements are now supported properly when using Export jobs while Tile Rendering and Merge AOV are enabled.
The integrated submitter now properly respects backslashes (\) in the output File Name Prefix.
Half frames or frame renumbering is now prevented when using V-Ray.
Fixed a bug in the integrated submitter that prevented Machine Lists or Limits from being set for Redshift export jobs.
Fixed a bug that could cause the first frame of a Redshift render (version 2.5.40 or later) to fail when using layer overrides.
We are also experiencing the same issue described by Arvid.
So far we've failed to find a common thread among the scenes causing this problem.
We were using Deadline 10.0.8.4 and updated to 10.0.9.4, but the issue persists.
It’s not on every job but enough to be a problem.
We are using Maya 2018.2 and Redshift v2.5.49.
So far the best thing to do is ensure that the "Ignore existing frames" option is turned on in the Maya render options of the scene before sending the render to Deadline, as you can't set that option once the job is in the queue.
But when you have 10-15 slave nodes all doing the same thing, it's hard to know which slave rendered which frame if something goes wrong, and task timeouts are useless for catching failed/stalled slaves because there's no way of knowing which slave is currently doing what.
No idea what is causing this, as we’ve been using Deadline for Maya-Redshift renders for almost 2 years now and never had this problem until very recently.
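For anyone wanting to apply the "Ignore existing frames" workaround above without opening the Render Settings UI, this can be scripted before submission. A minimal MEL sketch, assuming the option maps to the skipExistingFrames attribute on defaultRenderGlobals (check the attribute name in your Maya version before relying on it):

```mel
// Assumption: "Ignore existing frames" corresponds to
// defaultRenderGlobals.skipExistingFrames in Maya 2018.
// Run this in the Script Editor before submitting to Deadline.
setAttr "defaultRenderGlobals.skipExistingFrames" 1;

// Optional sanity check: prints 1 if the flag is now set.
print(`getAttr "defaultRenderGlobals.skipExistingFrames"`);
```

You could also drop this into a pre-submission script so the flag is always set on scenes going to the farm.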
You may be able to work around this by doing a local Redshift export job. I'm bumping the priority of this over here, as I think it has something to do with the new render layer system, or with how Redshift reads what its start and end frames are.