Job state corruption/data race with AppendJobFrameRange

This is with Deadline 7.1.2.1 on Fedora 19.

We have a task-generation pipeline that we are adapting to run on Deadline. In this pipeline, tasks spawn other jobs and then append tasks to those jobs as the original task yields more work. The append operation is performed by hitting a web service script, which uses RepositoryUtils.AppendJobFrameRange to add the task(s) to the job.
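For context, the append path looks roughly like the sketch below. This is a minimal sketch rather than our production script: the __main__(dlArgs) entry point and the JobID/Frames parameter names are illustrative, and I'm assuming the RepositoryUtils.GetJob(id, invalidate) and RepositoryUtils.AppendJobFrameRange(job, frameList) signatures from the Deadline Scripting API.

    # Minimal web service script sketch; parameter names are illustrative.
    from Deadline.Scripting import RepositoryUtils

    def __main__(dlArgs):
        job_id = dlArgs["JobID"]   # ID of the job to append to
        frames = dlArgs["Frames"]  # frame range to append, e.g. "15" or "15-20"

        # Fetch a fresh copy of the job document (True bypasses the cache).
        job = RepositoryUtils.GetJob(job_id, True)
        if job is None:
            return "ERROR: no job with ID %s" % job_id

        # Append the new frame range; Deadline splits it into tasks
        # according to the job's chunk size.
        RepositoryUtils.AppendJobFrameRange(job, frames)
        return "OK"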

It looks like appending tasks too quickly after a job is submitted is a reliable way to end up with corrupt state in the job document.

I’ve got a job that was spawned by another job in the pipeline, initially with a single task. Then, 14 more tasks were appended to it in rapid succession using the aforementioned web service script.
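In rough terms, that sequence is equivalent to the loop below. This is a repro sketch under the same API assumptions as above; in reality each append was a separate web service request, so the calls arrive back to back rather than from a literal loop.

    # Repro sketch: given a job submitted with a single task (frame 0),
    # append 14 single-frame tasks in rapid succession.
    from Deadline.Scripting import RepositoryUtils

    def append_burst(job_id):
        for frame in range(1, 15):
            # Re-fetch the job each time, mimicking 14 separate
            # web service requests arriving back to back.
            job = RepositoryUtils.GetJob(job_id, True)
            RepositoryUtils.AppendJobFrameRange(job, str(frame))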

The job document for this job contains the following values (no, the -1 is not a typo):

"CompletedChunks": 15, "QueuedChunks": 1, "SuspendedChunks": 0, "RenderingChunks": -1
As a result, this is what the monitor displays in the job list:
TaskAppendStateCorruption.png
This is in spite of the fact that all of the job's actual tasks completed successfully, so the correct counts would be 15 completed and 0 for everything else. Notably, the four counters still sum to 15, the job's true task count, which looks consistent with two concurrent read-modify-write updates racing over the same counters. This same pattern has repeated itself at least once under similar conditions.

Hey Nathan,
By chance I just hit this same issue this morning using the equivalent “./DeadlineCommand -AppendJobFrameRange …”. I’ll write up a ticket now for the core team to handle.
Thanks!

Good to hear. Thanks, Mike.