I just ran into a problem while attempting to suspend two large jobs over a high-latency connection (in our other studio).
Two jobs, one with 1045 tasks and one with 778. Each was running with a single slave. I tried to suspend them one at a time. Only about 1/3 of the tasks actually ended up in the Suspended state; the job continued running, and its status counted up from “Rendering (1)” until it eventually hit “Rendering (6)”. The slaves just kept picking up tasks beyond the first 1/3 that had been suspended.
I tried suspending the jobs repeatedly, but nothing happened after that. Eventually, I just VNC’ed into the remote location, started a monitor there, and suspended the jobs. Everything worked fine.
I don’t know if this issue has been addressed or mitigated in Deadline 7, but it seemed worth mentioning. It seems like Deadline may be aborting some database operations midway through based on some kind of hard transaction timeout limit (which I believe Laszlo mentioned having some issues with in another thread).
We are aware of this issue, and we hope to either address it or mitigate it during the v7 beta. It’s due to the tasks being stored as individual documents that have to be updated one at a time, so performance slows way down over a high-latency connection. Our main DB developer is currently out of the office, but once she gets back, we’ll have her start looking into this.
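To give a rough idea of why that hurts so much over a WAN, here’s a minimal pymongo sketch. The collection and field names (JobTasks, JobID, State) are assumptions for illustration, not Deadline’s actual schema: updating tasks one document at a time costs one network round trip per task, while a single server-side update costs roughly one round trip total.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://db-host:27017")  # remote DB over a high-latency link
tasks = client["deadlinedb"]["JobTasks"]         # assumed collection name

job_id = "some-job-id"

# One update per task document: ~1000 round trips for a 1000-task job.
# At 100 ms latency that's well over a minute spent just waiting on the network.
for task in tasks.find({"JobID": job_id, "State": "Queued"}, {"_id": 1}):
    tasks.update_one({"_id": task["_id"]}, {"$set": {"State": "Suspended"}})

# A single server-side update is one round trip, so latency barely matters.
tasks.update_many({"JobID": job_id, "State": "Queued"},
                  {"$set": {"State": "Suspended"}})
```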
It would be because the internal task counts for the job object are out of whack. This is also something we’re looking at trying to address in v7. At the very least, we think we found a way to ensure that available tasks for jobs that are out of whack like this still get picked up.
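As a rough illustration of what “still get picked up” could look like (again just a sketch with assumed collection and field names, not the actual v7 change): instead of trusting the job’s cached queued-task counter, the dequeue check can ask the task collection directly.

```python
def job_has_queued_tasks(db, job_id):
    """Check the task documents themselves rather than the job's cached
    QueuedChunks counter, so a job with stale counts still dequeues.
    Collection and field names here are assumptions, not Deadline's schema."""
    return db["JobTasks"].count_documents(
        {"JobID": job_id, "State": "Queued"}, limit=1) > 0
```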
That’s also something we want to add to v7 - a way to right-click or something to “fix” a broken job. Currently, the only thing you can do is modify the *Chunks job properties directly in the database.
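For anyone who has to do that by hand today, the repair is essentially: recount the task states and write the totals back onto the job document. Something like this sketch, where the *Chunks field names and collection names are assumptions about the schema, so verify against your own database before writing anything:

```python
from collections import Counter
from pymongo import MongoClient

client = MongoClient("mongodb://db-host:27017")
db = client["deadlinedb"]  # assumed database name

def repair_chunk_counts(job_id):
    # Recount task states straight from the task documents.
    states = Counter(t["State"] for t in
                     db["JobTasks"].find({"JobID": job_id}, {"State": 1}))

    # Write the real totals back onto the job's *Chunks properties.
    # Field names are assumed; check your job documents first.
    db["Jobs"].update_one({"_id": job_id}, {"$set": {
        "QueuedChunks":    states.get("Queued", 0),
        "RenderingChunks": states.get("Rendering", 0),
        "SuspendedChunks": states.get("Suspended", 0),
        "CompletedChunks": states.get("Completed", 0),
        "FailedChunks":    states.get("Failed", 0),
    }})
```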