Hi all,
I am working on writing a Deadline plugin for an internal piece of software. When rendering, the smallest unit of work the software can handle is a single camera cut, since each frame depends on the in-memory result of the previous one.
In our basic implementation, a job contains a path to the required data, and the application is launched via the command line to process the entire sequence (potentially many camera cuts). A pre-job task syncs the content onto the local machine via Perforce and then either builds or deploys the correct version of the software (which changes many times a day).
An obvious way to parallelize this would be to render one camera cut per machine, submitting each camera cut as a separate job. Unfortunately, there is sizable overhead to starting a job: in total, roughly 5-8 minutes just to launch the application. This seems like an ideal case for an Advanced plugin.
Unfortunately, I can’t find a good way to set up multiple tasks for a job. My understanding is that in Deadline, a task is the smallest distributable unit of work. Because we can only split down to a single camera cut, I would need to break the job into tasks like:
Frames 0-30, 31-50, 51-60.
Unfortunately, when I create a job using the REST API and provide it that frame range, Deadline splits it into units of ChunkSize: either 61 individual tasks, or, if I set the ChunkSize to a really large value (say 10000), one task covering 0-60.
I could work around this by submitting the frame range as 0-29, 31-49, 51-60, dropping a frame between cuts so the ranges are no longer contiguous and can’t be merged into one chunk, but this seems kind of silly.
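For concreteness, here’s a trimmed-down sketch of the submission, using the standalone Python wrapper around the REST API that ships with Deadline (the host/port, the plugin name, and the DataPath key are placeholders for our internal setup):

```python
from Deadline.DeadlineConnect import DeadlineCon

conn = DeadlineCon('webservice-host', 8082)  # placeholder Web Service host/port

job_info = {
    "Name": "SequenceRender",
    "Plugin": "MyInternalPlugin",  # placeholder for our internal plugin
    "Frames": "0-30,31-50,51-60",  # contiguous, so Deadline collapses it to 0-60
    "ChunkSize": "1",              # -> 61 one-frame tasks; "10000" -> one 0-60 task
}
plugin_info = {"DataPath": "//depot/project/shot01"}  # placeholder

conn.Jobs.SubmitJob(job_info, plugin_info)

# The gap workaround from above: dropping frames 30 and 50 makes the ranges
# non-contiguous, so a large ChunkSize can no longer merge them into one task:
#   job_info["Frames"] = "0-29,31-49,51-60"
#   job_info["ChunkSize"] = "10000"
```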
Is there any way to prevent this auto-combining behavior? Or should I lie to Deadline and pretend each camera cut is a single frame/task, then use the extra metadata fields to embed whatever information is needed to map Frame 0 / Task 0 to the application-specific data for Camera Cut 0?
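If I went that route, I imagine it would look something like the sketch below. The Cut<N> keys and the frame-range strings are purely my own convention (Deadline would just store them as opaque extra metadata), and the plugin-side lookup at the end assumes the scripting API exposes the job’s extra info:

```python
from Deadline.DeadlineConnect import DeadlineCon

conn = DeadlineCon('webservice-host', 8082)  # placeholder Web Service host/port

# The real per-cut frame ranges the application should render.
camera_cuts = ["0-30", "31-50", "51-60"]

job_info = {
    "Name": "SequenceRender",
    "Plugin": "MyInternalPlugin",               # placeholder
    "Frames": "0-%d" % (len(camera_cuts) - 1),  # one synthetic "frame" per cut
    "ChunkSize": "1",                           # guarantees one task per cut
}

# Embed the real frame range for each cut in the job's extra metadata.
for i, cut_range in enumerate(camera_cuts):
    job_info["ExtraInfoKeyValue%d" % i] = "Cut%d=%s" % (i, cut_range)

plugin_info = {"DataPath": "//depot/project/shot01"}  # placeholder

conn.Jobs.SubmitJob(job_info, plugin_info)

# In the plugin's RenderTasks, the task's start frame would be the cut index,
# and the real range could be looked up again, e.g. (assuming something like
# GetJobExtraInfoKeyValue is available on the job object in the scripting API):
#   cut = self.GetStartFrame()
#   frame_range = self.GetJob().GetJobExtraInfoKeyValue("Cut%d" % cut)
```

That would give me exactly one task per camera cut, but the frame numbers shown in the Monitor would be meaningless, which is part of why it feels like a hack.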
Or am I thinking about the Deadline plugin system wrong, and is there a better way to do this?
Thanks!