I’m often exporting Redshift archives from Houdini for stand-alone rendering. It’s really neat that Deadline can execute the export and then submit the files once the export is done, but it would be even neater if it could submit finished files as it goes along, so the farm doesn’t have to wait for the entire export to finish before it can begin crunching the stand-alone files. For example, it could submit after every 10 frames and use batches to group the jobs. Or better yet, it could submit them into the same job.
I kind of solved it by creating 10 Redshift ROPs with staggered frame sequences and connecting them all to a Deadline ROP, but it seems this could be a proper feature. Or are there any clever ways of doing this using Houdini’s existing tools?
Also, I can’t figure out how to specify a batch name in the Deadline ROP; that field seems to be missing from most submission dialogs.
Yeah, same here! For pyro/fluid sims as well, where one node has to create all the VDB files before the farm starts rendering.
More granular dependencies would help, so it’s not only jobs being dependent on other jobs but individual frames being dependent on other frames.
What currently works, but isn’t very pretty, is to just start all jobs simultaneously and have the failing jobs (failing due to missing .rs files) retry indefinitely until all the .rs files are present.
Well, there are asset dependencies on the job. Depending on farm size this could have scaling issues, as I think every Slave will scan for all the files each time it looks for a new task.
Now, to use that from the submission, check these docs:
RequiredAssets=<assetPath,assetPath,assetPath>: Specifies what asset files must exist before this job will resume (default = blank). These asset paths must be identified using full paths, and multiple paths can be separated with commas. If using frame dependencies, you can replace padding in a sequence with the ‘#’ characters, and a task for the job will only be resumed when the required assets for the task’s frame exist.
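To make that concrete, here’s a hypothetical job info fragment using that key (the path and project names are made up for illustration). With frame dependencies on, each task resumes only when the .rs file for its frame exists:

```
RequiredAssets=//server/project/export/shot010.####.rs
```

The `####` replaces the frame padding as described above, so task 42 would wait for `shot010.0042.rs`.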
We’d run into some trouble with users, though, if jobs were created that would never finish because the assets were never written. It’d also be tricky to diagnose why this happened, as we don’t log the submission process anywhere permanent.
I see the appeal, it’s just not the Thinkboxy way to do it. I’ll run the idea past some of the developers though to see what they think.
Sounds good, please do! I don’t mind if you find another way to approach it.
Here’s what I was thinking: how about the Deadline ROP submitting an empty job (i.e. no tasks) as soon as you press ‘Submit To Deadline’ in Houdini, and then staying in contact with the job during the export phase, adding frames as they become available? This could be a local watch-folder type setup on the machine doing the export and submission (as opposed to the Slaves), running in parallel to Houdini and checking for *.rs files every minute or so. Or it could wait for a certain number of new files to be added so it can do a batch.
I’m guessing this setup could be implemented by anyone as a completely stand-alone Python script outside of the export process, but it would probably be a new job per batch of frames rather than adding to an existing job.
We do have an API option to add tasks onto a job these days, but the overhead of calling out to DeadlineCommand is about 3 seconds, and the calls should not overlap… I’m not entirely sure how well that would work out. I’ve asked one of the developers to take a look over here and make any suggestions.
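If your Deadline version exposes the AppendJobFrameRange operation in DeadlineCommand (check `deadlinecommand -Help` on your install), the call could be wrapped like this. `append_frames` is my own hypothetical name, and I’m only building the argument list here; given the ~3 second process overhead per call, you’d want to append whole batches of frames rather than one frame at a time:

```python
import subprocess

def append_frames(job_id, frame_list, deadline_command="deadlinecommand"):
    """Build the DeadlineCommand invocation that appends a frame range
    (e.g. "11-20") to an existing job. Returns the argv list so the
    caller can run it serially -- the calls should not overlap."""
    return [deadline_command, "-AppendJobFrameRange", job_id, frame_list]

def run_append(job_id, frame_list):
    """Actually invoke DeadlineCommand (blocking, ~3 s of overhead)."""
    return subprocess.check_output(append_frames(job_id, frame_list))
```

A simple queue that calls `run_append` from a single worker thread would keep the calls from overlapping.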
They found some issues with my plan, such as the fact that some renderers create a file immediately even though it isn’t ready until some time later, which would throw errors if the Slaves tried to pick it up. I can’t remember which renderer did that, but we’d want to make sure the solution is clean for every one of them.
One angle that came up in my conversation was to have Houdini do the export on the farm and make the Redshift (or other renderer) job frame-dependent. That way, as tasks complete on the Houdini export job, the rendering tasks pick up. The downside here is that Houdini is restarted between tasks, so there would be performance losses, and you would want to make sure you’re caching your sims to disk…
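For that frame-dependent setup, the render job’s job info file would carry something like the following keys (check the manual job submission docs for your Deadline version; the export job ID here is a placeholder):

```
JobDependencies=<houdiniExportJobId>
IsFrameDependent=true
FrameDependencyOffsetStart=0
FrameDependencyOffsetEnd=0
```

With zero offsets, render task N becomes available as soon as export task N completes, rather than waiting for the whole export job.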