How can I define these dependencies at submission time? I'm trying to integrate Deadline into our existing file dependency pipeline in Max right now, but I can't find any info on what property name I should use for this in the info files.
And also setting the Per Frame Dependency to true. (Turns out that part is not necessary.)
However, it's a bit confusing how these tasks actually get released.
I created a testAsset.1000.txt file and then waited a couple of minutes. Nothing happened, so I double-checked that Pulse was up to date; it was not, so I updated it to 6.1. While I was doing that, one of the Slaves queued frame 1000 correctly. Then I created testAsset.1005.txt, expecting that frame to be queued as well, but nothing has happened for 10 minutes now.
I submitted another job with the same dependency, and even though frames 1000 and 1005 existed already at submission time, none of its frames were queued; they are still pending.
At submission time, the job info file keys you need are:
RequiredAssets=
ScriptDependencies=
The values are comma-separated file name strings.
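For example (the paths here are made-up placeholders):

```
RequiredAssets=\\server\assets\testAsset.txt,\\server\assets\otherAsset.txt
ScriptDependencies=\\server\scripts\checkDeps.py
```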
I tried submitting with two file dependencies where the files did not exist on disk, and the job was automatically marked as "Pending (Dependencies/Assets)". Then I went to the folder where the files were expected to be and created new files with the expected names, and the job started running…
After some experimentation, I think I know what's going on… our Pulse is running on Linux, and its server mount points do not match the UNC paths of the Windows render farm.
Thanks Bobo! Do you know, if ScriptDependencies contains just a script name, whether the script is then resolved relative to the job's repository path? I'll have to find a way to work around Linux/Windows pathing differences :\
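One workaround I'm considering for the pathing issue is remapping the UNC prefix myself before handing paths to Deadline; a trivial sketch (both prefixes are made-up examples from our setup):

```python
# Trivial sketch: rewrite the Windows UNC prefix to the mount path that
# our Linux Pulse sees. Both prefixes are made-up examples.
UNC_PREFIX = r"\\fileserver\projects"
LINUX_PREFIX = "/mnt/projects"

def to_pulse_path(path):
    if path.lower().startswith(UNC_PREFIX.lower()):
        path = LINUX_PREFIX + path[len(UNC_PREFIX):]
    return path.replace("\\", "/")
```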
Do you know if the script dependency has access to the current frame being rendered, and if so, how?
That data seems to be available only through the Deadline.Plugins.DeadlinePlugin class, which I think only works from within a plugin script, not a standalone script.
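In case it helps anyone, one idea I want to try is deriving the frame from the TaskID inside the dependency script itself. A rough sketch, assuming the entry point also receives the job ID, that RepositoryUtils/FrameUtils are usable from standalone scripts, and that the job's chunk size is 1 (all unverified assumptions on my part):

```python
# Hedged sketch: derive the frame for a task inside a dependency script.
# Assumes RepositoryUtils and FrameUtils work from standalone scripts
# (I have only seen them documented for plugin/event scripts), and that
# the job uses a chunk size of 1 (one frame per task).
from Deadline.Scripting import RepositoryUtils, FrameUtils

def frame_for_task(jobID, taskID):
    job = RepositoryUtils.GetJob(jobID, True)  # True = refresh from the repo (assumption)
    frames = FrameUtils.Parse(job.JobFrames)   # e.g. "1000-1100" -> [1000, ..., 1100]
    return frames[int(taskID)]                 # with chunk size 1, task N renders frames[N]
```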
I'm not sure how I can debug these dependency scripts, as they are run by random Slaves, and if they fail, nothing shows up in any logs. Is there a way to run them locally in the environment that Deadline would run them in?
I found a workflow to debug them; it's a bit clunky, but it sort of works:
1. Create a job with a script dependency.
2. Since the job is created on Windows, our Linux Pulse won't find the script (the mount point is named differently), so on the Pulse machine I edit the dependency to point to the local mount. I can only edit this directly on the Pulse machine, because the Monitor's script dependency editor brings up a file browse dialog instead of a regular textbox where I could enter custom text.
3. Edit the script and add lots of ClientUtils.LogText calls (see the sketch after this list).
4. Manually trigger a repository cleanup on the Pulse machine, which will then run the script.
5. Monitor the Pulse log for the printouts.
Pulse crashes from the scripts every 5-10 minutes, so every now and then I restart it.
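For reference, this is roughly what my instrumented test script looks like; the __main__ signature and the True/False release semantics are just what I observed (see below), and the asset path and frame numbers are made-up examples:

```python
# Minimal instrumented dependency script sketch. The __main__ signature
# and the True/False release semantics match what I observed; the asset
# path and first frame are made-up examples.
import os
from Deadline.Scripting import ClientUtils

ASSET_PATTERN = r"\\server\assets\testAsset.%04d.txt"  # example path
FIRST_FRAME = 1000                                      # example first frame

def __main__(jobID, taskID):
    ClientUtils.LogText("dep check: job=%s task=%s" % (jobID, taskID))
    path = ASSET_PATTERN % (FIRST_FRAME + int(taskID))  # assumes chunk size 1
    exists = os.path.exists(path)
    ClientUtils.LogText("checking %s -> %s" % (path, exists))
    return exists
```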
When the script dependency is run on a job that’s fully pending (all tasks are still pending), it gets -9999 as the TaskID argument in its main function.
If the script returns True, the whole job & all tasks get queued. If it returns False, nothing gets queued…
In other tests, where I started the job with a file dependency and released a couple of tasks by creating the dependency files, THEN switched the job to use a script dependency instead of a file dependency, the script would properly get the TaskID argument (0, 1, 2, etc.).
So my question would be: how can I get the script dependency to be called with the proper TaskID right off the bat? Or is the -9999 TaskID something I should special-handle?
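If special handling is the way to go, I'd guess something along these lines (a sketch based purely on the behavior I observed; the path and frame range are example values):

```python
# Hedged sketch of special-casing the -9999 sentinel: when Deadline
# checks the whole pending job at once, only release it if every
# frame's file already exists. Path and range are example values.
import os

ASSET_PATTERN = r"\\server\assets\testAsset.%04d.txt"  # example path
FIRST_FRAME, LAST_FRAME = 1000, 1100                    # example range

def __main__(jobID, taskID):
    taskID = int(taskID)
    if taskID == -9999:
        # Whole-job check: returning True here queues ALL tasks at once,
        # so only do it when every dependency is satisfied.
        return all(os.path.exists(ASSET_PATTERN % f)
                   for f in range(FIRST_FRAME, LAST_FRAME + 1))
    return os.path.exists(ASSET_PATTERN % (FIRST_FRAME + taskID))  # chunk size 1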
Seems like turning on the Per Frame Dependency property under Dependencies does the trick. Now the only question is: how do I set that at submission time in Max?
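If the manual job submission docs are anything to go by, my guess is that the matching job info file key is IsFrameDependent; I haven't verified this on 6.1, so treat it as an assumption:

```
IsFrameDependent=true
```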