Deadline integration in pipelines

Hey,

it would be awesome if Deadline were extended with configurable search paths for additional plugins, events, etc.
I would prefer to use system environment variables for that, for example: additional plugin paths from a variable like “DEADLINE_PLUGINS”, which contains one or more network paths that Deadline then searches for additional plugins.
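Roughly what I have in mind (just a sketch of the proposed behaviour; “DEADLINE_PLUGINS” and the folder layout are made-up names, nothing that exists today):

```python
import os

def find_extra_plugin_dirs(env_var="DEADLINE_PLUGINS"):
    """Collect additional plugin folders from a path-list environment variable.

    The hypothetical DEADLINE_PLUGINS variable would hold one or more network
    paths, separated by the platform's path separator (';' on Windows,
    ':' on Linux/macOS). Deadline would search these on top of the
    repository's own plugins folder.
    """
    plugin_dirs = []
    for root in os.environ.get(env_var, "").split(os.pathsep):
        root = root.strip()
        if not root or not os.path.isdir(root):
            continue
        # Treat each sub-folder as one plugin, mirroring the repository layout.
        for name in sorted(os.listdir(root)):
            candidate = os.path.join(root, name)
            if os.path.isdir(candidate):
                plugin_dirs.append(candidate)
    return plugin_dirs
```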

This would make integrating Deadline into a pipeline much easier. Most pipelines have their own VCS (like git) to track changes and branch versions. But for now, Deadline expects its content only inside its repository structure. We want to keep our scripts, tools, and plugins in our own structure and simply tell the application where to find its content.
The best example of using environment variables is Maya, which also lets the user run a setup script. It would be awesome to get something similar to what Maya has for setting up its environment.

I know this is not easy to realize, because the repository is running constantly, and changing environment variables to change its search paths is maybe not the best way to set up or extend the repository. But this could work for slaves, so a slave could switch its active pipeline via an environment variable.

Cheers,
Michael

Sounds like our ‘custom’ folder.

I’m from a web-development background, so my workflow is to design and develop in a ‘staging’ area, then deploy from there. Internally we develop things in local repos, then add them to our source tree when changes are to be reviewed/committed.

Do you think symlinking/junctioning the ‘custom’ folder would suit your needs?
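For example, something along these lines (a rough sketch; the paths are made up, the link target must not already exist, and note that junctions only work with local volumes, so network shares would need a symlink instead):

```python
import os
import subprocess
import sys

# Made-up example paths: your pipeline checkout and the repository's 'custom' folder.
pipeline_custom = r"D:\pipeline\deadline\custom"
repo_custom = r"C:\DeadlineRepository\custom"

if sys.platform == "win32":
    # A directory junction avoids the symlink privilege requirement on Windows,
    # but both ends have to live on local NTFS volumes.
    subprocess.check_call(["cmd", "/c", "mklink", "/J", repo_custom, pipeline_custom])
else:
    os.symlink(pipeline_custom, repo_custom)
```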

Currently our workflow is similar to yours.
We can’t use symlinks, so we have a deploy tool to push our data into 3rd party folders.

This workflow is nice and works well, but it makes it much harder to track the locations of your data. For example: if you want to bundle the pipeline and transfer it to a different location for some reason (to localize it on a laptop for external freelance artists, to take it to a client, …).

Of course we could track those external locations with our deploy tool. I was just asking if this could be implemented in the future.

For now we have all pipeline scripts and data in our pipeline repository (for all of the 2D, 3D, CAD and dev apps we use) on the server;
only a few apps like Deadline need the scripts at their own location.

Don’t misunderstand me, I would never get rid of the deploy step.

Cheers,
Michael

This +1000. The workflow with an actual pipeline is pretty clunky right now.

@Nathan: Can you describe what an ideal setup would look like here as well? Or is it identical to what Michael suggested? For example, I’m not sure if Michael is aware of the additional Python search paths configuration in Repo Options:
docs.thinkboxsoftware.com/produc … n-settings
or the slave centric event plugins such as “OnSlaveStartingJobCallback”:
docs.thinkboxsoftware.com/produc … ing-events
Both things above are not exactly what we are talking about here and may be what you’re referring to as clunky, but I’d like to clarify and understand your needs. Also, all the work that has gone into what we are about to show at SIGGRAPH next week will only help here as well. :wink:
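For reference, a minimal event plugin hooking that slave-centric callback would look roughly like this (an untested sketch; please check the callback signature and boilerplate against the docs above for your Deadline version):

```python
from Deadline.Events import DeadlineEventListener

def GetDeadlineEventListener():
    return SlaveJobStartListener()

def CleanupDeadlineEventListener(eventListener):
    eventListener.Cleanup()

class SlaveJobStartListener(DeadlineEventListener):
    def __init__(self):
        self.OnSlaveStartingJobCallback += self.OnSlaveStartingJob

    def Cleanup(self):
        del self.OnSlaveStartingJobCallback

    def OnSlaveStartingJob(self, slaveName, job):
        # A per-slave pipeline decision could be made here, e.g. based on
        # the slave's name or a key stored on the job.
        self.LogInfo("Slave '%s' is starting job '%s'" % (slaveName, job.JobName))
```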

I’ll save my description of an ideal setup for another time, because that would be a very long post and cover a lot more ground than what is being discussed here. :wink: However, Magsec’s suggestion of using environment variables to locate additional job and event plugins, scripts, etc. is something I support wholeheartedly, and I’ve suggested it previously in betas long past.

I already use both methods to include the Python pipeline.
But this only works for my “core”. Each project can be customized by patching or extending the core. Each per-project customization has its own location (tracked and versioned by the database) and needs to be added dynamically. If I added them all to Deadline, only the first one loaded would be used, because the patches don’t have different names (like your custom folder inside the Deadline repo).
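To make the ordering problem concrete, this is roughly what I would need the search-path handling to do (the paths and the environment variable name are invented for the example):

```python
import os
import sys

# Invented paths: the shared "core" and one per-project patch location.
core_path = r"\\server\pipeline\core\python"
project_patch = os.environ.get("PIPELINE_PROJECT_PATH", "")  # set per project

# The patch has to come *before* the core, so that modules with the same
# name shadow the core versions instead of being silently ignored.
if project_patch and os.path.isdir(project_patch):
    sys.path.insert(0, project_patch)
if core_path not in sys.path:
    sys.path.append(core_path)
```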

This is only one pipeline reason.

We do this for application plugins using the job’s “Custom Plugin Directory” attribute, but it’s less than ideal because it requires every job to hard-code it at submission time, rather than allowing for dynamic discovery (e.g. a script that runs before a job is loaded that allows these proposed environment variables to be set dynamically, or for the plugin search process to otherwise be directed). There is also no analog for event plugins.
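To illustrate the hard-coding, the submission side ends up looking roughly like this (paths are invented and the exact job info key should be checked against the docs, but the point is that it has to be known for every job up front):

```python
# Sketch of writing a job info file with the plugin directory baked in.
import os
import tempfile

job_info = {
    "Plugin": "MayaBatch",
    "Name": "Example job",
    # Per-job override of where the application plugin lives, fixed at submission:
    "CustomPluginDirectory": r"\\server\pipeline\projectX\deadline\plugins",
}

fd, job_info_file = tempfile.mkstemp(suffix=".job")
with os.fdopen(fd, "w") as f:
    for key, value in job_info.items():
        f.write("%s=%s\n" % (key, value))

# The job info and plugin info files are then handed to deadlinecommand
# for submission (plugin info file omitted here for brevity).
```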

For now I have created a custom event which uses the “On Submission” event trigger. This event adds another environment entry with the specific path to the “patched” pipeline directory. That path is defined in an additional job info key, which I set from my custom submission tools for the apps (Maya, Nuke, …).
In the end, the “core” pipeline checks (while loading) whether those environment variables are set and appends their paths to Python’s sys.path.
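The event side looks roughly like this (a trimmed-down sketch from memory; the callback wiring and the Job/RepositoryUtils calls should be double-checked against the Deadline scripting docs, and “PipelinePatchPath” / “PIPELINE_PATCH_PATH” are just my own key names). The core side is then the same sys.path handling I sketched above.

```python
from Deadline.Events import DeadlineEventListener
from Deadline.Scripting import RepositoryUtils

def GetDeadlineEventListener():
    return PatchPathListener()

def CleanupDeadlineEventListener(eventListener):
    eventListener.Cleanup()

class PatchPathListener(DeadlineEventListener):
    def __init__(self):
        self.OnJobSubmittedCallback += self.OnJobSubmitted

    def Cleanup(self):
        del self.OnJobSubmittedCallback

    def OnJobSubmitted(self, job):
        # My submitters write the patched pipeline directory into this key.
        patch_path = job.GetJobExtraInfoKeyValue("PipelinePatchPath")
        if patch_path:
            # Expose it to the rendering process as an environment variable.
            job.SetJobEnvironmentKeyValue("PIPELINE_PATCH_PATH", patch_path)
            RepositoryUtils.SaveJob(job)
```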

I’m still trying to make it more efficient…