The behavior of the slave’s Python environment is currently pretty worrisome.
I have a Plugin.py file in the Deadline repository that imports an external module and returns a class from it. However, changes to this external module are not properly “seen” by a running slave process unless I restart the slave or insert a “reload(module)” call after the import (which is a terrible idea for many reasons).
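Roughly, the Plugin.py looks like this (the module and class names below are placeholders for our actual library, and I'm assuming the usual GetDeadlinePlugin() entry point convention):

```python
# Plugin.py (simplified sketch; "studio_render_lib" and "StudioRenderPlugin"
# are placeholder names for our external library and plugin class)
import studio_render_lib

def GetDeadlinePlugin():
    # Without the line below, edits to studio_render_lib on disk are never
    # picked up by a running slave, because the module stays cached in
    # sys.modules for the life of the interpreter.
    # reload(studio_render_lib)  # the workaround we'd rather not rely on
    return studio_render_lib.StudioRenderPlugin()
```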
Similarly, the environment set up by PluginPreLoad.py seems to persist between invocations: the environment variables I set there are still present the second time a task runs.
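As a concrete example, the PluginPreLoad.py does something along these lines (variable names are illustrative, and I'm assuming the usual __main__() entry point), and those variables are still set when the next task starts:

```python
# PluginPreLoad.py (simplified sketch; variable names are illustrative)
import os

def __main__(*args):
    # These land in the slave process's environment. Because the process is
    # not restarted between tasks, they are still present on the next task.
    os.environ["STUDIO_PROJECT"] = "example_project"
    os.environ["STUDIO_PLUGIN_VERSION"] = "1.2.3"
```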
Is there a reason for this, and can it be changed so that the Python environment (environment variables, imports, etc.) is completely flushed between tasks? Being able to sandbox tasks this way, and to rely on changes propagating immediately, is very important to us.
The issue is that we can’t reset the Python environment without restarting the slave process. It’s similar to running a Python shell - you can’t reset the interpreter’s state without restarting it (as far as I know, anyway).
For Deadline 7, we will be exploring the idea of having the slave spawn “renderslave” processes to render tasks, instead of the current system where “renderthreads” in the slave do this. This will allow the render process to be completely sandboxed from the slave, so if something causes a crash during rendering, it won’t bring the slave down with it. It also has the added benefit of completely wiping the Python environment.
For now, it looks like you have two options. The first is to use Python’s “reload” function, which we understand is not ideal. The second is that after you commit a change to your Python libraries, you can use the Remote Control menu in the slave list to send out a “restart slave after current task completes” command to all of your slaves. That way, they finish their current task and then restart, which reloads the new libraries for the next task.
I see. That’s pretty unfortunate, but I guess I’ll have to see how much of a problem this is, and whether it’s worth it for us to skip Deadline’s Python environment altogether and just do everything from within our own interpreter subprocess.
This raises another question for me though: If the environment is never cleared, what does the “Reload plugin between tasks” option actually do?
It just forces the slave to unload and reload the job’s plugin between tasks. This is standard behavior between jobs, and enabling this option just does the same thing between tasks.
It’s really only useful for advanced plugins that keep the rendering application and scene loaded in memory between tasks. When enabled, it will force the rendering application to be restarted for each task, which can be helpful for debugging memory usage issues in the rendering application or scene file.
Ok, so that really just means that a new instance of the plugin is created, but in the same Python interpreter environment (same sys.modules, globals, etc.), correct?
Thanks for the clarification. This means we probably can’t use the Python pre-setup scripts at all.
Without a clean environment at startup, there is no way to guarantee that a setting left over from a previous render won’t contaminate an unrelated job later on. Very messy to troubleshoot.
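To make the concern concrete, even a simple module-level setting in a shared helper can leak from one job into the next in this model (the helper and names below are made up):

```python
# shared_setup.py (hypothetical helper imported by our per-job setup scripts)
import os

current_renderer_version = None  # module-level state lives for the whole interpreter

def configure_for_job(renderer_version):
    global current_renderer_version
    current_renderer_version = renderer_version
    os.environ["RENDERER_VERSION"] = renderer_version

# Job A's setup calls configure_for_job("2.5"). If job B's setup forgets to
# call it (or fails before it does), job B silently renders with the
# RENDERER_VERSION=2.5 left over from job A.
```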
For environment variables, using the per-job environment seems like a good workaround - or are those settings sticky between jobs as well?
(This is a critical issue for us, since jobs define their configuration, plugin versions, project, shot, etc. using environment variables and some startup Python scripts.)
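If the per-job environment does get applied fresh for each job, we could push everything in at submission time. A rough sketch of what I have in mind (the EnvironmentKeyValue entries and the deadlinecommand invocation are written from memory, so the exact syntax would need to be checked against the docs; names and paths are placeholders):

```python
# Submission-side sketch: write a job info file with per-job environment
# variables and submit it with deadlinecommand.
import subprocess
import tempfile

job_info = {
    "Plugin": "OurPlugin",
    "Name": "Example shot render",
    # One numbered entry per variable (assumed syntax).
    "EnvironmentKeyValue0": "STUDIO_PROJECT=example_project",
    "EnvironmentKeyValue1": "STUDIO_SHOT=sh010",
    "EnvironmentKeyValue2": "STUDIO_PLUGIN_VERSION=1.2.3",
}

with tempfile.NamedTemporaryFile("w", suffix=".job", delete=False) as f:
    f.write("".join("%s=%s\n" % (k, v) for k, v in job_info.items()))
    job_info_path = f.name

# plugin_info.job would be built the same way for the plugin-specific settings.
subprocess.call(["deadlinecommand", job_info_path, "plugin_info.job"])
```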