Slaves not getting Python Path

We are having issues with the PYTHONPATH in OnJobFinished events. Sometimes everything works just fine, and other times we get import errors. If I understand the documentation correctly, the paths set in Tools > Configure Repository Options > Python Settings are supposed to be passed to the render slaves. When I checked the environment on the deadlineslave.exe process, none of those paths were in PYTHONPATH, which makes me think they are added dynamically after the process is started.

For now I’ve added all the paths manually right before I need to use them, like this:

import sys
sys.path.append(r'\\server\path\to\code')

From what I understand, this should not be necessary because it should get the paths from the Python Settings in the Deadline Monitor. But my question is why does it work sometimes and not other times?

Hello,

In talking this over with the dev team, it looks like the code should be applying these paths to event plugins, as well as pre- and post-job scripts.

Ok, so the paths specified in Python Settings are supposed to apply to events; that’s what I thought. The weird thing is that it sometimes works and sometimes we get import errors. It looks like the import errors are due to the deadlineslave process not having those paths in its environment. Could there be something wrong with how our repository is set up? Or is there an issue with the code?

Hello,

Can I ask what version of Deadline you are running? If you aren’t running 6.2, is an upgrade an option? Thanks

We are currently using 6.1 but I just noticed that 6.2 was released a week ago. Once we get our download link I will upgrade and check if this is still an issue.

As we have not received the 6.2 download yet, do you have any other information about this issue?

Hello,

So as far as I can tell, this should be resolved in 6.2. Were you waiting on someone in your company to obtain the download links? If you need them, you can always email either sales or support for those links and an updated license file, if needed.

Cheers,

We just got 6.2 installed and I can confirm the issue has not been fixed. To reiterate, the Deadline slaves do not reliably get the Python path from the Deadline Monitor.

To give a specific example, Nuke points to an init.py file based on an environment variable. That file is loaded properly, but in it we import some of our custom scripts, and those are the imports that fail, but only sometimes. So the paths in Python Settings are not getting passed to the Deadline slaves every time. The only idea we have is that maybe the slaves can’t get the paths when they are run as a service. Do you have any insight into this issue?
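
To make that concrete (the module name below is just a stand-in for our actual tools), the relevant part of our init.py is roughly:

# Simplified init.py; "studio_tools" stands in for our real modules.
import studio_tools  # lives on one of the paths listed in Python Settings
studio_tools.setup()

It’s that import that intermittently raises ImportError on the slaves.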

Hello!

This is pretty puzzling. We haven’t seen this much in the support circle here, so either it’s working for everyone else or people are working around it and not telling us. Thanks for being vocal about this stuff.

So, considering the import errors are the symptom here, maybe we can check two things: that the environment comes through, and that the files Python should be importing are actually reachable (maybe a network blip is causing this).

The first part of that check is super easy. Just drop in something like so:

import pprint
import sys

pprint.pprint(sys.path)

That should let us see if/when that path isn’t being set.

If that seems all right, then the next step is trying to open something from those paths to see if the network fails. I’d think something gross like this might work:

with open("some/file.py") as file:
    data = file.read()

Hopefully that gets us some results so we can start narrowing this down further.

I just tried your first suggestion and none of the paths were in the… path. I’m positive the network stays connected, because our Nuke library is on the network. Our workaround is to hardcode the path at the top of the init.py file and everywhere else it fails to import, but that’s less of a solution and more of a trial-and-error patch.
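
For reference, the hardcode at the top of init.py is roughly this (the UNC path is a placeholder for our actual script root):

import sys

SCRIPT_ROOT = r'\\server\scripts\python\common'  # placeholder path
if SCRIPT_ROOT not in sys.path:
    sys.path.insert(0, SCRIPT_ROOT)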

There doesn’t seem to be any pattern as to which module fails to import or where. The only thing that remains consistent is that it happens to the deadline slaves running as a service.

Well, that sounds pretty conclusive. Time to test on our side.

Hi,
Just a quick thought here, but you mentioned that you are running the Slave as a service. Drive-letter-based paths are not accessible when running as a service; however, UNC paths do work.
msdn.microsoft.com/en-gb/library … 43(v=vs.85.aspx
Would this explain the behaviour you’re seeing?
Mike

Blarg. Thanks Mike.

He’s totally got a point here. If the user the service is running under doesn’t get drives auto-mounted, this would explain the problem. Testing with UNC paths would prove it.
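
If it helps, a quick check like this (both paths are made up; swap in your mapped drive and the matching UNC share) would show which form the service account can actually see:

import os

# Compare the drive-letter form with the equivalent UNC form.
for candidate in (r'U:\scripts\python\common',
                  r'\\server\share\scripts\python\common'):
    print('%s -> %s' % (candidate, os.path.isdir(candidate)))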

Sorry I haven’t replied in a while; I changed all the paths to use UNC and it seems to be OK now.

Here’s a related question: are the mapped drives supposed to automatically resolve so that slaves running as a service don’t have to use UNC paths? I noticed in Configure Repository Options > Mapped Drives there is a checkbox to map drives when slave is running as a service. Do those maps only work for auxiliary files and job output? Or are they also supposed to be used for the python paths?

Hi,
Enabling this checkbox means the network drive mappings will be made at the beginning of each job, using the user credentials provided in the configuration of each drive mapping or, in their absence, whatever user account is being used to run the Slave as a service. This should then allow you to access your drive mappings while running as a service.
Mike

We seem to be having a similar problem here although we’re operating in a slightly different manner.

It would seem that, since upgrading from 6.0 to 6.2 on our Windows-based render farm with Linux hosts, the additional Python search paths (not UNC paths) set on the repository config page aren’t making it to the environment of the spawned thread on the nodes.

Mappings to network drives are done once at boot time, and our pre-task script here checks for an ‘existing and active’ mapping and remaps it if missing.
(We’re not using Deadline’s mapped drives tools at this point; does that still disconnect and reconnect ahead of each task?)
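
For reference, that check is roughly the following (drive letter and share are placeholders for our real ones):

import os
import subprocess

DRIVE = 'U:'
SHARE = r'\\server\projects'  # placeholder share

# Remap the drive if it isn't currently visible to this session.
if not os.path.isdir(DRIVE + '\\'):
    subprocess.call(['net', 'use', DRIVE, SHARE, '/persistent:no'])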

Pre-task scripts defining PYTHONPATH are not making it through.

The only way we can push a new PYTHONPATH to the render thread is to put it into the Environment Variables page in the job properties.

IncludeEnvironment=True doesn’t carry PYTHONPATH over at submission time, so we have to use EnvironmentValueKey0=PYTHONPATH=U:\python\264x64\ etc. as well as IncludeEnvironment.
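
A quick way we confirm whether the variable actually made it into the render process is to dump it from the job script itself:

import os

# Show exactly what PYTHONPATH the task environment ended up with.
print(os.environ.get('PYTHONPATH', '<not set>'))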

Our nodes aren’t running the Launcher or the Slave as a service or daemon.

Submitting a python job script with:

import pprint
import sys

pprint.pprint(sys.path)

returns

U:\applications\python\264x64
U:\applications\python\264x64\python26.zip
U:\applications\python\264x64\DLLs
U:\applications\python\264x64\lib
U:\applications\python\264x64\lib\plat-win
U:\applications\python\264x64\lib\lib-tk
U:\applications\python\264x64\lib\site-packages
U:\applications\python\264x64\lib\site-packages\PIL
U:\applications\python\264x64\lib\site-packages\win32
U:\applications\python\264x64\lib\site-packages\win32\lib
U:\applications\python\264x64\lib\site-packages\Pythonwin

when we need it to be:

U:\scripts\python\common
U:\scripts\python\common\3rdParty
U:\scripts\maya\2014-x64\shelves
U:\scripts\maya\2014-x64\pipeline
U:\scripts\maya\2014-x64\3rdparty
U:\scripts\maya\2014-x64\3rdparty\python
U:\scripts\maya\2014-x64\rigging
U:\scripts\maya\static\rigging\python
U:\scripts\maya\static\3rdparty\python

Have you played around with the fancy Python Settings in the Repository Options folder? We should be sticking that stuff in when the Slave starts up.

If any scripts are modifying the path, those changes are going to stick around until the Slave is restarted…

If the settings from the Repo Options don’t stick, maybe try doing a quick audit to see if any scripts are playing around with sys.path.
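
One cheap way to do that audit (the search root below is a stand-in for wherever your custom scripts live):

import os

SEARCH_ROOT = r'\\server\deadline\repository\custom'  # placeholder root

# Flag every .py file under the root that touches sys.path.
for root, dirs, files in os.walk(SEARCH_ROOT):
    for name in files:
        if name.endswith('.py'):
            full_path = os.path.join(root, name)
            with open(full_path) as script:
                if 'sys.path' in script.read():
                    print(full_path)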

Hi Edwin,

I’m setting a path in the Repository Options GUI dialog and it’s not taking effect even when I reboot the Slave. Is there a folder/file on the repo that could conflict with this?
Cheers!

Those settings are taken from the database, and there should be nothing conflicting with them.

If you run “c:\windows\system32\cmd.exe /c set” through as a CommandScript job, it should show what the PYTHONHOME variable was set to.
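
If a Python job is easier to submit, the equivalent check is just dumping the environment the task actually sees:

import os
import pprint

# Print every environment variable handed to the render process.
pprint.pprint(dict(os.environ))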