AWS Thinkbox Discussion Forums

python libraries cached on slaves for post-job execution

Hi team,

We have a python post job which runs after a render job completes. The post job reads the JobOutputDirectories and JobOutputFileNames to derive a list of all output paths. It then performs database insertion on each path.
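For context, the path derivation is essentially the following (a sketch only; the directory and filename values are made-up stand-ins for the job's JobOutputDirectories and JobOutputFileNames properties, which Deadline exposes as parallel lists):

```python
import os

def derive_output_paths(output_dirs, output_filenames):
    # Pair each output directory with its corresponding filename pattern.
    # Frame-padding tokens (e.g. '####') are left untouched here.
    return [os.path.join(d, f) for d, f in zip(output_dirs, output_filenames)]

# Hypothetical values standing in for job.JobOutputDirectories / job.JobOutputFileNames:
dirs = ["/renders/shot010/beauty", "/renders/shot010/depth"]
names = ["beauty.####.jpg", "depth.####.exr"]
paths = derive_output_paths(dirs, names)
```

Each resulting path is then handed to the database-insertion step.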

Yesterday, we updated one of the python libraries the post job is importing: we added two new functions to the library.

When the post-job called these new functions, some slaves raised:
'module' object has no attribute 'isBeautyJpg' (Python.Runtime.PythonException)

Some slaves picked up the new function without issue.

By adding debug statements, we confirmed that although the jobs on the failing slaves were sourcing the correct .py file, the new functions did not appear when dir() was run on the library.

We ultimately resolved the issue by adding a reload() call immediately after the import in the post-job.
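The workaround looks roughly like this (a sketch; "string" below is just a stdlib stand-in for the pipeline library we actually import, and the try/except makes the same pattern work on Python 3, where reload() moved to importlib):

```python
try:
    from importlib import reload  # Python 3: reload is no longer a builtin
except ImportError:
    pass  # Python 2: reload() is a builtin

import string            # stands in for the pipeline library the post-job imports
string = reload(string)  # re-executes the module source, picking up newly added functions
```

Without the reload, a slave whose interpreter already cached the old module keeps serving the stale copy.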

Is this a known issue? If so, is it limited to post-jobs, or do other task types share this limitation? Is there a better workaround than explicitly reloading all imports?

Thank you for your help!
Best,

Sally Slade
Scanline VFX

Hi Sally,

Using “reload” is unfortunately your only option here. This thread explains why you see this behavior:
viewtopic.php?f=86&t=8935&p=37854&hilit=reload+module#p37854

The way we could avoid this would be to sandbox the Python execution in a separate process, so that it always starts with a clean environment. I know we mentioned in that thread that we might do this in v7, but it has been pushed back to v8 on our roadmap due to other priorities that have come up.
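For illustration, the sandboxing idea amounts to something like the following (a sketch, not Deadline's implementation; the inline snippet stands in for a real post-job script):

```python
import subprocess
import sys

# Running the post-job in a fresh interpreter means every execution starts
# with an empty module cache, so stale imports can never leak in.
code = "import string; print(string.ascii_lowercase)"
out = subprocess.run(
    [sys.executable, "-c", code],
    capture_output=True, text=True,
).stdout.strip()
```

The trade-off is per-task interpreter startup cost, which is why it is a roadmap item rather than a quick fix.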

Cheers,
Ryan

This is still an important issue for us across the board (not just in the case Sally mentioned). We are now getting into the (bad) habit of adding

import mod
reload(mod)

combos to all our scripts, and Deadline is the only reason for this.
