Resubmitted job not using latest event plugin code

I’m testing an event plugin and finding that whenever I resubmit a job with the plugin enabled, it doesn’t use the latest code on machines that have run the event in the past. The plugin appends the location of the latest code to sys.path, as in the example below; however, when the resubmitted job runs and calls into that “latest” code, it outputs information that no longer exists in the classes being called.

My code:

[code]import re, sys, traceback, platform

from System.IO import *
from System.Text import *

from Deadline.Events import *
from Deadline.Scripting import *

# Switch Python version libraries
if "2.6" in platform.python_version():
    sys.path.append(r"\\ladev\bb\site-packages\bb\libs_26")
else:
    sys.path.append(r"\\ladev\bb\site-packages\bb\libs")

import pymongo

sys.path.append(r"\\ladev\bb\site-packages")

from bb.pipeline import Pipeline
from bb.constants import *

######################################################################
# This is the function that Deadline calls to get an instance of the
# main DeadlineEventListener class.
######################################################################
def GetDeadlineEventListener():
    return bbPipeline()

def header():
    log.info("-" * 80)
    log.info("%s" % ("bbPipeline".center(80)))
    log.info("-" * 80)

######################################################################
# This is the main DeadlineEventListener class for bbPipeline.
######################################################################
class bbPipeline(DeadlineEventListener):

    def __init__(self):
        self.OnJobSubmittedCallback += self.OnJobSubmitted
        self.OnJobStartedCallback += self.OnJobStarted
        self.OnJobFinishedCallback += self.OnJobFinished
        self.OnJobRequeuedCallback += self.OnJobRequeued
        self.OnJobFailedCallback += self.OnJobFailed

    def OnJobSubmitted(self, job):
        pass

    def OnJobStarted(self, job):
        header()
        log.info("initializing pipeline object")
        p = Pipeline()
        log.info("continuing...")

    def OnJobRequeued(self, job):
        pass

    def OnJobFailed(self, job):
        pass

    def OnJobFinished(self, job):
        header()
        log.info("initializing pipeline object")
        p = Pipeline()
        log.info("continuing...")[/code]

When Pipeline() is initialized, it originally output debug information about the current user. I removed that output long ago, yet the log still displays it on resubmitted jobs (only on machines that have rendered with this event plugin before).

Example output from two machines rendering the same job and initializing the same classes:
Machine 1 (OnJobFinished) – Expected output

=======================================================
Log
=======================================================
PYTHON: [    INFO] --------------------------------------------------------------------------------
PYTHON: [    INFO]                                    bbPipeline                                   
PYTHON: [    INFO] --------------------------------------------------------------------------------
PYTHON: [    INFO] initializing pipeline object
PYTHON: [ WARNING] User account 'render' has been marked as hidden.
PYTHON: [    INFO] continuing...

Machine 2 (OnJobStarted) – Old output

=======================================================
Error
=======================================================
Event Error (OnJobStarted): TypeError : not all arguments converted during string formatting

=======================================================
Type
=======================================================
PythonException

=======================================================
Stack Trace
=======================================================
['  File "none", line 53, in OnJobStarted\n', '  File "\\\\ladev\\bb\\site-packages\\bb\\pipeline\\__init__.py", line 267, in __init__\n    if not self._user:\n', '  File "\\\\ladev\\bb\\site-packages\\bb\\pipeline\\__init__.py", line 220, in _check_user\n    a = self._user.get_name()\n']

=======================================================
Full Log
=======================================================
PYTHON: [    INFO] --------------------------------------------------------------------------------
PYTHON: [    INFO]                                    bbPipeline                                   
PYTHON: [    INFO] --------------------------------------------------------------------------------
PYTHON: [    INFO] initializing pipeline object
PYTHON: {'_MongoRaw__data': {u'hide': 1, u'description': u'', u'extra': {}, u'mobile': u'', u'visibility': {u'mobile': 1.0, u'phone': 1.0, u'description': 1.0, u'email': 1.0}, u'phone': u'', u'photo_url': u'', u'uid': u'render', u'_id': ObjectId('52e074d91bcfc5ca4dcb57f0'), u'email': u'', u'name': u''}}
An error occurred in the "OnJobStarted" function in events plugin 'bbPipeline': TypeError : not all arguments converted during string formatting (Python.Runtime.PythonException)
['  File "none", line 53, in OnJobStarted\n', '  File "\\\\ladev\\bb\\site-packages\\bb\\pipeline\\__init__.py", line 267, in __init__\n    if not self._user:\n', '  File "\\\\ladev\\bb\\site-packages\\bb\\pipeline\\__init__.py", line 220, in _check_user\n    a = self._user.get_name()\n'] (Deadline.Events.DeadlineEventPluginException)
   at Deadline.Events.DeadlineEventPlugin.HandlePythonError(String message, Exception e)
   at Deadline.Events.DeadlineEventPlugin.OnJobStarted(Job job, String[] auxiliaryFilenames)
   at Deadline.Events.DeadlineEventManager.OnJobStarted(Job job, String[] auxiliaryFilenames, DataController dataController)

I’ve even tried removing the compiled bytecode (*.pyc) files from the location being added to sys.path. Machines that have run this code before still show the old output, whereas machines that have never run it output what I expect. Is this somehow cached on the machines?

This has to do with the global scope being persistent in Deadline (for the current Monitor/Slave/Pulse session), and with the way Python handles imports. Python only imports a module once per process, caching it in sys.modules, because that’s far more efficient and 99% of the time libraries don’t change mid-session. If you expect the Pipeline library to change over the lifetime of a Monitor session (or just want to make development easier), you’ll have to explicitly reload it using the built-in reload function.
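You can see the caching for yourself with a quick diagnostic along these lines (independent of Deadline), which shows which file the cached module was loaded from:

[code]import sys

import bb.pipeline              # first import: executes the module
import bb.pipeline              # later imports: served from sys.modules

# The file the cached module object was originally loaded from. If this
# points at the path you expect but the behaviour is stale, you're seeing
# the in-memory copy, not the current file on disk.
print sys.modules["bb.pipeline"].__file__[/code]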

It’s made a bit trickier with the “from ________ import ________” syntax, but still not too bad:

[code]import bb.pipeline
reload( bb.pipeline )
from bb.pipeline import Pipeline[/code]
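If you’re constructing Pipeline() from several callbacks, you could wrap the reload in a small helper so there’s only one place to remove it once development settles down. A minimal sketch (fresh_pipeline is just an illustrative name, not part of the Deadline API):

[code]def fresh_pipeline():
    # Development-only helper: re-import bb.pipeline on every call so
    # that code edits are picked up without restarting the Slave.
    # reload() re-executes the module each time, so you'd want to drop
    # this once the library stabilizes.
    import bb.pipeline
    reload(bb.pipeline)
    return bb.pipeline.Pipeline()

# then, inside a callback:
#     p = fresh_pipeline()[/code]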

I believe that should do the trick, let me know how it goes!

Okay, that seemed to have fixed it. After starting this thread I tried running

[code]import bb
reload(bb)
from bb.pipeline import Pipeline[/code]

and that wasn’t working, so I abandoned the idea of using a reload. In hindsight that makes sense: reloading the bb package doesn’t reload its submodules, so bb.pipeline was still coming from the module cache. Didn’t think to do it the way you suggested. Thank you!