AWS Thinkbox Discussion Forums

global functions unavailable at py compile time

Hi,

In 5.0.0, global functions like GetScriptsDirectory() were already available when the python script was compiled and executed. This means that a call to GetScriptsDirectory() in the plain script body (outside of any function) would succeed.

In 5.1 Alpha 2 (tested on linux 64), the global functions only become available at the time main(*args) is executed, so apparently your script execution code changed from something like this:

assureDeadlineGlobalsArePresent()
import userScript
userScript.__main__()

to something like

import userScript  # no deadline globals at source/compile time
assureDeadlineGlobalsArePresent()
userScript.__main__()

To reproduce the issue, try to execute GetScriptsDirectory() in any script's body, outside of main().

The problem is that my integration sets up its environment at source time, outside of main, and it relies on the GetScriptsDirectory() method, which doesn't exist at that point. Sure, I could work around this, but I think the actual culprit is that you don't put your globals into the builtin module right away, which would make them available everywhere in the python interpreter.
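To illustrate what I mean, here is a minimal sketch, assuming a hypothetical installer function on your side (the function and dictionary names are made up):

# Hypothetical sketch: inject the Deadline globals into the builtin module
# *before* the user script is imported, so they are visible at module level
# as well as inside main().
import __builtin__  # the builtins module under IronPython / Python 2

def install_deadline_globals(global_funcs):
    for name, func in global_funcs.items():
        setattr(__builtin__, name, func)

# install_deadline_globals({"GetScriptsDirectory": get_scripts_directory_impl})
# import userScript          # GetScriptsDirectory() now works at source time
# userScript.__main__()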

A little side note: The job scripts don't work on linux as they try to execute a script at “submission\Vrimg2ExrSubmission\Vrimg2ExrSubmission.py”, although it needs to be a capital S (“Submission\Vrimg2ExrSubmission\Vrimg2ExrSubmission.py”).

Thanks for your help,
Sebastian

The problem is that now the script gets executed before our globals are added to the python environment. We did this because that’s the only way it would work with Python.NET, and we wanted the environments for the two options (IronPython and Python.NET) to be as similar as possible. We had never actually intended for these global functions to exist outside the scope of main, so it’s more of a fluke that it worked that way in 5.0 and earlier.

It might be a better idea to remove these global functions and instead replace them with Utils.* functions. That way, they would always be available because the namespace they're in is always available. Then we don't have to worry about messing with the python globals.

Also, thanks for reporting the bug with the job scripts. We’ve logged it and it will get fixed before 5.1 is released.

Cheers,

  • Ryan

Hmm, just realized that removing them would break a lot of existing scripts. Maybe we’ll just add Utils.* functions that do the same thing so that both options are available. Then if you’re outside the scope of main, you can just use the Utils.* functions…

Making the globals, whatever form they end up taking, available in their own modules is a very good idea. Currently it is confusing which global functions are actually available, as they change with the type of script. If you put them into modules instead, people could just import what they want and use it, in most cases.
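To illustrate the idea (the module path and function name below are made up for illustration, not the actual Deadline API):

# Hypothetical usage if GetScriptsDirectory() also lived in an always-importable
# module; ScriptUtils is only used here as an example namespace.
from Deadline.Scripting import ScriptUtils

SCRIPTS_DIR = ScriptUtils.GetScriptsDirectory()   # would work at module level too

def __main__(*args):
    print "scripts live in %s" % SCRIPTS_DIR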

The development docs could use some improvement as well. The API described there is not complete; e.g. the Job instance has many more properties than are documented, and the 'JobID' property actually doesn't exist, it's a typo of 'JobId'. When working with the deadline API, I constantly have to switch between different html pages (or keep many tabs open) - I haven't found a searchable, generated API documentation yet, ideally available offline.

Something I don't really understand about the API is how low-level it is when it comes to submitting jobs. Deadline is software whose main business is dealing with jobs, yet I have to build an argument list of key-value files which I write on my own, which get passed as arguments to a separately started process, which in turn does some magic without returning the created JobId (which could be interesting if you want to set up some dependencies, for example). The flow I mean is sketched below.
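To make the point concrete, this is roughly what the flow looks like from the integrator's side (the keys and the helper below are illustrative, not your exact format):

import os
import tempfile

def write_key_value_file(path, options):
    # each option becomes one "Key=Value" line in the submission file
    handle = open(path, "w")
    try:
        for key, value in options:
            handle.write("%s=%s\n" % (key, value))
    finally:
        handle.close()

tmp = tempfile.gettempdir()
job_file = os.path.join(tmp, "job_info.job")
plugin_file = os.path.join(tmp, "plugin_info.job")

write_key_value_file(job_file, [("Plugin", "MayaBatch"), ("Name", "my render"), ("Frames", "1-100")])
write_key_value_file(plugin_file, [("SceneFile", "/projects/shot010/scene.ma")])

# the two files are then handed to the submission command; nothing hands
# the created JobId back to the calling script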

When making some changes to the existing submission scripts, I also noticed that all of the scripts I looked at were amazingly similar. Even though the job options, for instance, are very much (if not exactly) the same for all submission dialogs, it's all hand-coded (or copy-pasted) between the scripts. When the submit button is pressed, each and every submit script suddenly writes simple key-value files, dealing with all the required formatting itself, which I would consider an implementation detail.
Considering that all job scripts have the same bug in them (i.e. 'submission/…' instead of 'Submission/…' in a path), copy-pasting has happened a lot.

All this introduces an incredible amount of uncontrolled redundancy, which increases maintenance costs, as bug fixes or features now have to be applied to every file the code was duplicated into.

Although I can understand that the GUI API you provide is meant to be as simple and concise as possible, it ends up being quite time-consuming after all, as you leave it to the user to set up each and every width themselves. When I decide to put some controls into a groupbox afterwards, I have to deal with the related width changes myself. This is quite painful to me (it reminds me of the bad old times without 'modern' layouts), and I am not sure how portable this can possibly be (but maybe mono handles this).

All the aforementioned things just came to my mind, and I put them here so I can make myself heard on the matter. These things are not deal-breakers for me after all, just things that could be improved to make the overall development experience better.

By the way, do you have a public (maybe beta-only) bug tracker of some sort? I don't know how well a forum can do that job.

Thanks again for listening,
Sebastian

Thanks for the feedback. My replies are below.

Deadline’s scripts are context-based, which is why we originally went with the idea that functions that should only be available in certain contexts should be global functions, and context-agnostic Utility functions can be added to a module that could be imported. We can’t go back on that now because that would break backward compatibility. However, adding new modules that can be used instead works around this issue.

For functions that are used out of context (i.e. GetJob()), I guess we should raise an exception? Or maybe return None?

We only document the properties of Job that are meant to be accessed by scripts. This way, we have the flexibility to modify the Job class while guaranteeing that the API stays the same. Anything undocumented is unsupported, so this is done by design.

Thanks for catching that typo. We’ll fix that right away.

Maybe we could look at having a PDF/CHM guide that just contains the script API. We’ve added this to the wish list.

The submission procedure existed long before the script API was added, and honestly, no one has asked for a different submission procedure at this point. :slight_smile:

It wouldn’t be hard to skip the file creation part and just build up a couple of dictionaries. That’s probably the ideal way? If you have any suggestions, let us know!

Note that ExecuteCommandAndGetStdout can be used to get the stdout of the submission, from which you could parse out the Job ID. Not elegant, I admit, but it's something you can use for now.
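Something along these lines should work as a stopgap (a rough sketch only - the argument collection and the "JobID=" line in the output are assumptions here, so adjust it to what the submission actually prints):

import re
from Deadline.Scripting import ScriptUtils

job_file = "/tmp/job_info.job"       # the submission files you already wrote
plugin_file = "/tmp/plugin_info.job"

output = ScriptUtils.ExecuteCommandAndGetStdout((job_file, plugin_file))
match = re.search(r"JobID=(\S+)", output)
job_id = match.group(1) if match else None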

Each script is meant to stand on its own, which is why there is redundancy. I’m not sure how we would avoid that. We’ve thought about having a separate script that builds up some common group sections like Job Description, but even these aren’t the same across all the scripts.

We appreciate you reporting the job script bug. It will be fixed during the 5.1 beta.

There is a balance that must be found between automation and manual control. I’ve used some auto-layout UIs that have driven me nuts because they make assumptions that don’t always make sense, and I’ve also used UIs that require me to define all layout dimensions with no alternatives. I wasn’t happy with either approach. That’s why we offer functions to add controls that don’t take a width and height, and those will just use the default sizes of those controls. It won’t autosize them to fit the width though, but if you’re just creating a UI with a bunch of settings (one per line), it doesn’t take anything complicated to do so. But if you want full control over your entire layout, you can do that too.

The current system seems to work fine across all platforms, but I’m sure there is room for improvement. Maybe a start would be to autosize controls that are added without a width or height, or perhaps use a weight system so that you can control how much % of width each control in a row gets, and that would allow everything to scale when the size of the dialog changes.

We greatly appreciate your feedback. A lot of our development is client-driven. Often we don’t realize something is wrong until you guys tell us. We will definitely consider all of your concerns and suggestions when planning the road map for future Deadline releases. Some of the more simple ideas might even make it into 5.1. :slight_smile:

Currently, no. We’ve been using the forums for betas for many years, and so far they have worked fine. Anything can change in the future though. :slight_smile:

Cheers,

  • Ryan

I think the saving grace of Deadline development has always been that the turnaround on bug fixes is often less than a week, so there is no need to track bugs. By the time something is reported, it's usually fixed in the next build. :smiley:

I would go for the exception. Clients of GetJob() will expect to get a Job instance, so returning None would lead to harder-to-debug exceptions when someone tries to access something on None as if it were a Job. A proper exception which states the issue explicitly would make the error clear.
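Roughly what I have in mind (just a sketch; the exception name and the internal state check are made up):

_current_job = None   # hypothetical: set by Deadline when a job context is entered

class OutOfContextError(RuntimeError):
    """Raised when a context-bound function is called outside its context."""

def GetJob():
    if _current_job is None:
        raise OutOfContextError("GetJob() is only available inside a job context")
    return _current_job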

Now that I rechecked, I realized that the property I thought was not documented, JobPlugin, actually is documented. I remember that I was missing that one, which is why I studied the members of a job instance myself in the first place. As a little anecdote: before I checked for 'hidden' properties, I thought it would be best to simply parse the job file myself. I tried xml.dom.minidom (I added the python standard library to my installation of 5.0; 5.1 might already have it), but had to learn that pyexpat doesn't work in ironpython (linux 64). Fine, I thought, and tried to use the XMLDocument class that .NET provides, which provoked a popup from my mono installation stating that this is only available on Windows … :slight_smile:. Luckily I didn't need it in the end.

If you wanted to allow something like dictionaries, you would also have to provide built-in submission functionality, as you wouldn't want to start a separate process anymore - I would clearly prefer this, as there is considerable overhead in starting up iron python.

Dictionaries are not particularly suited to simulating your key-value files, as they would not record the order of keys (even though that shouldn't matter). Nonetheless, I implemented a little list-based ordered dictionary/file hybrid which allows me to easily access and change any key-value file, to make my life easier (a stripped-down sketch follows below).
A huge problem I still have to figure out is that IronPython has giant built-in deficiencies when it comes to the encoding of text files, on linux at least. I cannot decode or encode utf8, as the evil underlying .net codec type just can't handle or recognize it, no matter whether I use it (indirectly) through standard python or through .NET readers/writers. This will break my neck eventually. It's not deadline's issue though; it's just some issue with mono.
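For the record, here is a stripped-down sketch of the ordered dictionary/file hybrid mentioned above (not my actual implementation, just the gist of it):

class OrderedKeyValueFile(object):
    """Keeps key-value pairs in insertion order and writes them as 'Key=Value' lines."""

    def __init__(self):
        self._items = []            # a list of (key, value) tuples preserves order

    def set(self, key, value):
        for index, (existing_key, _) in enumerate(self._items):
            if existing_key == key:
                self._items[index] = (key, value)
                return
        self._items.append((key, value))

    def get(self, key, default=None):
        for existing_key, value in self._items:
            if existing_key == key:
                return value
        return default

    def write(self, path):
        handle = open(path, "w")
        try:
            for key, value in self._items:
                handle.write("%s=%s\n" % (key, value))
        finally:
            handle.close()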

Yes, this would work. Fortunately I don't need it, as I just attach additional info to the job on submission and process it once the job is finished. This might generate new dependent jobs, but what's important is that I have a proper job instance at that time, that is, in the script event handler.

The first thing I did was to provide a way to share my code among multiple scripts. For now, this is as easy as adding the scripts directory to the path, which allows me to import modules with common functionality.
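In case it helps anyone, the whole trick boils down to something like this (the 'common' subdirectory and the shared module name are just my own conventions):

import os
import sys

# GetScriptsDirectory() is the Deadline global discussed above, so in 5.1
# this has to run from within main()
_common_dir = os.path.join(GetScriptsDirectory(), "common")
if _common_dir not in sys.path:
    sys.path.append(_common_dir)

# import submission_common   # shared helpers can now be imported as usual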

Even though the job options, for instance, may vary in some scripts, the two I picked for comparison, Nuke and Maya, had exactly the same job-related options. If you tried, I am sure you could end up with a little function taking only a few arguments to generate the typical job options section for all submission scripts, shaving a huge amount of code off each of them (see the sketch below). Less code means less maintenance, which is good.
This could be a good first step. What would remain an issue is that all the control names are either unknown to the caller, or would have to be magically known to him in one way or another. What you would really need is to abstract your controls away from being plain strings (whose names can easily be messed up) to something more 'formal'.
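As a rough illustration only - the control types and the AddControl signature below follow the pattern of the existing submission scripts as I remember them, so treat the exact arguments as assumptions:

def add_job_description_section(dialog, defaults=None):
    """Add the standard job description controls to the given ScriptDialog."""
    defaults = defaults or {}
    dialog.AddControl("JobDescriptionSeparator", "SeparatorControl", "Job Description", 416, 0)
    dialog.AddControl("NameLabel", "LabelControl", "Job Name", 100, -1)
    dialog.AddControl("NameBox", "TextControl", defaults.get("Name", "Untitled"), 300, -1)
    dialog.AddControl("CommentLabel", "LabelControl", "Comment", 100, -1)
    dialog.AddControl("CommentBox", "TextControl", defaults.get("Comment", ""), 300, -1)
    dialog.AddControl("DepartmentLabel", "LabelControl", "Department", 100, -1)
    dialog.AddControl("DepartmentBox", "TextControl", defaults.get("Department", ""), 300, -1)

# each submission script would then just call:
# add_job_description_section(scriptDialog, {"Name": "My Maya Job"})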

Design-wise, what I see is a class instance which has some specifically declared attributes. If these are set on the instance, they can easily be used programmatically, i.e. myinstance.myattr = 5. In submission dialogs, your instance wants to present itself to the user and builds a UI showing its attributes. Each attribute it wants to display has a corresponding control in the UI. The name of the control is derived from the name of the attribute it corresponds to. In the next step, it can transfer all values from the instance to the UI, which act as default values.
Now the user can change the values through the UI, from which they can subsequently, and fully automatically, be read back into the instance. Usually, at submission time, the instance would then write its attribute values into a key-value file. Once again, this can be fully automated, as the instance knows all its attributes.

When a plugin is called to process a job/task, it has to read its values back in order to use them. In my case, as everything is formally known to the scripts, all I have to do is create an instance of the appropriate type and tell it to initialize its attribute values from the given key-value file. The programmer can then just do what he has to, based on the instance's attribute values.

In that design, the declaration of your plugin class/type is the central hub, and as I build, query and set everything through instance and class attributes, python itself will tell me if I try to access an attribute which doesn't exist (if all you have is strings, this might just go unnoticed).

I implemented this using metaclasses and descriptors, and admittedly I wrapped the whole ScriptDialog and each and every control (all using a metaclass implementation and dynamic type generation) to make it more object-oriented and pythonic.
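To give you an idea of the flavour, here is a boiled-down sketch of the attribute declaration part (not my production code; declaration order is not preserved in this simplified version):

class Attribute(object):
    """Descriptor declaring a named plugin attribute with a default value."""

    def __init__(self, default=None):
        self.default = default
        self.name = None                      # filled in by the metaclass

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__.get(self.name, self.default)

    def __set__(self, instance, value):
        instance.__dict__[self.name] = value


class DescriptorMeta(type):
    """Collects all Attribute declarations so instances know their own fields."""

    def __new__(mcs, name, bases, namespace):
        cls = type.__new__(mcs, name, bases, namespace)
        cls._attribute_names = []
        for attr_name, attr in namespace.items():
            if isinstance(attr, Attribute):
                attr.name = attr_name
                cls._attribute_names.append(attr_name)
        return cls


class JobOptions(object):
    __metaclass__ = DescriptorMeta            # IronPython / Python 2 syntax

    name = Attribute("Untitled")
    chunkSize = Attribute(1)

    def write_job_file(self, path):
        # every declared attribute becomes one "Key=Value" line; the UI
        # control names would be derived from the same attribute names
        handle = open(path, "w")
        try:
            for attr_name in self._attribute_names:
                handle.write("%s=%s\n" % (attr_name, getattr(self, attr_name)))
        finally:
            handle.close()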

Yes, that is exactly what I am missing. Something like a stretch factor could be good (which would just be another way of specifying a percentage). That is, 3 controls with a stretch factor of 1 would each cover 1/3 of the layout. If one of them has a stretch factor of 2, it would be twice as large as either of the other controls, so the coverage would be 1/2, 1/4, 1/4 respectively (see the sketch below).
Non-resizable user interfaces feel very outdated; today's users are used to much more. Failing that, maybe it would already help if you made the modal submission dialogs non-resizable. Perhaps they are supposed to be static, but on linux at least I can resize them, which just looks odd and doesn't really make sense.
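The math behind the stretch factors is trivial, which is part of the appeal:

def stretch_widths(total_width, factors):
    """Distribute total_width among controls according to their stretch factors."""
    total = float(sum(factors))
    return [int(total_width * factor / total) for factor in factors]

# stretch_widths(600, [1, 1, 1])  ->  [200, 200, 200]
# stretch_widths(600, [2, 1, 1])  ->  [300, 150, 150]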

One more thing pops into my mind: could it be that the 'modal' argument of the ScriptDialog.ShowDialog(modal) method doesn't do anything? The dialog is always modal, no matter which option I choose. I noticed this when implementing a job script.

That is great to hear. Actually, I was keen to take my chance at shaping the upcoming release in order to make my life easier. Let's see how much easier it will end up being :wink:.

I can only recommend redmine for this - when I first saw it, I dropped my mantis installation the very next day and made the switch. That is how great it felt, and I still feel that way about it. The main benefit is that you provide more information to your clients and keep them better informed. Maybe that is not what you intend, but to my mind the developer-user interaction greatly benefits from a system like that.

In your case, some sort of dual-project scheme could be the right way to go. That is, you have one project for internal use and another one for public/client use, which is actually a subproject of your internal one. This way, you can nicely limit access while your developers still see what's going on in either project (provided you add them to both).

I definitely appreciate your quick and open responses :slight_smile: !
Cheers,
Sebastian

Yup, that seems to make the most sense. We’re really liking this idea, and it’s currently targeting the 5.1 release.

Actually, the ExecuteCommand and ExecuteCommandAndGetStdout functions don’t start up a separate process. They just pass the arguments to the internal function that interprets them.

We’ve logged this as a bug, and will look into it during the 5.1 beta.

The ScriptDialog stuff probably won’t see any changes during 5.1, nor will the submission scripts (to reduce redundancy). I like the idea of taking another look at how we do this stuff, and it makes sense to do it all together as part of the future road map. I will make sure to bring this up at our next road map meeting! :slight_smile:

Cheers,

  • Ryan

Maybe it would be good to make this clear in the documentation of the respective functions, as the term error code (and the sluggish performance) tricked me into believing it spawns a process which initializes a whole iron python environment.
The following is a copy of the method documentation, but maybe it's just me who didn't understand the intended meaning:

I also have the feeling that ExecuteCommand is somewhat asynchronous. For instance, I have a loop which spawns plenty of new jobs based on some input. To do this, it writes some job files to a temp directory and submits them. It can then happen that there are duplicate jobs, as if the same job file had been submitted twice. This is theoretically possible, as I overwrite the same local files on each iteration, but it wouldn't be a problem if ExecuteCommand waited. I tried using ExecuteCommandAndGetStdout, as I would expect it to wait until stdout is closed (and the operation is finished), but that didn't work either. What worked was either waiting a second, or writing unique files during each iteration (see the sketch below).
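For completeness, the workaround I ended up with is simply writing uniquely named files per iteration, along these lines:

import os
import tempfile

def unique_job_file(prefix, index):
    """Return a per-iteration file path so no submission overwrites another."""
    return os.path.join(tempfile.gettempdir(), "%s_%04i.job" % (prefix, index))

# iteration 0 writes .../myjob_0000.job, iteration 1 writes .../myjob_0001.job, ...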

Thanks for taking the 'reducing redundancy' improvement onto the roadmap - maybe you will find time to redesign your job mechanics as well. To give you an idea of how it could look, here is a method which is run as a job-finished callback to spawn new jobs based on data the user attached in an adjusted submission dialog. The instance we are using is a DeadlinePlugin which, at this point, has had all its persistent attributes loaded from the job info file; these are now available through its instance attributes. The same mechanism works the other way round as well, which is what happens once it prepares new jobs by setting some values on the respective instances. I implement this using metaclasses and descriptors. The 'job' instance passed into the method is a wrapped version of a normal job which remaps all properties to make them more pythonic (access through the original names is still possible, of course).


def job_finished_callback(self, job):
	"""Create a new job dependent on the given one, one for each available layer"""
	import Deadline.Scripting
	
	layers = self.layer_ids()
	out_dirs = job.outputDirectories
	if not out_dirs:
		raise ValueError("Job %r did not specify any output directories" % job.id)
	#END assert output directories
	
	# required to unify the resulting filenames
	count = 0
	for out_dir in out_dirs:
		out_dir = make_path(out_dir)
		layer_info = self.filter_to_available_layers(layers, out_dir)
		for layer_ids, (range, paths) in layer_info:
			in_file = paths[0]
			if range.delta < 2:
				print "Skipped single file: %s" % in_file
				continue
			#END skip single files
			for layer_id in layer_ids:
				jd = JobDescriptor()
				jd.plugin = "DJV"
				jd.name = "DJV Mov-Gen: %r for job %r" % (layer_id, job.name)
				jd.comment = self.job_comment() + (" on input file %s" % in_file.substituted_frame("####"))
				jd.department = self.department
				jd.jobDependencies = job.id
				jd.onJobComplete = jd.k_onjobcomplete_values[2]	# delete
				jd.set_frame_range(range.start, range.last)	# inclusive !
				jd.chunkSize = (1<<30)-1 # lets play it safe
				jd.outputDirectory0 = self.outputDirectory or out_dir
				out_file = in_file.substituted_frame("%s%03i-%03i"  % 
										((layer_id != self.beauty_id) and layer_id + "." or '', 
											range.start, range.last) )
				djv_out_dir = make_path(jd.outputDirectory0)
				out_file = djv_out_dir / (out_file.splitext()[0] + ".mov").basename()
				jd.outputFilename0 = out_file.basename()
				jobfile = jd.write_job_file(str(count))
				
				djv = DJVDescriptor()
				djv.version = "082"
				# as deadline or iron-python claims to be 32 bit on a 64 bit system, we just hardcode
				# it ! In fact we shouldn't have to care ...
				# djv.Build = str(oms.environ.int_bits()) + "bit"
				djv.build = "64bit"
				djv.inputFile = in_file.path()
				djv.outputFile = out_file
				djv.frameRate = self.inputFrameRate
				if in_file.path().ext() == '.exr':
					djv.InputLayerBox = self._layer_name_to_exr_order(in_file.path(), layer_id)
				#END handle exr files
				djv.inputStartFrameBox = range.start
				djv.inputEndFrameBox = range.last
				djv.outputFrameRateBox = self.outputFrameRate or 'default'
				if self.outputResX > 0:
					djv.imageReSizeWBox = self.outputResX
				if self.outputResY > 0:
					djv.imageReSizeHBox = self.outputResY
				djv.QTcodec = self.qtcodec
				djv.QTquality = self.qtquality
				djvfile = djv.write_job_file(str(count))
				
				# finally submit the job - this feels so async
				Deadline.Scripting.ScriptUtils.ExecuteCommand(sequence_to_collection((jobfile, djvfile)))
				count += 1
			#END for each layer id
		#END for each entry
	#END for each output directory
#}END plugin interface

I had to fix up the DJV plugin to work on linux - it used process arguments which don't exist on linux, but it's relatively easy to fix. I could submit a patch if you like.

Unfortunately I also have massive quality issues with deadline on linux, but as I am still using deadline 5.0, I will post them in the normal forum as soon as the time is ripe.

Thanks for the great support,
Sebastian

I’m not noticing a performance issue when using ExecuteCommand or ExecuteCommandAndGetStdout. The submission process itself takes a bit of time, so maybe that’s what you’re seeing? What if you run commands like “-pools” or “-slaves”? Note that the processing of the job files isn’t the “expensive” part of submitting a job - it’s the process of creating the job in the Repository. So refactoring the job submission process won’t provide any gains in terms of submission speed.

We checked the ExecuteCommand code to confirm that it’s not asynchronous. In fact, calling ExecuteCommand a bunch of times inside a script would have the same behavior as calling deadlinecommand a bunch of times inside a batch file.

We probably won’t be refactoring the job property names. This is because we’re exposing the Deadline API as .NET modules/classes. IronPython and Python.NET don’t wrap the existing .NET modules/classes, so for consistency, it makes sense that we don’t either.

That would be great if you could upload the DJV patch!

Finally, feel free to post bugs in 5.0 here as well. If there are bugs in 5.0, odds are they still exist in 5.1. :slight_smile:

Cheers,

  • Ryan

I knew DJV would be a useful addition to Deadline. :slight_smile: Always interesting to see what directions other studios take the various plugins in Deadline.
FYI, I never tested DJV in a Linux environment, so I'm not surprised there were a few tweaks to make. However, I did fully test the Windows setup and also gave the Mac OS a good once-over as well :slight_smile:
Mike

With some delay, here is the patch I mentioned. It really does nothing more than make it work on linux, which doesn't change the fact that the gui is designed around the quicktime library available on windows (and maybe osx). If one wanted to do it properly, one would provide additional options for libquicktime, which is the library used on linux.

Cheers,
Sebastian
djv_linux.patch.zip (912 Bytes)

Thanks for the patch!

Don't mention it.

By the way, the forum said I may not upload .patch files, so I had to zip it. Sure, this reduces the size, but it added an extra step I had to perform, right after being told that I may not upload patch files.
It would be useful to get information about the allowed extensions in advance, or to allow patch files right away.

Thanks,
Sebastian
