AWSAssetFile Job Parameter

Hello!

I see in the Nuke and Maya submitters that they set “AWSAssetFile” job parameters. I’m assuming this is what informs the AWS Asset Server which files need to be synced/transferred, but I can’t find any information anywhere about how this parameter is used or interpreted.

Is that assumption correct, i.e. are these parameters the only way the Asset Server can know what to sync?

Thanks!

The AWSAssetFile% parameters (where % is a numeric index: AWSAssetFile0, AWSAssetFile1, and so on) are completely optional, and are only supported by a few integrated submitters (the Maya and 3ds Max submitters have a “Pre-Cache on AWS” checkbox that controls whether to include these keys in the submission).
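
For reference, the keys are just numbered entries in the job info file. A submission pre-caching two assets would include lines like these (the paths are made up for illustration):

    AWSAssetFile0=\\fileserver\projects\show\textures\wood_diffuse.png
    AWSAssetFile1=\\fileserver\projects\show\caches\smoke_v003.vdb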

The AWS Portal Asset Server does NOT require these values to function correctly. AWS Portal supports two approaches to getting files to AWS: a default pull mechanism and an optional push mechanism.

By default, when the application (say, Maya, Nuke, 3ds Max, After Effects, etc.) loads the scene or comp file, the relevant input and output paths get remapped by the region-specific Path Mapping entry for the AWS Portal Infrastructure. When the application attempts to access an asset on the EBS volume attached to the EC2 instance, this triggers a callback in the FUSE file system within the Infrastructure, which informs the Asset Server Controller running on the Gateway instance that the asset is required in the cloud. The Asset Server Controller then asks the Asset Server running on the on-prem network to push the file from the original network path to the asset cache S3 bucket, and from there to the remapped path on the EBS volume. As a result, the application ends up pulling all remapped assets on demand, without any pre-existing knowledge of what those assets are. If the correct version of an asset happens to be on S3 already, the object is copied to the EBS volume immediately, skipping the sync from the on-prem network.
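
To make the pull side concrete, here is a minimal sketch of how a Deadline plugin script applies that remapping before handing a path to the application. RepositoryUtils.CheckPathMapping is the standard Deadline Scripting API call; the plugin class and scene-file handling below are simplified, hypothetical stand-ins:

    from Deadline.Plugins import DeadlinePlugin
    from Deadline.Scripting import RepositoryUtils

    class MyPlugin(DeadlinePlugin):  # hypothetical custom plugin
        def RenderArgument(self):
            # The job stores the original on-prem path, e.g. \\fileserver\projects\shot\scene.ma
            scene_file = self.GetPluginInfoEntry("SceneFile")
            # CheckPathMapping applies the region-specific Path Mapping rules, so on an
            # AWS Portal render node the path comes back pointing at the EBS-backed mount.
            scene_file = RepositoryUtils.CheckPathMapping(scene_file)
            return "-file \"%s\"" % scene_file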

In the case of applications whose integrated submitters perform scene introspection and include the AWSAssetFile% keys, the submitter also calls DeadlineCommand -AWSPortalPrecacheJob <JobID> after it finishes creating the job. DeadlineCommand grabs the AWSAssetFile% entries from the job and starts pushing the assets immediately from the local Asset Server service to the S3 bucket, even while the job is still queued and not active. Thus, once the job starts rendering, chances are most if not all assets will already be on S3 and won’t require the pull mechanism to kick in, saving a lot of startup wait time.
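
The push itself is just one extra CLI call after submission. A minimal sketch, assuming deadlinecommand is on the PATH and job_id holds the ID parsed from the submission output:

    import subprocess

    def precache_job(job_id):
        # Tell the local Asset Server to start uploading the job's
        # AWSAssetFile% entries to S3 right away, while the job is still queued.
        subprocess.check_call(["deadlinecommand", "-AWSPortalPrecacheJob", job_id])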

I hope this helps.

Hey @Bobo, that’s remarkably helpful, thank you very much! And that’s a very slick system.

You are welcome!

Some additional notes:

Any application that supports editing of the source scene can support the pull mechanism as long as the Path Mapping is applied in the Deadline plugin script. Some specific cases:

  • The Adobe After Effects submitter has the option to use the XML form of the comp when submitting, which allows it to work with AWS Portal’s pull mechanism even though After Effects is not officially supported by AWS Portal.
  • Renderer intermediate formats like V-Ray’s VRSCENE or Arnold’s ASS file are also text-based and thus support Path Remapping.
  • Maya offers the ASCII .MA file format, which can be remapped the same way.
  • 3ds Max stores asset paths in an uncompressed block in the binary file’s header, allowing them to be read and edited externally without loading the scene, which is what we use to integrate with AWS Portal: we make a copy of the MAX file, repath its assets in place before opening it, and then load it with all paths already pointing at the AWS EBS storage. For anything not included in the Asset Metadata (e.g. string storage instead of filename paramBlock entries), we run a post-load script to repath (this covers a bunch of V-Ray properties, for example).
  • Redshift’s RS format is binary and thus not human-readable, so the Redshift Technologies team implemented environment variables (REDSHIFT_PATHOVERRIDE_STRING and REDSHIFT_PATHOVERRIDE_FILE) that Deadline can set to define the path mapping rules for input assets, either as a string or as an external text file. The Redshift stand-alone renderer applies the path mapping defined there, so we don’t need to edit the RS file ourselves (see the sketch after this list).
  • In the case of MAXON CINEMA 4D, Path Mapping only works with absolute paths, so scenes submitted with relative paths would not work.
  • Any custom plugin you might want to integrate with AWS Portal will work as long as the scene description is ASCII-based or the application offers an API for asset path remapping, and you include the path mapping call in the custom plugin’s Python script.
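
For illustration, here is a minimal sketch of the Redshift case, assuming the standard Deadline plugin callbacks. The rules file location is hypothetical, and its contents (which follow Redshift’s documented override format) are omitted:

    import os
    import tempfile

    from Deadline.Plugins import DeadlinePlugin

    class MyRedshiftPlugin(DeadlinePlugin):  # hypothetical custom plugin
        def PreRenderTasks(self):
            # Write the mapping rules to a text file in Redshift's documented
            # override format (contents omitted here), then point the renderer at it.
            rules_file = os.path.join(tempfile.gettempdir(), "rs_pathmap.txt")
            # The stand-alone renderer reads this variable and remaps the asset
            # paths inside the binary RS file itself, so we never edit the file.
            self.SetProcessEnvironmentVariable("REDSHIFT_PATHOVERRIDE_FILE", rules_file)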

If you are implementing a custom plugin and submitter for a third-party product Deadline does not yet support, you could include the AWSAssetFile% entries yourself if you have the ability to introspect the scene, and then call the DeadlineCommand -AWSPortalPrecacheJob <JobID> CLI to perform your own pre-cache push; a rough sketch follows below.
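
A rough sketch of that flow, where introspect_scene_assets and submit_job are hypothetical helpers standing in for your own scene introspection and your deadlinecommand submission wrapper:

    import subprocess

    def write_aws_asset_keys(job_info_path, assets):
        # Append the numbered AWSAssetFile% keys to an otherwise normal
        # job info file (Plugin=, Name=, etc. omitted for brevity).
        with open(job_info_path, "a") as f:
            for i, asset in enumerate(assets):
                f.write("AWSAssetFile%d=%s\n" % (i, asset))

    scene_path = "/path/to/scene.ext"                       # placeholder
    assets = introspect_scene_assets(scene_path)            # hypothetical: your own scene introspection
    write_aws_asset_keys("job_info.txt", assets)
    job_id = submit_job("job_info.txt", "plugin_info.txt")  # hypothetical wrapper around deadlinecommand
    # Start the pre-cache push while the job is still queued:
    subprocess.check_call(["deadlinecommand", "-AWSPortalPrecacheJob", job_id])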

Hi @Bobo ,

One more related question: does the same mechanism apply to pre-job scripts? We have a GlobalJobPreLoad.py script sitting in our repo (at <repo_root>/custom/plugins/), but it appears it isn’t getting synced across. The job is attempting to access the script at the proper AWS path, but reports that the script doesn’t exist. So path mapping appears to be working, but I’m assuming this means the script wasn’t synced.

We’re checking our path mapping and doing our due diligence here, but wanted to verify: can we assume these scripts should sync the same way as assets accessed during plug-in execution?

Or do we need to do a portal pre-cache (as you described) for these scripts?

Thanks!
