
SMTD: Auxiliary file names must be unique

We are trying to render a scene with several thousand caches loaded. The cache files are organized into a logical directory structure according to purpose, etc.
However, some of the cache files in different directories have the same names. It turns out that submitting with SMTD with pre-caching enabled fails.

Attaching an SMTD log of the event:
smtd_duplicate_assets_error.zip (52.7 KB)
Any advice? Is this a hard limitation? Could the pre-cached files preserve their original directory structure?

This is a semi-hard limitation. If Max can’t find assets in their original paths, it looks for them next to the scene file. We exploit that by dumping everything into the job folder as a flat collection of those files.
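For illustration, here is a minimal Python sketch of that fallback behavior, with hypothetical paths and a made-up resolver function (this is not Max's actual code). Once everything is flattened into the job folder, only the bare file name matters:

import os

def resolve_asset(original_path, scene_dir):
    # Mimic 3ds Max's fallback: use the original path if it still exists,
    # otherwise look for a file with the same name next to the scene file.
    if os.path.isfile(original_path):
        return original_path
    candidate = os.path.join(scene_dir, os.path.basename(original_path))
    return candidate if os.path.isfile(candidate) else None

# When the original paths are unavailable, both of these caches fall back
# to the SAME candidate in the job folder, so only one of them can survive:
for p in (r"\\server\fx\cache.xmesh", r"\\server\sim\cache.xmesh"):
    print(os.path.join(r"C:\jobs\job123", os.path.basename(p)))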

Now, I believe we have some workarounds where we dynamically rename files that collide with each other, but I don’t have much detail on the situation. I’ll ask @Bobo for some background here, as he knows best.

The problem does not seem to be the Pre-Caching option, but the SMTD setting for collecting external files and sending them as part of the job, which is something you should NEVER do when submitting to AWS.

According to the log, the value of the relevant property is
SubmitExternalFilesMode = 3

As Edwin explained, when the “Copy ALL External File References to Repository” option is selected from the drop-down list, all files are collected and made auxiliary files of the submission. As a result, the paths are stripped, the files end up copied to the Job folder under their bare names, and 3ds Max resolves them by name, not by full path.
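That is also why the submission fails up front rather than at render time: once the paths are stripped, two auxiliary files with the same name cannot coexist in the flat Job folder. A rough Python sketch of such a uniqueness check (an illustration of the idea, not Deadline's actual code):

import os
from collections import defaultdict

def check_auxiliary_names(asset_paths):
    # Group assets by bare file name; duplicates cannot be flattened
    # into a single job folder.
    by_name = defaultdict(list)
    for path in asset_paths:
        by_name[os.path.basename(path)].append(path)
    duplicates = {n: p for n, p in by_name.items() if len(p) > 1}
    if duplicates:
        raise ValueError("Auxiliary file names must be unique: %r" % duplicates)

try:
    check_auxiliary_names([r"\\server\fx\cache.xmesh", r"\\server\sim\cache.xmesh"])
except ValueError as err:
    print(err)  # same name, different directories -> rejected at submission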

This feature was added mainly to allow users coming from Backburner who had resources scattered around their desktops and local drives to pack everything into the job, submit, and render without needing a rigid file storage structure accessible to all render nodes and workstations on the network. It can also be useful for taking a snapshot of the current version of external resources: modifying an asset on a network path while a job is rendering could cause discrepancies between frames, while packing all assets with the job ensures they are encapsulated within the Job folder.

However, this feature is a Very Bad Idea when rendering on AWS. It precludes the AWS Pre-Caching, and since all assets become part of the job, EVERY SINGLE render node has to copy the WHOLE folder with all assets over the SSH connection from the Repository’s drive to the EBS volume of the EC2 instance! In other words, it will cause congestion and VERY slow rendering, if the render ever starts, even if your file names were all unique…

The EC2 Pre-Caching requires your assets to be located in a folder structure that is defined in the Tools > Configure Assets Server panel of the Deadline Monitor. We look at the path of each asset, and if it matches one of the paths in that Root Directories list, we automatically copy the asset to an S3 bucket, where the stored object is named with a hash that reflects the source file. Thus two files with the same name but different paths, sizes, and modified dates produce unique objects in the S3 bucket and will not collide.
On the EBS volume attached to the render node, the respective files will appear with the original names AND paths, just prefixed with a different Root according to the automatically-generated Path Mapping rules for the AWS Region. So again your scene file will have no issue accessing two assets with the same file name, but different paths.
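As a rough illustration of both behaviors, here is a Python sketch. The exact properties Deadline hashes and the exact mapping rule are assumptions about the general idea, not the actual Asset Server implementation:

import hashlib
import os

def s3_object_key(path):
    # Derive a cache key from the source file's identity (path, size,
    # modified time), so same-named files in different directories map
    # to different S3 objects. The exact hash inputs are an assumption.
    st = os.stat(path)
    ident = "%s|%d|%d" % (os.path.normcase(path), st.st_size, int(st.st_mtime))
    return hashlib.sha256(ident.encode("utf-8")).hexdigest()

def map_to_ebs(path, root_dirs, region_prefix="/mnt/awsportal"):
    # Re-root an asset path under a region-specific prefix while keeping
    # the original directory structure (hypothetical mapping rule).
    for root in root_dirs:
        if path.startswith(root):
            rel = path[len(root):].lstrip("\\/").replace("\\", "/")
            return region_prefix + "/" + rel
    return path  # not under a configured Root Directory: left untouched

roots = [r"\\server\caches"]
print(map_to_ebs(r"\\server\caches\fx\cache.xmesh", roots))   # /mnt/awsportal/fx/cache.xmesh
print(map_to_ebs(r"\\server\caches\sim\cache.xmesh", roots))  # /mnt/awsportal/sim/cache.xmesh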

So why do we allow the two settings to be active at the same time, you might ask? This is a good question. Basically we don’t know whether a job is really going to be rendered on AWS, locally, or on both. We could assume that if you checked “Pre-Cache on AWS”, you intend to render on AWS, but that is not always a given. We could probably issue a warning if we detect that External References mode is set to copy files with the Job, AND you have Pre-caching on AWS enabled… But AWS rendering would work even if “Pre-Cache” is unchecked, as the AWS Portal will copy files on demand if requested by an EC2 Spot instance, so we really have no way to know if you intend to render in the cloud or not.
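The proposed warning would amount to a simple settings check at submission time. A hedged Python sketch of the logic only (the property names are taken from the log above and are not a confirmed SMTD API):

COPY_ALL_EXTERNAL_FILES = 3  # the SubmitExternalFilesMode value seen in the log

def warn_on_conflicting_settings(submit_external_files_mode, pre_cache_on_aws):
    # Hypothetical pre-submission check: copying all external references
    # into the job defeats AWS Pre-Caching, so flag the combination.
    if submit_external_files_mode == COPY_ALL_EXTERNAL_FILES and pre_cache_on_aws:
        return ("Warning: 'Copy ALL External File References to Repository' is "
                "enabled together with 'Pre-Cache on AWS'; every render node "
                "would copy the whole asset folder instead of using the cache.")
    return None

print(warn_on_conflicting_settings(3, True))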

For now, just switch the drop-down list under the Assets tab to “Do NOT copy External File References to Repository” and try to submit again.

You should never touch that option anyway, unless a scene has resources scattered on a local drive, you are in a hurry and don’t want to move them to a network location and repath, and the job won’t be touched by EC2 instances…


Thank you for the wonderful level of detail.
This makes a lot of sense.

Not sure why this option was activated. We are going to try with file references supplied as network paths and see how it goes.

The option is hard-coded at line 5675 of SubmitMaxToDeadline_Functions.ms.

-- mode 3 = "Copy ALL External File References to Repository" (see the log above)
SMTDSettings.AssetsResolved = SMTDFunctions.resolveAssetsFromAssetTracker SubmitExternalFilesMode:3
