SMTD is submitting a scene with ~300 texture assets over a network to Deadline for rendering. Submission for this scene takes roughly half an hour (~25 min).
That is an impressively long time to submit a project over the network - in this case, it is actually faster to render locally!
This is a nontrivial issue - what else might be going on here?
If the cost is the time spent packing and submitting the scene's texture assets over the network, is it at least possible to do this up front, and then, for subsequent iterations, tell Deadline to reuse those same assets from the previous submission?
I believe the issue is that you are submitting the scene and the assets to the Repository, which is causing the delay. Do the render nodes have access to the network paths where the scene and the textures are saved?
You can simply let the Workers access the textures and the scene over the network path to save the copying time. You can do this under SMTD > Assets tab > Scene And Asset Files rollout > SAVE And Use Current Scene's ORIGINAL NETWORK PATH
(3ds Max - Integrated Submitter — Deadline 10.3.0.13 documentation)
SAVE And Use Current Scene’s ORIGINAL NETWORK PATH - The current scene will be saved to the original file it was opened from, and that file will be referenced by the job.
The machine submitting the job is in Texas; the Repository (server) is in California. SMTD is submitting the scene file and all the textures, etc., to the Repository, and that is what is taking so much time.
If packing and submitting a scene with SMTD takes 30 min, that is a significant delay. But if, for subsequent submissions of that scene, we could point Deadline at that previous job submission to reference the textures, etc., then we would only have to send the scene file and not all the associated assets, which should be much faster.
Does that make sense? Is it possible to use the Dependencies rollout to do this?
We should be able to:
Assets → Save scene file
Assets → Copy all external
Submit the job; it takes time, but now the files are in the Repository
This method is not working, but that may be a permissions issue with the Repository: if I point to a CUSTOM path on the network that the submitting machine has permission to write to, and I copy the assets that were originally written to the Repository to that location, the job does render with the associated files - although I am seeing some sort of gamma correction issue.
Computer_01 submits Job_01 from 3ds Max using SMTD to the Deadline Repository (server), which then manages several Workers. Computer_01 has textures (etc.) stored on local disk, so it packages all local file references; packing and submitting those resources takes 15-30 min.
SAVE and Submit Current Scene File with the Job to the REPOSITORY
Copy LOCAL External File References to Repository
(Computer_01 CANNOT point to textures on the network because the textures are not there - that is why Computer_01 has to pack all the textures and submit with the scene file for rendering.)
Computer_01 makes some lighting changes to the exact same scene for Job_01 and then submits Job_02 to Deadline for rendering. How can Computer_01 point to the textures that were JUST sent in Job_01 to avoid copying all of the same textures again?
The question is how to reference previously submitted assets so that Computer_01 does not need to repack them every time a job using those assets is submitted.
We tried:
SAVE and Use Current Scene's ORIGINAL NETWORK PATH
Do NOT copy External File References to Repository
We tried:
SAVE and Submit Current Scene File to USER-DEFINED NETWORK PATH
and literally pointed to the exact temp cache folder on the Repository where Deadline submitted the job, but that did not work.
What are we missing here? Is this just some scene name issue, where Deadline is looking for ($scene_name) and finding the scene names do not match between Job_01 and Job_02, or what?
I think we need to come at this from another angle - SMTD isn’t going to be the best way to get assets from your artist machine in Texas to the farm in California.
This SMTD feature is designed as a change management feature - ideally, the Repository directory and the asset storage are on the same local network, and the files are tightly associated with the submitted job.
The idea is that you'd submit your job, then carry on working on the same files you just used. The files associated with the job act as a snapshot of the scene file and assets as they existed at the time. So if you want to go back in time to when the car in your scene was painted green instead of blue, you've got a copy of the scene and the assets from that time.
Computer_01 makes some lighting changes to the exact same scene for Job_01 and then submits Job_02 to Deadline for rendering. How can Computer_01 point to the textures that were JUST sent in Job_01 to avoid copying all of the same textures again?
This isn't possible in SMTD, since it tightly couples the files to Job_01 and Job_02 individually.
The question is how to reference previously submitted assets so that Computer_01 does not need to repack them every time a job using those assets is submitted.
Instead, I suggest you work from central shared storage mounted on both the Texas and California machines. In general, no files are local to Computer_01; instead they're on a network share available to every computer on the farm.
If that would create too much latency for work on Computer_01 to be smooth, you could look at creating an asset transfer script as part of your job submission process, or, more simply, you could copy the needed files over manually before the job gets submitted.
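As an example, here is a minimal pre-submission transfer sketch in MAXScript - the destination share is a made-up placeholder, and usedMaps() only covers bitmap textures, not XRefs or proxies. It copies only the files that are not already on the share:

-- Hypothetical pre-submission copy: push locally referenced bitmaps to a
-- shared project folder, skipping anything that is already there.
destFolder = @"\\ca-server\projects\textures\"   -- placeholder share path
makeDir destFolder all:true   -- does nothing if the folder already exists
for srcPath in (usedMaps()) do
(
	destPath = destFolder + (filenameFromPath srcPath)
	if not (doesFileExist destPath) then
		copyFile srcPath destPath   -- only transfer files missing on the share
)

The doesFileExist check is what gives you the "don't re-send what is already there" behavior being asked about above.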
If you decide to copy local files to the storage on the farm's network before rendering, you'll want to set up Mapped Paths to convert the file paths in the scene from paths that are valid on Computer_01 to paths that are valid on Computer_02-05.
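For illustration (the paths and file name here are made-up), a Mapped Paths rule would look something like:

Replace Path: C:\Maps\
Windows Path: \\ca-server\projects\maps\

so a scene that references C:\Maps\brick_diffuse.jpg on Computer_01 resolves to \\ca-server\projects\maps\brick_diffuse.jpg on the Workers. Mapped Paths are configured in the Deadline Monitor under Configure Repository Options.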
That’s a pile of writing - let us know what your questions are!
Yes, I have tried both of these methods, though not the transfer script.
Copying files over to a shared network location is just as tedious as submitting through SMTD.
It is not practical to convert all of our artists at remote locations to a shared repository in CA.
I have done this with SMTD, twice now, but I don't totally understand how it works. When SMTD submits, it sends everything into a folder at the network location, so if the scene file is using ALL of the same assets, why copy them over AGAIN? So, I specified:
SAVE and Submit Current Scene File to USER-DEFINED NETWORK PATH
Do NOT copy External File References to Repository
and pointed to the folder on the Repository where the initial submission (the scene file and assets) was written, and the job submitted quickly and without error.
Another thing I tried was:
Copy LOCAL External File References to Repository
and when submitting after the initial submission from the same instance of Max, the job submitted quickly.
I don’t totally understand either of these results and that’s why I raised the issue here.
Had the same issue where it took forever to submit a scene.
Turns out there is a new hidden default feature called
“Pre-cache asset files on AWS”
For some reason this is checked on by default, and it doesn't appear to be in the documentation; whatever this feature does, it slows down the submission process as it checks every asset in your scene.
Uncheck it, or set it to false in your submission script, and everything submits quickly.
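If you prefer the script route, the equivalent is a single line of MAXScript (this assumes the SMTD submitter has already been loaded, since that is what defines the SMTDSettings struct):

-- Disable the hidden AWS pre-cache pass before submitting.
SMTDSettings.AssetsPreCacheFiles = false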
Hmm, maybe we don't have the same issue then.
I narrowed the slowness down to the asset resolving function.
If you look at your submission log you can figure out where in the process the slowdown occurs.
For me it was between these two lines:
+Asset Tracker Resolved ### Files in #### seconds.
+Resolved ### Files in ##### seconds.
The time between those two lines was about 8 minutes.
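If you want to see how big the file list the resolver is walking actually is, 3ds Max's built-in Asset Tracking interface can report it from the MAXScript listener - a quick check, assuming nothing about SMTD itself:

-- Ask the Asset Tracking System how many external files it is tracking.
ATSOps.Refresh()   -- bring the tracker's file list up to date
trackedFiles = #()   -- array that will receive the file paths
fileCount = ATSOps.GetFiles &trackedFiles
format "Asset Tracker is tracking % files\n" fileCount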
There is also a setting that I found called SMTDSettings.AssetsIncludeFilesInJob.
By default it is always set to true, and it looks like it's hardcoded into the submitter; it also doesn't seem to have a way to set it to false in the UI, though I may be wrong.
Looking through the script, it always goes hand in hand with the asset pre-cache setting I mentioned already.
Setting SMTDSettings.AssetsIncludeFilesInJob = false while the submitter is open and then submitting is also worth a try.
From what I understand, all it does is collect the linked assets (maps, XRefs, proxies, etc.) into a list for use by AWS services. My guess is that if you're not using that service, it's of no use to you. Looking into the actual function that performs the collection, I have no idea why it takes so long. Turning off both of those flags will prevent this file metadata collection before submission.
This is a snippet from the script referring to the two properties, pre-cache and asset collection metadata. You can see that by default they are both set to true:
#("Assets", "AssetsPreCacheFiles", #AssetsPreCacheFiles, true, "Precache Asset Files", "Pre-cache all Asset Files to the AWS Portal Infrastructure."),
#("Assets", "AssetsIncludeFilesInJob", #AssetsIncludeFilesInJob, true, "Synchronize Asset Files", "Include Asset Files in Job Metadata for AWS Portal Asset Sync."),
100% correct - that builds a list of files that gets added to the job and handed to the Asset Transfer System. But if you're not using the AWS Portal, it's doing nothing for you.
Some folks have used it to roll their own asset transfer system, since the file list is readable in the job’s data.
I'm also surprised it takes so long, and that it's not mentioned in the 3ds Max docs (only in the C4D docs at the moment).
It looks like the asset transfer is clever enough not to re-upload files that are already at the destination, so if you're using the same storage path in both options, that'll explain why it's quick.
The assumption I’d been working under was that asset transfers would be put into a job specific folder inside the specified path. So it’d be //file-server/projects/jobid-1 and //file-server/projects/jobid-2.
Before I write a short novel on what I think you’re asking, could you elaborate on your question?
Just want to say that I am not ignoring this thread - I still need to follow up on the details of how this implementation is able to reference previously transferred data.
In the meantime, isn't there a path mapping system, either in SMTD or in Deadline, where we can specify where certain assets live on the network so we don't have to link directly to the network drive location for each asset in the scene? We could just submit a locally sourced asset and Deadline would automatically find the network version at render time.