All our projects use a mixture of project-specific assets and stock assets that are reused between projects. For reasons that go beyond our 3D rendering workflow, these have to live on two different file servers.
Is there any chance the AWS Portal could be tweaked to allow pointing at more than one directory? Or, failing that, could SMTD or a supporting script provide bulletproof asset collection and repathing as part of render submission? (Resource Collector is very slow and frequently misses some dependencies.)
I’ll need to talk to the team that’s working on the asset transfer portion of AWS Portal, but from what I know, this should be possible. We are basically mirroring the directory onto the cloud, and I don’t see any reason we couldn’t mirror multiple directories. The trick will be to keep track of where all the files are coming from and going to.
Right now, we’re focusing on getting everything polished for release, so I wouldn’t expect this feature to be added until after release, as it could be more complicated than I make it sound. But it’s definitely on our radar.
Help me understand: does the full asset path as defined in the application have to match what the on-prem client sees, or does the on-prem client search its asset folder for files of the same name? I ask because we use mapped drive letters here that the service can’t see.
If I enter the UNC path to the folder represented by a mapped drive letter, will Deadline find my assets, or do I have to add a repathing step to preflighting any render jobs for AWS?
No, the path goes through a couple of translations before we access it on-prem:
First, the path is translated into a path suitable for the AWS Deadline Slaves using Deadline’s path mapping system.
Then that path is translated to an on-prem path relative to the AWS Portal Asset Server Directory that you set in the “Configure AWS Portal Settings” dialog in the Deadline Monitor.
If I understand correctly, you use a drive letter to access your asset files on your workstations. Let’s say the UNC path to your asset files is \\fileserver\share, and you have mapped this to Z:.
On the AWS slaves, the paths go through Deadline’s path mapping system. We need to add a path mapping from the path you use on your workstations to the path that we use on the AWS slaves (the AWS slaves always use /mnt/Data/ on Linux, and E:\ on Windows). To do so:
Open the Deadline Monitor.
Activate Super User Mode (Tools menu → Super User Mode).
Open the “Configure Repository Options” dialog (Tools menu → Configure Repository Options…).
In the “Configure Repository Options” dialog that appears, choose “Mapped Paths” in the left-hand column.
Press the “Add” button in the right-hand column.
In the “Add Mapped Path” dialog that appears, set:
Replace Path: Z:\
Windows Path: E:\
Linux Path: /mnt/Data/
Region: Choose the StackName of your AWS Deadline Infrastructure. (You can find this in the “Deadline Infrastructure” part of the “AWS Portal” panel in the Deadline Monitor).
Press OK to dismiss the “Add Mapped Path” dialog.
Press OK to dismiss the “Configure Repository Options” dialog.
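For reference, here is the mapped-path entry from the steps above written out as plain data, just to show which value goes where. This is purely illustrative; the key names are my own shorthand, not Deadline’s internal format:

```python
# Illustrative only: the mapped-path entry configured in the steps above,
# expressed as plain Python data. Key names are mine, not Deadline's.
mapped_path = {
    "replace_path": "Z:\\",          # the path your workstations use
    "windows_path": "E:\\",          # the path the AWS Windows Slaves use
    "linux_path": "/mnt/Data/",      # the path the AWS Linux Slaves use
    "region": "<your StackName>",    # the StackName of your AWS Deadline Infrastructure
}
```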
Next we need to configure the directory that the AWS Portal Asset Server will use to access the files on-prem. To do so:
Open the AWS Portal panel in the Deadline Monitor. (View menu → New Panel → AWS Portal)
Press the “gear” button in the AWS Portal panel to open the “Configure AWS Portal Settings” dialog.
In the “Configure AWS Portal Settings” dialog that appears, under “AWS Portal Asset Server Options”, set the “Directory” to \\fileserver\share.
Press OK to dismiss the “Configure AWS Portal Settings” dialog.
Finally, let’s run through an example of how this works. Let’s say you’re using Maya on your workstation. You add a texture file named “Z:\directory\asset.png” to your scene. You submit a render of this scene to Deadline. This job is picked up by an AWS Deadline Slave running on Linux. On the slave, the asset path is mapped to /mnt/Data/directory/asset.png by Deadline’s path mapping system. When the AWS Portal Asset Server accesses this file on-prem, it replaces the /mnt/Data/ prefix with the Directory you set in the “Configure AWS Portal Settings” dialog (\\fileserver\share), so it accesses the asset file using the path \\fileserver\share\directory\asset.png.
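To make the order of operations concrete, here is a minimal Python sketch of those two translations using the example paths above. It is purely illustrative: the function names and constants are mine, not part of Deadline or AWS Portal.

```python
# A minimal sketch of the two path translations described above
# (illustrative only; not the actual Deadline/AWS Portal code).
import posixpath

WORKSTATION_PREFIX = "Z:\\"                 # Replace Path from the Mapped Paths entry
AWS_LINUX_PREFIX = "/mnt/Data/"             # Linux Path used on the AWS slaves
ASSET_SERVER_DIR = r"\\fileserver\share"    # Directory from "Configure AWS Portal Settings"

def workstation_to_aws(path):
    """Step 1: Deadline path mapping, workstation path -> AWS Linux slave path."""
    relative = path[len(WORKSTATION_PREFIX):].replace("\\", "/")
    return posixpath.join(AWS_LINUX_PREFIX, relative)

def aws_to_onprem(path):
    """Step 2: Asset Server translation, AWS slave path -> on-prem UNC path."""
    relative = path[len(AWS_LINUX_PREFIX):].replace("/", "\\")
    return ASSET_SERVER_DIR + "\\" + relative

scene_asset = r"Z:\directory\asset.png"
aws_path = workstation_to_aws(scene_asset)   # -> /mnt/Data/directory/asset.png
onprem_path = aws_to_onprem(aws_path)        # -> \\fileserver\share\directory\asset.png
```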
Yes, that’s exactly what the “Region” dropdown is for!
After you have created a Deadline Infrastructure on AWS, you should see a corresponding “stack…” option in the “Region” dropdown. (You can find the stack name of your infrastructure in the Deadline Monitor → AWS Portal panel → Deadline Infrastructure → StackName column.)
I haven’t tried this yet (I’m bogged down at a different step), but that workflow doesn’t seem to address the needs of people who will only be rendering with AWS occasionally, as opposed to big studios that will always have a couple of cloud instances going.
We should be able to set up those mappings before the AWS infrastructure is running the meter.
Some kind of abstraction would be nice, one where we could tie regions to the stacks as they’re created.
I’m not sure what part of the system actually allocates those regions, but I’ll see if I can discuss it with the team as well so I’ve got a good feel for how we could do it. It may be that there’s some technical reason we’re creating regions dynamically, which would mean we’d need some kind of extra data abstraction.
Is there anything else that we need to do to cater to those who’d be creating and destroying infrastructure as needed?
I’m not totally sure. I’m currently coming at the problem from the point of view of a company with resources to throw at problems, but I’ve been a hungry freelancer in the past. In that life, I passed on Deadline after using it in license-free mode for a year or so once I hit >2 PCs. The lower annual cost for permanent licenses and (especially!) UBL puts Deadline within reach for a whole lot of folks who couldn’t afford it before.
I would say that in general any part of the workflow that adds to the AWS bill without adding rendered frames should be looked at very closely.
I also think you can expect many less-technical users fleeing Backburner to hit your support system, and anything you can do to get out ahead of them (wizards, tutorials, example repo configurations) in a way that frees up your resources while making users without an on-site TD feel empowered is a win for everyone.