First off, season's greetings. I trust you are all off enjoying the holidays and not (like me) still ruminating on the failings of my pipeline. I wondered if anyone had tried the following or had any thoughts on the matter.
Many moons ago we implemented asset localisation for Modo rendering. It worked a treat, slicing scene load times and network traffic. The theory is simple:
A slave is assigned a render task. A pre-render script identifies all the referenced assets (images, image sequences, vertex cache files, etc.) and copies everything to local storage on the slave. All of those assets are then repathed before the scene renders.
The benefit, of course, is that once an asset is localised it is reused over and over, and is only pulled from the network share again if it has been modified.
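The localise-only-if-modified step could be sketched along these lines (the function name, cache layout, and mtime-based staleness check are my assumptions, not anything from the Modo or Deadline APIs):

```python
import shutil
from pathlib import Path

def localize_asset(network_path: str, cache_root: str) -> str:
    """Copy a network asset into the local cache unless the cached
    copy is already up to date; return the local path to repath to.
    (Hypothetical helper, not part of any actual pipeline API.)"""
    src = Path(network_path)
    # Mirror the network layout under the cache root (drop the drive/anchor).
    dst = Path(cache_root) / src.relative_to(src.anchor)
    if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)  # copy2 preserves the source mtime
    return str(dst)
```

The mtime comparison is what gives the "only pulled if modified" behaviour: copy2 preserves the source timestamp, so subsequent checks compare like for like and skip the copy when nothing has changed.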
DL allows you to repath based on platform. Clearly you could hijack this process to implement the above. Has anyone considered/tried this?
Hey Jan! I hope you’ve at least found some time to relax in here somewhere!
Funny you should mention this: that's exactly what we're doing with AWS Portal, though we're using different magic to pre-cache assets and to pull any files that were missed at render time. There's a local disk of about 100 GB that stores assets and re-uses them.
Path mapping got a bit of an overhaul in 10.0: there are now tokens (including environment variables) you can use that could act as a switch/control, assuming you set your pipeline up properly.
For example, you could use the ${env:X} token to selectively match a project’s path, then swap it with the localized cache path. Some work would need to be done to actually set those variables, and I haven’t had an excuse to play with it yet, but it should unlock some very interesting workflows.
Hey Edwin! Thanks for getting back, hope you’re well.
Of course, I hadn't considered that you would be addressing this for cloud rendering. I presume this functionality is AWS-only and isn't something that could be harnessed over the LAN?
In the old, pre-Deadline days I wrote a custom submission script that repathed assets before sending the scene to the farm. It logged all modified paths to a database, and that info was used in a pre-job script to sync content on slaves before loading the scene, essentially recreating the project content structure locally.
It was pretty crude but it worked and made a massive difference, particularly in the days when we couldn’t afford fancy networking!
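That log-at-submission, sync-at-pre-job scheme could be sketched like so (the sqlite table, function names, and job-id keying are all hypothetical; the original implementation is not described in that much detail):

```python
import shutil
import sqlite3
from pathlib import Path

def log_repath(db: sqlite3.Connection, job_id: str,
               network_path: str, local_path: str) -> None:
    """Submission side: record each asset repath against the job."""
    db.execute("CREATE TABLE IF NOT EXISTS repaths "
               "(job_id TEXT, network_path TEXT, local_path TEXT)")
    db.execute("INSERT INTO repaths VALUES (?, ?, ?)",
               (job_id, network_path, local_path))
    db.commit()

def sync_job_assets(db: sqlite3.Connection, job_id: str) -> int:
    """Pre-job side: pull every logged asset that is missing or
    stale on the slave; return how many files were copied."""
    copied = 0
    rows = db.execute(
        "SELECT network_path, local_path FROM repaths WHERE job_id = ?",
        (job_id,)).fetchall()
    for network_path, local_path in rows:
        src, dst = Path(network_path), Path(local_path)
        if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # preserve mtime for the staleness check
            copied += 1
    return copied
```

On a second run for the same job nothing is copied unless the source has changed, which is where the big savings came from on repeat renders.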
Path mapping's been around since at least 4.2 (as that's when I came on board). The tokens are available everywhere.
However, you're right that the asset transfer and syncing is an AWS-only feature, as it's leveraging S3 and other technologies that wouldn't make sense to use locally.