AWS Thinkbox Discussion Forums RCS copying files to /var/lib/Thinkbox/

Hi All,

We're having an issue with Deadline where path translation copies files to /var/lib, fills up the hard drive, and kills the Worker. The job isn't set to copy files locally, so I'm not sure why this is happening, or whether it's an RCS-specific thing.

STDOUT: mel: mel: mel: Running PathMapping on "/mnt/server/project/folder/myfile.ass" and copying to "/var/lib/Thinkbox/Deadline10/workers/worker01/jobsData/12345/thread0_temp1/123/myfile.ass"

eventually ending with

STDOUT: Error: No space left on device : '/usr/tmp/tmpbd8qED.tmp' (System.IO.IOException)

This is being submitted from Windows 10 to Linux (Alma 9), but the translation copy is from Linux to Linux.

Earlier in the task, the commands path-map from Windows to Linux correctly:

STDOUT: Changing thing_v001_0001:ArnoldStandInShape.dso from Z:/server/project/folder//scenes/thing_v001.####.ass to /mnt/server/project/folder//scenes/thing_v001.####.ass

Deadline, Maya 2023.3 and MtoA

Hello @anthonygelatka

Thanks for reaching out. I looked at [repo]/plugins/MayaBatch/ and you are right: there is a function, CheckPathMappingInFile, which is used here. Below is its help text (I got it by running <path_to_Client's_bin_directory>\deadlinecommand -help):

CheckPathMappingInFile <Input File> <Output File> [<Force Separator>]
  Performs path mapping on the contents of the given file. Uses the path
  mappings in the Repository Options.
    Input File               The original file name
    Output File              The new file name where the mapped contents will
                             be stored
    Force Separator          Optional. All path separators in the replacement
                             path will be replaced with this before the
                             original path is mapped.

We use this function because, in MayaBatch, we make a copy of each .ass file while performing path mapping on its contents. The .ass files can contain references to other files that need to be manually mapped, or else they will not be found.
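To illustrate what such a mapping pass does conceptually, here is a hedged sketch, not Deadline's actual implementation: the function name, mapping dict, and demo file are all made up for illustration.

```python
import tempfile

def map_paths_in_file(src, dst, mappings, separator="/"):
    """Illustrative stand-in for a CheckPathMappingInFile-style pass:
    read the file, substitute mapped path prefixes, and write the
    result to a new location (which is why a copy of each .ass file
    has to be made somewhere)."""
    with open(src, "r", errors="ignore") as f:
        contents = f.read()
    for old, new in mappings.items():
        # Force a single separator in the replacement path, mirroring
        # the optional <Force Separator> argument in the help text.
        contents = contents.replace(old, new.replace("\\", separator))
    with open(dst, "w") as f:
        f.write(contents)

# Demo: remap a Windows-style texture reference inside a fake .ass file.
src = tempfile.NamedTemporaryFile("w", suffix=".ass", delete=False)
src.write('filename "Z:/server/project/tex/wood.tx"\n')
src.close()
dst = src.name + ".mapped"
map_paths_in_file(src.name, dst, {"Z:/server/project": "/mnt/server/project"})
```

The key point is that the rewritten contents have to land in a second file, which is where the per-task copies under the Worker's jobsData directory come from.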

Does this happen on every job? The folder should be cleaned up once the Worker moves on to the next job, so it may be that a single job is accumulating all the data?

Is there a way to stop this? As soon as a large job goes through, it fills up the disk and chokes. It's not several jobs failing to get cleaned up; it's one job pulling in masses of data, which then needs a manual rm -rf before the Worker can be restarted.

I'm not sure whether they have references. If so, is it not possible to write out to the same location, or to verify this first?



You can work around it by changing the SlaveDataRoot or expanding the disk space.

That doesn't work around the time it takes a single-threaded process to copy 500 GB+ of data over to that drive, though.

It would be good if there were an option to not copy all these files, or to check whether the .ass files actually need re-referencing before copying.
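That pre-check could be sketched along these lines. This is a hypothetical helper, not part of Deadline: the prefix list would have to come from the repository's path mapping rules, and real .ass files may need smarter parsing than a plain substring scan.

```python
import tempfile

def needs_mapping(path, source_prefixes):
    """Return True only if the file contains at least one path prefix
    that a mapping rule would rewrite; if it returns False, the
    copy-and-remap step could in principle be skipped entirely."""
    with open(path, "r", errors="ignore") as f:
        text = f.read()
    return any(prefix in text for prefix in source_prefixes)

# Demo: a file whose paths are already Linux-style would not need copying.
f = tempfile.NamedTemporaryFile("w", suffix=".ass", delete=False)
f.write('filename "/mnt/server/project/tex/wood.tx"\n')
f.close()
clean = needs_mapping(f.name, ["Z:/server/project"])
```

Even a cheap scan like this would avoid the 500 GB copy when a job's .ass files contain no mappable paths at all.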

(TL;DR: Random fix guess at the bottom. The rest is all background research)

Hey Ant! We've been talking about this back and forth on the team this week and I need to make some corrections here. It's clear from the out-of-disk-space error that the path in use is "/tmp" and not the Worker's own temporary directory, so changing the Worker's data root won't work.

Justin found that Arnold 6 can do path mapping on its own when running standalone (if we provide the mappings in a JSON file), and perhaps these days dirmap (Maya's built-in remapping) works correctly. Some trivia: we implemented the current process, which requires writing to disk, because Arnold was one of the few plugins that didn't follow what dirmap recommends for repathing.
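For reference, the Arnold-side mapping is driven by a JSON file pointed to by the ARNOLD_PATHMAP environment variable. The per-OS schema below is an assumption from memory, so check the Arnold documentation before relying on it:

```python
import json
import os
import tempfile

# Illustrative ARNOLD_PATHMAP file: map each source path prefix to its
# replacement, keyed by the OS the render runs on. The schema here is
# assumed, not verified against the Arnold docs.
pathmap = {
    "linux": {"Z:/server/project": "/mnt/server/project"},
}

pathmap_file = os.path.join(tempfile.gettempdir(), "pathmap.json")
with open(pathmap_file, "w") as f:
    json.dump(pathmap, f, indent=2)

# Arnold standalone (kick) would pick this up from the environment,
# remapping paths in-process with no copy of the .ass file needed.
os.environ["ARNOLD_PATHMAP"] = pathmap_file
```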

What I wasn't overly convinced of is that doing this in-memory is the right approach. If we're remapping files, keeping a local cache could be helpful, but not in its current form. Looking at how we've implemented this, we do the path mapping as part of RenderArgument(), so it's definitely not going to benefit Arnold Standalone (snippet below):

        # Check if we should be doing path mapping on the contents of the .ass file.
        if self.GetBooleanConfigEntryWithDefault( "EnablePathMapping", True ):
            localDirectory = self.CreateTempDirectory( "thread" + str(self.GetThreadNumber()) )
            localFilename = Path.Combine( localDirectory, Path.GetFileName( filename ) )

            # The .ass files need the paths to use '/' as the separator.
            RepositoryUtils.CheckPathMappingInFileAndReplaceSeparator( filename, localFilename, "\\", "/" )
            if SystemUtils.IsRunningOnLinux() or SystemUtils.IsRunningOnMac():
                os.chmod( localFilename, os.stat( filename ).st_mode )
            filename = localFilename
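
Given the GetBooleanConfigEntryWithDefault( "EnablePathMapping", True ) check in the snippet above, it looks like the copy step could be switched off per plugin via its config. A guess at what that would look like: the key name comes from the snippet, but the exact file location is an assumption, and note this would also disable the content remapping itself, not just the copy.

```ini
; In the relevant plugin's config, e.g. [repo]/plugins/<Plugin>/<Plugin>.dlinit (assumed location)
EnablePathMapping=False
```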

Switching over to Maya: performArnoldPathmapping() in DeadlineMayaBatchFunctions is what's responsible for this when Maya is running in batch mode. That's also part of RenderTasks(), so it runs on every task, and caching to disk likely won't help much.

So, at this point… what I'm curious about is whether you can comment out these two lines and Arnold "just works":

                if self.Renderer in [ "arnold", "arnoldexport" ]:
                    scriptBuilder.AppendLine( 'catch(python( "DeadlineMayaBatchFunctions.performArnoldPathmapping( %s, %s, \'%s\')" ) );' % ( self.StartFrame, self.EndFrame, self.TempThreadDirectory.replace( "\\", "/" ) ) )

(It should be lines 1219 and 1220 in

My fingers are crossed hoping that Arnold knows what to do from the dirmap directives we've given it these days, and that this whole adventure of writing to disk has been superfluous. It's a small gamble, but I'm hoping it makes a huge impact if it works.


Thanks Edwin, I’m on holiday next week but I’ll forward the post onto my team and the client.

I'm not sure /tmp is the problem: checking the folders, the huge ones are usually under /var/lib/Thinkbox/…, and monitoring the process I can see this too.
