Temp folder full of old jobs

I am having an issue with the Temp directory filling up with old job data that isn’t being cleaned up. For example:

C:/Users/the_user/AppData/Local/Temp/username_MayaBatch_The_Job_Name_jobID/

Each of these folders contains a Maya scene, a job.json, and a tasks.json. There are hundreds (sometimes thousands) of them taking up disk space. Is there a setting I can enable to clean up these files after the render is finished?
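In the meantime I'm clearing them out by hand. A rough sketch of the sort of script I run (the _MayaBatch_ filter just matches the folder pattern above, and the 14-day cutoff is arbitrary):

```python
# Sketch: delete job temp folders untouched for 14+ days.
# The "_MayaBatch_" filter and the 14-day cutoff are assumptions.
import os
import shutil
import time

temp_root = os.path.join(os.environ["LOCALAPPDATA"], "Temp")
cutoff = time.time() - 14 * 24 * 60 * 60  # 14 days ago, in seconds

for name in os.listdir(temp_root):
    path = os.path.join(temp_root, name)
    if os.path.isdir(path) and "_MayaBatch_" in name:
        if os.path.getmtime(path) < cutoff:
            shutil.rmtree(path, ignore_errors=True)
```

Obviously that just treats the symptom; I'd rather Deadline cleaned up after itself.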

Edit:
I just noticed this thread viewtopic.php?f=11&t=9814, but I think this is a different issue. My workstation has these temp directories, but I hardly ever submit renders. However, I usually have my Deadline Slave running all day.

Edit 2:
I deleted all of those folders yesterday because I was running out of hard drive space, and today they are all back. This is not only annoying, it is repeatedly causing problems on all of our workstations. Why do these folders keep coming back?

Hello,

Can you let me know what version of Deadline you are running? Once we know that, we can do some internal testing to figure out the problem.

Cheers,

We’re running Deadline 6. From our own research, it looks like the machines are trying to archive the jobs but failing with errors. We set the repository to archive jobs after 14 days, but none of them are being archived, and all the files in the Temp folder are from jobs that should have been archived.

Edit:
Actually, we can’t even archive jobs manually from the Monitor. The Monitor log is full of errors like “Error exporting job [jobID]: The specified path, file name, or both are too long.” But we checked the path and file name, and neither exceeds the specified character limit.
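For what it’s worth, here is roughly how we checked (a quick sketch; the temp root is the example path from my first post, and 260 is the classic Windows MAX_PATH):

```python
# Sketch: flag any file under the temp root whose full path approaches
# the Windows 260-character MAX_PATH limit.
import os

temp_root = r"C:\Users\the_user\AppData\Local\Temp"
limit = 260  # classic Windows MAX_PATH

for dirpath, dirnames, filenames in os.walk(temp_root):
    for filename in filenames:
        full_path = os.path.join(dirpath, filename)
        if len(full_path) >= limit:
            print(len(full_path), full_path)
```

That prints nothing, which is how I know everything is under the limit.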

That is really weird. Can I have you post that Monitor log, or, if you feel more comfortable, send it to us via the support email so we can take a look? Thanks.

Here is a typical Deadline Slave log from when it is not rendering, just trying to archive. The other file is the Monitor log after trying to manually archive a few hundred old jobs. The “Access to path is denied” error in the slave log would be a useful hint if it gave the full path instead of just the file name.
deadlinemonitor-SSLAWS027-2013-11-20-0000.log (156 KB)
deadlineslave_SSLAWS027-SSLAWS027-2013-11-21-0000.log (55.6 KB)

Hello,

So I looked into this, and it appears the issue is with the temp files made when archiving, not the original files themselves. When we move the files out, they are put in a folder named after the job, and inside that folder is a file also named after the job. The job name therefore appears twice in the path, so when the path is already long, a long job name can push the total over the limit. If you can shorten your job names at submission time, or before archiving, that should avoid this error. Hope that helps.
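To put that in concrete terms (a rough sketch; the path and name lengths here are invented):

```python
# Sketch: with the <temp>\<job name>\<job name> layout described above,
# the job name is counted twice, so a long name can exceed Windows'
# 260-character MAX_PATH even though each component is legal on its own.
import os

temp_root = r"C:\Users\user001\AppData\Local\Temp"  # 35 characters
job_name = "x" * 120                                # hypothetical long job name

staged_path = os.path.join(temp_root, job_name, job_name)
print(len(staged_path))        # 35 + 1 + 120 + 1 + 120 = 277
print(len(staged_path) > 260)  # True -> "path too long" error
```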

Cheers,

That doesn’t really solve the problem. I checked the list of temp directories, and of the 842 it was trying to archive, 841 were well below the character limit. I deleted that one offending job and tried archiving again, and got the same error. The longest filename in that set is 248 characters, with a directory name of 172 characters. Both are below the character limit, but it still gives that error. Also, manually zipping the folder and moving it into the jobsArchived directory works just fine.
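For reference, the manual workaround looks roughly like this (a sketch; the repository path is a placeholder, and the _MayaBatch_ filter just matches our folder names):

```python
# Sketch: zip each leftover job temp folder and move it into the
# repository's jobsArchived directory by hand.
import os
import shutil

temp_root = r"C:\Users\user001\AppData\Local\Temp"
archive_dir = r"\\server\DeadlineRepository\jobsArchived"  # placeholder path

for name in os.listdir(temp_root):
    path = os.path.join(temp_root, name)
    if os.path.isdir(path) and "_MayaBatch_" in name:
        # Creates <archive_dir>\<name>.zip from the folder's contents.
        shutil.make_archive(os.path.join(archive_dir, name), "zip", path)
        shutil.rmtree(path, ignore_errors=True)
```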

Is something else going on in the background that is adding characters to the filename? And is there a way to prevent certain machines from trying to archive? These files take up 60+ GB, which is causing issues on our workstations.

Are there any spaces or special characters in the filename? A space, for instance, could be stored as %20, which is three characters… I’ve had a few problems with file path lengths around the 248 mark, so I normally make sure ours are under 245.
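A quick illustration of what I mean (just a sketch, assuming the names get URL-encoded somewhere along the way):

```python
# Sketch: URL-encoding a name with spaces inflates its length,
# since every space becomes the three-character sequence %20.
from urllib.parse import quote

name = "my job with several spaces in it"
print(len(name))         # 32
print(len(quote(name)))  # 44 -- each of the 6 spaces grew to %20
```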

There are a few spaces here and there, but even if each space counts as three characters, the file name and directory name are both well below the limits. I see two issues here: 1) jobs are not archiving when they should; the file name and directory name are both below the imposed limits, but archiving still errors out. And 2) when an archive fails, the files are not cleaned up afterwards, leaving 60+ GB of hard drive space occupied. Since both the file name and directory name are below the character limit, it seems like Deadline is adding characters when it archives. What is being added to the file name and directory name that causes them to exceed the character limit? And how can we get Deadline to clean up the files after a failed archive attempt?

Do any of the jobs failing to archive contain a username with a period (“.”) in it?

No, none of our usernames contain periods. So far the only clear answer I’ve gotten is that long file names are causing the errors, but that doesn’t make sense because the file names and directory names are well below the imposed limit. Here is a typical directory path; it is 151 characters long but still errors:
C:\Users\user001\AppData\Local\Temp\longusername__MayaBatch__rsk_il14_config_lgt_ext_v003 - pnt_base1_omwertux_std_base_5284__5255a5c4a7352c1ae046ffc4

Again, there are two issues here: 1) jobs are failing to archive, and 2) the files are not cleaned up afterwards. Is this something you can/will fix, or are you currently looking into it?

I believe both of these issues have now been fixed in the v6.1 beta release.
Are you on the beta? If not, you can shoot an email to sales@thinkboxsoftware.com if you wish to join.

Cool, thank you. Do you have an approximate release date for the official 6.1?

The plan is early 2014 :slight_smile: