Hello,
Some of the rendered TGAs are being renamed with "alt" at the end:
Color_0111.tga becomes Color_0111_alt_3.tga, for example.
I render a lot of TGA sequences and sometimes it happens: one of the pictures is renamed, and it doesn't seem to be corrupted.
When I rename those TGAs, everything works again.
Do you know if DD did this?
Fabrice
Hi Fabrice,
This is a built-in safety feature of Deadline.
Lightning adds the "alt#" suffix if it's unable to write to the original output location. You can typically see this in the log as "Access Denied" errors when trying to save the image. We figured this was better than just failing the render and losing both the image and the processing time it took to generate it.
Here's how it works. First, we make 5 attempts to save the image with its original file name, with increasing intervals between save attempts. If all 5 fail, we make six more attempts, appending _alt_0, _alt_1, …, _alt_5 to the file name. If those fail too, we throw an error for that task. There's also a Monitor job RC script that allows easy cleanup of any _alt_x named files by attempting to remove this temporary suffix and rename them back to the original filename.
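If you want to mimic this pattern in your own pipeline tools, here's a minimal Python sketch of the behaviour described above. This is not Deadline's actual code: write_image is just a placeholder for whatever call writes the frame, and the retry counts and delay are assumptions based on the description.

    import os
    import re
    import time

    def save_with_fallback(write_image, output_path,
                           primary_attempts=5, alt_attempts=6, base_delay=1.0):
        # Hypothetical sketch, not Deadline's code: retry the original filename
        # with increasing delays, then fall back to _alt_0 .. _alt_5 names.
        root, ext = os.path.splitext(output_path)
        for attempt in range(primary_attempts):
            try:
                write_image(output_path)      # placeholder for the actual image write
                return output_path
            except OSError:                   # e.g. "Access Denied"
                time.sleep(base_delay * (attempt + 1))
        for i in range(alt_attempts):
            alt_path = "%s_alt_%d%s" % (root, i, ext)
            try:
                write_image(alt_path)
                return alt_path
            except OSError:
                pass
        raise RuntimeError("Could not write %s or any _alt_ fallback" % output_path)

    def cleanup_alt_files(directory):
        # Roughly what the cleanup script does: rename Color_0111_alt_3.tga back
        # to Color_0111.tga, but only if the original name is still free.
        for name in os.listdir(directory):
            match = re.match(r"^(.*)_alt_\d+(\.[^.]+)$", name)
            if not match:
                continue
            original = os.path.join(directory, match.group(1) + match.group(2))
            if not os.path.exists(original):
                os.rename(os.path.join(directory, name), original)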
In terms of why this is happening, I would recommend a deep dive into the hardware monitoring of your systems to look for a bottleneck somewhere in your pipeline: file server, network throughput, disk I/O, AD access permissions, or the render nodes themselves. Does this issue only seem to occur when your farm is really busy / under heavy load, and/or when your network is being hammered? Is your file server struggling to meet the I/O demands of your render wall at peak load?
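While you're digging, a quick way to see when and where the fallback kicks in is to scan your output directories for _alt_ files and line their timestamps up against your file-server and network load graphs. A rough sketch (the render_root path is just an example):

    import os
    import re
    from datetime import datetime

    def report_alt_files(render_root):
        # Walk the render output tree and print every *_alt_N.* file with its
        # modification time, so occurrences can be correlated with farm load.
        alt_pattern = re.compile(r"_alt_\d+\.[^.]+$")
        for dirpath, _dirs, filenames in os.walk(render_root):
            for name in filenames:
                if alt_pattern.search(name):
                    path = os.path.join(dirpath, name)
                    mtime = datetime.fromtimestamp(os.path.getmtime(path))
                    print(mtime.strftime("%Y-%m-%d %H:%M"), path)

    # Example (path is hypothetical):
    # report_alt_files(r"\\fileserver\renders\projectX")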
Regards,
Mike
Ok thanks for this clear answer, I’ll dive into hardware monitoring…
Regards, Fabrice
If it helps at all, the one time I experienced this in production, it was the network bandwidth of the main file server the renders were being saved back to. It had a (2 x 1Gb) NLB-teamed pair of NICs connected to an over-provisioned LAN switch. The final solution was to invest in a large enterprise-level clustered NAS with many 10Gb connections and a good data-centric 10Gb-to-1Gb switch. For the period before that, when we couldn't afford it, we swapped the (2 x 1Gb) team out for a (4 x 1Gb) NLB team, made up of 2 physical network cards, each with 2 x 1Gb connections, and that fixed it nicely for us. YMMV.
The problem is that the data server here is brand new… it shouldn't have bottlenecks!
(Two pro switches with Fibre Channel on an Open-E NAS…)
Sadly, brand-new hardware doesn't always mean there are no problems. We've seen this on a few occasions with our own hardware, and with that of customers.