Deadline & Cinema4D & ErrorLoadingProject

Hi
Spent most of the day debugging this. Tested the network/switches/servers and could not find any sign of performance issues.
Our server is running Windows Server 2012 R2 and has a pretty fast (4500 MB/s) RAID attached over a 2x16Gb fiber connection. The server links to an HP FF 5700 switch over a 6x10Gb copper team. Most of our slaves are connected to a Netgear GS478T switch, which connects to the 5700 over a 4x1Gb fiber aggregation. 4Gb in aggregate is not a lot of speed, but it has worked fine for many years.

To test whether this was a traffic problem, I set up two similar jobs: one in C4D and one in Houdini/Mantra. The scene files were just a lot of boxes with around 1GB of TIFF textures, and both files point to the same texture folder. There was no other traffic on the network or server, and no other jobs rendering.

1. Each job rendered separately on all slaves (OSX and Windows, 32 in total):
1a: Mantra renders all 250 frames with no errors, pulling ca. 32GB of textures over the network in the first minute or so (1GB x 32 slaves). This tells me that the network is working properly.
1b: C4D reports a lot of “Asset missing” and “ErrorLoadingProject” errors at first, but the job settles down after a little while and eventually finishes.

2. Both jobs rendered at the same time, sharing the farm 50/50, with equal shares of OSX and Windows slaves.
2a: C4D errors out as in 1b, but the Houdini job also errors out on missing textures! Weird! It worked in 1a, and the network load is the same as in test 1.

I then made a copy of the texture folder and re-linked the textures used in Houdini to the new folder, so now the two files read from separate texture folders.

3. Both jobs rendered at the same time, sharing the farm 50/50, with equal shares of OSX and Windows slaves.
3a: C4D errors out as in 1b; Mantra renders fine. So Mantra works when C4D and Mantra are not reading the same files.

For the next test I switched back to the scene files that share the same texture folder:

4: Both jobs rendered at the same time, but with a different split.
The C4D job has only 3 slaves whitelisted, to try to pinpoint what causes the problem:

  • 1 Windows slave
  • 1 OSX slave using Acronis Connect to connect to the Windows server over AFP.
  • 1 OSX slave connected to the server over SMB.

Houdini uses the rest of the slaves, a mix of OSX and Windows.

4a. Only the Windows slave enabled: both C4D and Mantra render with no errors.
4b. The Windows slave and the Acronis (AFP) Mac enabled: same result as 4a.
4c. The last OSX (SMB) slave also enabled for the C4D job: both C4D and Mantra report missing textures.

So there seems to be some kind of file locking kicking in when C4D is running on an OSX slave and reading files from an SMB share. I have no idea what this can be, but I’ll keep digging tomorrow.

To conclude:

  • The fact that the Mantra job renders just fine while the C4D job doesn’t points in the direction that the two renderers interact differently with the host OS they are running on.
  • When the OSX C4D command-line renderer reads files from SMB shares, the files seem to get locked or made unavailable to other processes (4c).

So now I know how to replicate this problem, but I have no idea how to solve it, other than selling all the OSX slaves :slight_smile:
Any ideas are welcome.

Cheers
Bonsak

PS I’m going to post this on the Maxon Beta board as well.

What exact version of OSX are you running? Different versions exhibit different SMB behaviours, much to the anger of most folk who work in VFX in a cross-platform environment.

They are all running OSX version 10.10.5.

-b

That is very interesting!

So I checked on the server what is actually happening when C4D reads files, and it is indeed applying a lock to the files. Mantra is not. So now we just need an option for the C4D command-line renderer to read files without locking :slight_smile:
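For anyone who wants to see what that looks like from a second reader's point of view, here is a minimal sketch. It assumes Python 3 with ctypes on one of the Windows slaves and a made-up test path; it is not showing C4D's actual open flags, just how an exclusive open turns into the sharing violation a second reader gets.

```python
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateFileW.argtypes = [
    wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD, wintypes.LPVOID,
    wintypes.DWORD, wintypes.DWORD, wintypes.HANDLE,
]
kernel32.CreateFileW.restype = wintypes.HANDLE
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

GENERIC_READ            = 0x80000000
FILE_SHARE_READ         = 0x00000001
OPEN_EXISTING           = 3
INVALID_HANDLE_VALUE    = wintypes.HANDLE(-1).value
ERROR_SHARING_VIOLATION = 32

path = r"\\server\textures\box_test.tif"  # hypothetical test file

# First open: share mode 0 ("deny all other access") -- roughly what a lock
# held on behalf of the C4D client looks like from the server side.
h1 = kernel32.CreateFileW(path, GENERIC_READ, 0, None, OPEN_EXISTING, 0, None)

# Second open: an ordinary shared read, the way a cooperative renderer
# like Mantra reads its textures.
h2 = kernel32.CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, None,
                          OPEN_EXISTING, 0, None)

if h2 == INVALID_HANDLE_VALUE:
    if ctypes.get_last_error() == ERROR_SHARING_VIOLATION:
        print("second reader hit a sharing violation -- the 'Asset missing' case")
else:
    kernel32.CloseHandle(h2)

if h1 != INVALID_HANDLE_VALUE:
    kernel32.CloseHandle(h1)
```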

-b


Bonsak,

I went searching and found this thread for the same error I’m having on our Mac farm. Any additional news on a fix?

Thank you!

-Geoff

I believe this is an issue in C4D, not in Deadline, so I don’t think this is something we can fix.