Error Mapping Network Drives

We have a farm with 8 nodes rendering from a repository and files located on a central server running Windows Server 2008. Various areas of the central server are mapped as specific drives for artists, and these drives have been added to the Repository ‘Mapped Drives’ area. The login details the Repository uses to map these drives are the same as the login details the Slaves use to log in to the network (user: deadline / pass: ***** ).

We are regularly seeing errors from our Slaves where they map none, or occasionally only some, of these network drives before rendering, which causes them either to not pick up assets or to be unable to save to the final output path. Example error below.

The slaves are not being accessed by any other usernames either directly or remotely.
Is there a way to solve this? Are we able to tell the Repository to replace mapped drives with their UNC equivalent using the ‘Mapped Paths’ option? Or must mapped drives be used for slaves?
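
For context, the substitution we have in mind is just a prefix rewrite. Here is a minimal sketch of what that would look like; the mapping table is purely illustrative (the share names are invented, not our real config):

```python
# Hypothetical drive-letter -> UNC table; substitute your own shares.
DRIVE_TO_UNC = {
    "L:": r"\\HD-LIVE\Data\Library",
    "P:": r"\\HD-LIVE\Data\Projects",
    "Q:": r"\\HD-LIVE\Data\Creative",
    "W:": r"\\HD-LIVE\Data\Work",
}

def to_unc(path):
    """Replace a leading mapped-drive letter with its UNC equivalent."""
    for drive, unc in DRIVE_TO_UNC.items():
        if path.upper().startswith(drive):
            return unc + path[len(drive):]
    return path  # not on a mapped drive; leave untouched

print(to_unc(r"Q:\Jobs\shot_010\comp.nk"))
# -> \\HD-LIVE\Data\Creative\Jobs\shot_010\comp.nk
```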

This sounds like an issue with your server. Deadline is unable to map the shares because at that particular time, the server cannot be reached. I googled the error and found this article for Windows Server 2003:
support.microsoft.com/kb/946937

This page looks like it could be helpful too:
tipsntricksforwindows.blogspot.c … ot-be.html

Hope this helps!

  • Ryan

Cheers Ryan, will look into this and report back.

Ryan,

This doesn’t seem to be the issue. I may have shown you the wrong error log - please find the new one below. The error is definitely about too many connections. If all nodes are restarted cleanly with no other logins, the Slaves pick up the first job without problems, but then start to hit this problem on the second job.

Any thoughts?

I found this article that explains the problem:
support.microsoft.com/kb/938120

There are potential workarounds there too. Hope this helps!

  • Ryan

Thanks Ryan, we’ve tried that and no joy. It seems to be an authentication issue, so it doesn’t matter whether or not we rely on DNS to resolve the server name.

Remoting into a node using our deadline logon, the four drives (L:, P:, Q: and W:) show as disconnected, which is normal after a period of inactivity, but it’s odd that they should appear at all when first logging on. I attempted to disconnect them, but Windows stated that the connection didn’t exist. It was still possible to browse the drives, yet the disconnected status didn’t change to connected when doing so, which is also odd. Is this something to do with the way the Deadline Slave maps the network drives?
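
For reference, this is roughly how I’ve been inspecting and force-clearing those phantom mappings on a node (a sketch only; drive letters are the ones above, and ‘net use’ is the standard Windows command):

```python
import subprocess

# 'net use' with no arguments lists current connections and their status.
print(subprocess.run(["net", "use"], capture_output=True, text=True).stdout)

for letter in ("L:", "P:", "Q:", "W:"):
    # '/delete /y' removes a mapping even if it shows as disconnected.
    # check=False: deleting a mapping that "doesn't exist" returns an
    # error - which is exactly the symptom described above.
    subprocess.run(["net", "use", letter, "/delete", "/y"], check=False)
```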

Looking at our server, which keeps user sessions open through 15 minutes of inactivity before closing them, there are currently active sessions from only 3 nodes, all using the deadline account. Any connection made from these nodes with the deadline credentials should succeed; connections made with other credentials within the same user session would fail. But logging on to one of the active nodes with the deadline account credentials shows the four network drives as disconnected, even though they are browseable. This is not what I would expect.

The KB article mentions: ‘Some earlier programs may not save files or access data when the drive is disconnected. However, these programs function normally before the drive is disconnected.’ Does that caveat apply to Deadline?

Matt

Deadline maps the drives using the Windows API, so it should essentially be the same as using the “net use” command from a command prompt. We first unmap the drive if it is already mapped, then map it again, but that shouldn’t be a problem.
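
Roughly, the sequence is equivalent to this sketch against mpr.dll (illustrative only, not our actual code; the example call at the bottom uses hypothetical credentials):

```python
import ctypes
from ctypes import wintypes

mpr = ctypes.WinDLL("mpr")

RESOURCETYPE_DISK = 1

class NETRESOURCE(ctypes.Structure):
    _fields_ = [
        ("dwScope", wintypes.DWORD),
        ("dwType", wintypes.DWORD),
        ("dwDisplayType", wintypes.DWORD),
        ("dwUsage", wintypes.DWORD),
        ("lpLocalName", wintypes.LPWSTR),
        ("lpRemoteName", wintypes.LPWSTR),
        ("lpComment", wintypes.LPWSTR),
        ("lpProvider", wintypes.LPWSTR),
    ]

def remap_drive(letter, unc, user, password):
    """Unmap 'letter' if present, then map it to 'unc' -- the same
    unmap-then-map sequence described above."""
    # Ignore the result: cancelling a non-existent connection just fails.
    mpr.WNetCancelConnection2W(letter, 0, True)

    res = NETRESOURCE()
    res.dwType = RESOURCETYPE_DISK
    res.lpLocalName = letter      # e.g. "Q:"
    res.lpRemoteName = unc        # e.g. r"\\HD-LIVE\Data\Creative"
    rc = mpr.WNetAddConnection2W(ctypes.byref(res), password, user, 0)
    if rc != 0:                   # 0 == NO_ERROR
        raise ctypes.WinError(rc)

# Hypothetical call matching the setup in this thread:
# remap_drive("Q:", r"\\HD-LIVE\Data\Creative", "deadline", "*****")
```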

There should really be no special requirements for this feature to work (none that we’re aware of, at least). I don’t know whether the auto-disconnect is causing issues, but again, I wouldn’t expect it to. As a test to narrow down the issue, could you temporarily disable the auto-disconnect to see if it makes a difference?
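
If it’s easier to script than to click through, something like this on the file server should do it (the ‘net config server’ commands are standard Windows admin tools; ‘/autodisconnect:15’ would restore the 15-minute default you mentioned):

```python
import subprocess

# Show the server's current settings, including "Idle session time (min)".
print(subprocess.run(["net", "config", "server"],
                     capture_output=True, text=True).stdout)

# -1 disables the server-side idle disconnect entirely.
subprocess.run(["net", "config", "server", "/autodisconnect:-1"], check=True)
```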

Looks like we have resolved our issue. Seems it might have been a Deadline / DFS quirk.

Our mapped network drives point to shares which are controlled by DFS.

Rather than pointing the mapped drives at the DFS namespace link to the share (\\HD-LIVE\Data\Creative), we used the target path instead (\\HD-LIVE\Creative$\Jobs), and all seems to be fine. Is this a known issue with DFS?
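
For anyone hitting the same thing, a quick sanity check we ran before repointing the drives - confirm the namespace link and its target really are the same folder (paths as above; this assumes the client can resolve the DFS referral):

```python
import os

link = r"\\HD-LIVE\Data\Creative"    # DFS namespace link
target = r"\\HD-LIVE\Creative$\Jobs" # its target share

# True if both UNC paths resolve to the same underlying folder.
print(os.path.samefile(link, target))
```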

M