Repository mirror

Hi,

We want to implement the following scenario in our render farm.
Scenario:
Two geographically separated render farms, each with its own server, but both farms sharing the same repository.

Is this possible? If so, how?

Best regards

André Bolinhas

Hi André,

The only way I can imagine this working is if you can get the underlying file system to work like this. Essentially, you would need a layer on top of the two servers to make them appear as a single server on the network (we’ll call it the parent server). Then, depending on which location the parent server is accessed from, it would reroute to the appropriate local server. I’m no IT expert, but maybe this could be achieved with DFS? Once the file system is set up properly, it should just work to have both locations connect to the “same” repository.
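A toy sketch of that “parent server” rerouting idea. This is not how DFS is actually configured (real DFS referrals are driven by Active Directory sites); the subnets, server names, and `pick_server` helper below are all made-up illustrations of the concept:

```python
# Toy illustration of site-aware rerouting: send a client to the file
# server in its own site, based on the client's subnet. Real DFS does
# this via Active Directory site costing; everything here is hypothetical.
import ipaddress

SITE_SERVERS = {
    "10.0.0.0/16": r"\\server-lisbon\repo",  # hypothetical site A
    "10.1.0.0/16": r"\\server-porto\repo",   # hypothetical site B
}

def pick_server(client_ip: str) -> str:
    """Return the repository path for the server in the client's subnet,
    falling back to the first entry if no subnet matches."""
    addr = ipaddress.ip_address(client_ip)
    for subnet, server in SITE_SERVERS.items():
        if addr in ipaddress.ip_network(subnet):
            return server
    return next(iter(SITE_SERVERS.values()))
```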

I should note though that if you’re performing cross-location rendering, the transfer of files over your VPN will act as a bottleneck. Also, any assets your scene files use would need to exist in both locations, as well as the output paths. Between our 3 offices, we have 3 separate render farms, but Deadline has a Transfer Job feature that allows you to transfer jobs from one repository to another. Before we can transfer a job, we get our IT department to copy any required assets to the remote location before transferring the job. Once the job is transferred, our IT department copies the files back to the original location. Having a single repository that all 3 offices shared was not an option, because the time spent waiting for assets to be synced up simply outweighed any benefits of sharing a repository.
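The “copy required assets to the remote location first” step could be scripted along these lines. A minimal sketch only: the paths, the relative-asset-list idea, and the `sync_assets` helper are hypothetical, not a Deadline feature:

```python
# Hypothetical pre-transfer asset sync: before transferring a job to the
# remote repository, copy the assets it references to the remote office,
# preserving the relative directory layout. Paths are made-up examples.
import shutil
from pathlib import Path

def sync_assets(asset_paths, src_root, dst_root):
    """Copy each asset (given as a path relative to the office root) from
    the local root to the remote root, creating directories as needed."""
    copied = []
    for rel in asset_paths:
        src = Path(src_root) / rel
        dst = Path(dst_root) / rel
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)  # copy2 preserves timestamps
        copied.append(dst)
    return copied
```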

So while you can share a single repository between locations, it might work better to keep them separate.

Cheers,

  • Ryan

Hi Ryan,

Thanks for your reply. I will try sharing the repository between these two locations.

Also, could you push the sales department to send me the quotation to renew my support contract?

This is very urgent for us.

Thanks

André Bolinhas

DFS tutorial:

windowsnetworking.com/articl … ation.html

We’re looking at this right now too for plugin sync.

In my experience, DFS doesn’t work as well as you’d think it would. It’s designed to reduce load on file servers and serve branch offices. So each office gets a replica of the files on the back end, and the user connects to the closest server. And the paths stay the same no matter which server you get connected to. But most people are dealing with servers in the same room.

So benefits:

  1. paths stay the same, even if the server changes
  2. replication means you can distribute to smaller, less powerful servers rather than one uber-SAN.
  3. if a server goes down or restarts, users can connect to the other ones.
  4. reduces the need for mapped paths.

Cons:

  1. you CANNOT prevent people from connecting to the other share paths. example:
    if your DFS share is \\yourdomain.com\namespace\share1\ , then users can also use \\namespace_server1\share1
    AND they can also use \\fileServer1\share1
    So files end up with paths outside the DFS anyway.
  2. replication works best for smaller office-like files. Production loads might overload the system. Might not work for replicating renders.
  3. doesn’t do live failover. any files open while a server goes down aren’t switched live. only new connections are rerouted.
  4. DFS-related errors MESS up a lot of software. Deadline in particular hates any DFS-related problem, because the error messages can look like file permission problems.
  5. least-privilege user accounts aren’t enough to manually fix redirection problems.
  6. a bunch more problems.
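One way to mitigate con #1 is to rewrite server-specific paths back to the namespace path before they land in scene files or job metadata. A minimal sketch, reusing the made-up share names from the example above (the `normalize_path` helper is hypothetical, not part of Deadline or DFS):

```python
# Normalize server-specific UNC paths back to the DFS namespace path, so
# files don't end up referenced by paths that bypass the DFS. The share
# names are the made-up examples from the post above.
DFS_ALIASES = {
    r"\\namespace_server1\share1": r"\\yourdomain.com\namespace\share1",
    r"\\fileServer1\share1": r"\\yourdomain.com\namespace\share1",
}

def normalize_path(path: str) -> str:
    """Rewrite a path that points at a specific file server so that it
    goes through the DFS namespace instead (case-insensitive match)."""
    lower = path.lower()
    for alias, canonical in DFS_ALIASES.items():
        if lower.startswith(alias.lower()):
            return canonical + path[len(alias):]
    return path  # already a namespace path, or not a known alias
```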

Ben.

What about installing your entire Deadline OS/repository on an HA (high availability) virtual machine pair, using for example Xen or VMware (vSphere or ESX)?
You’d be running a single instance of your repository on 2 physical servers with live failover…

Meshing 2 very remote locations into 1 render farm over VPN requires some serious bandwidth and, in general, does not sound like a good idea… You’re looking at heaps of administration work to
(a) keep everything running and
(b) keep your assets and output locations synchronized.

Has anyone successfully run Deadline on VM/cluster setups?

Cheers!
Dennis