Best Practices for Using DR Software Optimally

Hi

I know this is a long note, but I would like to know the best practice for using distributed rendering (DR) software across my systems while simultaneously working on a scene file. I have been looking at various options but have not hit upon one good solution so far.

My current working setup is as follows:

  1. I have one system which I use for Modeling, Texturing and Lighting (let's call this the Working Unit)
  2. I have another system on which I have stored all the Proxies, Models and their materials (let's call this the Server)
  3. I have 3 other systems which I use for renders (the Nodes)

My current working process is as follows:

  1. I work on my Working Unit and copy the general materials to it.

  2. When I need some proxies, I import them into the scene but keep the files linked to the Server (proxy files, materials, etc.)

  3. Run test renders on the Working Unit and, when satisfied:
    a. Archive the file,
    b. Copy it to the Server,
    c. Re-set the paths manually,
    d. Start a distributed render in Deadline on the Server + 3 Nodes (I have a license for 4 nodes) and continue working on my Working Unit.

    While this system works fine, I have the following issues:

  4. I am not able to optimally harness the power of distributed rendering for test renders. Sometimes the test scenes need to be rendered at a decent resolution, which takes time. I currently render these on my Working Unit itself, as the above-mentioned 4 steps are time-consuming. This means 4 systems are lying idle along with ME.

  5. Even if I follow the above 4 steps and set the renders going on the other systems (Server + 3 Nodes), the network stalls my work on the Working Unit (especially when I work on the materials), as it keeps trying to connect to the Server, which is busy rendering.

Furthermore, this is a completely manual process (all of the above 4 steps), which means I cannot use it every time - which I would like to, given that I have the systems to do so.

My question is:

How can I use the other systems (Server and Nodes) simultaneously while I am working on my Working Unit on a regular basis, say even for test renders? Is there a way I can automate the replication process onto another node/server and also completely cut my system off from the rendering process?

Would be curious to know how others handle this.

Best Regards,

GP

How about this:

Map S:\ to the server path holding the data you aren't going to copy locally to every machine - e.g.

\\server\server only data\data

Then map T:\ on your worker machine to the local data:

\\localmachine\data\

And map T:\ on the nodes to the server data:

\\server\data
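
For what it's worth, those mappings can be scripted so every machine gets the same drive letters at logon. A rough sketch using net use, assuming the share names match the example paths above (adjust to your actual shares):

rem On every machine: S:\ -> data that lives only on the server
net use S: "\\server\server only data\data" /persistent:yes

rem On the workstation only: T:\ -> the local copy of the working data
net use T: \\localmachine\data /persistent:yes

rem On each render node only: T:\ -> the server's copy of the same data
net use T: \\server\data /persistent:yes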

Run rsync.exe or an equivalent to keep \\server\data and \\localmachine\data synchronized [so they are identical - same paths etc. - so that T:\data\image.jpg is the same file from a node and from your workstation, even though the two T:\ mappings point to different locations].
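
If setting up rsync on Windows is a hassle, robocopy (built into Windows 7) can handle the same one-way mirroring. A rough sketch, where C:\data is assumed to be the local folder behind \\localmachine\data:

rem Mirror the workstation's local data up to the server share
rem (/MIR also deletes files on the server that no longer exist locally)
robocopy C:\data \\server\data /MIR /R:2 /W:5

You could run this from Task Scheduler every few minutes, or kick it off whenever you save out a scene for rendering.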

Then, when you render, the nodes will pull data from S:\ just as your machine does, but your machine pulls the T:\ data from its local disk while the nodes pull it from the server.

This assumes you are running Windows… I'm sure something similar exists on OS X.

cb

Thanks for the suggestion. Yes, I am using Windows 7 on all the systems. Let me understand this better and get back to you.

Best Regards

GP