Hi everyone,
I'm posting here hoping to get some fresh ideas about what could be causing longer render times in the following scenario.
We have two company branches, one located in Amsterdam and one in London. We have a direct 10 Gbps link between them, but most machines are capped at 1 Gbps.
We mostly render Houdini, Redshift, and Nuke jobs, and this happens with all of them. All jobs and their files are located on the main file server of their branch, so Amsterdam has its own file server, and London does as well.
When jobs from a specific branch are submitted, the local render farm machines render them from their own file server just fine, with render times within expected margins. This is true in both offices; all local renders from each branch run fine.
However, the issue comes when machines try to render a job from the other branch: the render times increase a lot. For example, when a render node in Amsterdam picks up a London job, most of its files and settings are located on the London file server, and a render that takes 1m on a local London machine can take 8m on an Amsterdam machine. Hardware power is not a factor here; all machines are very similar.
So, to make it clear:
London Job → London Machine = Good
Amsterdam Job → Amsterdam Machine = Good
London Job → Amsterdam Machine = Bad
Amsterdam Job → London Machine = Bad
Regarding transfer speeds, we get about 115 MB/s on machines at both ends, so the 1 Gbps cap is the limit there. When using the InfiniBand-connected machines, the transfer speeds are much higher, about 1 GB/s or so, but the cross-branch render times remain the same.
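Since raw throughput looks fine, my current suspicion is per-file latency: a render opens thousands of small files (textures, frame sequences, plugin files), and each open pays a WAN round trip. I could test that with a quick sketch like the one below, timing many small file opens against each share (the paths are placeholders, not our real shares):

```python
import os
import time

# Placeholder UNC paths -- substitute real shares on each branch's file server.
LOCAL_DIR = r"\\ams-fileserver\projects\test_assets"
REMOTE_DIR = r"\\lon-fileserver\projects\test_assets"

def time_small_reads(directory, limit=500):
    """Open up to `limit` files and read one byte from each, timing the total."""
    start = time.perf_counter()
    count = 0
    for entry in os.scandir(directory):
        if entry.is_file():
            with open(entry.path, "rb") as f:
                f.read(1)
            count += 1
            if count >= limit:
                break
    return count, time.perf_counter() - start

for label, directory in (("local", LOCAL_DIR), ("remote", REMOTE_DIR)):
    count, elapsed = time_small_reads(directory)
    print(f"{label}: {count} files in {elapsed:.2f}s, "
          f"{elapsed / max(count, 1) * 1000:.1f} ms per file")
```

If the per-file time on the remote share comes back an order of magnitude higher while bulk copy speed looks fine, that would point at latency on many small file operations rather than bandwidth.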
Our Deadline server is located on a VM in London, and I've tried both a direct connection and the Remote Connection Server; the render results are always the same.
I really thought the issue was the connection between the offices, but after running all of these tests I am lost on what this could be… latency? Deadline really is the only thing that struggles when tasks run against the other branch's file server.
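For reference, a rough measure of the round trip between the offices could come from timing TCP connects to the remote file server's SMB port (the hostname here is a placeholder):

```python
import socket
import time

HOST = "lon-fileserver"  # placeholder hostname for the remote file server
PORT = 445               # SMB

samples = []
for _ in range(10):
    start = time.perf_counter()
    # Each TCP handshake costs roughly one network round trip.
    with socket.create_connection((HOST, PORT), timeout=5):
        pass
    samples.append((time.perf_counter() - start) * 1000)

print(f"min/avg/max: {min(samples):.1f} / {sum(samples) / len(samples):.1f} / "
      f"{max(samples):.1f} ms")
```

If that comes back in the several-millisecond range, multiplied across thousands of per-frame file operations it could plausibly account for minutes of overhead.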
Any help would be appreciated here. If you have any questions, feel free to ask. Thanks!