AWS Thinkbox Discussion Forums

Small studio network upgrade

I’m posting this on a couple of different forums:

Here at Racecar we have 8 workstations (could grow to 12-15) and 30 render slaves. We’re all Mac. We use C4D and After Effects. Our projects are mainly broadcast motion graphics. In crunch periods our network is struggling, so we’re in the process of upgrading the whole network backbone and our server.

At the moment we have gigabit ethernet for everyone, including render slaves. Our server is a Mac Pro 2010 with an old X-Raid attached over 2 Gb fibre. The server is attached to the network over 6 aggregated gigabit ports.
The X-Raid is peaking at ca. 300 MB/s.

Ideally we would like to keep the slaves on gigabit ethernet with a dedicated switch, but upgrade the workstations and server/raid to higher speeds on another dedicated switch. Then stack those over 2-4 SFP ports. Workstations and render slaves all read and write to the same share.

So here comes my question:

  1. What protocol do you guys recommend? Fibre or 10Gb Ethernet? Is there anything else out there?

  2. What kind of server/raid would you recommend? We got a quote on one but it could only deliver ca. 1000 MB/s. I’m thinking we need something that can deliver upwards of 3000 MB/s?
    (we’re not doing Windows, but we might do Linux if we have to)
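To put that 3000 MB/s target in context, here’s a back-of-the-envelope sketch of worst-case demand. The per-client peak figures are my own assumptions (line-rate 10GbE and GbE clients), not numbers from the thread:

```python
# Back-of-the-envelope: how much could all clients pull at once?
# Assumptions: each 10GbE workstation can read at most 1250 MB/s
# (10000 Mbps / 8), each GbE render slave at most 125 MB/s.

workstations, slaves = 8, 30
ws_peak, slave_peak = 1250, 125  # MB/s per client at line rate

worst_case = workstations * ws_peak + slaves * slave_peak
print(worst_case)  # 13750 MB/s if literally everyone reads flat-out
```

In practice clients never all hit line rate simultaneously, which is why a server sized well below the theoretical worst case (e.g. 3000 MB/s) can still feel fast.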

Our budget is approx € 25.000,-

Do you have any thoughts on this? Any advice is appreciated.


Hi Bonsak,

In regards to your question about fibre or 10Gb Ethernet, I think it comes down to a balance of cost, availability, and various performance differences. Fibre allows for longer runs and lower latency without worry of electromagnetic interference, but copper seems to be dropping in price, and the cabling can be easier to work with. They’ll both provide you with 10Gbps speeds, so either should get the job done.

As far as file servers go, you’d have to take into account that even though you can get 3000 Mbps in a single sequential download, when many machines are making different requests, that performance can still change significantly. To address this, you’ll generally want arrays with plenty of drives. I’ve dealt with a FreeBSD server running a fairly standard RAID5 array of 24 standard 7200RPM drives, and in a performance test with Bonnie++, it can provide sequential download speeds of about 4000 Mbps (note that this is bits, not bytes), and sequential uploads just under 1000 Mbps.
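Since the bits-vs-bytes distinction trips people up, here’s a quick conversion sketch using the figures above (the helper function is mine, just for illustration):

```python
# Network gear is usually rated in megabits per second (Mbps),
# while disk benchmarks often report megabytes per second (MB/s).

def mbps_to_mb_per_s(mbps: float) -> float:
    """Convert megabits/s to megabytes/s (8 bits per byte)."""
    return mbps / 8.0

# The FreeBSD RAID5 example above: ~4000 Mbps sequential reads.
print(mbps_to_mb_per_s(4000))   # 500.0 MB/s
# A single 10GbE link tops out at 10000 Mbps line rate.
print(mbps_to_mb_per_s(10000))  # 1250.0 MB/s
```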

One thing to think about is that if you used ZFS on FreeBSD, for example, you could take advantage of SSD caches for either read or write performance. If you use it as a read cache (called L2ARC), it would cache recently read data so it can be read much faster the next time around. If a bunch of machines tend to use the same few datasets, this can greatly improve read performance. The write log (called the ZIL) works the other way: it quickly absorbs incoming writes to fast storage, which are then flushed to the actual array disks afterwards.
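For reference, attaching those SSD caches in ZFS is a one-liner each. A sketch assuming an existing pool named `tank` and hypothetical device names (adjust both to your system):

```shell
# Add an SSD as an L2ARC read cache to a pool named "tank"
zpool add tank cache /dev/ada2

# Add a mirrored pair of SSDs as a dedicated log device for the ZIL
zpool add tank log mirror /dev/ada3 /dev/ada4

# Verify the new cache and log vdevs appear in the pool layout
zpool status tank
```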

These are a few things to look into anyway. Note that we are not a hardware company, so this is just advice from my experience - take it for what it’s worth.


Thanks. That’s valuable information.
We’ve decided to go for 10GbE. Fibre is expensive! :slight_smile:
We’re looking at 12-drive RAIDs with 1 TB SSDs. That config will give us ca. 3000 MB/s and ca. 10 TB of storage.
We’ll probably go for a Netgear 10GbE switch and put the slaves on a Netgear 1GbE switch with a 10GbE uplink to the main switch. When we replace the renderfarm we’ll probably put the slaves on 10GbE.
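For anyone following along, the ca. 10 TB usable figure is consistent with running the 12 SSDs in something like RAID6, where two drives’ worth of capacity goes to parity (an assumption on my part; the post doesn’t specify the RAID level):

```python
def raid6_usable_tb(drives: int, drive_size_tb: float) -> float:
    """Usable capacity of a RAID6 array: two drives go to parity."""
    return (drives - 2) * drive_size_tb

print(raid6_usable_tb(12, 1.0))  # 10.0 TB usable from 12 x 1 TB drives
```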


That sounds great! Using all SSDs on the server should definitely be a game changer in the random read/write department. I’d be interested in hearing how that works out for you…

I can’t recommend enough for you to join the SSA group if you need to discuss any technical issues regarding a CG / VFX studio pipeline and everything that goes along with it. This group really is a who’s who of all the technical giants in every major studio you probably care to mention! :slight_smile:
Mailing board available here, or sign up to receive the posts as emails.
Just searching through their mailing list is a bit like a career’s worth of tech ‘gold’ tips’n’tricks :slight_smile:


That’s awesome, Mike! Never heard of that list before. Looks very cool.

