Normal amount of monthly bandwidth per Slave?

Just taking a gander at our AWS bill for the month and our Repository sent 281GB of data last month. Submissions are all saved to our local NAS and our repo should just be for job scheduling. Is 281GB of scheduling data normal?

$25/month in data transfer fees seems a bit high for 15-odd slaves.

That’s a great question. I’m not sure what metrics we’ve pulled for that, but I think that’s a great place to put some efficiency goals.

Just so we can align with the testing here, what plugins do you use most and how are you measuring your throughput?

Measuring throughput from our AWS bill. :slight_smile: We’re just running 3ds Max jobs and using the Monitor. It looks like that month’s 281 GB covered a mere 1,200 frames. That seems crazy high. That’s literally more than the OUTPUT of those 10 jobs.

So the only thing it could be is the Monitor, which looks like it’s sucking down about 0.1 to 5 Mbps just sitting there. Even just 0.1 Mbps works out to about 1 GB/day, or roughly 32 GB over 30 days. And that’s with 10-second job and 15-second slave update intervals in the Monitor config.

EDIT: looking at the “Connection Server” stats, it’s pinging between 2 KB/s and 183 KB/s.
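For the record, here’s the back-of-envelope math as plain Python (nothing Deadline-specific; the rates are just the numbers quoted above, so treat this as a sanity check, not a measurement):

```python
def monthly_gb(rate_kb_per_s: float) -> float:
    """GB transferred over 30 days at a sustained rate given in KB/s."""
    # KB/s * seconds/day * days, then KB -> GB
    return rate_kb_per_s * 86_400 * 30 / 1_000_000

# 0.1 Mbps = 12.5 KB/s (idle Monitor estimate)
for label, kbps in [
    ("0.1 Mbps idle Monitor", 12.5),
    ("2 KB/s (observed low)", 2),
    ("183 KB/s (observed high)", 183),
]:
    print(f"{label}: ~{monthly_gb(kbps):.1f} GB/month")
```

So an idle Monitor at 0.1 Mbps is ~32 GB/month on its own, and the observed 2–183 KB/s range brackets roughly 5–475 GB/month, which easily spans the 281 GB on the bill.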

Thanks for the details! One of our guys does have some ideas on testing there, I just have to read up on it.

Are you running the Monitor up on AWS? It’s always been kind of bandwidth-hungry, as it streams most of the changes from the database (Slaves, jobs, and the other doodads that live in panels). The RCS has a secret caching proxy living on the Gateway, which should make file requests to the Repo a little more efficient.