Frame List "calculator" implementation

Hey:

Going through the submission process day in and day out, I found one thing that would definitely speed up my workflow. I am constantly pulling out the calculator to manually divide each set of frames in the frame list by the number of machines, and it often takes me 20 seconds or more. It would be very helpful if you had a “NUMBER of MACHINES” calculator button that automatically divided this up and printed the result in the task group size. Over the course of a week this would save me at least 30 minutes. Just thought I would throw that out there; not sure if anyone else agrees, but it would definitely help me out.

-Tony
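
For what it's worth, the arithmetic being requested here is just a ceiling division of the frame count by the machine count. Below is a minimal Python sketch of that calculation; the helper name and the clamp to a minimum of one frame per task are illustrative assumptions, not anything Deadline ships with:

```python
import math

def chunk_size(num_frames: int, num_machines: int) -> int:
    """Hypothetical helper: split a frame range evenly across machines.

    Returns a task group (chunk) size such that each machine gets
    at most one chunk of the job.
    """
    if num_frames < 1 or num_machines < 1:
        raise ValueError("frame and machine counts must be positive")
    # Ceiling division so any leftover frames still land in a chunk.
    return max(1, math.ceil(num_frames / num_machines))

# Example: a 240-frame job on 16 machines -> 15 frames per task.
print(chunk_size(240, 16))  # 15
```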

It would be very helpful to know what software you are using Deadline with, and whether you want this in the Monitor submission scripts or the built-in scripts (e.g. Max, Maya, XSI, etc.).

We currently use Max, AE, and Cinema 4D. It seems like all of these have the “frame list” or chunk size that you have to enter manually. In fact, when I look at other submission scripts, it seems this is generally the case: you have to enter the chunk size by hand. I just know that when I’m going through big batches of renders it would save me time, that’s all. Thanks for the quick reply.

It isn’t common practice to set the chunk size to (# of Frames) / (# of Machines), and while I understand that this splits one job evenly across all machines, there are some potential issues that you should be aware of:

  1. If one of your machines fails on a frame in the middle of the chunk, that entire chunk will have to be re-rendered.
  2. If you have a section of the sequence that takes longer to render because something is flying by the camera, then the load won’t be distributed evenly. One slave might be stuck on a chunk longer than the others because those frames take longer to render, so at the end, only one slave is working and the rest are sitting idle.

The main use for chunking tasks is to reduce the amount of overhead between tasks, but this overhead is really only noticeable if your frames render very fast (i.e., a few seconds to a couple of minutes). Anything longer than that should probably be left at one frame per task. This will help balance the load better and reduce wasted time if a frame fails to render.
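
To make point 2 above concrete, here is a toy simulation (all of the numbers are invented for illustration): it hands chunks to whichever machine frees up first and compares the wall-clock time of machine-sized chunks against single-frame tasks when a stretch of the sequence renders ten times slower.

```python
import math

def makespan(frame_times, chunk, machines):
    """Toy scheduler: greedily assign each chunk of consecutive
    frames to the least-loaded machine and return the wall-clock
    time until the last machine finishes. Ignores per-task overhead.
    """
    tasks = [sum(frame_times[i:i + chunk])
             for i in range(0, len(frame_times), chunk)]
    loads = [0.0] * machines
    for t in tasks:
        loads[loads.index(min(loads))] += t
    return max(loads)

# 100 frames at 1 minute each, except frames 40-59 take 10 minutes
# (something flying by the camera).
times = [1.0] * 100
for i in range(40, 60):
    times[i] = 10.0

machines = 10
even = math.ceil(len(times) / machines)   # 10 frames per chunk
print(makespan(times, even, machines))    # 100.0 -- one slave stuck on a slow chunk
print(makespan(times, 1, machines))       # 28.0  -- single frames balance out
```

In this toy model, even a modestly smaller chunk size cuts the worst case substantially, which is the trade-off behind leaving fast-rendering sequences at one frame per task.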

Cheers,

- Ryan

Thanks for the quick reply. I did run into the problem you are talking about: the chunks were evenly dispersed, one machine stalled out, and I was stuck having to re-queue that chunk as a separate job.

I was just under the impression that every time a new chunk is fired off, the program is re-launched on the slave side. Doesn’t this re-launching of the program slow down the overall render? This is why I try to minimize the number of chunks per job, but I can see how keeping the chunks small means less re-render time if a slave stalls out.

Do you think it would be helpful to have a few more options rather than one button for (# of Frames) / (# of Machines), though? Maybe some sort of divisional calculator that can split the frames into a few more chunks than an exactly even distribution? The number it prints would be a good starting point that could still be modified by the user.

I think it could be great as a starting point, at least.
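
A hedged sketch of what such a “divisional calculator” might compute: the split factor, its default value, and the function name are assumptions for illustration, not anything Deadline ships with. The result is only a suggested starting point the user could still edit:

```python
import math

def suggested_chunk(num_frames: int, num_machines: int,
                    splits_per_machine: int = 4) -> int:
    """Suggest a chunk size giving each machine several chunks.

    splits_per_machine > 1 trades a little extra per-task overhead
    for better load balancing and smaller re-renders on a stall.
    """
    target_tasks = num_machines * splits_per_machine
    return max(1, math.ceil(num_frames / target_tasks))

# 240 frames on 16 machines, aiming for 4 chunks per machine
# -> 4 frames per task (64 tasks) instead of 15 frames per task.
print(suggested_chunk(240, 16))  # 4
```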

It depends on the application. With 3dsmax, we keep the scene loaded between frames, which almost eliminates that overhead. However, After Effects and C4D don’t support this, so they have to get re-launched for each task. The trick is to find the right balance, because 10 seconds of overhead per task doesn’t look so bad when compared to minute or hours of lost rendering time because a slave stalled out.
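
As a quick worked comparison of that balance (all of these numbers are illustrative assumptions, not measurements):

```python
# Illustrative numbers: 240 frames, 10 s of launch overhead per task,
# 5 minutes of render time per frame, and one slave stalling on a
# 30-frame chunk near the end of it.
frames = 240
overhead_per_task = 10 / 60   # minutes of app re-launch per task
frame_time = 5.0              # minutes of render time per frame

# One frame per task: the overhead, summed over the whole job.
total_overhead = frames * overhead_per_task   # 40 machine-minutes
# Machine-sized chunks: a stall throws away the whole chunk.
lost_rerender = 30 * frame_time               # 150 minutes to re-render

print(f"1 frame/task overhead: {total_overhead:.0f} machine-minutes")
print(f"re-rendering one failed 30-frame chunk: {lost_rerender:.0f} minutes")
```

The 40 machine-minutes of overhead is spread across the whole farm, while the 150-minute re-render lands on a single slave and a single chunk.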

We could consider this for a future release. If you have any scripting experience, you could always modify the out-of-the-box submission scripts that come with Deadline to add this functionality yourself. If this is something you’re interested in and you need a hand getting started, just let us know.

Cheers,

- Ryan