I’ve been testing After Effects CC 2014 on Deadline 7.1.0.35 R (2a6ca6695).
We’ve been sending comps to Deadline with “Multiprocess Rendering” turned on in AE, and also checked when submitting via the SAETD dialog. In addition to “Multiprocess Rendering”, we’ve been setting concurrent tasks to the render node’s number of CPUs.
It is worth mentioning that our studio primarily renders from 3ds Max through Deadline. We have about 100 machines, and our Repository settings are skewed toward that environment, and include things like:
- Throttling - limiting the repository so that only 1/3 or 1/4 of the farm machines can pull the same job files at the same time.
- Failure Detection - including limits that fail a task after a number of errors, and limits that fail the whole job after a number of errors
The Problem:
Every job we send generates a ton of “aerender Error: After Effects error: Error (4) reading frame from file” errors, which seem directly tied to particular footage files (mostly ProRes .mov files).
The job will render - we just have to keep monitoring the job and clearing errors.
The Question:
- For Throttling - does the repository count the number of machines asking for information toward the throttling limit, or the total number of cores across the machines asking for the information?
So if you submitted an AE render to 2 machines, with concurrent tasks set to 12, does the repository/throttling limit read that as 2 machines, or 24 machines?
- Has anyone else had this problem or could suggest an AE render workflow that leverages machines with high CPU counts (most of our machines are 24 core), rather than just rendering one frame per one machine?
Thanks!
Throttling seems to go by the number of slave instances, so if you run 5 instances of the slave, each counts toward that limit. If you ran concurrent tasks, though, throttling likely wouldn’t count anything beyond the first task.
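A rough sketch of the difference, assuming the behavior described above (throttling counts slave instances only); the function name and numbers are hypothetical, not Deadline’s actual code:

```python
# Hypothetical illustration: what the repository throttle "sees"
# depending on whether it counts slave instances or concurrent tasks.

def throttle_count(machines, slaves_per_machine, concurrent_tasks,
                   count_tasks=False):
    """Number of slots the throttle would count.

    count_tasks=False models the described behavior: only slave
    instances count, so concurrent tasks beyond the first per slave
    are invisible to throttling.
    """
    instances = machines * slaves_per_machine
    return instances * (concurrent_tasks if count_tasks else 1)

# 2 machines, 1 slave each, 12 concurrent tasks:
print(throttle_count(2, 1, 12))                     # -> 2 (what throttling counts)
print(throttle_count(2, 1, 12, count_tasks=True))   # -> 24 (actual simultaneous readers)
```

So a throttle limit set with whole machines in mind can be far looser than intended once concurrent tasks multiply the real number of file readers.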
Have you looked at the multi process rendering option in the submitter?
ok - so slave instances = throttling number. got it.
Can you elaborate on “Have you looked at the multi process rendering option in the submitter?” and how you would use this for troubleshooting?
Right now we have Multiprocessing turned on in AE’s preferences, and enabled in the Submit AE to Deadline Dialog.
The Slave panel indicates that more than one CPU is used when submitting with multiprocessing turned on and concurrent tasks enabled. My question is: if throttling counts the number of slave applications running, that would seem to eliminate throttling as the cause of the read errors. If that’s true, what would be the next logical step in troubleshooting?
Thanks!
So it sounds like you already have multiprocessing enabled, which rules that out.
I think we would want to verify that traffic on the network isn’t causing the read issues. Do you have any way to check how many requests the server is getting while these jobs run? If each machine makes 24 requests for the same file, and they are all looking for a number of files, that could lead to significant issues. Is your throttling set to a low enough number to actually have an effect in this case? How big are your ProRes files? Could they be congesting your network? I’m still hopeful that one of our users with AE experience will pop in and offer advice, as we here in support have limited experience with it.
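To put rough numbers on the congestion concern: every render process re-reads source footage over the network, and the demand multiplies fast. The figures below are assumptions for illustration, not measurements from this farm:

```python
# Back-of-the-envelope network estimate (hypothetical numbers).

GIGABIT_MB_S = 125.0  # theoretical max of a 1 Gb/s link in MB/s

def aggregate_read_mb_s(machines, readers_per_machine, mb_s_per_reader):
    """Total read demand all render processes place on the file server."""
    return machines * readers_per_machine * mb_s_per_reader

# Assume 10 machines each running 24 reader processes, each pulling
# ~20 MB/s of ProRes footage:
demand = aggregate_read_mb_s(10, 24, 20.0)
print(demand)                  # -> 4800.0 MB/s of aggregate demand
print(demand / GIGABIT_MB_S)   # -> 38.4, i.e. ~38x one gigabit link
```

Even generous server hardware can stall under that kind of fan-out, which is consistent with intermittent frame-read errors rather than hard failures.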
I’m with Dwight on the network bandwidth issue, but also realize that each concurrent task is one copy of AE. Multi-process rendering will also start one AERenderCore per core on the machine, so you’re creating WAY too many instances of AERenderCore there. I usually recommend concurrent tasks or multi-processing, not both.
Just for fun, try flipping the two options: one test with concurrent tasks higher than 1 and multi-processing off, then one with multi-processing on and concurrent tasks set to 1.