This is really odd… I noticed that a job is taking way longer to render than it should, and looking at the logs, it seems like each task was rendered multiple times:
Hmmm, I don’t think we changed anything in our requeued task detection…
The slaves definitely check one last time to see if the Task they've been working on has been requeued before completing it (roughly like the sketch below). What's your stalled-slave detection window, and your Slave info update interval?
Given that this happened three times on a single job, I don’t think it would have been stalled slave detection, but you never know… And there were no requeue reports, right?
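To be clear about the kind of check I mean, here's a rough sketch of the idea. This is purely illustrative, not Deadline's actual source or API; the names `TaskState`, `status`, `assigned_worker`, and `should_complete` are made up for the example:

```python
# Illustrative sketch only -- not Deadline's actual source or API.
# Idea: before a worker marks its task complete, it re-checks the task's
# current state in the repository. If the task was requeued (or handed to
# another worker) in the meantime, the worker abandons its result.

from dataclasses import dataclass

@dataclass
class TaskState:
    status: str            # e.g. "Rendering", "Queued", "Completed"
    assigned_worker: str   # name of the worker the task is assigned to

def should_complete(task: TaskState, worker_name: str) -> bool:
    """Return True only if this worker still owns the task."""
    return task.status == "Rendering" and task.assigned_worker == worker_name

# Example: the task was requeued while this worker was rendering it,
# so the worker should not report it as complete.
print(should_complete(TaskState("Queued", ""), "render-node-01"))  # False
```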
The machines taking part in the rendering are all running 7.0.1.1. The entire farm isn't updated yet (~99% of it is), but all the other slaves either have no pools or are disabled.
No requeue reports at all. For this job it happened on every task 2-4 times:
Stalled slave detection is set to 15 minutes (these frames finish in around 5-6 minutes on average).
Slave info update is set to the default 20 seconds.
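Just to sanity-check those numbers on our end (the variable names below are only for illustration), stalled-slave detection really shouldn't be firing here:

```python
# Rough sanity check of the timings above (names are illustrative only).
stalled_slave_window_s = 15 * 60   # stalled slave detection: 15 minutes
avg_frame_time_s = 6 * 60          # frames average roughly 5-6 minutes
info_update_interval_s = 20        # slave info update interval: 20 seconds

# A slave should only be flagged as stalled if it goes the full detection
# window without updating its info. With updates every 20 seconds and
# frames finishing well inside 15 minutes, a healthy slave never gets
# close to the window, so stalled-slave detection can't explain requeues
# on every task.
updates_per_frame = avg_frame_time_s // info_update_interval_s
print(f"~{updates_per_frame} info updates per frame; "
      f"frame time {avg_frame_time_s}s << window {stalled_slave_window_s}s")
```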
There was a bug in 7.0.1.1 that causes this behavior, and it’s been fixed in 7.0.1.3 (the current release, which was made available on December 30th). Upgrading to this newer version should solve the problem.
I know it’s a PITA, especially since you’ve already upgraded 99% of the farm, but once they’re all running this new version (including Pulse), you should be good.