
restart stalled slaves

Is it possible to disable this feature per machine? We need it ON for our farm, but not for our workstations that render overnight… we keep running into an issue where artists simply starting the Deadline Monitor causes the Slave to start up, because their machine is sometimes marked as stalled due to forced exits. The Slave starting on the machine triggers cleanup actions, such as killing active Max/Nuke processes, to provide a clean environment for the Slave.

Obviously, this is causing some friction… It would be great if we could disable the restart stalled slaves feature per machine.

Could you use Pulse Auto-Configuration and set up a rule just for your render nodes?
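
Conceptually, the rule would boil down to something like this (the hostname patterns and the setting name below are just placeholders to show the idea; in practice you would set this up in the Monitor's auto configuration dialog rather than in code):

    import fnmatch
    import socket

    AUTO_CONFIG_RULES = [
        # (hostname pattern, settings applied to matching machines)
        ("rnd-*", {"restart_stalled_slaves": True}),   # render nodes
        ("ws-*",  {"restart_stalled_slaves": False}),  # artist workstations
    ]

    def settings_for(hostname):
        """Return the overrides from the first rule matching this machine."""
        for pattern, overrides in AUTO_CONFIG_RULES:
            if fnmatch.fnmatch(hostname, pattern):
                return overrides
        return {}

    print(settings_for(socket.gethostname()))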

Sounds like exactly what we need! Thanks

It would be nice if the auto config settings related to slave startup were refreshed by the launcher periodically. That way, we wouldn't have to start the slave just to pick up the setting that stops it from auto-starting :slight_smile:

Yeah, we definitely want to refactor the auto config system for issues like this; we're just not sure where it fits in the roadmap yet. Another example is auto-configuring the repository root on a workstation: the user would have to run the slave before running their monitor to pick up the auto config.

Yeah, good point… Sounds like the launcher might just need to update periodically?

Or actually, when the auto config settings are changed, that could mark the collection as changed, and each launcher could update when it detects that its local cache of that collection is out of date. Or something Mongo. hah
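
Something like this rough sketch of the launcher side (database, collection, and field names are made up for illustration, not the actual Deadline schema):

    import time
    from pymongo import MongoClient

    client = MongoClient("mongodb://repository-db:27017")
    settings = client["deadline_example"]["auto_config_settings"]

    def write_settings(new_settings):
        """Called when auto config rules are edited: bump a version marker."""
        settings.update_one(
            {"_id": "auto_config"},
            {"$set": {"settings": new_settings}, "$inc": {"version": 1}},
            upsert=True,
        )

    def launcher_poll_loop(apply_auto_config, interval_seconds=60):
        """Launcher-side loop: re-fetch only when the version has moved on."""
        cached_version = None
        while True:
            doc = settings.find_one({"_id": "auto_config"}, {"version": 1})
            if doc and doc.get("version") != cached_version:
                full = settings.find_one({"_id": "auto_config"})
                cached_version = full["version"]
                apply_auto_config(full["settings"])  # e.g. toggle slave auto-start
            time.sleep(interval_seconds)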
