We’re noticing that when a Limit’s stubs are all in use, an AWS job never gets picked up, even though it has a higher priority. Since the Spot Fleet machines don’t exist until the job starts, the job never reserves a limit stub, so the fleet never launches and lower-priority local jobs keep holding the stubs.
The painful workaround is to suspend all locally running jobs, wait for the Spot Fleet to pick up the AWS job, and then resume the local jobs. This is obviously untenable.
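For reference, this is roughly what the workaround looks like scripted via `deadlinecommand`. It’s a sketch, not something we’d want to keep running: the `-SuspendJob`/`-ResumeJob` flags are assumed to behave as in the Deadline Command docs, `JOB_IDS` is a placeholder you’d fill in yourself, and the wait time is arbitrary.

```shell
#!/bin/sh
# DRY_RUN defaults to on, so this sketch only prints what it would do;
# set DRY_RUN= (empty) to actually invoke deadlinecommand.
DRY_RUN="${DRY_RUN:-1}"

# Hypothetical job IDs -- substitute the IDs of your locally running jobs.
JOB_IDS="jobid1 jobid2"

run() {
    if [ -n "$DRY_RUN" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Suspend the local jobs so their limit stubs are released.
for id in $JOB_IDS; do
    run deadlinecommand -SuspendJob "$id"
done

# Wait for the Spot Fleet Workers to come up and dequeue the AWS job.
# 300 seconds is an arbitrary placeholder.
run sleep 300

# Resume the local jobs; they re-acquire stubs as they free up.
for id in $JOB_IDS; do
    run deadlinecommand -ResumeJob "$id"
done
```

Even automated, this still babysits the farm instead of letting the scheduler honor priority, which is why we’re hoping there’s a proper fix.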
Maybe this has been fixed? I’m on:
Deadline Client Version: 10.1.13.2 Release (4c7391f76)
Repository Version: 10.1.13.1 (4c7391f76)
Any suggestions are appreciated. Upgrading the repository in the middle of a delivery is a bit tricky.