deadline 7.2.0.7
win 7 x64
google cloud provider
I have several instances running, and showing in my cloud panel. I select them all, click ‘destroy instances’ and…nothing.
They are still there.
-ctj
select all, ‘stop instances’ stops them. Once they are stopped and terminated, I can destroy them. They are destroyed.
Now I’m waiting for them to disappear from the slave list, because they show up gray and offline there.
They go away.
BUT WAIT
THEY"RE BACK!
As soon as they are gone from the slave list, another ten VMs spin up, and are now sitting idle.
Thoughts?
-ctj
now, I select all instances in the cloud panel, right-click, select ‘stop instances’, and the bottom instance on the list stops. The rest continue to live.
Christopher,
Are you launching the instances manually through the Cloud Panel, or are you using Balancer? If it’s Balancer, then killing them manually will just cause it to create more if it determines there is a need for them.
that makes sense. however, the reason I was killing them, besides being evil, was that there were no jobs running and it shouldn’t have spun them up to begin with.
That is odd. The Balancer algorithm is fairly conservative about starting VMs. I suggest watching the Balancer Log for a few cycles to see if any errors are showing up or any other messages that might offer clues. If that doesn’t reveal anything, we can set up a remote session to see what’s going on.
found it.
We had a couple of jobs queued with the wrong pool assigned, but the right group. The ‘all’ pool, for obvious reasons, doesn’t include cloud rendering, but the ‘cloud’ pool does, and ‘cloudgroup’ is indeed a cloud group. So the Balancer sees the ‘cloudgroup’ assignment and spins up instances, while Deadline doesn’t see the ‘all’ pool as including cloud instances, so it never starts the renders. Net result: jobs queued on the farm, not rendering, and instances sitting idle, burning cash.
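To make the trap concrete, here’s a throwaway sketch of the logic in plain Python (not Deadline code; the job records and names are just the ones from this thread):

# Illustrative only -- shows why the two systems disagree: Balancer keys
# off Group, the scheduler keys off Pool.
jobs = [
    {"name": "shot_a", "pool": "all",   "group": "cloudgroup"},  # the bad ones
    {"name": "shot_b", "pool": "cloud", "group": "cloudgroup"},  # correct
]

CLOUD_GROUPS = {"cloudgroup"}  # groups mapped to VM images via Group Mapping
CLOUD_POOLS = {"cloud"}        # pools that actually contain cloud slaves

for job in jobs:
    starts_vms = job["group"] in CLOUD_GROUPS    # what Balancer looks at
    renders_on_vms = job["pool"] in CLOUD_POOLS  # what the scheduler looks at
    if starts_vms and not renders_on_vms:
        print(job["name"], "spins up instances it can never render on")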
Can we configure the balancer to NOT spin up instances when the pool does not include VMs?
-ctj
Pools are meant more as a priority mechanism, whereas Groups are meant to specify machine configuration (which translates to hardware and VM image on the cloud via the Group Mapping). So the short answer is that the Default Balancer Algorithm does not use Pools in any way to determine instance count targets.
I was about to suggest that you might copy the default algorithm and modify it to consider Pool assignment in its target weighting. This would require inspecting all Group Mappings for a Region to see if any mappings include assignments to the Pool in question. I’m not sure this is worth the trouble.
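Very roughly, the change would look something like the sketch below (placeholder names only, not the actual Balancer plugin API; a real version would be a modified copy of the default algorithm script):

# Placeholder sketch -- these names are NOT the real Balancer plugin API.
GROUP_MAPPINGS = {"cloudgroup"}  # groups with a VM image mapped for the region
CLOUD_POOLS = {"cloud"}          # pools that include cloud slaves (assumption)

def counts_toward_target(job):
    """Let a job raise the instance target only if its Group maps to a VM
    image AND its Pool can actually schedule work onto those VMs."""
    return job["group"] in GROUP_MAPPINGS and job["pool"] in CLOUD_POOLS

def instance_target(queued_jobs, tasks_per_instance=10):
    # The default algorithm weights on queued task counts; this version just
    # skips jobs whose Pool can never reach the cloud slaves.
    tasks = sum(j["tasks"] for j in queued_jobs if counts_toward_target(j))
    return -(-tasks // tasks_per_instance)  # ceiling division

print(instance_target([
    {"group": "cloudgroup", "pool": "all",   "tasks": 100},  # ignored
    {"group": "cloudgroup", "pool": "cloud", "tasks": 25},   # counted -> 3 VMs
]))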
Nice. I’ll look into that. I think the solution I’m going to go for is a “cloud submit” button for the users. Then I will stop trying to serve two masters with my submit script.
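Something like this rough sketch, assuming deadlinecommand is on the PATH (the plugin name and plugin settings are placeholders for whatever the real submit script writes):

# Sketch of the "cloud submit" idea: pin Pool and Group together so
# Balancer and the scheduler agree about where the job can run.
import subprocess
import tempfile

def cloud_submit(job_name, frames):
    job_info = "\n".join([
        "Plugin=MayaBatch",   # placeholder plugin
        "Name=" + job_name,
        "Frames=" + frames,
        "Pool=cloud",         # the pool that contains the cloud slaves
        "Group=cloudgroup",   # the group mapped to a VM image
    ])
    plugin_info = "Version=2016"  # placeholder plugin settings

    with tempfile.NamedTemporaryFile("w", suffix=".job", delete=False) as ji:
        ji.write(job_info)
    with tempfile.NamedTemporaryFile("w", suffix=".job", delete=False) as pi:
        pi.write(plugin_info)

    # deadlinecommand submits a job from a job info / plugin info file pair.
    subprocess.check_call(["deadlinecommand", ji.name, pi.name])

cloud_submit("shot_010_cloud", "1-100")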
Thx!