Disabled slaves registered as Idle in WebService?

Please note: I haven’t updated to the latest beta yet, but I didn’t come across this bug in the release notes.

Currently I have this situation in my farm:
2 slaves rendering
1 slave idle
1 slave stalled
1 slave disabled
5 slaves offline

When I request the Pulse:8080/GetFarmStatisticsEx overview, I get back the following:
2 slaves rendering
2 slaves idle
1 slave stalled
0 slaves disabled
5 slaves offline

How can it be that the disabled slave is counted among the idle machines?
In the monitor itself it is listed as disabled, not idle.
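For what it’s worth, the mismatch shows up if I tally the states straight from the GetSlaves response myself (a rough sketch; the list-of-dicts shape with a "SlaveState" key is an assumption based on what my setup returns):

```python
from collections import Counter

def slave_state_counts(slaves):
    """Tally "SlaveState" values from a GetSlaves-style response
    (a list of per-slave dicts)."""
    return Counter(s["SlaveState"] for s in slaves)

# Tiny demo with the states from my farm above:
sample = (["Rendering"] * 2 + ["Idle"] + ["Stalled"]
          + ["Disabled"] + ["Offline"] * 5)
print(slave_state_counts([{"SlaveState": st} for st in sample]))
# Disabled is reported separately here, not folded into Idle.
```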

Thanks for reporting this! Attached are the fixed scripts. Just unzip the attached file to \your\repository\scripts\WebService.

This will also be fixed in the next beta release.

Cheers,
Ryan
GetFarmStatisticsEx.zip (2.67 KB)

That did indeed fix it…
but I seem to have run into another “issue”.

Situation: a slave stalls while rendering a job and is afterwards set to “offline”.
Odd result: when requesting the GetSlaves information, that particular slave still has the job’s ID assigned to its “CurrentJobId” value.
This seems a bit misleading, as the slave is stalled/offline and thus no longer working on a job.

This makes it more difficult to get a list of slaves that are actively rendering a specific job…

Is this by design ?

I’m looking to create an overview of the CPU & RAM usage per job and compare this to the totals for the farm.
The problem is that in the current situation there is not only a ghost “active” machine on the job; its last reported values also stick around.
Thus non-existent CPU and RAM usage gets added to the job.

If it is by design, could the last statistic values at least be reset?
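Roughly what I’m trying to compute per job, and why the stale values hurt (a minimal sketch; the slave dicts follow the GetSlaves fields, and the "MemoryUsage" key name is my assumption):

```python
def job_usage(slaves, job_id):
    """Sum CPU and RAM usage over all slaves attached to a job.

    Because stalled/offline slaves keep their last CurrentJobId and
    statistics, this naive version over-counts -- exactly the
    ghost-machine problem described above.
    """
    cpu = ram = 0.0
    for s in slaves:
        if s["CurrentJobId"] == job_id:
            cpu += float(s["CPUUsage"])
            ram += float(s["MemoryUsage"])  # key name assumed
    return cpu, ram
```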

Are you thinking that when you manually mark a slave as offline, that it just clears out the job information? That seems to make sense to me, and it sounds like it should help your problem here. What do you think?

I was indeed thinking that when a slave gets stalled and/or is manually set to offline, all job information would be removed, as it’s not actually rendering that job anymore.

When it’s stalled though, it can be useful to know what job that slave was working on when it stalled.

Could you not just check the state of the slave in your code before pulling its job information?

I’m guessing it’s then best to limit it to those in a rendering state, as the stalled nodes can also be offline…

—edit…fixed it :

[code]
for i in range(len(slaves)):
    if slaves[i]["CurrentJobId"] == jobInfo["ID"] and slaves[i]["SlaveState"] == "Rendering":
        SlavesOnJob += 1
        CPUs = slaves[i]["CPUUsage"]
        JobUsedCPU += float(CPUs)
[/code]

I now also notice that the offline slaves are still listed with some amount of RAM being used.

–edit-- also fixed that.
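For anyone hitting the same thing, the gist of both fixes is to gate every statistic on the slave actually being in the Rendering state (a sketch only; the "MemoryUsage" key name is my assumption, the other fields follow the GetSlaves keys used above):

```python
def active_job_usage(slaves, job_id):
    """Sum CPU/RAM only over slaves actively rendering job_id,
    so stalled/offline slaves carrying stale stats are excluded."""
    count, cpu, ram = 0, 0.0, 0.0
    for s in slaves:
        if s["CurrentJobId"] == job_id and s["SlaveState"] == "Rendering":
            count += 1
            cpu += float(s["CPUUsage"])
            ram += float(s["MemoryUsage"])  # key name assumed
    return count, cpu, ram
```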