We noticed several jobs that don't respect their limits:
In these cases, that's really bad, as they are consecutive frame sims…
This issue, combined with the one where the jobs would not render at all (which also seemed related to machine limits somehow), tells me that there is something really fishy going on with the limit stubs.
Yeah, this is bad. We’ve logged it as a bug and will look into it.
If possible, could you send us a dump of the job’s machine limit? You can use the instructions here:
viewtopic.php?f=86&t=10801&start=10#p47064
Thanks!
Tried, but I'm getting this response:
[root@deadline bin]# ./mongoexport -d deadline6db -c LimitGroups -q {'_id':'529e3732162dfe156c61489c'} > /var/log/limitGroups.txt
connected to: 127.0.0.1
assertion: 16619 code FailedToParse: FailedToParse: Value cannot fit in double: offset:5
Typo somewhere?
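For what it's worth, that error suggests the shell stripped the single quotes out of the query, so mongoexport saw {_id:529e3732162dfe156c61489c} and tried to parse the id as a number (hence "Value cannot fit in double"). A likely fix, assuming the _id really is stored as a plain string as in the original query, is to wrap the whole query in single quotes for the shell and use double quotes inside it:

./mongoexport -d deadline6db -c LimitGroups -q '{"_id":"529e3732162dfe156c61489c"}' > /var/log/limitGroups.txt

If the _id turns out to be stored as an ObjectId rather than a string, the extended JSON form -q '{"_id":{"$oid":"529e3732162dfe156c61489c"}}' would be needed instead.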
In limitGroups.txt it looks as if only lapro0523 was rendering the job, when lapro1260 is also on it…
Hmmm…
What happens if a machine is marked as stalled when it's not? Would the limit group stubs be returned for the job? But then it carries on rendering?
Yeah, the limit stub would eventually get returned as part of housecleaning. I wonder if the false positive on a stalled slave could be the result of your database being overwhelmed by too many connected Monitors. We'll still be looking into the limit problem, but it sounds like a lot of the issues you posted about last night were directly related to the Monitor problem.
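One way to cross-check what the Monitor shows against the database is to dump the limit document straight from the mongo shell, assuming the mongo binary sits alongside mongoexport in the same bin directory; this just prints the whole document rather than assuming any particular field names, so you can see for yourself which machines appear to be holding stubs:

./mongo deadline6db --eval 'printjson(db.LimitGroups.findOne({"_id":"529e3732162dfe156c61489c"}))'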