Job not starting despite higher priority

Hi,

Our farm has 10 machines/slaves, 5 of which belong to pool A and the other 5 to pool B.
I’m currently encountering the following problem:

  • Job 1 is currently rendering and it has primary pool A and secondary pool B. It is currently rendering on all 10 machines.
  • Another user submitted Job 2 which has primary pool B and secondary pool A.
  • Our scheduling is set to “Pool, Priority, First In First Out”
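
For reference, the two jobs’ submission parameters look roughly like this (a sketch in Deadline’s plain-text job info format; the pool names and priority values are made up):

    Job 1 (currently rendering):
        Pool=pool_a
        SecondaryPool=pool_b
        Priority=50

    Job 2 (submitted afterwards):
        Pool=pool_b
        SecondaryPool=pool_a
        Priority=100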

I would assume that once a “pool B” slave has finished rendering a task of Job 1, it would start working on Job 2, since that job has pool B as its primary pool. But that doesn’t happen: slaves continue to dequeue tasks from the earlier Job 1 even if I assign a higher priority to Job 2.

Is this expected behavior? What’s the proper setup to achieve my goal: a job should take over the whole render farm when there is nothing else to do, but restrict itself to only some machines once a second job has been submitted?

Hi fxtilt,

If I’m not mistaken, the preferred job should be re-evaluated after each task (someone correct me if not). There may be other factors, such as Groups, Limits, Black/White Lists, etc., that can cause jobs to be avoided when they might otherwise seem to be the right choice. It’s a good idea to look thoroughly at the job settings to see if any of these apply. If that doesn’t reveal any clues, let us know.
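
If it helps, here is a rough Monitor job script that dumps those settings for the selected jobs. This is only a sketch assuming Deadline’s Python scripting API; I’m going from memory on the exact property names (JobLimitGroups, JobWhitelistFlag, JobListedSlaves), so verify them against the scripting docs:

    # Prints the settings that most often keep a slave from dequeuing a job.
    from Deadline.Scripting import MonitorUtils

    def __main__(*args):
        for job in MonitorUtils.GetSelectedJobs():
            print("Job:        %s" % job.JobName)
            print("  Pools:    %s / %s" % (job.JobPool, job.JobSecondaryPool))
            print("  Group:    %s" % job.JobGroup)
            print("  Priority: %s" % job.JobPriority)
            print("  Limits:   %s" % ", ".join(job.JobLimitGroups))
            # A white list means only the listed slaves may render the job;
            # a black list means the listed slaves never will.
            list_kind = "white list" if job.JobWhitelistFlag else "black list"
            print("  %s: %s" % (list_kind, ", ".join(job.JobListedSlaves)))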

I believe you are right, James.

fxtilt, I would recommend looking over thinkboxsoftware.com/demysti … ps-limits/, which is a great article by Bobo that should help work this problem out.

Thank you, that document explains things clearly. But after reading it, I think I have set everything up correctly, yet my issue remains in one case: when I change a job’s pools while it’s already rendering.

Referring to my example: when Job 2 gets submitted with primary pool B and secondary pool A, everything works fine. But when both jobs have the same pool configuration and I just swap Job 1’s pools while it is already rendering, the slaves don’t seem to notice.
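
Concretely, I’m swapping the pools along these lines (a sketch assuming Deadline’s RepositoryUtils scripting API; job_id is a placeholder for Job 1’s actual ID):

    from Deadline.Scripting import RepositoryUtils

    # Fetch a fresh copy of the job (the second argument invalidates the cache).
    job = RepositoryUtils.GetJob(job_id, True)

    # Swap the primary and secondary pools while the job is rendering.
    job.JobPool, job.JobSecondaryPool = job.JobSecondaryPool, job.JobPool

    RepositoryUtils.SaveJob(job)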

I was hoping that those slaves that prefer jobs of another pool would abandon their old job as soon as possible and start picking up tasks from another job. That doesn’t seem to be the case. Is there a delay until slaves pick up changes in a job’s pool? Or does a slave that has just been rendering on a job dequeue further tasks without re-evaluating that job’s pool setup?

While a slave is rendering a task, it will not look for changes to the job’s settings, but once it has finished its task and looks for new work, it should see that a more favorable job is available.

I was also reminded of the Interruptible flag, which allows a higher-priority job to interrupt a job’s tasks during rendering:

Interruptible=<true/false> : Specifies if tasks for a job can be interrupted by a higher priority job during rendering (default = false).

You can set this from the job list as well by right-clicking the job and choosing Modify Job Properties, or hard-code it at submission time as dwallbridge pointed out.
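
For completeness, here’s what the scripted route looks like (a sketch assuming Deadline’s RepositoryUtils scripting API; job_id is a placeholder):

    from Deadline.Scripting import RepositoryUtils

    job = RepositoryUtils.GetJob(job_id, True)
    job.JobInterruptible = True   # same effect as Interruptible=true in the job info file
    RepositoryUtils.SaveJob(job)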

Some job types, Maxwell Render for example, also need ‘Resume render’ set in the submit job dialogue; otherwise a single-frame render will start from the beginning when the job is resumed. Animations will generally resume from the last frame rendered, but the frame currently rendering will be lost.

Tim.