Interrupt jobs for other jobs

Hi all,

I have a farm, and a certain number of racks are dedicated to rendering Fusion files. We are not always rendering Fusion files, so when they are not rendering Fusion those racks render other jobs.
The problem with this is that when I submit a Fusion job, it has to wait for the rack to finish whatever it is currently rendering before picking up the Fusion job.

I have set up a pool just for Fusion jobs so the Fusion jobs take priority, but they still have to wait for whatever is on the rack to finish. Is there a way to set up the pool so that it interrupts the jobs on those racks?

Or is there a way to kill the job the rack is working on so that the Fusion job can start right away?

Thanks in advance

Les

I guess you can enable “Job is interruptible” on your other jobs (the 3D ones?). If a higher-priority Fusion job is queued, they will stop rendering immediately and move on to the Fusion job.
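If you already have other jobs in the queue, a minimal sketch like the one below (a Deadline Monitor job script; MonitorUtils, RepositoryUtils and the JobInterruptible property are taken from the Deadline Python scripting reference, so double-check the names against your version) can flip the flag on the selected jobs instead of resubmitting them:

```python
# Rough sketch of a Deadline Monitor job script that marks the selected jobs
# as interruptible (the same effect as ticking "Job is interruptible").
# API names are assumptions from the scripting docs - verify for your version.
from Deadline.Scripting import MonitorUtils, RepositoryUtils

def __main__(*args):
    for job in MonitorUtils.GetSelectedJobs():
        job.JobInterruptible = True
        RepositoryUtils.SaveJob(job)  # write the change back to the repository
```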

Thanks, I am trying that, but it still does not interrupt the job.

I have all the other jobs in the “none” pool because I do not have any other pools set up. Is it better to have all the racks in a pool? Any pool should always override “none”, correct?

Yes, “none” is considered a pool, and it always has the lowest priority.

By the way, the “Job is interruptible” option should have worked. Is your network responsive, and do you have Pulse running?

The network is OK, not great. Not sure what you mean by “Pulse”, though.

docs.thinkboxsoftware.com/produc … pulse.html

Pulse is not involved in job scheduling, so it shouldn’t affect your problem, but it is frequently recommended.

Yes, I was reading the documentation on it, and it seems like an interesting option to look into, as we are experiencing network bottlenecks on the server as jobs get loaded and unloaded.

Even without Pulse, though, the job should still be interrupted, right?

Another problem with setting the jobs to be interruptible is that any time someone submits a job with a higher priority, it will interrupt those jobs. I just need them to be interrupted by the “fusion” pool on the Fusion nodes.

Is there a way to set that up?

Thanks

Hmm, you’re right about higher-priority standard jobs.
I don’t see any ‘easy’ solution to your problem.

1- You can simply accept that your Fusion slaves finish their 3D job before switching to the Fusion job. You lose some responsiveness when you submit a Fusion job, but you don’t lose overall render time (the 3D job is not interrupted).

2- You can build a custom script/event that is triggered at every Fusion submission and restarts the Slaves on your Fusion machines. This is a bit more complicated and extreme, but you get an instant kill and relaunch of the Slave (see the sketch below).
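Something along these lines could work as a starting point (a Deadline event plugin sketch; the slave list and the “RestartSlave” remote command name are assumptions, so check the scripting and remote control docs for your version and test on one machine first):

```python
# FusionPreempt.py - sketch of a Deadline event plugin that restarts the
# Fusion rack's Slaves whenever a job is submitted to the "fusion" pool.
# The slave names and the remote command string are placeholders to
# illustrate the idea, not a tested implementation.
from Deadline.Events import DeadlineEventListener
from Deadline.Scripting import SlaveUtils

def GetDeadlineEventListener():
    return FusionPreemptListener()

def CleanupDeadlineEventListener(eventListener):
    eventListener.Cleanup()

class FusionPreemptListener(DeadlineEventListener):
    def __init__(self):
        # Hook the "job submitted" event.
        self.OnJobSubmittedCallback += self.OnJobSubmitted

    def Cleanup(self):
        del self.OnJobSubmittedCallback

    def OnJobSubmitted(self, job):
        # Only react to jobs going into the "fusion" pool.
        if job.JobPool != "fusion":
            return

        # Hypothetical list of the Fusion rack machines - replace with your
        # own names, or read them from a group / config file.
        fusionSlaves = ["RN01", "RN02", "RN03"]

        for slaveName in fusionSlaves:
            # Remote command name is an assumption; use whatever your
            # Deadline version exposes for a remote Slave restart.
            SlaveUtils.SendRemoteCommand(slaveName, "RestartSlave")
```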

Yes, this is what we have been doing, and it is just not ideal, as we usually need the Fusion renders quickly. We have had to manually reassign the rack groups, and it is not very efficient.

This may be an option. Yes, it is extreme, but it may be an option all right.

Thanks for your help, really appreciate it.

Les

I can’t help but butt in :smiley:

Interruptible jobs definitely should work, but I believe you’ll need to make sure you stop and start them so Deadline picks up the new job settings.

What I’d do, so you’re not losing too much render time, is create two pools, “fusion” and “long_render” (rename as you like).

If you assign half the farm to pull from “fusion” first, then those machines will be interrupted when a new Fusion job comes along, but only half of them (or however you split them). Here’s the layout (assuming 20 machines named RNXX):

RN01 to RN10: fusion,long_render,none
RN11 to RN20: long_render,fusion,none

Here, RN11 to RN20 will be able to keep working and won’t lose everything. This assumes your job dequeuing order is pool, priority, then submission date.
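If you don’t want to click through 20 machines in the Monitor, a rough scripted version of that split could look like this (Deadline Python API; GetSlaveSettings, SetSlavePools and SaveSlaveSettings are assumed from the scripting reference, so confirm the names for your version):

```python
# Sketch: assign the two pool orders described above to RN01-RN20.
# The order of the pools in each list is what gives "fusion" (or
# "long_render") first pick on that machine.
from Deadline.Scripting import RepositoryUtils

def __main__(*args):
    for i in range(1, 21):
        slaveName = "RN%02d" % i
        settings = RepositoryUtils.GetSlaveSettings(slaveName, True)
        if i <= 10:
            settings.SetSlavePools(["fusion", "long_render", "none"])
        else:
            settings.SetSlavePools(["long_render", "fusion", "none"])
        RepositoryUtils.SaveSlaveSettings(settings)
```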

What do you guys think?