Multiple Deadline Workers Launching Seemingly at Random – Task Failures and Process Conflicts

Hi all,

We've been encountering a frustrating issue with our 3ds Max jobs submitted to Deadline (version 10.4.1). Everything seems normal on submission, but once the job starts rendering, multiple instances of the same Worker (previously called a Slave) start launching, seemingly at random.

Typically, it’s 2 instances, but we’ve seen up to 5 running on a single machine. This results in process conflicts, where one instance ends up aborting the others. The following error is shown:

Error: RenderTask: Unexpected exception (Monitored managed process "3dsmaxProcess" has exited or been terminated.)

We’ve already followed the documentation on limiting to a single worker per machine, including:

Setting MultipleSlavesEnabled=False in the deadline.ini files across all machines (verification snippet below).

Verifying and limiting concurrent tasks to 1 per worker.

Reference to the guide followed:
https://docs.thinkboxsoftware.com/products/deadline/10.4/1_User%20Manual/manual/multiple-workers.html
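
For anyone double-checking their own setup, the setting can be verified on each node from PowerShell. A minimal sketch, assuming the default Windows location for deadline.ini (C:\ProgramData\Thinkbox\Deadline10\deadline.ini); adjust the path if your install differs:

# Confirm the setting from the guide above actually stuck on this node.
Get-Content "C:\ProgramData\Thinkbox\Deadline10\deadline.ini" |
    Select-String "MultipleSlavesEnabled"
# Expected output: MultipleSlavesEnabled=False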

Despite this, the issue persists even after a fresh install of Deadline 10.4.1 across the render nodes.

At the end of the render task, we consistently see the following log:

0: WARNING: Monitored managed process 3dsmaxProcess is no longer running
0: Done executing plugin command of type 'Render Task'

We’ve checked all machine settings, task limits, and configuration files, but nothing seems to prevent these duplicate worker instances from being created.

Has anyone else experienced something similar or found a workaround? We’re open to any suggestions or troubleshooting steps we may have missed.

We are on Windows 11 with 3ds Max 2026.

Thanks in advance!

Which render engine are you using? Was this previously working on 10.4/10.3, or with an earlier version of 3ds Max?

Are you using UBL (usage-based licensing)? It fails with multiple instances, as UBL fails to connect on the port.

Are you using DR on top of the submissions?

Did you clear out any older instances that may have been run at some point? (C:\ProgramData\Thinkbox…)
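
Something like this will show what's left behind (a sketch, assuming the default ProgramData location):

# List leftover Thinkbox/Deadline data from older installs.
Get-ChildItem "C:\ProgramData\Thinkbox" -Directory -ErrorAction SilentlyContinue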

Hi Anthony. Thank you for your reply.

We ran 3ds Max 2025 with V-Ray 7 on 10.3.x without major issues.
So it happened when we upgraded to 10.4.x; however, 3ds Max 2025 with V-Ray 7 still worked on 10.4.x.

We are not using UBL, to the best of my knowledge. However, I never explicitly told it not to, so I'll have to check on that Monday.

We do have DR enabled on top, but I tried killing the spawners to see if they were the issue; the errors persisted afterwards. It could be something with permissions in the new version of V-Ray.

Could be something left over from earlier; I'll check whether anything is left in the folder. With fresh installs of everything I wouldn't think so, but it's worth checking. I did uninstall before installing the new versions rather than upgrading.

Really appreciate all the suggestions. Thank you for taking the time.

This may not be helpful, but we saw this once or twice when we upgraded. It was with Maya/Redshift, so it might be unrelated to Max/V-Ray. We have worker instances with CPU and GPU affinities set.

There was a worker instance having trouble. On the machine, there were 2 instances of the same worker (plus the other instances) working on different tasks.

I thought this weird state was caused by a heavy render that had caused some worker crashes/stalls. I haven’t seen this issue with the last few jobs though.

Hi Jason. Thank you for the reply.

It could be helpful to someone else, so no worries.

Our render farm is CPU only, so there are no GPUs in any of the machines.
CPU affinity is off, as we are the only ones using the machines.
It could be due to stalls, but I have not set up any Idle Detection, so if it crashed it should either go down or give an error.

This is the case on all five of our machines where the problem occurs.
But it does give me an idea, since our workstations don't seem to do the same. Something to look into…
Cheers

We found the issue with the multiple instances. The problem was that when a machine stalled, Deadline would start a new worker on that same machine. This new worker would then cancel the job that was currently being rendered, resulting in two active worker instances—even though the original one wasn’t actually a problem.

The machines were stalling because they weren’t pulsing back to the server, even though they were still accessible through the Monitor. This led to a loop where more and more machines fell into the same issue, causing a pile-up of errors.

The solution was to increase the stall timeout from 10 minutes to a value higher than the scene's total render time. The specific scene causing the problem took over 10 minutes just to open, so Deadline was incorrectly marking those machines as stalled, since the timeout defaults to 10 minutes.
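
For anyone hitting the same loop, it may help to confirm the stall pattern in the Worker logs before raising the timeout. A quick sketch, assuming the default Windows log location (C:\ProgramData\Thinkbox\Deadline10\logs); adjust for your install:

# Search recent Worker logs for stall-related messages.
Select-String -Path "C:\ProgramData\Thinkbox\Deadline10\logs\*.log" -Pattern "stalled" |
    Select-Object -Last 20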

Hey,
Do you have more information about the setup with DR enabled?
We have a problem with our render farm with DR.
We are using Deadline 10.4, 3ds Max 2025, and V-Ray 7. On the machines where the V-Ray Spawner is enabled, our Deadline jobs give us a general error and cannot render at all. If we close the VRaySpawner on those machines, it renders fine without errors.

Do you know why this is happening? Thanks.

Are you launching DR on multiple workers? Sounds like a recipe for disaster, as I think the first thing Deadline does is shut down existing workers, or fail because existing workers are running.

Ideally, Deadline manages the DR workers, so the Spawner shouldn't be running before Deadline is.

In fact, the jobs we send to Deadline do not have DR activated.
But the machines do have VRaySpawner launched so we can run DR renders while working and testing.
But when we send a job to render on Deadline, we must shut down the VRaySpawner, otherwise the Worker does not work and gives us errors (even if the VRaySpawner is not in use at that moment).
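
For now we shut it down by hand; something like this could automate it before rendering. A rough PowerShell sketch, assuming the Spawner runs as a regular process whose name contains "spawner" (the exact process name varies by V-Ray version, so confirm it on your nodes with Get-Process first):

# Hypothetical pre-render cleanup: stop any running V-Ray Spawner processes
# so the Deadline Worker can take over the machine.
Get-Process -Name "*spawner*" -ErrorAction SilentlyContinue | Stop-Process -Force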