
How to change GPU used in Deadline Worker

Hi All,

I’ve recently added my laptop to my pool of workers for Deadline tasks. It has an Nvidia GPU onboard but for reasons I can’t figure out, the Deadline Worker is using the Intel UHD Graphics instead.

I’ve tried changing the graphics settings for the Deadline Worker in Windows to “High performance (Nvidia GPU)”, but it still reverts to the Intel one when I restart the Worker. I’ve also tried restarting the laptop and have made sure I’m using the latest drivers, but still the problem persists.

Anyone here experienced a similar issue or have an idea as to why this is happening?

Thanks,

Andy

Which application and/or renderer are you using? How are you configuring the GPU allocation in the app settings, submission settings, and ‘Worker’ settings?

I’m sure I read somewhere recently about HDMI connections forcing the Intel chip, but it’s more likely the GPU settings somewhere along the line.

Hi Anthony,

Thanks for the quick response.

My current workflow is rendering 3ds Max files with V-Ray to V-Ray image (.vrimg) files. I then denoise them using the VDenoise tool. It’s the VDenoise task I want the laptop to help out with.

I’m submitting the jobs via the monitor (Submit>Processing>VDenoise). Unfortunately, there aren’t many options to choose from in the submission settings. There is only a ‘Use GPU’ checkbox, which I have set to ‘On’.

I’m not sure if I need to set something up in the Worker Properties > GPU Affinity tab? I’ve tried overriding the GPU affinity and changing the GPU from GPU 0 to GPU 1, but that doesn’t seem to change anything either.

This may be a V-Ray denoise issue; which version(s) are you using?

Can you check the logs, find the full command, and try running it outside of Deadline?

If it’s down to the way the denoiser picks the GPU, you may need to check with Chaos.

I’d love to see the denoiser built into the V-Ray Deadline Submission process

Hi Anthony,

I think you may be right. I’m running the latest version of V-Ray, which must mean the denoiser tool is the latest version too. When I run it outside of Deadline, it picks up the Nvidia GPU fine and runs as it should.

Interestingly, VDenoise identifies the NVidia card as GPU 0 but when I set the Worker GPU affinity to 0 in Deadline Monitor, it still uses the Intel UHD instead. Looks like I’ll have to run the process manually on that machine, which is a shame.
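As an aside, my guess (unconfirmed) is that VDenoise is counting CUDA devices, which only include NVIDIA cards, while Windows and Deadline count every display adapter, so “GPU 0” means different things in each tool. A quick sanity check of the CUDA-side ordering, assuming nvidia-smi is installed with the driver and on the PATH:

```python
# Minimal sketch: print the GPU list as the NVIDIA driver sees it.
# This list only contains NVIDIA devices, so its numbering will not match the
# adapter numbering in Windows Task Manager or Deadline's GPU affinity panel.
import subprocess

out = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True, check=True)
print(out.stdout)
```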

Thanks for all your help on this. Even though I couldn’t get it working in Deadline, I can still use the laptop to do individual jobs. It’s just a shame I couldn’t add it to the pool of machines I have to make the whole process automated.

Cheers,

Andy


Hi there, I have the exact same question. In my current setup, I’m trying to use a regular workstation as a temporary render client. As far as I understand, the local Worker (when launched) picks up the wrong graphics card, in my case the onboard Intel card instead of the powerful NVIDIA card. I temporarily disabled the Intel card inside the Windows Device Manager, and voilà, the Worker then correctly uses the NVIDIA card, as seen in the second tab of the local Worker UI. But I’m afraid disabling the onboard card can’t be the permanent solution.

I was looking through the docs on how to force the worker to pick up the correct card, with no luck so far - just this post came up.

What I’ve already tried: configuring the new power-saving setting inside Windows that by default makes all our 3D apps use the onboard Intel card (leading to missing GPU-accelerated viewports), but that has no effect on the Worker configuration in question. I guess only the Worker’s UI is then handled by Windows and gets GPU acceleration, which is not what I’m looking for.

Any ideas? Thanks.

@Karsten_Mehnert Deadline doesn’t control GPU usage on the Worker node by default; it lets the DCC (Digital Content Creation) application or renderer control which of the node’s resources are used. Only if you have configured the Deadline Worker with GPU affinity does Deadline control the GPU usage on the render node, as an override.
If the Worker is starting up with a different GPU than you would like, I would check the machine’s graphics settings or the registry. You can also check what is set as the default GPU for the application process. Here is a forum post from Windows on changing the default GPUs.
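If it helps, this is roughly what the Windows Graphics Settings page writes under the hood; a minimal sketch, assuming the documented HKCU\Software\Microsoft\DirectX\UserGpuPreferences key, and with the executable path below being only an example (point it at whatever binary actually does the rendering):

```python
# Sketch: force "High performance" (discrete GPU) for one executable by writing
# the same registry value the Windows Graphics Settings page manages.
import winreg

EXE_PATH = r"C:\Program Files\Thinkbox\Deadline10\bin\deadlineworker.exe"  # example path, adjust
KEY_PATH = r"Software\Microsoft\DirectX\UserGpuPreferences"

# GpuPreference=2 -> High performance, 1 -> Power saving, 0 -> Let Windows decide.
with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    winreg.SetValueEx(key, EXE_PATH, 0, winreg.REG_SZ, "GpuPreference=2;")
```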

Hey, thanks for the input. The forum article tells us to change DirectX values, but that value is not even present in my registry. Let’s also assume I don’t want to mess around with it.

I can confirm that controlling the settings via the NVIDIA Control Panel is not possible anymore (the Alienware has an older driver with the setting still present, but in the newest NVIDIA drivers for the A5000 the option is missing). What I’m able to do is add the DCCs in the newly introduced Windows Graphics Settings and set them, once added, to “High performance”. I’ve done this for the Cinema 4D 2023 executable already, for all workstations. Additionally, since I want to use it as a render node, I also added commandline.exe to this list, and kept the Deadline Worker binary there just in case. But I’m only able to force Deadline to render with the NVIDIA GPU once I disable the onboard Intel card completely in Device Manager. So I can also confirm that it then works as expected, and the computer renders frames successfully on that card. I have no idea what potential side effects this disabled card might have. I think it should stay enabled for cases when Redshift crashes the NVIDIA card, so that we can still remotely recover the machine, right?

Okay, and just to make this 100% clear: the Intel card returns no frames. It is stuck. The OptiX denoiser step at the end of the render alone requires the CUDA capabilities of the NVIDIA card. It looks like something is wrong with Microsoft’s power-saving approach when it comes to workstations with high-end 3D cards.

I forgot to mention that I have configured Deadline’s GPU affinity settings to override and force the use of GPU 1 (I had tried GPU 0 before). But as seen in the Task Manager > Performance tab, my Intel(R) UHD Graphics 750 is GPU 0 and the NVIDIA card is GPU 1. So that is also not doing the trick.
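For reference, this is how I list the adapters Windows knows about, purely as a sanity check of what is physically visible to the system (the order here is not guaranteed to match Task Manager’s numbering or Deadline’s affinity indices):

```python
# Sketch: list every display adapter Windows reports via WMI.
import subprocess

result = subprocess.run(
    ["wmic", "path", "win32_VideoController", "get", "name"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```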

If you go through the troubleshooting guide to get the full command from one of your C4D tasks and run it outside of Deadline (so you can effectively emulate the Deadline Worker), how does it behave?

The reason I ask is our GPU affinity only works for renderers that take in a GPU flag, like Redshift standalone. If Cinema is wrongly picking up which GPUs are available to it, it could be that the flag is getting misinterpreted or possibly ignored.
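For illustration only, this is the kind of thing a GPU flag looks like for Redshift standalone; the paths and scene file below are placeholders, not something Deadline generates for C4D:

```python
# Sketch: render a standalone Redshift scene on a specific device via -gpu.
import subprocess

cmd = [
    r"C:\ProgramData\Redshift\bin\redshiftCmdLine.exe",  # assumed default install path
    r"D:\scenes\example.rs",                             # placeholder scene file
    "-gpu", "1",                                         # use device index 1 only
]
subprocess.run(cmd, check=True)
```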

So it should be possible to remove the Deadline Worker from the equation so we can prove whether it’s one of:

  • Deadline Worker issue
  • C4D issue
  • How Deadline tells C4D which GPU to use

If this behaviour persists in the test from the troubleshooting guide, we’ll know it’s not the first issue.

Thanks!

Hi,

If you are using Redshift to render and want to test this, you can also use the environment variable REDSHIFT_GPUDEVICES, so in your case REDSHIFT_GPUDEVICES=1 (Redshift will use GPU 1 to render).
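For example, here is a minimal sketch of launching a render process with that variable set (the Cinema 4D path is an assumption; in practice you would substitute the actual command from your task log, or set the variable in the environment the Deadline Worker starts from):

```python
# Sketch: start a render process with REDSHIFT_GPUDEVICES so Redshift only
# uses the listed device indices (comma-separated, e.g. "0,1").
import os
import subprocess

env = os.environ.copy()
env["REDSHIFT_GPUDEVICES"] = "1"

# Placeholder command -- replace with the real render command line.
subprocess.run([r"C:\Program Files\Maxon Cinema 4D 2023\Commandline.exe"], env=env)
```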

If you look in the C:\ProgramData\Redshift\preferences.xml file, it should list the GPU devices that Redshift recognizes, e.g.:
"AllComputeDevices" type="string" value="0:NVIDIA GeForce RTX 3090,1:NVIDIA GeForce RTX 3090,2:Standard CPU Device (0),"

I look at the AllComputeDevices because that is what is available to the system; the SelectedComputeDevices is what the app (e.g. Redshift in Houdini) has for its preferences.
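If you want to grab those values without opening the file by hand, here is a minimal sketch that assumes the entries are stored as name/value attributes, as in the snippet above (adjust the lookup if your preferences.xml is structured differently):

```python
# Sketch: print Redshift's compute device lists from preferences.xml.
import xml.etree.ElementTree as ET

PREFS = r"C:\ProgramData\Redshift\preferences.xml"

for elem in ET.parse(PREFS).iter():
    if elem.get("name") in ("AllComputeDevices", "SelectedComputeDevices"):
        print(elem.get("name"), "=", elem.get("value"))
```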


Okay, I’m back to this issue. Having a look at the other forum post below, it looks like the video card displayed is just the first video card on the system and is not necessarily used for rendering at all? The GPU Affinity Override seems to be the must-have setting here to exclude the Intel card, with slot 0 disabled.

https://forums.thinkboxsoftware.com/t/slave-info-videocard-selection/24109

Hello @Karsten_Mehnert

Thanks for revisiting this. You are right; that is its own issue. The video card displayed in the Deadline Monitor’s Worker panel is just the first one in the list reported by the OS. There is an internal ticket to fix that, but I cannot share an ETA on when it will be fixed.

So the issue for you, if I understand this correctly, is that the render does not use the correct GPU if GPU affinity is not set?

When you uncheck the GPU affinity and let it render, how does it behave? If you check the Task Manager (Windows) Performance tab, do you see a GPU being used, and which one(s)?
