
Houdini Redshift - have to restart PC after using renderview for all GPUs to be used

For a few months now I have had to restart my PC after using the Redshift renderview in Houdini before submitting a job; otherwise only one GPU on my machine is used instead of 2. After restarting the PC, both GPUs are used to render the job again.
It's just very annoying to always have to restart the PC to be able to use both GPUs.
Any hints as to why this might be would be welcome. :slight_smile:

Is this the same if you render without Deadline? Or does the renderview use both cards when updating? Did you check with Redshift/Maxon to see if this is a known thing?

If I render without Deadline, both GPUs are used for a single frame as normal. If I render with Deadline after using the Redshift renderview in Houdini, only one GPU is being used while the other sits at 0% usage, but a temp file for the frame is still generated.

I’ve just come across a similar issue elsewhere; I’m going to put a ticket in to Thinkbox about this.

Renders fail, but if I use the full command outside of Deadline then the render goes through.

Which versions are you using? This was on Houdini 19.5.368 and Redshift 3.5.05/8 with Deadline 10.1.23.6.

OK, I was doing some testing. It seems my problem was that one of the nodes had its GPU affinity set to 15 of the 16 checkboxes: instead of having 1 of the 2 cards selected, the first card was unchecked and the rest were left checked.

My guess is RS is trying to render to 15 cards and failing: when I switched the affinity to only the second GPU it went through, and turning off affinity also worked, but leaving the 15 cards checked made it fail. So have a look at this.

We’re using H19.0.657, RS 3.5.08, and DL 10.1.23.6 (Windows), and renders via Deadline are utilizing all GPUs. However, we are forcing the use of all available GPUs by setting REDSHIFT_GPUDEVICES (via JobPreLoad.py for Houdini with RS, on Windows, on-prem).

For on-prem rendering, the workstations double as GPU render nodes after work hours. The 3D artist usually dedicates one GPU to working and leaves the other for renders (in Houdini → Redshift options).

Rendering in Redshift seems to respect the SelectedComputeDevices setting in preferences.xml, so it would only use the GPU(s) listed there when the workstation was used as a render node after hours. Once we set REDSHIFT_GPUDEVICES, Redshift utilized all of the GPUs on the workstations. I can’t remember if they had tried GPU Affinity (Deadline) or Redshift_setGPU (Houdini), but this was the workaround we went with.
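For anyone wanting to try the same workaround, here is a minimal JobPreLoad.py sketch that exports REDSHIFT_GPUDEVICES for the render process. The "0,1" value and the comma separator are assumptions on my part (a two-GPU node); verify the expected device-list format against the Redshift documentation and adjust for your hardware.

```python
# JobPreLoad.py -- drop into the Deadline Houdini plugin folder (or your custom plugin dir).
# Minimal sketch: export REDSHIFT_GPUDEVICES into the render process environment so
# Redshift uses an explicit set of GPUs instead of whatever preferences.xml says.
# Assumption: "0,1" (comma-separated device ordinals) is the right format for a
# two-GPU workstation -- check the Redshift docs and your GPU count before relying on it.

def __main__(deadlinePlugin):
    gpu_devices = "0,1"  # assumption: use both GPUs on a two-GPU node
    deadlinePlugin.LogInfo("JobPreLoad: setting REDSHIFT_GPUDEVICES=%s" % gpu_devices)
    deadlinePlugin.SetProcessEnvironmentVariable("REDSHIFT_GPUDEVICES", gpu_devices)
```

Because the variable is set per render process, it overrides the workstation's saved device selection only for Deadline tasks and leaves the artist's interactive preferences alone.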

Thank you so much for that!
It seems to work when I reset the settings in preferences.xml.
Does anyone know how to keep those settings from changing? They always get reset back to just one GPU for me.

Hi,

I’m glad you got it to work!

I would suggest that you still open a ticket with Thinkbox, since you stated that:

The people on the Redshift forum are responsive – again, I'd suggest posting there as well to see if anyone else has experienced this issue. It seems odd that it wouldn't render with both GPUs via Deadline after using the RS renderview in Houdini.

Hello @Pascal_Wiemers

Sorry for the late reply here. The only thing (with regards to Deadline) I can think of off the top of my head, without looking at the logs, is what @anthonygelatka has mentioned. Check whether the render node has GPU affinity turned on. To check, open Deadline Monitor (I am assuming you are running only one Worker on the node in question), right-click the Worker > Modify Worker Properties > GPU Affinity, and turn it off if it is enabled there. I have seen Redshift fail to utilize GPUs fully when there are more GPUs in the list (more than two).
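If you'd rather confirm from the job logs which GPUs Deadline is actually handing to the task than eyeball the Monitor checkboxes, a JobPreLoad.py like the sketch below can log it. OverrideGpuAffinity()/GpuAffinity() are the calls the stock plugin scripts reference, but treat their availability and return types as assumptions for your Deadline version.

```python
# JobPreLoad.py sketch: log the Worker's GPU affinity override at task start so the
# job log shows which GPUs Deadline intends the render process to use.
# Assumption: OverrideGpuAffinity()/GpuAffinity() behave as in the stock plugin
# scripts (a bool and a list of GPU indices); confirm against your Deadline version.

def __main__(deadlinePlugin):
    try:
        if deadlinePlugin.OverrideGpuAffinity():
            gpus = list(deadlinePlugin.GpuAffinity())
            deadlinePlugin.LogInfo("GPU affinity override is ON, GPUs: %s" % gpus)
        else:
            deadlinePlugin.LogInfo("GPU affinity override is OFF (all GPUs available)")
    except AttributeError:
        # Hedge: fall back gracefully if this Deadline version exposes a different API.
        deadlinePlugin.LogWarning("GPU affinity API not available on this plugin object")
```

Comparing that log line against what Redshift reports at render start should tell you whether the single-GPU behaviour comes from Deadline's affinity or from Redshift's own preferences.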

Or perhaps it is related to the preferences file you are talking about.

This issue keeps persisting even after editing the Redshift preferences file and having GPU affinity turned off on the Worker in question. I am at a loss atm. I am hoping the issue will resolve itself after a fresh install once I get my current projects done.
