
Redshift + Houdini multi-GPU (GPU Affinity) setting

Hi everyone! I'm new to Deadline, coming from Backburner, and loving the versatility of Deadline!

I have read a few posts regarding this, dating back to 2017, and I assume those fixes would have made it into the current Deadline, so I wasn't sure whether the issue I'm having is down to my settings.

I am currently using Houdini + Redshift + Deadline. My hardware setup is:

pcA with 1 x 3090 (Concurrent Task Limit set to 1)
pcB with 2 x 3090 (Concurrent Task Limit set to 2)

When I submit a job with [GPUs Per Task] (under GPU Affinity Overrides) set to 0, both 3090s on pcB are activated.

However, when I submit the job with [GPUs Per Task] set to 1, only a single 3090 seems to be activated (even though Deadline does pick up 2 Concurrent Tasks and renders them both, albeit more slowly than expected, leading me to think that both Tasks are using the same GPU).

My understanding was that Deadline would use one GPU for each Task?

I apologize in advance as I am new to Deadline, so even a direct “how to do this right” would be really appreciated!

Thanks!

Hi, sorry to answer my own post, but it seems I had made a mistake, and I wanted to share the info in case anybody else finds it useful.

I guess I was overthinking it and had set up the affinity override on pcB (the machine with 2 GPUs), and I think it caused Deadline to use a strange setting. When I turned the override off, Deadline seems to have figured out the GPUs and associated one with each Task.

Below is the screen capture of what I had done WRONG; when I just left it at the default, Deadline seems to work better.

To reiterate, when I had the override enabled, as in the screenshot above, Deadline was not able to delegate one GPU per Task (it felt more like it was just using a single GPU for 2 Tasks).

That being said, my thinking was that by doing this I was specifically telling Deadline that this computer has two GPUs, but I guess that wasn't the case?

Can anybody please help me understand what this setting does?

Thank you in advance

Here is the Python code used by the Deadline integration plugins to decide which GPUs to use:

def GetGpuOverrides( self ):
    resultGPUs = []
    
    # If the number of gpus per task is set, then need to calculate the gpus to use.
    gpusPerTask = self.GetIntegerPluginInfoEntryWithDefault( "GPUsPerTask", 0 )
    gpusSelectDevices = self.GetPluginInfoEntryWithDefault( "GPUsSelectDevices", "" )

    if self.OverrideGpuAffinity():
        overrideGPUs = self.GpuAffinity()
        if gpusPerTask == 0 and gpusSelectDevices != "":
            gpus = gpusSelectDevices.split( "," )
            notFoundGPUs = []
            for gpu in gpus:
                if int( gpu ) in overrideGPUs:
                    resultGPUs.append( gpu )
                else:
                    notFoundGPUs.append( gpu )
            
            if len( notFoundGPUs ) > 0:
                self.LogWarning( "The Worker is overriding its GPU affinity and the following GPUs do not match the Workers affinity so they will not be used: " + ",".join( notFoundGPUs ) )
            if len( resultGPUs ) == 0:
                self.FailRender( "The Worker does not have affinity for any of the GPUs specified in the job." )
        elif gpusPerTask > 0:
            if gpusPerTask > len( overrideGPUs ):
                self.LogWarning( "The Worker is overriding its GPU affinity and the Worker only has affinity for " + str( len( overrideGPUs ) ) + " Workers of the " + str( gpusPerTask ) + " requested." )
                resultGPUs =  overrideGPUs
            else:
                resultGPUs = list( overrideGPUs )[:gpusPerTask]
        else:
            resultGPUs = overrideGPUs
    elif gpusPerTask == 0 and gpusSelectDevices != "":
        resultGPUs = gpusSelectDevices.split( "," )

    elif gpusPerTask > 0:
        gpuList = []
        for i in range( ( self.GetThreadNumber() * gpusPerTask ), ( self.GetThreadNumber() * gpusPerTask ) + gpusPerTask ):
            gpuList.append( str( i ) )
        resultGPUs = gpuList
    
    resultGPUs = list( resultGPUs )
    
    return resultGPUs

Let’s look at what it does, and the various cases:

  • It defines a list to return - resultGPUs.
  • It then gets the Job Settings for GPUs Per Task, and Select Devices.
  • If the Override GPU Affinity of the Worker is checked in the dialog that you showed, then the list of allowed GPUs is reduced to the ones that are checked, in your case the first 2.
    • When it is not checked, all 16 are fair game, but only GPUs that actually exist will be used.
    • So if you have 2 GPUs, but the logic of this function requests a non-existent GPU, then that will be ignored and all GPUs will be used on the Task instead.
  • If the GPUs Per Task is 0 (not set), but the list of Devices is defined, then we get that list and try to match them against the list of GPUs with overridden Affinity. We collect the valid matches into the resultGPUs list, and the ones that don’t match into the notFoundGPUs, then report any problems via a Warning log entry, or even by failing the Task if none matched.
    • So if you asked for GPUs 2 and 3 in the Job, but only 0 and 1 were enabled in the Override GPU Affinity dialog, the Task will fail.
  • If the GPUs Per Task value is actually specified, then we check to see if the value is greater than the number of enabled GPUs in the Affinity Override list:
    • If yes, we take them all and return as the result, so the Task will render on all checked GPUs from the Affinity Override (in your case, 0 and 1)
    • If no, we take the first N GPUs from the Override List, where N is the GPUs Per Task value. So if you set it to 1 and checked two GPUs, only the first one will be used. This is what you experienced in your case - both Concurrent Tasks ended up using the same GPU index 0, and GPU index 1 remained unused.
    • If your GPUs Per Task value were set to 2 or higher, then both Concurrent Tasks would be sharing the two available GPUs.
  • If the Override Affinity was not checked in the Worker, the GPUs Per Task was 0, and devices were specified explicitly, then we just set the result to those devices, and every Task will always use them as specified.
  • If however the GPUs Per Task property of the Job was set to a value greater than 0, then we collect a list of GPUs based on the current Thread (the Threads represent the Concurrent Tasks).
    • So if you have GPUs Per Task set to 1 and Concurrent Tasks set to 2, the first Task will get GPU 0, the second Task will get GPU 1, like you hoped. This is what happens when you uncheck the Override GPU Affinity option!
    • On the Worker with a single GPU, if you don’t set a Concurrent Tasks limit, you would get two Concurrent Tasks asking for GPUs 0 and 1, but only GPU 0 would exist, so both Tasks would end up sharing the GPU 0.
    • If you had machines with 4 GPUs (like the EC2 g4dn.12xlarge I usually test with on AWS), you could then have Concurrent Tasks set to 4 and GPUs Per Task 1, or Concurrent Tasks set to 2 and GPUs Per Task set to 2, and either render 4 Tasks on 1 GPU each, or 2 Tasks with 2 GPUs each…

In other words, if you want to use Concurrent Tasks + GPUs Per Task to split several GPUs across multiple Tasks, you should not enable the Override GPU Affinity of the Worker. When that checkbox is checked, the logic takes those GPUs and gives them to any Task that comes along, without splitting based on the Thread index.
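
To make the difference concrete, here is a small standalone illustration (plain Python, not the Deadline API) of what each Concurrent Task would receive in the two modes:

    # Hypothetical mock-up of the two GetGpuOverrides() branches discussed above.
    def gpus_for_task(thread_number, gpus_per_task, affinity_override=None):
        """Return the GPU indices a given Task thread would get."""
        if affinity_override is not None:
            # "Override GPU Affinity" checked on the Worker:
            # every Task gets the same first N checked GPUs.
            return [str(g) for g in list(affinity_override)[:gpus_per_task]]
        # Override unchecked: indices are derived from the Task's thread number.
        start = thread_number * gpus_per_task
        return [str(i) for i in range(start, start + gpus_per_task)]

    # pcB from the original post: 2 GPUs, Concurrent Tasks = 2, GPUs Per Task = 1.
    print([gpus_for_task(t, 1, affinity_override=[0, 1]) for t in (0, 1)])  # [['0'], ['0']] - both Tasks on GPU 0
    print([gpus_for_task(t, 1) for t in (0, 1)])                            # [['0'], ['1']] - one GPU per Task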


Wow, this is an amazing response! I really appreciate the detailed reply; it immensely helped me understand Deadline better.

thank you!

Hey there, thanks for the detailed explanation,

I’m currently in contact with support via mail about the GPU affinity handling for Redshift3D. The Houdini.py plugin of Deadline currently uses the -gpu argument to specify the GPU for the task, but according to the Redshift staff this shouldn’t be used, because it alters Redshift’s “preferences.xml” file to use a single GPU (which will interfere with other instances as well). On top of that, when submitting a job for all GPUs (0), the preferences.xml won’t get changed back to use all GPUs but will stay on a single GPU (from the last task it was rendering).

So to prevent that from happening, the REDSHIFT_GPUDEVICES environment variable should be used instead for handling the GPU affinity of the submitted tasks.

So far so good; support sent me a patched Houdini.py file which did exactly this (after I fixed a minor formatting error in the device list, which was being joined with commas twice).
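
For anyone following along, I assume the patch essentially boils down to something like the following inside the Houdini plugin, where GetGpuOverrides() is defined. This is only a sketch of the idea, not the exact file support sent; SetProcessEnvironmentVariable() is the standard DeadlinePlugin call for setting per-process environment variables:

    # Sketch: drive Redshift device selection via the environment instead of -gpu.
    gpus = self.GetGpuOverrides()    # e.g. ["0"] or ["0", "1"] - already strings
    if gpus:
        # Join once - joining an already joined string a second time was the formatting bug.
        self.SetProcessEnvironmentVariable( "REDSHIFT_GPUDEVICES", ",".join( gpus ) )
        self.LogInfo( "Setting REDSHIFT_GPUDEVICES to " + ",".join( gpus ) )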

But now I have stumbled upon a strange behaviour, and I really need help understanding why it happens and how to fix it.

Basically, if “Override GPU affinity” is turned on in the worker configuration, the handling of GPU affinity for tasks that should use a single GPU isn’t working.
For a worker with 2 GPUs and Override GPU affinity set to 2 devices (0, 1), a job with the settings
Concurrent Task = 1, GPU per task = 1, will start 2 Tasks, but both of them render on GPU device 0, so the render gets slowed down; worst case it crashes, because it renders 2 tasks on the same GPU at the same time.

When “Override GPU affinity” is unchecked, the device allocation works just fine:
first Task, device 0; second Task, device 1.

But leaving “Override GPU affinity” inactive for the workers brings two problems:

First, when the Houdini.py plugin checks for available GPUs, the list variable resultGPUs stays empty, so when setting REDSHIFT_GPUDEVICES it does not get the full list of GPUs but an empty string. And I don’t see a way to query the available GPUs of the worker, because the DeadlinePlugin class method GpuAffinity() returns nothing if “Override GPU affinity” is not set for the worker.

The second issue is that when leaving “Override GPU affinity” inactive, we have to prevent the workers from being assigned more tasks than their GPU count, so we use the Concurrent Task Limit Override on the workers to make sure there can’t be more concurrent tasks than available GPUs (this has to be set manually). But should a CPU job run on the machine, this of course limits those Tasks as well, which is not intended.

I would really appreciate help in this regard, because not being able to transition smoothly from single-GPU tasks to all-GPU tasks is affecting our rendering output speed, and we have a crucial project that would benefit from having this fixed.

I’m a 3D generalist with an affinity for Python; it may well be that I have misunderstood some concepts, but I think I can handle in-depth answers on this topic.

Attached you will find the slave log screenshots for the described cases (PS: I added some warnings for debugging purposes):

thanks a lot,
all the best, Martin

Concurrent Task = 1, GPU per task = 1, will start 2 Tasks,

I assume you meant Concurrent Task = 2 ? Why would a CT 1 result in 2 Tasks?

As was discussed in the previous post, if the Override GPU Affinity checkbox in the Worker is checked, then anything that is checked is given to all Tasks, without by-thread distribution. So that mode is not what you want if you want each Task to get a GPU.

When the Override GPU Affinity is unchecked, Deadline is supposed to execute the last elif of the GetGpuOverrides(). It does not look at what GPUs are available, it simply collects IDs based on the ThreadNumber value.

Let’s say gpusPerTask passed with the Job was set to 1.
If the current ThreadNumber is 0, then you get range( (0*1), (0*1)+1 ), or range(0,1), and the list will contain only 0.
If the current ThreadNumber is 1, then you get range( (1*1), (1*1)+1 ), or range(1,2), and the list will contain only 1.

If gpusPerTask passed with the Job was set to 2, then Thread 0 produces range(0,2), so the list will be [0,1], and Thread 1 will produce range(2,4), so the list will be [2,3].

You said

But leaving “Override GPU affinity” inactive for the workers, brings two problems:

First, when the Houdini.py Plugin checks for available GPUs, the list variable resultGPUs stays empty, thus when setting REDSHIFT_GPUDEVICES it will not get the full list of GPUs but an empty string.

There must be something wrong with the list, because as described above, if the Job provided a non-zero GPUs Per Task value, the list would not be empty. So this is unexpected, and I would like you to look into it and figure out why the list is empty. It should be running

    elif gpusPerTask > 0:
        gpuList = []
        for i in range( ( self.GetThreadNumber() * gpusPerTask ), ( self.GetThreadNumber() * gpusPerTask ) + gpusPerTask ):
            gpuList.append( str( i ) )
        resultGPUs = gpuList
    
    resultGPUs = list( resultGPUs )

What I think you and other customers are asking for is the same type of Task-thread-based distribution when the Override GPU Affinity is checked. It is not in the code, and I don’t know if there is a reason for that. You can try to implement it in the GetGpuOverrides() logic and see if you can get it to work the way you want.
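
For example, a rough and untested sketch of that change (reusing the overrideGPUs, gpusPerTask, and GetThreadNumber() names from the code quoted earlier) could replace the existing elif gpusPerTask > 0: branch inside the affinity-override case:

        elif gpusPerTask > 0:
            # Hypothetical change: slice the checked GPUs by thread number so each
            # Concurrent Task gets its own chunk instead of every Task taking the first N.
            overrideGPUs = list( overrideGPUs )
            start = self.GetThreadNumber() * gpusPerTask
            resultGPUs = overrideGPUs[ start : start + gpusPerTask ]
            if len( resultGPUs ) == 0:
                # More Concurrent Tasks than checked GPUs: fall back to all checked GPUs.
                self.LogWarning( "Not enough GPUs in the affinity override for thread " + str( self.GetThreadNumber() ) + ", using all checked GPUs instead." )
                resultGPUs = overrideGPUs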

I might try to do that myself, if I can find the time :)

Yes, sorry, I was sitting at home typing.
Of course the Concurrent Tasks of the job is not 1; we have it at 8, because there are 2 worker nodes with 8 GPUs and 3 worker nodes with 2 GPUs.

I will check this again on Monday, but I could only see the GpuAffinity() method being checked, and it comes back empty when I leave “Override GPU affinity” inactive.

I will definitely check the code snippet and get back to you.
Thanks for helping me on this one, and for your patience.

Have a good weekend.

The GPU Affinity is not even considered when the Override GPU Affinity is not checked. So it would not contain anything in that case.

    if self.OverrideGpuAffinity():
        overrideGPUs = self.GpuAffinity()

This is the only place where it is being queried. If self.OverrideGpuAffinity() returns False, it is never queried, because False means it should be ignored :)

Let’s discuss this part:

If you set the Concurrent Tasks to 8 for the job, and you have Workers with 8 GPUs and Workers with 2 GPUs, then setting the Worker’s Concurrent Task Limit Override to 8 and 2 respectively is actually the only sensible way to limit how many Threads will be created by the Deadline Worker application when rendering Tasks of the Job.

A possible approach to avoid the limitation when you need the CPUs would be to launch two Deadline Workers on the machines with 2 GPUs. Set one Worker’s “Concurrent Task Limit Override” to 2 and put it in a Group called “GPURendering”. Set the other Worker’s “Concurrent Task Limit Override” to 8 or 16 or whatever, and put it in a “CPURendering” Group. Workers on the same machine share a license, so you don’t waste resources. If you have a GPU Job, submit it to the “GPURendering” Group. If you have a CPU Job, submit it to the other Group… For the 8-GPU machines, you could add them to both Groups, as their Concurrent Task Limit is higher, and you probably don’t need two Deadline Worker instances on them.

This way, the desired number of Threads will be created in both cases.
In the case of a GPU Job, with “Override GPU Affinity” set to False and the Job asking for GPUs Per Task of 1, you will have two Threads, with one GPU for every Thread (Task) dequeued by the Worker. A CPU Job asking for more than 2 Concurrent Tasks would run on the second Worker instance, and won’t be capped to 2.

Would that work for you?

Hey first off, thanks for the help and your input, I really appreciate the quick response time and the detailed information.

Good to know. Yes, creating two pools would work, I think, but I have to test it to see if it brings any issues.

Ahh, I’m sorry, I completely missed the part where you already stated this behaviour. Just a question though: it seems like this behaviour is not something anybody would really want, right? Because if 2 GPUs are available, why render 2 tasks on device 0 while device 1 stays inactive, even though the “Override GPU Affinity” setting has device 1 ticked as well?
Wouldn’t it make more sense to assign the tasks according to the GPUs Per Task setting, using all “ticked” GPUs from the GPU affinity override, by using the thread number to create a device list with the correct device IDs (with a check against the GpuAffinity list)?

As far as I understand, it wouldn’t break any behaviour and would make it possible, for example, to have a 5-GPU worker node use only 3 specific GPUs for Deadline rendering, with either 1 GPU per task, 2 GPUs per task, or all GPUs per task.
But I can imagine I’m missing something where this logic doesn’t hold up.

Anyway, I’ll test tomorrow and get back to you with results :)
Have a nice Sunday.

As I mentioned previously, I am only speculating why the implementation is what it is. I might try to find the developer who made the decisions and check if I am correct, but here is what I think they were trying to do:

Deadline allows you to parallelize the processing on the same machine in two different ways - Concurrent Tasks which was there from the very beginning, and multiple Worker instances, which was added a few versions after the first release of Deadline.

When rendering with Concurrent Tasks, a single Worker is running on the physical or virtual machine, and it likely has access to all resources on the machine. So if the machine has 2, 4, or even 8 GPUs, there is no obvious reason to use GPU Affinity to tell the single Worker what to use - in that case, the Thread selection mechanism in the last elif statement of the GetGpuOverrides() function would be used. The number of Concurrent Tasks is specified in the Job, so at some point in time we added a checkbox/parameter that was originally “Limit to the number of CPUs” (GPU rendering wasn’t a thing in 2004 when Deadline was released). Then we added a Worker property to override that, so instead of the actual number of CPUs, you could enter any number for the Worker to cap the number of threads / Concurrent Tasks the Worker would run. A Worker cannot run more than 16 Concurrent Tasks (Threads).

The multiple Workers approach lets you run different Jobs with different applications/plugins, which usually have dissimilar resource requirements. For example, you could run a Houdini fluid simulation on the CPUs, render with Redshift on the GPUs, and composite with Nuke which is mostly IO bound and uses less CPU resources, but most of the network / disk bandwidth.

When rendering with multiple Workers, you need a way to split the resources of the single machine between two or more Deadline Worker instances. This is why we have the CPU Affinity and GPU Affinity controls at the Worker level. If you have 8 GPUs, and you plan to run 3 Workers where the first uses GPUs 0 and 1, the second uses 2 and 3, and the third uses 4 through 7, you can check the respective checkboxes in the Worker configurations. When you run a single Task rendering Redshift on the first Worker, it will use both GPUs on that Task. If you run a single Task of another Redshift Job on the second Worker, it will also use both GPUs on the Task. And if you run a single Task rendering with V-Ray GPU on the third Worker, it will use all 4 remaining GPUs.

I believe that the implementation did not expect the mixing of Multiple Workers and Concurrent Tasks in this context. Since most GPU renderers tend to scale pretty well when given more than one GPU to work with, you either run Concurrent Tasks without GPU Affinity enabled, or you run multiple Workers with GPU Affinity enabled. But in the latter case, you don’t expect each of the multiple Workers to also run Concurrent Tasks, so we don’t have code to spread N GPUs per Task in case GPU Affinity is on and Concurrent Tasks are also > 1…

In yesterday’s reply I suggested you might want to implement that case and see if it helps. But I believe that using Concurrent Tasks on a single Worker with GPU Affinity turned off (to use the thread-based distribution of resources), with the Concurrent Task Limit set to the number of GPUs on the machine, and maybe a second Worker without a Concurrent Task Limit to process any CPU jobs you might need to run on the same machine, would be the best approach.


Thanks for the insight, have a great week.
I’ll try out the solution you suggested.

Cheers

Now I remember why this was an issue for me; sorry, I had to pull most of this from memory because I couldn’t access the server from home.

So basically, when submitting a job to render on all GPUs with “Override GPU affinity” inactive, there is no way to generate a device list of the worker’s GPUs, so there is no option to set the REDSHIFT_GPUDEVICES variable to include all of them. Leaving it blank causes the same preferences.xml problem as not passing (or passing a blank) -gpu argument: Redshift defaults back to using the preferences.xml for GPU assignment. This is very unlucky, because if an artist or any other process has altered it for a session to only have one SelectedComputeDevice, a Deadline job submitted with GPUs Per Task: 0 will still render only on the single GPU that is in the preferences.xml.

That was my initial thought when wondering why this list is empty.
Basically, having a list with all available worker GPUs would be great.

That was also the reason why I wanted to turn on “Override GPU affinity”: it would actually give me a list of all ticked GPUs, but as you explained, it is meant to do something different.

I guess another option would be to query the worker’s concurrent task limit; I have to take a look at the scripting reference to see if there is a method for it in DeadlinePlugin.py.

I might try to alter the condition you mentioned.

PS:
Still, I wanted to mention how I could imagine making GPU rendering more user-friendly and dynamic:

as an example:

workerA: 2 GPUs (GPUs to use, worker config: 0,1)
workerB: 7 GPUs (GPUs to use, worker config: 0,1,2,3,4,5,6)
workerC: 8 GPUs (GPUs to use, worker config: 1,3,4,5,7)

on job submission:
Max GPUs per task: 2

submit:
workerA gets 1 task:
task 0 (device: 0, device: 1)

workerB gets 4 tasks:
task 0 (device: 0, device: 1)
task 1 (device: 2, device: 3)
task 2 (device: 4, device: 5)
task 3 (device: 6)

workerC gets 3 tasks:
task 0 (device: 1, device: 3)
task 1 (device: 4, device: 5)
task 2 (device: 7)

How do I deal with my Limits?
It seems that my Houdini license Limit caps out and interferes with the multiple-workers approach. Is there a way to tell Deadline that my workers (which are just instances on a single workstation) share the same license (not the Deadline one, but the Houdini Limit)?

Another issue: if one user submits a job to a pool that uses the workers with all GPUs enabled, and another user submits to the 2- or 1-GPU worker instances, how do I prevent Deadline from starting tasks on both the “all GPU job” and the “single GPU job”? Is there a way to tell Deadline that the workers are dependent on each other (only run WorkerA-AllGPUs when WorkerA-GPU1 and WorkerA-GPU0 are idle)?
Because if they are started at the same time, Redshift will crash in 99% of the cases, or will get extremely slow, due to double allocation of devices.

The problem is that the number of Tasks that a Worker dequeues depends on both the Job’s Concurrent Tasks value and the Worker’s Concurrent Tasks Limit Override. So in addition to checking the GPU Affinity checkboxes, you need to set the values 1, 4, 3 for the Limits in the three Workers, then submit the Job with at least Concurrent Tasks 4 (which will then be clamped to 1 and 3 for workerA and workerC due to the Limit). That is the only way to control the number of threads that will be created by the Worker.

Then you would have to modify the function we discussed already to read the list of checked devices in the GPU Affinity Override list, and assign 2 to each Task based on the Thread ID.
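
For the example Workers above, that per-thread assignment would boil down to something like this (a plain Python illustration of the idea, not Deadline code):

    # workerC from the example: 8 GPUs installed, 5 of them ticked in the override.
    checked = [1, 3, 4, 5, 7]       # GPU Affinity Override list of workerC
    gpus_per_task = 2
    for thread in range(3):         # Concurrent Task Limit Override = 3 on this Worker
        chunk = checked[thread * gpus_per_task : (thread + 1) * gpus_per_task]
        print("task", thread, "-> devices", chunk)
    # task 0 -> devices [1, 3]
    # task 1 -> devices [4, 5]
    # task 2 -> devices [7]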

There is no way to prepopulate the list of devices for all Workers, because the list is different for every Worker, and the list is a Job property, not a Worker property.

I have not looked at how Redshift handles the list, but V-Ray GPU does RegEx matching, so providing the whole name of the device, or a subset of it including just the index, e.g. “index0, index1”, worked. Specifying “0,1” turned out to be a bad idea, because “0” could match any part of the string, including the GPU model (e.g. “2080Ti”) :)

But using the device list would force the first two GPUs to be used on every Worker, so the local Worker affinity overrides are a much better place to start.


Hi, where do I create the .py file for Deadline to recognize it, using the code you have provided?

The code I posted was taken from the Houdini.py script shipping with Deadline - you can find it in the Repository/plugins/Houdini/ folder.

I just posted it so I can explain step by step what the existing logic is.
You can always take the whole Houdini folder, copy it over to the Deadline/custom/plugins/ folder, and then make changes and adjustments to the GPU code without modifying the original script. Anything under /custom/ will override the defaults and you can have your own behavior if that is necessary.
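
For example, the one-time copy could be done in a few lines of Python (the repository path below is just a placeholder - adjust it to wherever your Repository actually lives):

    # One-time copy of the stock Houdini plugin into the custom override location.
    import os, shutil

    repo = r"\\server\DeadlineRepository10"   # assumed Repository root - adjust to yours
    shutil.copytree( os.path.join( repo, "plugins", "Houdini" ),
                     os.path.join( repo, "custom", "plugins", "Houdini" ) )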


Hi, this all seems interesting. We have a similar issue where GPU affinity on local workers worked well when using the MayaBatch plugin but doesn’t work when using a rez environment. Can someone enlighten me on how to go about it? We have 2 to 3 workers on a single machine with 8 GPUs, and they are set to use 4 GPUs on one worker, 3 GPUs on the second, and 1 GPU on the third. We use the 4-GPU and 3-GPU workers for Maya with Redshift, and the single-GPU instance for cache, sim, and Nuke renders. But every time a render lands on the 4-GPU or 3-GPU instances, they use all of the GPUs available (in spite of the local worker GPU affinity).

Unless your rez environment is overriding the GPU affinity set on the Worker, there shouldn’t be any extra setup required.

There will be a line like:

This Worker is overriding its GPU affinity, so the following GPUs will be used by Octane/Redshift/IRay:

Where the current GPU affinity gets printed out. Are you seeing that in the logs where all GPUs are being used and shouldn’t be?

Hi @Justin_B
Thanks for the reply. No, we do not have any message like that; it just uses all GPUs. We are using rez with our own pipeline commands. Is there a variable we need to add to use the GPUs selected in the worker settings? When we use the default MayaBatch plugin it works fine, but it doesn’t work in the rez environment. I am afraid we are not setting it properly.
These are the variables that I see in the submission parameters for the default MayaBatch:
GPUsPerTask=1
GPUsSelectDevices=
Maybe this is what we need to somehow set up?

Log from one of our multi-GPU nodes:
2023-09-13 18:05:17: 0: STDOUT: Warning: file: C:/Program Files/Autodesk/Maya2022/scripts/others/makeCameraRenderable.mel line 45: Found camera ep005_sq02_sh049:shotCamShape.
2023-09-13 18:05:21: 0: STDOUT: [Redshift] redshift_LICENSE=
2023-09-13 18:05:32: 0: STDOUT: [Redshift] Redshift for Maya 2022
2023-09-13 18:05:32: 0: STDOUT: [Redshift] Version 3.5.12, Dec 7 2022
2023-09-13 18:05:34: 0: STDOUT: [Redshift] Rendering layer ‘rs_SHD000’, frame 1084 (1/1)
2023-09-13 18:06:04: 0: STDOUT: [Redshift] Scene translation time: 31.26s
2023-09-13 18:06:04: 0: STDOUT: [Redshift] License acquired
2023-09-13 18:06:04: 0: STDOUT: [Redshift] License for net.maxon.license.app.redshift~commercial valid until May 23 2024
2023-09-13 18:06:04: 0: STDOUT: [Redshift] =================================================================================================
2023-09-13 18:06:04: 0: STDOUT: [Redshift] Rendering frame 1084…
2023-09-13 18:06:04: 0: STDOUT: [Redshift] AMM enabled
2023-09-13 18:06:04: 0: STDOUT: [Redshift] =================================================================================================
2023-09-13 18:06:04: 0: STDOUT: [Redshift]
2023-09-13 18:06:04: 0: STDOUT: [Redshift] Device 0 (NVIDIA GeForce RTX 2070 SUPER) uses Optix for ray tracing
2023-09-13 18:06:04: 0: STDOUT: [Redshift] Device 1 (NVIDIA GeForce RTX 2070 SUPER) uses Optix for ray tracing
2023-09-13 18:06:04: 0: STDOUT: [Redshift] Device 2 (NVIDIA GeForce RTX 2070 SUPER) uses Optix for ray tracing
2023-09-13 18:06:04: 0: STDOUT: [Redshift] Device 3 (NVIDIA GeForce RTX 2070 SUPER) uses Optix for ray tracing
2023-09-13 18:06:04: 0: STDOUT: [Redshift] Device 4 (NVIDIA GeForce RTX 2070 SUPER) uses Optix for ray tracing
2023-09-13 18:06:04: 0: STDOUT: [Redshift] Device 5 (NVIDIA GeForce RTX 2070 SUPER) uses Optix for ray tracing
2023-09-13 18:06:04: 0: STDOUT: [Redshift] Device 6 (NVIDIA GeForce RTX 2070 SUPER) uses Optix for ray tracing
2023-09-13 18:06:04: 0: STDOUT: [Redshift] Device 7 (NVIDIA GeForce RTX 2070 SUPER) uses Optix for ray tracing
2023-09-13 18:08:31: 0: STDOUT: [Redshift]
2023-09-13 18:08:31: 0: STDOUT: [Redshift] Rendering time: 2m:26s (8 GPU(s) used)

Affinity is set in the worker settings.
