Hello everyone,
I just wanted to ask whether this is a known issue, a bug, or a limitation. Or maybe I am doing something wrong?
Submitting a Cinema 4D Batch render job with Redshift always uses the GPUs that are selected in the Redshift options inside Cinema 4D. Neither the GPUs per Task setting nor the GPU Affinity seems to change anything about this.
I am using Deadline 10.0.7.0, Cinema 4D R18, Redshift 2.6.14, and the Batch plugin.
What I do is set GPUs per Task from 0 to 1. I have two GPUs. And I set Concurrent Tasks from 1 to 2. This should render two tasks, one on each GPU. Instead it renders two tasks on both GPUs simultaneously, clogging the VRAM and ultimately crashing the renderer on heavy scenes.
Out of curiosity I tried setting the GPU Affinity in the Slave properties to 1, but the jobs still render with 2 GPUs. However, when I deselect one GPU in the Redshift options inside Cinema 4D, it only uses one GPU, no matter what GPUs per Task or Affinity I set.
It seems like Deadline does not override the Redshift settings inside Cinema 4D at all?
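For reference, the behavior I would expect from GPUs per Task plus Concurrent Tasks can be sketched like this (a minimal illustration, assuming a slicing scheme similar to what Deadline render plugins commonly use; the function and parameter names here are made up, not the actual plugin API):

```python
# Hypothetical sketch (not the actual Deadline plugin code) of how a render
# plugin could map the "GPUs per Task" setting and the concurrent-task
# (thread) number to an explicit device list. All names are assumptions.
def gpus_for_task(thread_number, gpus_per_task, affinity=None, total_gpus=2):
    """Return the GPU ordinals one concurrent task should render on."""
    # Start from the Slave's GPU affinity if set, otherwise all devices.
    pool = sorted(affinity) if affinity else list(range(total_gpus))
    if gpus_per_task <= 0:
        # No per-task override: the task may use the whole pool.
        return pool
    # Slice the pool so each concurrent task gets its own disjoint GPUs.
    start = thread_number * gpus_per_task
    return pool[start:start + gpus_per_task]

# With GPUs per Task = 1 and Concurrent Tasks = 2 on a 2-GPU node,
# task 0 should get GPU 0 and task 1 should get GPU 1.
print(gpus_for_task(0, 1))  # [0]
print(gpus_for_task(1, 1))  # [1]
```

With this logic, two concurrent tasks would never share a device, which is exactly what I am not seeing.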
Would you mind grabbing the render logs from the Cinema4D job and the Cinema4DBatch job? The code for Redshift GPU affinity is very similar across the two plugins, but there might be a minor difference that could be causing some issues.
In the meantime, I was wondering if there is a way to block a Slave whenever another Slave in a specific group is active.
Let's say this scenario: a 4-GPU render node running two Slaves, each with a 2-GPU affinity:
SlaveA: GPUs 0 and 1
SlaveB: GPUs 2 and 3
Now, when a standard Redshift job is running (where all this works), you want both Slaves working, each using its assigned GPUs.
But if an Octane job starts on SlaveA (GPUs 0 and 1), assigned to an "octane" group for example, then you want SlaveB (GPUs 2 and 3) stopped, so it can't take any other Redshift tasks until the Octane task is done.
That way Octane won't use all the GPUs on that node while the other Slave is trying to take 2 GPUs at the same time.
Makes sense?
I mean, having GPU affinity working properly would be the best solution of course, but as a temporary solution…
Hi,
I attached the render logs, Batch and non-Batch, both with Concurrent Tasks = 2 and GPUs per Task = 1.
Does it have anything to do with this:
2018-07-21 09:55:21: 0: STDOUT: Redshift Info: Using explicit GPU Devices:
2018-07-21 09:55:21: 0: STDOUT: Redshift Info: GPU Ordinal:0 BusID:10 Device:'GeForce GTX 1080 Ti'
I meant to reply earlier but seem to have lost my message. I've made a fix for the plugin based on the Deadline version you've indicated (10.0.7.0). It now matches the same format on the command line as the Cinema4D plugin. If you've got some time, could you give this a test run?
Back up your current DeadlineRepository\plugins\Cinema4DBatch\Cinema4DBatch.py before replacing it.
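For anyone curious, the gist of the change is that the GPUs Deadline chooses are now passed explicitly on the command line. A minimal sketch of the idea (the "-redshift-gpu" flag name is an assumption on my part here, based on "matching the same format as the Cinema4D plugin"; check the plugin source for the exact argument):

```python
# Rough sketch of the kind of change involved (not the actual patch): the
# selected GPU ordinals are rendered into explicit command-line arguments
# appended to the Cinema 4D render invocation. The "-redshift-gpu" flag
# name is an assumption; verify it against your plugin source.
def build_gpu_args(selected_gpus):
    """Turn a list of GPU ordinals into render-argument text."""
    return " ".join("-redshift-gpu %d" % gpu for gpu in selected_gpus)

print(build_gpu_args([0]))     # -redshift-gpu 0
print(build_gpu_args([0, 1]))  # -redshift-gpu 0 -redshift-gpu 1
```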
Hi Mepp,
Thank you, this seems to be working! I rendered with GPUs per Task = 1 and Concurrent Tasks = 1 and checked with GPU-Z, and only one GPU is rendering. The task reports, though, have now become very short: no more info about buckets rendering and the like.
I am on Deadline 10.0.17.7 now; maybe this is why the plugin isn't reporting stdout from Redshift anymore?
Date: 08/02/2018 10:09:49
Frames: 2
Job Submit Date: 08/02/2018 09:53:09
Job User: alex
Average RAM Usage: 1100620288 (4%)
Peak RAM Usage: 1577984000 (5%)
Average CPU Usage: 8%
Peak CPU Usage: 18%
Used CPU Clocks (x10^6 cycles): 300897
Total CPU Clocks (x10^6 cycles): 3761210
=======================================================
Slave Information
Slave Name: Rendernode1
Version: v10.0.17.7 Release (b00c030fe)
Operating System: Windows 10 Pro
Running As Service: No
Machine User: Rendernode01
IP Address: 2a02:908:171:9920:4861:f175:cd7a:b693
MAC Address: 30:9C:23:5F:2D:4E
CPU Architecture: x64
CPUs: 16
CPU Usage: 33%
Memory Usage: 9.6 GB / 31.9 GB (30%)
Free Disk Space: 302.249 GB (38.069 GB on C:, 264.180 GB on Y:)
Video Card: NVIDIA GeForce GTX 1080 Ti
It didn't work for me; it's still running on all GPUs even though I selected 1 Concurrent Task and 1 GPU and turned off "use all GPUs" in the Octane settings in C4D.
So it doesn't work with Concurrent Tasks and the GPUs-per-Task setting at submission, but is it maybe intended to affect only the GPU affinity in the Slaves?
I haven't tested that yet, but if that works, that would be progress too.
Unfortunately, to get the logging from Redshift we had to do some hackery (the log settings have to be passed on the command line). If you take a look at the plugin settings for Cinema4D Batch, we've added an option to specify the different allowable verbosities for Redshift logging. It might take some fiddling to find the verbosity that best suits you.
It shouldn't be dependent on GPU overrides; it takes all of those factors into account when deciding which GPUs to use. I do recall you had this issue with the Octane renderer and GPU affinity. Are you still using Octane, or are you testing with Redshift? This matters because each renderer has to choose to support GPU affinity (unlike CPU affinity), and I'm currently still waiting on the Octane for C4D developer to provide me the fix for GPU affinity there.
Aaaargh, sorry, my bad. I mistook this thread for the other one.
I thought this was the Octane fix. But I hadn't noticed that I have this issue with Redshift in C4D at all… that gave me something to think about and test.