Ok, here is my WIP implementation of V-Ray GPU Affinity.
I have done only rudimentary testing, so please give it a try and let me know if anything does not behave as expected.
It uses the same basic code and UI as the Redshift implementation, with just a bit of V-Ray specific code to set the VRAY_GPU_PLATFORMS environment variable to the correct GPU indices.
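In case it helps with review, here is a simplified sketch of the idea. This is not the exact code from the ZIP - in particular, the "indexN" value format for VRAY_GPU_PLATFORMS and the helper names are my shorthand here, and the real plugin applies the variable to the render process environment rather than its own:

```python
# Simplified sketch of the per-task GPU selection, mirroring the Redshift-style logic.
# ASSUMPTION: the "indexN" token format for VRAY_GPU_PLATFORMS and these helper names
# are illustrative only - check MayaBatch.py in the ZIP for the actual code.
import os

def get_gpu_overrides(thread_index, gpu_count, gpus_per_task, selected_gpus=""):
    """Return the GPU indices this concurrent task (thread_index) should use."""
    if selected_gpus:
        # Explicit "Select GPU Devices" list, e.g. "0,2" - used as-is by every task.
        return [int(g) for g in selected_gpus.split(",")]
    if gpus_per_task > 0:
        first = thread_index * gpus_per_task
        gpus = list(range(first, first + gpus_per_task))
        if gpus[-1] < gpu_count:
            return gpus
        # Not enough physical GPUs left for this task - fall back to all of them.
    return list(range(gpu_count))

def set_vray_gpu_platforms(gpus):
    # ASSUMPTION about the value format: one "indexN" token per selected device.
    os.environ["VRAY_GPU_PLATFORMS"] = " ".join("index%d" % g for g in gpus)
```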
VRayGPUAffinity_WIP_20201022.zip (117.1 KB)
The ZIP file contains 3 updated script files you need to deploy as follows:
- MayaBatch.py goes into the Repository/Plugins/MayaBatch/ folder - this is the integration plugin.
- MayaSubmission.py goes into the Repository/Scripts/Submission/ folder - this is the Monitor Submitter.
- SubmitMayaToDeadline.mel goes into the Repository/Submission/Maya/Main/ folder - this is the integrated Maya submitter.
Please BACK UP the original versions of the files before replacing them!
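If you want to script the backup and copy step, something along these lines works. The repository root and the extracted ZIP folder are placeholders for your own setup:

```python
# Back up the originals (as .bak) and copy the WIP files into the repository.
# ASSUMPTION: DEADLINE_REPO and WIP_DIR are placeholders - point them at your paths.
import os
import shutil

DEADLINE_REPO = r"\\server\DeadlineRepository10"
WIP_DIR = r"C:\temp\VRayGPUAffinity_WIP_20201022"

targets = {
    "MayaBatch.py": os.path.join(DEADLINE_REPO, "Plugins", "MayaBatch"),
    "MayaSubmission.py": os.path.join(DEADLINE_REPO, "Scripts", "Submission"),
    "SubmitMayaToDeadline.mel": os.path.join(DEADLINE_REPO, "Submission", "Maya", "Main"),
}

for name, folder in targets.items():
    dst = os.path.join(folder, name)
    if os.path.exists(dst):
        shutil.copy2(dst, dst + ".bak")  # keep the original alongside the new file
    shutil.copy2(os.path.join(WIP_DIR, name), dst)
```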
To test, I submitted a V-Ray GPU scene from Maya using the integrated submitter, set to frames 1 to 8, 4 Concurrent Tasks, 1 GPU Per Task. I ran this on AWS on a g4dn.12xlarge instance, which has 4 x T4 GPUs. The result was 4 Tasks rendering in parallel, each using only 1 GPU.
I repeated the test with different combinations of Concurrent Tasks and GPUs per Task using both the Monitor Submitter and the Integrated Submitter.
I also submitted a job with Selected GPUs, entering the indices by hand, e.g. “0,2” to render on the first and third GPUs with 1 Concurrent Task. As expected, the Task rendered on only the two specified GPUs.
Unfortunately, V-Ray re-numbers the selected devices consecutively in the log, so "0,2" is reported as Device 0 and Device 1. The correct physical GPUs end up doing the rendering, though, so it appears to be working as expected.
Note that if you render more Concurrent Tasks than there are GPUs, only as many tasks as the GPUs can cover will respect the GPUs Per Task value; the excess tasks fall back to rendering on all GPUs. For example, with 6 Concurrent Tasks and 1 GPU Per Task on a 4-GPU machine, Tasks 0,1,2,3 render on GPUs 0,1,2,3, while Tasks 4 and 5 both render on all 4 GPUs. This is As Designed - it is the same behaviour as the Redshift implementation the code is based on.
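Using the hypothetical helper from the sketch above, that fallback looks like this:

```python
# 6 Concurrent Tasks, 1 GPU Per Task, on a 4-GPU machine (e.g. g4dn.12xlarge)
for task in range(6):
    print(task, get_gpu_overrides(task, gpu_count=4, gpus_per_task=1))
# 0 [0]
# 1 [1]
# 2 [2]
# 3 [3]
# 4 [0, 1, 2, 3]
# 5 [0, 1, 2, 3]
```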