Worker instances, GPU affinity and error code 134

I had this setup working fine until recently: 2 workers per machine, 4 GPUs per machine, with 2 GPUs assigned to each worker.
But recently, whenever I enable all workers to render, I get this 134 error:

Error: Renderer returned non-zero error code, 134. Check the log for more information.

If I leave a single worker instance running, it renders fine; a single worker with all 4 GPUs enabled also renders fine.
Any idea what could be the issue?
I was checking the worker logs when rendering with both workers, and it seems Redshift is properly reading and using the assigned GPUs: 0 and 1 on one worker instance, and 2 and 3 on the other.
I will try to catch the Redshift log as well, but figured maybe someone else has run into this recently too?
Deadline version 10.1.10.6 and Redshift 3.0.31, all on Linux machines. Rendering with both Maya and Houdini produces the same error.

Edit:
Found the log:
Redshift cannot operate with less than 128MB of free VRAM
If you’re using multiple GPUs, please ensure SLI is disabled in the NVidia control panel (use the option ‘Disable multi-GPU mode’)

Seems like some Redshift/NVIDIA issue, but it was working fine until recently… and there are no SLI bridges at all on any of the machines…
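For anyone hitting the same free-VRAM message: a quick sanity check is to query free memory per GPU with nvidia-smi while both workers are rendering, and flag anything under Redshift's 128MB minimum. Rough sketch below — the `--query-gpu` flags are real nvidia-smi options, but the numbers fed through the here-doc are made up for illustration; on a real machine, replace the here-doc with the actual nvidia-smi call shown in the comment:

```shell
# Flag any GPU whose free VRAM is below Redshift's stated 128 MiB minimum.
# On a live machine, pipe real data instead of the here-doc:
#   nvidia-smi --query-gpu=index,memory.free --format=csv,noheader,nounits | awk -F', ' ...
awk -F', ' '$2 < 128 { print "GPU " $1 ": only " $2 " MiB free - below the Redshift minimum" }' <<'EOF'
0, 10050
1, 96
2, 10102
3, 10110
EOF
```

With the sample numbers above, only GPU 1 gets flagged. If a GPU shows plenty of free VRAM yet Redshift still reports the error, that points at an affinity/initialization problem rather than actual memory pressure.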

Ok, found some answers: it seems GPU affinity is broken in recent Redshift versions. It should be fixed in version 3.0.32, so I'll wait and see if that's the case.