The Nuke plugin is going to need to get smarter about GPUs. If you submit a job from a workstation where it renders nice and fast on the GPU, the farm will still try to use GPU rendering, but it'll poke along on the render nodes' integrated graphics.
I don’t quite understand what you mean.
Is the problem that it renders slowly on the farm? Is the solution to disable GPU rendering for that plugin? Could that be something handled by a sanity check?
Cheers,
Ryan
I think whenever Nuke v7.0+ encounters a node that can be GPU-accelerated and a compatible graphics card isn't detected, Nuke should fall back to using the CPU, unless the "--gpu" flag is being forced, which we recently exposed as an option. (This is my understanding of the Nuke user manual's description of this option.)
I was talking to Nathan yesterday and he said there is a new environment variable:
FN_NUKE_DISABLE_CUDA
The problem is that the rendernodes do have CUDA-capable GPUs. But they're so terrible that the CPU is faster. I still want the GPU to be used on workstations, though. Can we set up machine-group-specific variables? For instance, if a computer is a member of the Rendernodes group, could we set custom environment variables for it?
You could add a PreJobLoad.py script to your Nuke plugin to do this:
thinkboxsoftware.com/deadlin … -_Optional
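To make that concrete, here's a minimal sketch of the decision logic such a PreJobLoad.py could use. The group name "Rendernodes" comes from the question above; everything else (the function names, and the idea that the script can read the machine's groups and hand back environment variables for the render process) is an assumption for illustration, not the actual Deadline API, so the core check is written as plain Python:

```python
# Sketch of machine-group-aware GPU gating for a Nuke PreJobLoad.py.
# Hypothetical helpers: a real script would get the machine's groups and
# set process environment variables through Deadline's scripting API.

def should_disable_cuda(machine_groups, cpu_only_groups=("Rendernodes",)):
    """Return True if this machine is in a group whose GPUs are slower
    than their CPUs, so Nuke should be told to skip CUDA."""
    return any(group in cpu_only_groups for group in machine_groups)

def apply_gpu_policy(machine_groups, environment):
    """Populate the render environment dict. `environment` stands in for
    whatever mechanism the plugin uses to set per-process variables."""
    if should_disable_cuda(machine_groups):
        # FN_NUKE_DISABLE_CUDA is the variable Nathan mentioned above.
        environment["FN_NUKE_DISABLE_CUDA"] = "1"
    return environment

# A render node gets the variable; a workstation is left alone.
print(apply_gpu_policy(["Rendernodes"], {}))
print(apply_gpu_policy(["Workstations"], {}))
```

This keeps the GPU path available on workstations while forcing render nodes onto the CPU, without touching the job itself.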