
Redshift is rendering slowly and seems to use more CPU power than GPU

I have Deadline 10 set up on 4 PCs with Redshift on each. One computer is converting a Maya file to Redshift, and then all the PCs are rendering from that. The render times seem very slow compared to what one PC could render using just Maya and Redshift. Another odd thing is that my CPUs are hitting about 85-90% usage while the GPUs are around 15-20%. Shouldn't that be the other way around with Redshift? Maybe there is a setting I am missing. If anyone has any insight into this or knows a solution, it would be appreciated.

Hey Chris,

If you set the render log verbosity to 2 in the render globals, you should be able to see what is occurring during the render.
For example, Redshift allows for out-of-core memory: geometry and textures that do not fit in the GPU VRAM can be cycled between system memory and the GPU.
Wicked amounts of geometry can also cycle between the GPU and system RAM if it can't all fit per bucket.
There are other things, though, like the conversion of textures to the native rstexbin format, that require CPU cycles and are unfortunately single-threaded exports.

Turning on the log verbosity will allow you to see where the time is going in your logs.
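
If it helps, this is roughly what that looks like from Maya's script editor. A minimal sketch: the "redshiftOptions" node is Redshift's render globals node in Maya, but the "logLevel" attribute name is an assumption; if it differs in your build, just use the verbosity dropdown in the Redshift render globals UI instead.

```python
# Minimal sketch: raise Redshift's log verbosity from Maya's script editor.
# "logLevel" is an assumed attribute name -- verify it against your Redshift
# version, or set the same option in the Redshift render globals UI.
import maya.cmds as cmds

cmds.setAttr("redshiftOptions.logLevel", 2)  # 2 = detailed render logging
```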

Happy to debug posted logs once you obfuscate the data.

Cheers
Kym


Thanks for replying, kwatts. I am new to using Redshift and how exactly it renders. We have used Octane in the past, and I was used to the GPUs taking the workload of the render and seeing the GPU usage very high. We tried rendering a final output with progressive turned on and it was taking 40+ minutes to render a frame. Then we tried bucket mode and the time went down to 14 minutes. I guess I was thinking that bucket just meant CPU rendering like in C4D, but it looks like it does both. Is there a best practice we should follow when rendering with Redshift?

Yeah, we do not use the progressive render for final frames, just quick tests etc.
When I talk about buckets, I mean how the frame is carved up into manageable chunks for rendering, in this case for the CUDA cores to process.
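
For what it's worth, if you want to be sure progressive mode is off before a batch render kicks off, something like the sketch below works from Maya. The attribute name is an assumption; the same toggle is exposed in the Redshift render globals.

```python
# Minimal sketch: force bucket rendering for final frames by disabling
# progressive mode on Redshift's render globals node in Maya.
# "progressiveRenderingEnabled" is an assumed attribute name -- verify it
# against your Redshift version, or just untick the option in the UI.
import maya.cmds as cmds

cmds.setAttr("redshiftOptions.progressiveRenderingEnabled", 0)
```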

If you have more than one GPU in a slave, I would suggest using concurrency, but that said, I'm not sure if Cinema 4D has the same kind of scripted/headless mode that Maya does (we are a Maya/Houdini shop).
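
A rough sketch of what a concurrent submission could look like via deadlinecommand. The job info keys (ConcurrentTasks etc.) are standard Deadline job file keys; the plugin info keys, scene path, and Maya version are assumptions, so check them against your MayaBatch plugin and farm layout.

```python
# Minimal sketch: submit a Maya/Redshift job to Deadline with two concurrent
# tasks, so a worker with two GPUs can render two tasks at once.
import os
import subprocess
import tempfile

def write_info_file(entries):
    """Write a Deadline key=value info file and return its path."""
    handle, path = tempfile.mkstemp(suffix=".txt")
    with os.fdopen(handle, "w") as f:
        for key, value in entries.items():
            f.write("{}={}\n".format(key, value))
    return path

job_info = write_info_file({
    "Plugin": "MayaBatch",
    "Name": "shot010_redshift",
    "Frames": "1-100",
    "ChunkSize": "1",
    "ConcurrentTasks": "2",   # one task per GPU on a dual-GPU worker
})
plugin_info = write_info_file({
    "SceneFile": r"\\server\show\scenes\shot010.ma",  # placeholder path
    "Version": "2018",                                # placeholder Maya version
    "Renderer": "redshift",                           # assumed key; verify in your submitter
})

# deadlinecommand ships with the Deadline client and must be on PATH.
subprocess.run(["deadlinecommand", job_info, plugin_info], check=True)
```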

If you have final textures, or a process that allows you to get an asset and its textures into the production pipeline, then you can pre-convert all the textures to rstexbins so your machines are not constantly reconverting them. They need to live in the same folder as the original texture file. Check the docs for how the command line converter works.
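
Here is a rough sketch of that pre-conversion pass. The converter's install path is an assumption, as it normally lives in the Redshift install's bin folder; verify the binary name and any flags against the Redshift docs for your platform.

```python
# Minimal sketch: pre-convert a texture folder to .rstexbin with Redshift's
# command line converter, so render nodes never pay the conversion cost.
import os
import subprocess

TEXTURE_PROCESSOR = r"C:\ProgramData\Redshift\bin\redshiftTextureProcessor.exe"  # assumed path
TEXTURE_EXTS = {".png", ".jpg", ".jpeg", ".tif", ".tiff", ".exr", ".tga"}

def preconvert(texture_root):
    """Run the converter on every texture file under texture_root."""
    for dirpath, _dirs, filenames in os.walk(texture_root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() not in TEXTURE_EXTS:
                continue
            # By default the converter writes the .rstexbin next to the
            # source file, which is where Redshift looks at render time.
            subprocess.run([TEXTURE_PROCESSOR, os.path.join(dirpath, name)],
                           check=True)

preconvert(r"\\server\show\textures")  # placeholder network path
```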

We currently do not precache all our textures; we use the Redshift cache dir so we can reuse render-time conversions. The env variable is “REDSHIFT_CACHE_DIR”.

Our farm nodes have massive hard drives, so we set them to have local cache dirs. If we don't precache, the first frame takes the hit of the texture conversion, but every other frame that runs on that node uses the already-converted textures.

We cheat a little; it's great, but I don't recommend it. For our artists we would use a shared network folder. That way you don't have to precache the textures ahead of time, but everyone shares the rstexbin files cached at render time with everyone else. :wink:
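
For reference, either variant is just that one environment variable, set in whatever environment the render process inherits (e.g., the Deadline Worker's environment). Both paths below are placeholders:

```python
# Minimal sketch: point Redshift's conversion cache at a big local drive on
# each node, or at a shared network folder for the artist variant.
import os

os.environ["REDSHIFT_CACHE_DIR"] = r"D:\redshift_cache"          # local-disk variant
# os.environ["REDSHIFT_CACHE_DIR"] = r"\\server\redshift_cache"  # shared-folder variant
```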

Hope this helps a bit.

Kym

That's awesome info, thanks. For this project we are using Maya, so this should help a lot. Do you also know a good deal about Houdini? I started using it a few months ago for a project and would love some insight on that as well.
