AWS Thinkbox Discussion Forums

Houdini 18.5 keep scene open option

At the moment it seems that, unlike Maya, Houdini has to load the scene from scratch for every task in a job. That wastes a huge amount of time and has a big impact on overall render times.
Is there an option in Deadline to keep the Houdini scene open, so that after loading it once it stays resident and just keeps rendering?
From what I saw, Royal Render seems to support this, but after investing so much time and money into Deadline and getting used to it, I would love to stay in the Deadline universe. Still, this has such a huge impact that if it isn't possible it will get me thinking…

On a second issue: I recently rented an 8-GPU server, connected it to my Deadline setup, and tried rendering, but all renders kept failing because GPU affinity with Houdini and Redshift was a complete mess. I will test the latest version, 10.1.14.5, to see if it is fixed, but in the version before it was a complete mess.
Any help here, please??


Does really no one else see this as a huge issue?
I did some testing with a simple scene in Royal Render: what took 9 minutes in Deadline took 3 in Royal Render, purely because of the scene-loading issue…

Hi
That sounds strange. Is the task size the same in Royal Render and Deadline?

-b

As to your other question about Houdini/Redshift/Deadline GPU affinity:

The way we have set this up is for all RS jobs (regardless of DCC) to use 2 GPUs per task.

  • Worker: Most of our workers have 4 GPUs, so we set the “Concurrent Task Limit Override” to 2 in the “Worker properties”. We also have workstations with 2 GPUs. On those workers we set the “Concurrent Task Limit Override” to 1.

  • Deadline Rop: In the submitter in Houdini we set “Concurrent Tasks” to 2 and “GPUs Per Task” to 2. But we enable “Limit Tasks to Workers Task Limit” to make sure each worker uses its own “Concurrent Task Limit Override” value.

So when the tasks are picked up by the workers, DL knows that all tasks are going to use 2 GPUs (because we have that set up on the DL rop) but checks with the worker whether it can render 1 or 2 concurrent tasks.

All of our 4-GPU slaves have 128 GB of RAM and all 2-GPU workstations have 64 GB, and this balance between RAM and concurrent tasks (each 2-GPU task gets 64 GB) has worked very well for us. If you have a slave with 8 GPUs and set it up to do 4 concurrent tasks with 2 GPUs per task, I suspect you will need a lot of RAM to keep RS happy.
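The RAM balance described above comes down to simple division; here is a small sketch using the figures from this post (the 8-GPU row is the hypothetical case, not a measured setup):

```python
# RAM available per concurrent Redshift task when a machine's memory
# is split evenly across its concurrent tasks.
def ram_per_task(total_ram_gb, concurrent_tasks):
    """Return how many GB each concurrent task gets."""
    return total_ram_gb / concurrent_tasks

# 4-GPU slave, 128 GB, 2 concurrent tasks (2 GPUs each) -> 64 GB per task
print(ram_per_task(128, 2))  # 64.0
# 2-GPU workstation, 64 GB, 1 concurrent task -> 64 GB per task
print(ram_per_task(64, 1))   # 64.0
# Hypothetical 8-GPU server with 4 concurrent tasks would need 256 GB
# to preserve the same 64 GB-per-task headroom.
print(ram_per_task(256, 4))  # 64.0
```

The point being that scaling GPU count without scaling RAM shrinks the per-task memory budget, which is what tends to make Redshift unhappy.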

You haven’t described what kind of setup you have, but maybe try 2 concurrent tasks with 4 GPUs per task and see how that goes.

GPU affinity works fine for us with this setup using DL 10.1.13.1 / H 18.5.462 / RS 3.0.39

-b


Yes, 1 frame per task on both; I specifically wanted to test scene loading. What seems to happen is that Royal Render supports keeping the Houdini scene loaded, while in Deadline Houdini has to load the scene again for every task. Maya doesn't behave like that: it loads once, the first task takes longer, and the rest simply render. With Houdini, each task reloads the scene from scratch. I recently had projects that were a bit more complex, where loading took around 10 minutes. Imagine a 10-minute load time for each of tens or hundreds of tasks: most of the time was spent on loading. So I'm wondering if there is any option for Houdini as well to load the scene on the first task and, for every subsequent task, keep the scene loaded and just keep rendering frames, so time is spent only on rendering and not on loading the scene over and over again.
Sending bigger task sizes helps, but having 10 frames per task when a frame takes 20 minutes or so brings another set of problems. Having 1 frame per task while keeping the scene loaded would be perfect. It works fine with Maya batch rendering, but not with Houdini. Rendering with Redshift.

On GPU affinity, most of the time it works fine. Where I ran into issues was with this rented server.
I set up 4 Workers on that server. The workers, let's call them A, B, C, and D, are each set up to use 2 different GPUs:
Worker A: 0, 1
Worker B: 2, 3
Worker C: 4, 5
Worker D: 6, 7

But when rendering, the Workers were using GPUs that were not assigned to them.
For example, if I pause all the other workers and leave only Worker A running, I see it using 3 or even 4 GPUs instead of the 2 assigned to it. So multiple workers running on a single machine did not stick to the GPUs assigned to them in the worker settings in the Monitor.
Does that make sense?
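The intended worker-to-GPU mapping above is just an even partition of the 8 device indices; a minimal sketch of that assignment (the helper name is mine, not a Deadline API):

```python
def assign_gpus(workers, gpus_per_worker):
    """Partition GPU device indices 0..N-1 evenly across worker instances."""
    return {
        name: list(range(i * gpus_per_worker, (i + 1) * gpus_per_worker))
        for i, name in enumerate(workers)
    }

affinity = assign_gpus(["A", "B", "C", "D"], 2)
print(affinity)  # {'A': [0, 1], 'B': [2, 3], 'C': [4, 5], 'D': [6, 7]}
```

The reported problem is that the renders did not respect this partition: a worker assigned `[0, 1]` was observed using three or four devices.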


I find @bonsak’s reply interesting. On an 8-card box I’ve always gone with 4 Worker instances, each with a 2-GPU affinity assigned, to ensure that any Worker renders the job on those specific cards rather than leaving the choice to Redshift.

I also thought that submitting a job with a 2-card limit meant the task would take any 2 cards, so a job with 2 concurrent tasks and 2 GPUs per task could end up taking any available cards.

I’m not sure if this has changed, but something definitely changed with GPU assignment/affinity. It would be good to get someone from Thinkbox to confirm this, or to write a ‘best practice’ article.

We only run one worker on each slave. Have you tried that? One worker handles multiple concurrent tasks just fine. We tried for a while to run multiple workers on the same slave, but we found it much easier to configure a single worker.

-b

Will try that; maybe it will handle GPU assignment better.

On the more important issue at hand, Houdini keeping scenes open: right now the only partial solution is to export the scene to a .rs file sequence and then render that.
For example, 500 frames on 5 machines: first export the .rs sequence via Deadline across the 5 machines, each exporting 100 frames, so the scene is loaded only once per machine. Rendering the .rs sequences then goes fine.
The problem is that with big scenes each frame can take 200–500 MB of disk space, which adds up quickly…
I’m also testing command-line rendering for Houdini.
In this case, after sourcing the Houdini environment, I use the hbatch command to load the scene; after that it can render, and the scene seems to stay loaded the whole time. Even after rendering a selected ROP, the scene is still loaded, and starting a render on another ROP from the same scene begins right away.
Does Deadline use the hbatch command to read scenes, or is it using something else?
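The hbatch flow described above can be sketched roughly like this (install path, hip file, ROP paths, and frame range are placeholders, not from this thread):

```shell
# Source the Houdini environment first (path is an assumption; adjust to your install)
cd /opt/hfs18.5 && source houdini_setup

# Load the scene once, then render multiple ROPs from the same session,
# so the scene-load cost is paid a single time.
hbatch /projects/myshot.hip <<'EOF'
render -f 1 48 /out/Redshift_ROP1
render -f 1 48 /out/Redshift_ROP2
quit
EOF
```

This is only a sketch of the hscript `render` command inside an hbatch session; whether Deadline drives Houdini this way (or via hython) is exactly the question being asked.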

Checking back on this, and I’m really surprised that no one else sees this as an issue.
Just as an example, a simple scene test: 48 frames sent to Deadline, Option A rendered with 1 frame per task and Option B with 12 frames per task (2 render nodes, 2 GPUs each, rendering 1 GPU per frame).
Option A running time: 16 minutes
Option B running time: 5 minutes

When a slave loads the scene once and just keeps rendering, it is several times faster than rendering frame by frame. But rendering in such big chunks is not always a good idea, and rendering frame by frame is several times slower.
Is it really possible that no one else runs into this?
I see the same behavior with all Deadline, Redshift, and Houdini versions for the past months… years… more.

What gives?
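A toy timing model makes it clear why chunk size dominates here: every task pays the scene-load cost once. The load and render times below are illustrative assumptions I picked to land near the observed numbers, not measurements from this thread:

```python
import math

def wall_clock_minutes(frames, frames_per_task, workers, load_min, render_min):
    """Every task pays the scene-load cost once; work is spread over workers."""
    tasks = math.ceil(frames / frames_per_task)
    total = tasks * load_min + frames * render_min
    return total / workers

# 48 frames on 4 workers, assuming a 1.5 min scene load and 0.2 min per frame
a = wall_clock_minutes(48, 1, 4, 1.5, 0.2)   # Option A: 1 frame per task
b = wall_clock_minutes(48, 12, 4, 1.5, 0.2)  # Option B: 12 frames per task
print(round(a, 1), round(b, 1))  # 20.4 3.9 -- same ballpark as the observed 16 vs 5
```

With 1 frame per task, the load term (48 × 1.5 min) dwarfs the render term; with 12 frames per task only 4 loads are paid, which is the whole argument for a keep-scene-open mode.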

