Hi, I am setting up a pipeline that should allow for both rendering and simulating on the same farm. Everything works quite well so far, but I want to change how Deadline handles TOP jobs from Houdini. Right now the PDG is set up as a combination of two jobs - a monitor and a “worker” - where the monitor runs the scheduling and the worker contains all the caching ROP Fetches, set up as tasks. However, this is not the wanted behaviour, as each ROP Fetch represents a different part of the PDG network and you want an overview of how your jobs are going.
What I would like to achieve is to have multiple “worker” jobs, like this:
BATCH
    ROP FETCH - SOURCE WEDGE
    ROP FETCH - SOURCE WEDGE 1
    ROP FETCH - SIMULATION
    MONITOR JOB
Each job would then contain only the work items generated by the ROP Fetch it belongs to.
Does a setup for this kind of PDG handling for Deadline already exist somewhere?
If not, is this achievable by altering the Deadline submission Python file, or do I have to rework SideFX’s Deadline Scheduler for TOPs as well?
Is the Deadline framework even able to handle jobs this way?
Any tips or directions on where to look or what to do to achieve this are much appreciated.
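To illustrate what I mean by altering the submission: the Python Scheduler TOP exposes an onSchedule(self, work_item) callback, and Deadline job-info files accept a BatchName key that groups jobs in the Monitor. Below is a rough, untested sketch combining the two. It assumes deadlinecommand is on the PATH, uses the generic CommandLine plugin, and submits one job per work item named after its ROP Fetch; collapsing all of a node’s items into one multi-task job would need extra bookkeeping that I have left out. This is just an illustration, not SideFX’s actual submitter.

```python
import subprocess
import tempfile

import pdg

# Shared BatchName: Deadline groups all jobs that carry the same
# BatchName under one collapsible batch in the Monitor.
BATCH_NAME = "PDG BATCH"

def onSchedule(self, work_item):
    # Name the job after the TOP node that generated this work item,
    # so every ROP Fetch shows up as its own entry in the batch.
    job_name = "ROP FETCH - {}".format(work_item.node.name)

    # Expand PDG tokens (__PDG_ITEM_NAME__, etc.) into a concrete command.
    cmd = self.expandCommandTokens(work_item.command, work_item)
    executable, _, arguments = cmd.partition(" ")

    # Standard Deadline submission: a job-info file plus a plugin-info
    # file handed to deadlinecommand.
    def write_info(pairs, suffix):
        f = tempfile.NamedTemporaryFile(mode="w", suffix=suffix, delete=False)
        for key, value in pairs:
            f.write("{}={}\n".format(key, value))
        f.close()
        return f.name

    job_file = write_info(
        [("Plugin", "CommandLine"),
         ("Name", job_name),
         ("BatchName", BATCH_NAME)],
        ".job")
    plugin_file = write_info(
        [("Executable", executable),
         ("Arguments", arguments)],
        ".job")

    subprocess.check_call(["deadlinecommand", job_file, plugin_file])

    # A real scheduler would also track the Deadline job and report the
    # work item's success or failure back to PDG; omitted here.
    return pdg.scheduleResult.Succeeded
```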
I don’t think you’ll find an answer here, since it’s more about how PDG outputs the job than about Deadline itself. Nevertheless, I believe there are still limitations on this kind of workflow, but I would guess it’s more about how you set the dependencies of the work items. How well do you know PDG?
I would say I know PDG quite well. It is not related to dependencies, but as you said yourself, it is a PDG problem rather than a Deadline problem. On Monday I got in contact with SideFX and they confirmed that the Deadline submitter is of their making, and the way PDG is handled at the Deadline level is really their problem rather than anything AWS can help with.
Anyway, if anyone has experience with or examples of their own custom PDG submitters, Deadline or non-Deadline, I am all ears. Any input is much appreciated.
Hi, I am in the same boat. My current workaround is to manually create a different scheduler for each task (a rough sketch of scripting this is below). I don’t know if there is any other way short of custom-building scripts/tools.
The other thing is that with traditional Houdini jobs we can see all the tasks on the farm right after submission, but the nature of PDG creating tasks on the fly makes this really difficult to check… a lot of the time I don’t know which is which (plus Deadline doesn’t show the task name), so I have to keep checking the log to see what is actually running in each task.
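Roughly what I do by hand, sketched in Python. The node type name “deadlinescheduler” and the “topscheduler” override parm are what I would expect from the Deadline TOP plugin, but double-check them on your install:

```python
import hou

# Assumed path to the TOP network; adjust for your scene.
topnet = hou.node("/obj/topnet1")

for node in topnet.children():
    # Only touch ROP Fetch TOPs.
    if node.type().name() != "ropfetch":
        continue
    # One Deadline scheduler per fetch.
    sched = topnet.createNode("deadlinescheduler", "deadline_" + node.name())
    # Point the fetch's "TOP Scheduler Override" at its own scheduler,
    # so its work items submit as a separate job.
    node.parm("topscheduler").set(sched.path())
```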
@xyzDist @Tomas_Novak My workaround is to submit PDG wedges as Python jobs. Each Python job simply opens the hip file → selects the wedge index → presses ‘save’ on the Labs Filecache node.
This is tested on the Labs File Cache only, with wedging enabled. I didn’t try running a full PDG network yet.
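Each Python job runs something like the sketch below with hython, e.g. hython wedge_save.py myshot.hip /obj/geo1/filecache1 3. The wedge and save parm names (“wedgenum”, “execute”) are from memory, so double-check them on your version of the node:

```python
import sys

import hou

# Arguments: hip file, path to the Labs File Cache node, wedge index.
hip_path, cache_path, wedge_index = sys.argv[1], sys.argv[2], int(sys.argv[3])

hou.hipFile.load(hip_path)
cache = hou.node(cache_path)

# Select the wedge variant, then trigger the same save the artist
# would press in the UI.
cache.parm("wedgenum").set(wedge_index)
cache.parm("execute").pressButton()
```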