AWS Thinkbox Discussion Forums

Deadline Redshift Standalone on AWS render output issue

So I got a lot working with Deadline and Redshift Standalone using the AWS Portal etc… but now I've run into an issue.

The docs state that Redshift only supports ‘scene file path remapping’, which indeed works fine; the spot fleet picks the scene up from the asset server without issues…

But now for the fun part… the render output location is not remapped, which makes it kind of impossible to use, since the spot fleet Linux instances have a different mounted asset server path every time the farm is restarted, so I can’t hardcode a farm path into the output while submitting.

So right now it’s all rendering fine… into oblivion!

So what’s the workflow for this? Do I need to talk to the repository and figure out the mapping using RepositoryUtils.GetPathMappings() and alter the output paths before submitting it?

Well, we should be doing that remap for you, and if we’re not, we should build it in.

One workaround that usually works is to make your output path relative to your rs file. I don’t know enough about Redshift to say whether that’s actually possible, but if it is you should be in a good spot.

If you are going to code a solution though, I’m game to help here and we can just airlift it into mainline Deadline when the dev team has the chance.

Update: What process is creating the Redshift files again? Just want this to be self-contained for future generations. Path mapping for Redshift should have been done four months ago for C4D and I’ll have to see what we did to make that work.

It’s from Houdini or 3ds Max, but an rs proxy file can come from anywhere, so ideally it would be a solution that works after submission, like a true repath.

What I could code is to use Python to ask Deadline about the AWS Portal and the existing repath definitions, and use that to alter the output path before submission… But that’s a bit of an ugly hack that requires the portal to be running just to generate the rs file before submitting, and it even breaks in some other use cases.

What could be a somewhat alright solution is to have a fixed name for the asset server’s data folder on the slaves, because right now it’s some sort of hashed ID that’s different on every farm restart. If it were fixed, we could just submit the rs files with that folder as output.
This could even be a symlinked folder pointing to the hashed one.

But as said, I hope a normal repath is possible to make!

Or even better, Redshift has output overrides built in; it’s just a matter of setting the right command-line options.

Excuse the formatting:

redshiftCmdLine scenefile [-oip PATH] [-opbp PATH] [-oro FILENAME]
                [-gpu N] [-cachepath PATH] [-texturecachebudget N]

or

redshiftCmdLine -listrenderoptions

or

redshiftCmdLine -fileinfo proxyFilename

or

redshiftCmdLine -printdependencies proxyFilename

Parameters:

  scenefile is the .rs proxy file containing the scene to be rendered

  -oip, followed by a path, overrides the image file paths. This includes the paths of all the AOVs.

For example, if a scene normally renders images:

  z:\myprojectpath\images\myscene.exr
  z:\myprojectpath\images\myscene.diffuseLighting.exr

you can redirect them to c:\myfolder by doing:

  redshiftCmdLine test.rs -oip c:\myfolder

This will produce images:

  c:\myfolder\myscene.exr
  c:\myfolder\myscene.diffuseLighting.exr

If you submit the RS file from the Deadline Monitor and set an Image Output Directory, this will do path mapping and output to the proper location. Give this a test to confirm. If you look at the job’s submission params you will notice this in the plugin file…

ImageOutputDirectory=pathToOutput

This is the param that we are using to do path mapping on the output. Houdini does not add this param for Redshift Export jobs at the moment. That may be the same issue with 3ds Max, can you confirm?

You can modify the submitter script where the PluginInfo file is created to add this param.

So… adding the last two lines shown here to “SubmitHoudiniToDeadlineFunctions.py” makes the Redshift output repathing work:

elif exportType == "Redshift":
    fileHandle.write( "SceneFile=%s\n" % ifdFile )
    fileHandle.write( "WorkingDirectory=\n" )
    fileHandle.write( "CommandLineOptions=%s\n" % jobProperties.get( "redshiftarguments", "" ) )
    fileHandle.write( "GPUsPerTask=%s\n" % jobProperties.get( "gpuspertask", 0 ) )
    fileHandle.write( "SelectGPUDevices=%s\n" % jobProperties.get( "gpudevices", "" ) )

    # Set output path
    print( "adding ImageOutputDirectory" )
    fileHandle.write( "ImageOutputDirectory=%s\n" % os.path.dirname( node.parm( "RS_outputFileNamePrefix" ).eval() ) )

The saga continues… Only the first frame of a ‘frames per task’ chunk (dl_chunk_size) actually renders… the ones ‘suspended’ in the screenshot all errored out with:

"2018-09-11 12:33:04:  0: STDOUT: Loading: C:\Cloutput\_assets\RS_playground\CGI\cam1\rs\cam1_v0000_0001.rs
2018-09-11 12:33:04:  0: STDOUT: ======================================================================================================
2018-09-11 12:33:04:  0: STDOUT: ASSERT FAILED
2018-09-11 12:33:04:  0: STDOUT: File Common\File.cpp
2018-09-11 12:33:04:  0: STDOUT: Line 1956
2018-09-11 12:33:04:  0: STDOUT: CFile::Write() for file 'C:\Cloutput\_assets\RS_playground\CGI\cam1\rs\cam1_v0000_0001.rs' failed to write to file. Error: 5
2018-09-11 12:33:04:  0: STDOUT: ======================================================================================================

"

This is part of a 2-stage submit… I submit this as a ‘don’t export RS locally’ job using the Deadline ROP, which means it automatically ends up as 2 jobs in Deadline: the first one generates the rs files, and the second job does the actual rendering using those rs files.

Ideally I want the rs generation job to be done as one big task so Houdini isn’t restarted for every frame, but then only the first frame renders without errors. The RS render job, on the other hand, needs to be chunked per task so it can go to multiple slaves. It would be nice if there was a way to set those options for the two jobs separately.

A solution for the screenshot error would be to set the chunk size to 1, but then Houdini is restarted for every frame, which is slow, and if you have non-cached simulations each frame would have to recalculate from 0, so that’s a no-go unfortunately.

[Screenshot: Capture]

So far I’ve had to modify the AWS Linux instances, hack around in various Python scripts, and now this. I don’t mind tweaking things a bit, but by the looks of it I’m the first guy beta testing the Houdini + RS workflow using the AWS Portal.

And I haven’t even tried scenes with actual assets like textures yet, and still have to make the RS on-demand licensing work… I hope that part goes a bit smoother.

So, any clues on how to make Redshift render all frames?

edit: to clarify, the RS file generation job goes fine; the actual rendering job using those RS files results in what’s shown in the screenshot.

Modifying the submitter to have separate chunk sizes for the export (generation) and render jobs is pretty easy. Don’t know about the ROP though.

Basically you’re just inserting a different ChunkSize value into houdini_submit_info.job when the job type is export.

something like this:

fileHandle.write( "ChunkSize=%s\n" %  jobProperties.get( "arnoldframespertask", 1 ) )

Submitter UI would then have something like this added:

INT_FIELD "ASS Frames Per Task":LABEL_WIDTH CELL(1,1,1,1) VALUE(arnoldframespertask.val);
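A Redshift-flavoured version of the same idea might look roughly like this (just a sketch; isExportJob, redshiftframespertask and framespertask are invented names, not existing variables in the submitter):

if isExportJob:
    # rs generation (export) job: give it its own frames-per-task value
    fileHandle.write( "ChunkSize=%s\n" % jobProperties.get( "redshiftframespertask", 1 ) )
else:
    # Redshift render job: keep whatever the UI already sets
    fileHandle.write( "ChunkSize=%s\n" % jobProperties.get( "framespertask", 1 ) )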

I’m actually not sure of the number of folks working with Houdini and RedShift on AWS Portal to be honest.

The Redshift job can definitely be broken into whatever sized chunks you like there. The path should be remapped for you here in “[repo]/plugins/Redshift/Redshift.py”:

        outputImageDirectory = self.GetPluginInfoEntryWithDefault( "ImageOutputDirectory", "" )
        outputImageDirectory = self.CheckPath( outputImageDirectory )
        if outputImageDirectory != "":
            arguments += ' -oip "%s"' % outputImageDirectory

The fact that it’s not means either the machines are in the wrong region for path mapping (which would be a bug; check the Slave list for their region) or it’s not configured in your Asset Server configuration.

I forgot to mention, I’m testing this locally now on a single machine so I can work a bit faster. The remapping looks alright now after all the modifications.

The part that is troubling me now is that the generated Redshift files are a bit wonky, because if I submit them as a manual job using the Monitor I get the same pattern of success/error matching the initial chunk size (still all on the same machine).

But the good news is that I’ve narrowed it down to the AOVs; if I disable them, everything works fine. So I guess I’ll have to dive into why… but we’re getting closer! :)


Then we’ll need to fold in these discoveries so that you don’t need to maintain them.

I ‘fixed’ most of it, but it will only work for my setup. I’ve added some additional Python code that remaps everything to my pipeline, but it’s all hard coded and not using the DL remap settings.

In hrender_dl.py I added some code to basically remap all files used in the Houdini scene; hou.fileReferences() gives a list of tuples containing parm + path, so that’s easy to do.
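For reference, a minimal sketch of that hrender_dl.py addition, assuming a hard-coded mapping table (PATH_MAP and its paths are placeholders, not Deadline’s path mapping settings):

import hou

# Hypothetical local-to-farm mapping; hard coded rather than read from Deadline.
PATH_MAP = {
    "Z:/projects": "/mnt/Data/assets/projects",
}

def remap_file_references():
    # hou.fileReferences() returns (parm, path) tuples for every file reference
    # in the scene; parm can be None for references that don't live on a
    # parameter, so those are skipped. Paths containing $HIP-style variables
    # would need expanding first; this simple prefix match ignores that.
    for parm, path in hou.fileReferences():
        if parm is None:
            continue
        for src, dst in PATH_MAP.items():
            if path.startswith(src):
                parm.set(dst + path[len(src):])
                break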

In SubmitHoudiniToDeadlineFunctions.py I’ve added things to set the chunk size to 1 for the RS render job, while using whatever is set in the UI for the Houdini job that generates the rs files. And I basically did a repath in there on the ifdFile paths for Redshift…
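A rough sketch of that ifdFile repath, again hard coded (LOCAL_ROOT and FARM_ROOT are placeholder values; exportType and ifdFile are the existing submitter variables):

# Placeholder roots; the real values depend on the local project drive and the farm-side mount.
LOCAL_ROOT = "Z:/projects"
FARM_ROOT = "/mnt/Data/assets/projects"

if exportType == "Redshift" and ifdFile.startswith( LOCAL_ROOT ):
    # Rewrite the rs file path so the render job loads it from the farm mount.
    ifdFile = FARM_ROOT + ifdFile[ len( LOCAL_ROOT ): ]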

Got the asset server working as well; texture files are working nicely.

but now for the last step…

UBL for RS… I’ve got a redshift.pfx file and some testing hours. I’ve tried putting it in the ‘cert’ folder defined in the portal settings’ advanced tab… and manually uploading it using the ‘upload certs’ item in the Deadline infrastructure listing, but I’m still getting watermarks and a:

STDOUT: Failed to load file /home/ec2-user/redshift/redshift-core2.lic (does not exist). Will have to communicate with server.

(I did manually update the RS version on the AMI; was there maybe a specially cooked version on there for UBL that has now been overwritten?)

Ahh just found out about the license forwarding… going to give that a go…

By the way… it wouldn’t hurt if there was a link from the store to https://docs.thinkboxsoftware.com/products/deadline/10.0/1_User%20Manual/manual/licensing-usage-based.html#third-party-usage-based-licensing

And it’s just not working… so to recap:

  1. I can see a license forwarder running on the Portal Gateway instance.

  2. I can upload the redshift.pfx to it using the AWS portal panel… do I need to do that every time I restart the farm? Is this even the correct place to upload it?

  3. Is there actually some step-by-step documentation on how to set up the on-demand (Redshift) render licenses while using the AWS Portal? The above link is confusing, since there is already a license server and it suggests installing one. Anyways… I could use some help here!

And by the way… the issue in post #7 here (Deadline Redshift Standalone on AWS render output issue) turned out to be a Redshift bug which should be fixed in the next release, so that’s not DL related.

Kind of mixed issues here a bit. Are you having UBL issues still? Can I split that out into its own thread?
