AWS Thinkbox Discussion Forums

Client error, Houdini.py, HandleStdoutError

I'm trying to set up Deadline on two machines. The local (submitter) machine works fine, but the other client is throwing this error:

2021-01-30 08:43:53:  Scheduler Thread - Render Thread 0 threw a major error: 
2021-01-30 08:43:53:  >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021-01-30 08:43:53:  Exception Details
2021-01-30 08:43:53:  RenderPluginException -- FailRenderException : Error: Caught exception: The attempted operation failed.
2021-01-30 08:43:53:     at Deadline.Plugins.DeadlinePlugin.FailRender(String message) (Python.Runtime.PythonException)
2021-01-30 08:43:53:    File "C:\ProgramData\Thinkbox\Deadline10\workers\LEPTON\plugins\60156260ecf6233ec06271b8\Houdini.py", line 424, in HandleStdoutError
2021-01-30 08:43:53:      self.FailRender(self.GetRegexMatch(1))
2021-01-30 08:43:53:     at Python.Runtime.Dispatcher.Dispatch(ArrayList args)
2021-01-30 08:43:53:     at __FranticX_Processes_ManagedProcess_StdoutHandlerDelegateDispatcher.Invoke()
2021-01-30 08:43:53:     at FranticX.Processes.ManagedProcess.RegexHandlerCallback.CallFunction()
2021-01-30 08:43:53:     at FranticX.Processes.ManagedProcess.e(String di, Boolean dj)
2021-01-30 08:43:53:     at FranticX.Processes.ManagedProcess.Execute(Boolean waitForExit)
2021-01-30 08:43:53:     at Deadline.Plugins.DeadlinePlugin.DoRenderTasks()
2021-01-30 08:43:53:     at Deadline.Plugins.PluginWrapper.RenderTasks(Task task, String& outMessage, AbortLevel& abortLevel)
2021-01-30 08:43:53:     at Deadline.Plugins.PluginWrapper.RenderTasks(Task task, String& outMessage, AbortLevel& abortLevel)
2021-01-30 08:43:53:  RenderPluginException.Cause: JobError (2)
2021-01-30 08:43:53:  RenderPluginException.Level: Major (1)
2021-01-30 08:43:53:  RenderPluginException.HasSlaveLog: True
2021-01-30 08:43:53:  RenderPluginException.SlaveLogFileName: C:\ProgramData\Thinkbox\Deadline10\logs\deadlineslave_renderthread_0-LEPTON-0000.log
2021-01-30 08:43:53:  Exception.TargetSite: Deadline.Slaves.Messaging.PluginResponseMemento d(Deadline.Net.DeadlineMessage, System.Threading.CancellationToken)
2021-01-30 08:43:53:  Exception.Data: ( )
2021-01-30 08:43:53:  Exception.Source: deadline
2021-01-30 08:43:53:  Exception.HResult: -2146233088
2021-01-30 08:43:53:    Exception.StackTrace: 
2021-01-30 08:43:53:     at Deadline.Plugins.SandboxedPlugin.d(DeadlineMessage bcn, CancellationToken bco)
2021-01-30 08:43:53:     at Deadline.Plugins.SandboxedPlugin.RenderTask(Task task, CancellationToken cancellationToken)
2021-01-30 08:43:53:     at Deadline.Slaves.SlaveRenderThread.c(TaskLogWriter aja, CancellationToken ajb)
2021-01-30 08:43:53:  <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

I worked around this by manually copying the project file to the F:\ drive, i.e. the same drive letter and location as the project on the submission machine. Does each client need to have the project file set up manually? This is my first time using Deadline, so I feel like I'm missing something here. I thought everything was supposed to live in the Repository. Do I need to structure network folders or set up Path Mapping?

In a typical Deadline deployment, nothing should live on local drives. The submitting workstation and all render nodes should have access to every asset file via network shares, either UNC paths or mapped drives on Windows. In mixed environments (Windows + Linux + macOS), Path Mapping can translate the paths between operating systems, but the actual storage (a NAS, an Isilon, whatever file servers are on the network) is the same for all clients.
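To make the Path Mapping idea concrete, here is a minimal Python sketch of what a mapping rule does conceptually. The mapping table and function are hypothetical; the real rules are configured in the Repository settings, not in code:

MAPPED_PATHS = [
    # (Windows prefix, Linux prefix): example values only
    (r"F:\projects", "/mnt/projects"),
]

def to_linux_path(path):
    """Translate a Windows path to its Linux equivalent using the rules above."""
    for win_prefix, linux_prefix in MAPPED_PATHS:
        if path.lower().startswith(win_prefix.lower()):
            remainder = path[len(win_prefix):].replace("\\", "/")
            return linux_prefix + remainder
    return path  # no rule matched; return the path unchanged

print(to_linux_path(r"F:\projects\shot010\scene.hip"))
# -> /mnt/projects/shot010/scene.hip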

You should NEVER copy any asset data manually to the Repository. The Repository is where you can place integration and submission scripts, but never scene data. Only the Clients may copy some auxiliary files there, and only when requested programmatically via the submitters’ options.

Most submitters allow the scene file itself to be sent with the Job as an auxiliary file (e.g. “Submit Houdini Scene” in the Houdini submitters). However, any external references such as textures, caches, etc. are expected to be on a shared network location so they don’t have to be moved around. So even if the scene itself is sent with the Job and then copied automatically to the local temp folder of each render node, all external references remain where they were on the network share and are loaded from there without copying.
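As a rough illustration of the auxiliary-file route, deadlinecommand’s manual submission syntax takes a job info file, a plugin info file, and then any auxiliary files. The sketch below assumes Houdini-style plugin info keys and example paths, so verify the details against the documentation for your Deadline version:

import os
import subprocess
import tempfile

# Sketch: submit a Houdini job with the .hip file as an auxiliary file, so each
# Worker copies it to its local job folder before rendering. All paths and
# values are examples only.
scene = r"F:\projects\shot010\scene.hip"

job_info = "Plugin=Houdini\nName=shot010 test\nFrames=1-100\n"
plugin_info = "OutputDriver=/out/mantra1\nVersion=18.5\n"

def write_temp(contents, suffix):
    """Write contents to a temp file and return its path."""
    fd, path = tempfile.mkstemp(suffix=suffix)
    with os.fdopen(fd, "w") as handle:
        handle.write(contents)
    return path

job_file = write_temp(job_info, "_job_info.job")
plugin_file = write_temp(plugin_info, "_plugin_info.job")

# deadlinecommand takes the job info file, the plugin info file, and then any
# auxiliary files (here, the scene itself).
subprocess.check_call(["deadlinecommand", job_file, plugin_file, scene])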

There are a few small exceptions to this rule. The 3ds Max submitter allows all external assets, including local ones, to be submitted as auxiliary files to the Repository’s Job folder, but I really dislike that option; it was added only because Autodesk Backburner was doing it, and we were nagged by ex-Backburner users to add it. Maya also has a “local asset caching” (LAC) option now, which allows external references to be copied from the network to each client, where they remain to speed up re-rendering of iterations. This does not change the scene setup; it just tries to reduce the network load by first copying each file from the network share to the render node before rendering, then loading it from the LAC storage for rendering.
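Conceptually, that LAC behaviour boils down to a copy-if-missing step like the following. This is a hypothetical sketch, not Maya’s or Deadline’s actual implementation; a real version would also compare timestamps to detect stale copies:

import os
import shutil

# Hypothetical local asset cache: fetch each asset from the network share once,
# then reuse the local copy on subsequent renders. Paths are illustrative only.
LOCAL_CACHE_ROOT = r"C:\DeadlineAssetCache"

def cached_path(network_path):
    """Return a local copy of network_path, copying it on first use."""
    local_path = os.path.join(LOCAL_CACHE_ROOT, os.path.basename(network_path))
    if not os.path.exists(local_path):
        os.makedirs(LOCAL_CACHE_ROOT, exist_ok=True)
        shutil.copy2(network_path, local_path)  # one network read...
    return local_path  # ...then every re-render loads the local copy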

This is, of course, all covered in the Deadline documentation.


Thank you, this is just what I was looking for. I clearly overlooked this part of the documentation. Appreciate the detailed explanation!

I just ran into the same error from the initial log: a failure with an exception thrown by self.FailRender(self.GetRegexMatch(1)).

This obscured the real problem, because it looks like the job plugin or something in Python has blown up for some reason. In fact, Arnold had logged an error message, in this case “ERROR | [htoa.object.camera] Unsupported camera projection: lens”, so Deadline had failed the job.

I ended up making a custom copy of the Houdini plugin to try to figure out why FailRender or GetRegexMatch was throwing a stack trace, and finally realized it was just how Deadline was stopping the job because of the Arnold error.

Could you add some more messaging to the Houdini plugin to make that clearer? I added a separate handler for the Arnold case that makes it more obvious to the artists what happened, and that it isn’t Deadline blowing up:

def HandleArnoldStdoutError(self):
    # Surface the matched Arnold error line so artists see the real cause.
    msg = 'Task failed due to Arnold error: "' + self.GetRegexMatch(0) + '"'
    self.FailRender(msg)
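For context, stdout handlers in a Deadline plugin are registered with a regex, typically in InitializeProcess; a sketch like the following would wire the handler above to htoa’s error lines (the regex itself is an assumption, so adjust it to the output you actually see):

def InitializeProcess(self):
    # ...existing Houdini plugin setup...
    # Route htoa error lines to the dedicated handler above so the failure
    # message names Arnold instead of looking like a plugin crash.
    self.AddStdoutHandlerCallback(
        r".*ERROR \| \[htoa\..*\].*"
    ).HandleCallback += self.HandleArnoldStdoutError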

@Thinkbox: if you could add something like that to a future version (and maybe do something similar for the Houdini message error handler), it would be appreciated!


I can file a ticket for our dev team to modify the logging and make it more obvious.

However, if you are willing to attach your version of Houdini.py, I could try to get your version through a code review to be included in a future release… :slight_smile:

It is not obvious, but the current Houdini integration was donated by a studio that was using Deadline in production and was unhappy with the previous integration. So it wouldn’t be the first time a user has helped improve the product for everyone to enjoy…

Glad I ran into this thread; I’m hitting the same error (possibly from an Alembic camera reference). It would be great if a fix for this could be implemented, or at least if the plugin made it clear where the issue is.

I’m suddenly seeing a lot of this error with After Effects rendering. Things had been very smooth, but now I’m hitting these errors very frequently:

HandleStdoutError
    self.FailAERender( self.GetRegexMatch(0) )

FailAERender
    self.FailRender( message )

If I have a suspicion, it’s that the Deadline Repository can’t keep up because of server traffic, but that is just a hunch. Has anyone come to a conclusion or answer about this?
