AWS Thinkbox Discussion Forums

CustomPluginDirectory in a mixed OS farm?

Hi all,

How do you deal with the CustomPluginDirectory Job Property in a mixed OS farm?

For example, a Job is submitted from a Windows machine with a CustomPluginDirectory path such as X:/my/custom/plugin/dir. In our testing, when that Job lands on a Linux machine it can’t resolve that path – even though we have Global Rules in the Repository Options that should replace X:/ with /mnt/tools/ before the Job runs.

Advice is appreciated; at the moment we’re stuck on this.

Cheers,

Hello

Thanks for reaching out. Deadline should be able to path-map the custom plugin directory. Can you share screenshots of your Drive Mapping and Path Mapping rules, along with the job report in question?

Mapped Drives: Repository Configuration — Deadline 10.2.1.1 documentation
Mapped Paths: Path Mapping (Cross-Platform Rendering) — Deadline 10.2.1.1 documentation
Job Reports: Controlling Jobs — Deadline 10.2.1.1 documentation

Hi @zainali

Here are relevant screenshots & a Job report.

Cheers,

SSE_CPDLinuxFail_2023-06-12.zip (94.7 KB)

Hello @dean-sse

I tried to reproduce the issue internally, but my job renders fine. The Worker replaces the custom plugin directory at the beginning of the task, like below:

2023-06-13 14:30:31:  0: Got task!
2023-06-13 14:30:31:  0: Render Thread - Render State transition from = 'ReceivedTask' to = 'Other'
2023-06-13 14:30:31:  0: Plugin will be reloaded because a new job has been loaded.
2023-06-13 14:30:31:  0: Loading Job's Plugin timeout is Disabled
2023-06-13 14:30:31:  0: SandboxedPlugin: Render Job As User disabled, running as current user 'alizainb'
2023-06-13 14:30:34:  0: Loaded plugin CommandLine
2023-06-13 14:30:34:  All job files are already synchronized
2023-06-13 14:30:34:  CheckPathMapping: Swapped "C:/Repo/" with "C:\blahblah\CommandLine/"
2023-06-13 14:30:34:  Plugin CommandLine was already synchronized.
2023-06-13 14:30:34:  0: Executing plugin command of type 'Initialize Plugin'

However, in the job UI (environment and submission params) the directory appears unchanged (it is not path-mapped in the UI), because the mapping is applied when the Worker picks up the task, not at submission time.

I think the next step should be to enable verbose Worker logs on your side, reproduce the issue, and share the logs with me.

My guess here is that the Custom Plugin Directory you are using is the directory that holds the plugin files themselves. You need to choose the directory one level up, like:
[screenshot]

This is just my guess 🙂 What lives in deadline_mayabatch_scripts?

Hi @zainali

The CustomPluginDirectory shown in the screencap included in my zip file works perfectly when our ecosystem is Windows-only (and has for months in production). For example, here’s the start of a typical MayaBatch Job running on a Windows machine here:

Based on your log screencap, you didn’t do a mixed-OS test (as shown by your log’s CheckPathMapping: Swapped "C:/Repo/" with "C:\blahblah\CommandLine/"). If you could try a Windows-submitted Job on a Linux Worker, with a Windows-path to Linux-path swap, that would be appreciated.

In the meantime I’ll check our Worker log verbosity settings to see what additional info it might get us.

Cheers,

Hey @zainali

With verbose Worker log settings enabled, there’s not much more information to peruse (see attached).

CPDlinuxFail_Job_2023-06-15.zip (1017 Bytes)

I can guarantee that the CustomPluginDirectory path in our Windows-submitted Job is correct, since:

  1. As I mentioned, it’s been working fine in an all-Windows ecosystem.

  2. In testing, we had the Windows machines create Jobs with the CustomPluginDirectory explicitly set to the Linux equivalent (meaning we weren’t relying on Deadline to do any substitutions once the Job started on a machine), and those ran fine on Linux machines but broke on Windows machines, as expected.

On your side, could you please try submitting a Job with a Windows-pathed CustomPluginDirectory to a Linux Worker?

Cheers,

Hello @dean-sse

Thanks for sharing the updated job report. It’s expected that the job report won’t hold any useful information if the plugin was never initialized, which is why I asked for Worker logs. Sorry for not sharing the location where you can find them.

It’s important that you get the logs from the application logs directory stored locally on the Linux machine in question (render-145), because those are the most verbose and will show what happened during path mapping.

The Worker logs live here: Logs — Deadline 10.2.1.1 documentation
The name of the log file will be something like:
[screenshot]
Please check that the timestamps in the log match the timestamp of the job report, if you won’t be reproducing the issue right before sending the logs over.

One more test you can do: log in to a Linux machine and run the following from the terminal:
$DEADLINE_PATH/deadlinecommand -CheckPathMapping <CustomPluginDirectoryPath>

It should return the mapped path for the directory. Share the result.

Another guess: I re-checked the path mapping and drive mapping rules you are using. X: is mapped to <somepath>/tools in Mapped Drives, and then X:/ --> /mnt/tools is applied in Mapped Paths, which could make the final path on Linux look something like <somepath>/tools/mnt/tools… Is that a valid path?

The region in Mapped Drives is set to unrecognized; try changing that to All.

P.S. I do not think a Linux system is required to test path mapping. Path mapping just does text replacement based on the rules; in my test it replaced the custom plugin directory, so I believe it is a valid test.
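To illustrate the "text replacement" point, here is a minimal sketch in Python of prefix-based path mapping. The rules shown are hypothetical examples mirroring the X:/ → /mnt/tools/ rule from this thread; Deadline's actual implementation is internal to the Worker, so treat this only as a model of the behavior:

```python
# Minimal sketch of rule-based path mapping as plain text replacement.
# Rules are hypothetical; Deadline's real mapping logic is internal.
PATH_MAPPING_RULES = [
    # (source prefix, replacement prefix)
    ("X:/", "/mnt/tools/"),
    ("C:/Repo/", "/mnt/repo/"),
]

def check_path_mapping(path: str) -> str:
    """Apply the first matching prefix rule, case-insensitively,
    after normalizing backslashes to forward slashes."""
    normalized = path.replace("\\", "/")
    for src, dst in PATH_MAPPING_RULES:
        if normalized.lower().startswith(src.lower()):
            return dst + normalized[len(src):]
    return normalized  # no rule matched; return the path unchanged

print(check_path_mapping("X:/my/custom/plugin/dir"))
# -> /mnt/tools/my/custom/plugin/dir
```

Since this is pure string substitution, it can run on any OS — which is why a Linux box shouldn't strictly be needed to verify the rules themselves.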

Hey @zainali

On a Linux machine, given our current dev Repository settings (screencapped upthread), running the following:

/opt/Thinkbox/Deadline10/bin/deadlinecommand -CheckPathMapping "X:/dev/ss_dev_linux/studio_pipeline_2_6_repo/maya/scripts/python/render_farm/deadline_mayabatch_scripts"

…returns:

/mnt/tools/dev/ss_dev_linux/studio_pipeline_2_6_repo/maya/scripts/python/render_farm/deadline_mayabatch_scripts

…which is correct and available to our Linux machines.

Also, AFAIK Deadline’s Mapped Drives feature is Windows-only; it doesn’t run on Linux machines, since unlike Windows, Linux doesn’t map server locations to drive letters like C: or X:. The Deadline docs and the Deadline UI itself seem to support my understanding, e.g.:

[screenshot: deadline_mapped_drives_ui]

==

I’ve attached a Worker log. Please focus only on the sections with the error:

ERROR: Encountered the following error while initializing the Plugin Sandbox: 'Value cannot be null. (Parameter 'input')'.

…since, as I mentioned, some earlier tests with an already-Linux-pathed CustomPluginDirectory hardcoded into the Windows Job (which worked on Linux) have their output co-mingled into this log. You can also ignore any warnings in this log about Limits: we’ve tested, and the Plugin Sandbox failure is identical with or without the Limits mentioned in this log (we added them to our dev Deadline repo after this log was captured).

CPDlinuxFail_Job_2023-06-15.workerlog.zip (164.7 KB)

Cheers,

Hey @zainali

Any updates on this?

Cheers,

Hello @dean-sse

I have checked the attached Worker log and saw that it loads the render plugin from the correct Linux path and starts rendering:

> 2023-06-15 13:40:29:  Synchronizing Plugin MayaBatch from /mnt/tools/dev/ss_dev_linux/studio_pipeline_2_6_repo/maya/scripts/python/render_farm/deadline_mayabatch_scripts/MayaBatch took: 0 seconds

It has rendering-related errors coming from Maya:

2023-06-15 13:41:59:  0: STDOUT: Warning: file: /tmp/tmpXJiV3G.tmp line 58: Errors have occurred while reading this scene that may result in data loss.
2023-06-15 13:41:59:  0: STDOUT: Read 5 files in  7.4 seconds.

Also, your initial Task report and verbose Worker logs have different timestamps. Can you please reproduce the issue and share both from the same time? If possible, share the archived Job so we can check the Maya settings defined for it.

Thanks!

Hi @Nreddy

As I mentioned upthread, about the supplied worker log:

there were some earlier tests with an already-Linux-pathed CustomPluginDirectory hardcoded into the Windows-Job which worked on Linux and whose outputs are co-mingled into this log.

If you look at any section of the supplied worker log that has the error:

ERROR: Encountered the following error while initializing the Plugin Sandbox: 'Value cannot be null. (Parameter 'input')'.

that’s when it’s trying to run a Windows Job with a Windows path for CustomPluginDirectory on a Linux machine, with all the previously mentioned repository Path Mapping rules, and failing.

Can someone at Thinkbox please try the following?

  1. Be connected to a Deadline repo that has both Windows and Linux Workers on it.
  2. Set up your Deadline repository’s Path Mapping rules so that Linux Workers remap Windows paths appropriate to your setup.
  3. Have a CustomPluginDirectory path that lives outside of the Deadline repo on a server that all your Workers can see, with whatever Deadline plugin folder + contents you want to try (e.g. CmdLine) in that directory.
  4. Submit a relevant Job from the Windows machine that has a Windows path for CustomPluginDirectory and ensure it ends up only on a Linux Worker.
  5. Report your results.
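The submission in step 4 can be scripted rather than done through a submitter UI. The sketch below generates Job Info / Plugin Info files for a CommandLine job with a Windows-style CustomPluginDirectory. All paths and the Group name are placeholders for your farm, and the file keys reflect my understanding of Deadline's manual job submission format, so treat this as a starting point rather than a verified recipe:

```python
# Sketch: generate Deadline submission files for a Windows-pathed
# CustomPluginDirectory job aimed at a Linux Worker. All paths and the
# group name below are placeholders -- adjust for your farm.
import tempfile
from pathlib import Path

def write_submission_files(out_dir: Path) -> tuple[Path, Path]:
    job_info = out_dir / "job_info.job"
    plugin_info = out_dir / "plugin_info.job"
    job_info.write_text(
        "Plugin=CommandLine\n"
        "Name=CPD mixed-OS repro\n"
        "Frames=0\n"
        # Windows-style path that the Linux Worker must path-map:
        "CustomPluginDirectory=X:/my/custom/plugin/dir\n"
        # Placeholder: constrain the job to Linux Workers however your
        # farm does it (group, pool, or machine whitelist).
        "Group=linux-only\n"
    )
    plugin_info.write_text(
        "Executable=/bin/echo\n"
        "Arguments=hello from the repro job\n"
    )
    return job_info, plugin_info

job_info, plugin_info = write_submission_files(Path(tempfile.mkdtemp()))
# Submit with deadlinecommand (not executed here):
print(f"deadlinecommand -SubmitJob {job_info} {plugin_info}")
```

If the path mapping rules are firing correctly, the Linux Worker's verbose log should show a CheckPathMapping swap for the CustomPluginDirectory before plugin initialization, instead of the "Value cannot be null" sandbox error.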