
AWS & Phoenix FD -- still no joy

I’ve been trying to work through a rendering error when using Phoenix FD on an AWS farm. Here’s the latest from Svetlin N. at Chaos support:

" Hmmm, I’m afraid I am out of ideas. There seems to be something going wrong with the automation that transfers the data, but not sure what exactly - according to the logs Phoenix does not read the cache files at all. Have you already tried contacting the AWS guys?

Meanwhile, we are trying to get our license running so we can try and reproduce the issue here, but seems like it’s gonna take some time…

Svetlin Nikolov, Lead Phoenix FD developer"

I’m bringing it up here in case anyone has ideas. If not, I’ll most likely have to move to using a VPN on Azure.
My most recent AMI is a custom image built from the Deadline (Windows) base, using Max 2019 SP2, VRay Next 4.1, and PhoenixFD 3.1.200. A sample render comparison and a section of the Asset Server log are attached. The image is from testing with a VRay Volume rather than a Phoenix Volume, but the results are the same in both cases: the cache data seems to be transferred, but not read.
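(In case it helps anyone reproduce the check: a quick MaxScript sketch like the one below, run in the Listener on a render node, will confirm whether the cache frames are actually present on disk. The cache pattern is only a placeholder -- the real, expanded path comes from the simulator's Input rollout.)

    -- Check that each cache frame the simulator references actually exists on this node.
    -- NOTE: cachePattern is a hypothetical placeholder; paste in the expanded path from
    -- the simulator's Input rollout, with the frame digits written as ####.
    fn padFrame f digits =
    (
        local s = f as string
        while s.count < digits do s = "0" + s
        s
    )

    cachePattern = @"C:\caches\Liquid_####.aur"
    for f = 30 to 39 do
    (
        local p = substituteString cachePattern "####" (padFrame f 4)
        if not (doesFileExist p) then format "Missing cache frame: %\n" p
    )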


AWS_asset_server_phoenix-jobs_2018-12-14.zip (23.6 KB)

Hey Jeff,

Can you please send the job report from both the local and aws render? If you could provide that sample scene as well that would be great!

Thanks!

Charles

You bet. Job reports and the job file are attached:
Phoenix_LiquidTest-AWSjob.zip (44.5 KB)
Phoenix_LiquidTest-LocalJob.zip (29.0 KB)
Phx_LiquidTest_VRvolume_03.zip (94.8 KB)
And here is a link to the cache files for frames 30-39: CacheFiles_for_Liquid_Test
Let me know if you would like anything else.

Hey Jeff,

Would this be similar to your submission method?

What I think is happening is that these files are not getting path mapped, which we do not support at this time.

Here are a couple alternatives you could try to get around this:

#1 - See if you can change the cache file paths to be relative to the scene file, so that when the files are collected, the AWS scene path is used to reference them. Example: $(scene_dir). There is a sketch of the idea after option #2 below.

#2 - Generate the simulation on AWS using Submit Phoenix FD Simulation To Deadline. This should generate all the paths on AWS, making them accessible to the AWS Slaves. Notice in the screenshot below that you can set the output to be relative to the scene directory.
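To illustrate the idea behind #1, here is a rough MaxScript sketch of how a $(scene_dir)/$(scene_path)-style token resolves against whatever scene is currently loaded. The expansion rules are assumptions based on the Phoenix defaults, not the exact Phoenix FD or Deadline implementation:

    -- Hedged sketch: expand scene-relative cache tokens against the loaded scene.
    -- Assumes $(scene_dir) = scene directory and $(scene_path) = scene path minus
    -- its extension (the Phoenix defaults suggest this; verify for your build).
    fn expandSceneTokens pathPattern =
    (
        local sceneDir  = maxFilePath                  -- directory of the open scene, trailing backslash
        local sceneBase = getFileNameFile maxFileName  -- scene file name without extension
        local p = substituteString pathPattern "$(scene_dir)" sceneDir
        substituteString p "$(scene_path)" (sceneDir + sceneBase)
    )

    -- e.g. with the scene saved as C:\work\Phx_LiquidTest.max:
    -- expandSceneTokens @"$(scene_path)_Phoenix_frames\frame_####.aur"
    -- --> "C:\work\Phx_LiquidTest_Phoenix_frames\frame_####.aur"

Because the token resolves against whichever scene file is actually open, the same pattern points at the collected assets once the job is running on AWS.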

Hopefully one of those solutions can work. Let me know how it goes!

Regards,

Charles

Thanks, Charles. Yes, that’s the submission method I was trying.
In trying your suggestions, here’s what I found:

  1. Implicit/default Phoenix cache path = $(scene_path):
    Deadline error = “Runtime error: $directory not recognized: $(scene_path)”
    – MaxScript rollout handler exception, apparently at line 3069 of the submission script,
    at “local theBaseName = getFileNameFile theFilename”

  2. Explicit (browsed) change of the cache file paths to the scene directory:
    No cache files were used (“blank” render, with no error message)

  3. Simulation rendered on AWS fleet: Deadline Error:
    –CANNOT SUBMIT! Calling the SubmitJob() function of the current workflow failed.

Note that “$(scene_path)” is the default path for Phoenix FD rendered cache files under 3ds Max.

I’m trying one more thing next…

And… nothing from the current test: using the VRay VolumeGrid, an explicit scene directory, and all external files copied to the Repository.
Result = “blank” render (no cached geometry rendered).

Hey Jeff,

I’m adding a file here, provided by Bobo, that should handle path mapping for those cache files.

Add the file in the attached zip to:

C:\DeadlineRepository10\plugins\3dsmax

postloadPathMapping.zip (1.0 KB)
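For anyone following along without the attachment: the general shape of a post-load path-mapping script is roughly the sketch below. It is illustrative only -- the prefixes and the helper name are made up, and Bobo's attached file is the authoritative version.

    -- Hedged sketch of the idea: after the scene loads on the worker, rewrite a known
    -- local cache prefix to the location the assets were transferred to on the AWS node.
    fn remapCachePath p localPrefix remotePrefix =
    (
        if (findString (toLower p) (toLower localPrefix)) == 1 then
            remotePrefix + (substring p (localPrefix.count + 1) p.count)
        else
            p
    )

    -- Hypothetical prefixes; substitute the paths from your own Asset Server logs:
    localRoot  = @"D:\Projects\PhoenixTests\cache\"
    remoteRoot = @"C:\AWSAssets\PhoenixTests\cache\"

    remapCachePath @"D:\Projects\PhoenixTests\cache\Liquid_0031.aur" localRoot remoteRoot
    -- --> "C:\AWSAssets\PhoenixTests\cache\Liquid_0031.aur"

    -- A real script would then write the remapped path back onto each simulator's
    -- input-cache parameter; the exact property name depends on the Phoenix FD build,
    -- so it is intentionally left out of this sketch.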

Regards,

Charles

Thank you, Charles & Bobo. I should be able to test this over the weekend.
Good to see Bobo is with Thinkbox. He’s been providing awesome support for, like, 20+ years now?
