I’m having issues with successful cloud renders not syncing back to on-prem.
Render jobs successfully pull assets and project files up to the cloud, but completed renders are not pushed back down.
I’ve SSH’d into the cloud instance, and the completed file is indeed rendered there.
Looking in the central_contoler.log file on the gateway instance, I’ve found this error:
1674500592.312214 2023-01-23 19:03:12,312 [/opt/Thinkbox/S3BackedCache/Central/lib/python2.7/site-packages/central/central.py:process_upload:338] [root]  [Thread-6] [ERROR] OutputDirectoryPermissionsError received from on prem: Download from S3 was aborted for /[BUCKETNAME]/projects/aws/HoudiniRS_TEST/rs_rendertest/redshiftCmdLineOutput.exr. Retrying in 58.992271 seconds
Possibly an S3 permissions error?
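One way to narrow this down is to separate the two failure modes by hand: an S3-side read problem vs. a local write-permission problem on the on-prem output directory. This is just a sketch — the bucket name is redacted in the log and `OUT_DIR` is a stand-in path, so substitute your own values.

```shell
# Step 1 (on the gateway, where AWS credentials are configured): try the
# same download Deadline is attempting. If this succeeds, S3-side read
# permissions are fine. Bucket name is a placeholder:
#
#   aws s3 cp "s3://YOUR_BUCKET/projects/aws/HoudiniRS_TEST/rs_rendertest/redshiftCmdLineOutput.exr" /tmp/probe.exr
#
# Step 2 (on-prem, run as the Asset Server's service user): check whether
# the output directory is actually writable by that user.
OUT_DIR=${OUT_DIR:-/tmp}   # stand-in for the on-prem render output directory
ls -ld "$OUT_DIR"
if test -w "$OUT_DIR"; then
    echo "writable by $(id -un)"
else
    echo "NOT writable by $(id -un)"
fi
```

If step 1 succeeds but step 2 fails for the service user, the problem is local filesystem permissions rather than the bucket policy.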
I’m not sure if it’s related, but in the local (on-prem) Deadline Monitor → Tools → Configure Asset Server, the Create New Bucket and Clear Bucket buttons are no longer active.
I guess this all means that your S3 bucket is not properly configured. There was a config file that needed to be cleared; I don’t remember the exact location — maybe
/etc/s3central or something like that.
Thanks for reaching out and sharing the log here. I have looked at the script that raises this error:
OutputDirectoryPermissionsError means it attempted to put the file back but got a permission-denied error. Please follow this KBA and share the logs from the on-prem side.
I was able to figure this out. It seems the upgrade changed the service user to root:root for the Asset Server.
Root doesn’t have access to the network shares.
The render was completing with older cached assets (it’s my benchmark render).
Changing the AWS Asset Server service to run as a user:group with network access, AND changing ownership of the relevant directories to the same user, did the trick.
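For anyone hitting the same thing after an upgrade, this is roughly what the fix looks like on a systemd host. The unit name, the `deadline` user/group, and the cache path below are assumptions for illustration, not the actual Deadline names — check `systemctl list-units` and your own mount points first.

```shell
# Find the actual service unit name (the name used below is illustrative):
systemctl list-units --type=service | grep -i asset

# Create a drop-in override so the service runs as a user that can reach
# the network shares ('deadline:deadline' is an assumed user/group):
sudo systemctl edit my-asset-server.service
# ...and in the editor that opens, add:
#   [Service]
#   User=deadline
#   Group=deadline

# Give that user ownership of the service's local cache/output directories
# (path is a placeholder):
sudo chown -R deadline:deadline /var/opt/asset-server-cache

# Restart the service under the new user:
sudo systemctl restart my-asset-server.service
```

Verify the change took effect with `ps -o user= -p $(systemctl show -p MainPID --value my-asset-server.service)`.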
Note: In the future, it would be nice to be able to select the user the process will run under at installation. The install requires sudo, so the service gets root as the process user by default.