AWS Thinkbox Discussion Forums

AWS Asset Server / Portal Link issue

Hello,
I’m setting up AWS Portal Link along with the AWS Asset Server. I’ve managed to install both successfully and to launch an infrastructure and a fleet. Workers appear and pick up jobs, but on the machines (Linux or Windows) there’s no way to access assets.
No errors are shown in the log files in C:\ProgramData\Thinkbox.

Paths seem to be mapped correctly, but the job (Draft) fails quickly:
Initialize: Error: Failed to create output directory "/mnt/Data/DProjects72bd5ce404ac6e221a73c20e63d3efec/test/awsportal". The path may be invalid or permissions may not be sufficient.

The only errors I can see are in CloudWatch Logs:

In /thinkbox/S3BackedCache/worker and in /thinkbox/S3BackedCache/central

1715787576.437389 2024-05-15 15:39:36,437 [/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/slavelib/utilities.py:wrapper:79] [root] [3409] [Dummy-4] [ERROR] CacheManagerException: 'getattr'
Traceback (most recent call last):
  File "/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/slavelib/cache_mgmt.py", line 420, in _get_file_attributes
    response = self.central.GetFileAttributes(request)
  File "/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/grpc/_channel.py", line 492, in __call__
    return _end_unary_response_blocking(state, call, False, deadline)
  File "/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/grpc/_channel.py", line 440, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, Exception in central controller: <_Rendezvous of RPC that terminated with (StatusCode.UNAVAILABLE, Connect Failed)>)>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/slavelib/utilities.py", line 71, in wrapper
    ret = func(*args, **kwargs)
  File "/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/slavelib/utilities.py", line 18, in wrapper
    ret = func(*args, **kwargs)
  File "/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/slavelib/utilities.py", line 54, in wrapper
    ret = func(*args, **kwargs)
  File "/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/slavelib/fuse_operations.py", line 212, in getattr
    ret = self.cache_manager.lstat(path_rel)
  File "/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/slavelib/cache_mgmt.py", line 30, in wrapper
    ret = func(*args, **kwargs)
  File "/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/slavelib/cache_mgmt.py", line 1638, in lstat
    response = self._get_file_attributes(file_entry)
  File "/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/slavelib/cache_mgmt.py", line 451, in _get_file_attributes
    raise CacheManagerException(str(e.code()))
slavelib.cache_mgmt.CacheManagerException: StatusCode.UNKNOWN

In /thinkbox/S3BackedCache/worker (repeated)

1715788052.807198 2024-05-15 15:47:32,807 [/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/slavelib/utilities.py:wrapper:79] [root] [2809] [Dummy-7] [ERROR] CacheManagerException: 'getattr'
Traceback (most recent call last):
  File "/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/slavelib/utilities.py", line 71, in wrapper
    ret = func(*args, **kwargs)
  File "/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/slavelib/utilities.py", line 18, in wrapper
    ret = func(*args, **kwargs)
  File "/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/slavelib/utilities.py", line 54, in wrapper
    ret = func(*args, **kwargs)
  File "/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/slavelib/fuse_operations.py", line 212, in getattr
    ret = self.cache_manager.lstat(path_rel)
  File "/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/slavelib/cache_mgmt.py", line 30, in wrapper
    ret = func(*args, **kwargs)
  File "/opt/Thinkbox/S3BackedCache/Client/lib/python3.10/site-packages/slavelib/cache_mgmt.py", line 1593, in lstat
    raise CacheManagerException('SequenceManager is not ready yet')
slavelib.cache_mgmt.CacheManagerException: SequenceManager is not ready yet

I’ve tried reinstalling multiple times, with different users and on different servers. The service account is the same user used on all our render nodes, and it has access to the relevant assets.

Any help appreciated,
Thank you

Is this the service user for the AWSPortal services on your on-premise server or on the EC2 instances? Just making sure you haven’t changed the user running on the EC2 Workers, that’s got to stay as it is.

Which version of Deadline are you running on premise, and which version of Deadline is on the AMI? We’ve seen this issue when everything isn’t running the same version of Deadline across the board.


The service account is only used on the on-prem servers. I didn’t touch the config of the EC2 Workers.
EC2 Worker version: Command Stdout: v10.3.0.15 Release (76d003b0a)
Here are the installers used across our on-prem servers/workers:

DeadlineClient-10.3.0.13-windows-installer.exe
DeadlineRepository-10.3.0.13-windows-installer.exe
AWSPortalLink-1.3.0.3-windows-installer.exe

Could this minor version mismatch be causing the error?

Possibly! 10.3.0.13 was pulled because of an issue in the AWS Portal event code where the Workers wouldn’t automatically shut themselves down and would sit running until the Resource Tracker stepped in.

So please upgrade to 10.3.0.15 and let’s see how it behaves.

Thank you, I’ll try it out. Can I update only the Portal / Asset Server, or the Repository as well? Or do I need to update everything, including all my Workers?

The Repository is a must-update, and I’d update the Portal/Asset Server if the version number on its installer has been bumped. I can’t recall offhand whether it has, and I’d rather reply quickly while you’re working than make you wait. :sweat_smile: Your local Workers should be fine, and auto-upgrade can take care of them for you.

As an aside, auto-upgrade can technically upgrade your EC2 instances as well, but those charge by the minute, so having an upgrade run every time they connect isn’t cost-efficient.

Alright, that makes sense, thank you for your input. I’ll go ahead and update the Repository and Portal to the latest version and we’ll see how it goes!
I’ll leave auto-upgrade on for a couple of days so our workers pick it up, then disable it so as not to waste time on the EC2 workers.


Well, I updated everything; the EC2 workers are on the same version (10.3.2.1) as the RCS and Repository. But I still get the exact same errors in CloudWatch Logs, and the EC2 workers still can’t access on-prem files.

Worker error:

2024-05-17 15:03:36:  <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2024-05-17 15:03:37:  Scheduler Thread - Job's Limit Groups: 
2024-05-17 15:03:38:  0: Loading Job's Plugin timeout is Disabled
2024-05-17 15:03:38:  0: SandboxedPlugin: Render Job As User disabled, running as current user 'ec2-user'
2024-05-17 15:03:39:  All job files are already synchronized
2024-05-17 15:03:40:  Plugin DraftPlugin was already synchronized.
2024-05-17 15:03:40:  0: Executing plugin command of type 'Initialize Plugin'
2024-05-17 15:03:40:  0: INFO: Executing plugin script '/var/lib/Thinkbox/Deadline10/workers/ip-10-128-24-251/plugins/6644dc7f641f9be783d465be/DraftPlugin.py'
2024-05-17 15:03:40:  0: INFO: Plugin execution sandbox using Python version 3
2024-05-17 15:03:40:  0: INFO: Found Draft python module at: '/var/lib/Thinkbox/Deadline10/workers/ip-10-128-24-251/Draft/Draft.so'
2024-05-17 15:03:40:  0: INFO: Setting Process Environment Variable PYTHONPATH to /var/lib/Thinkbox/Deadline10/workers/ip-10-128-24-251/Draft:/home/ec2-user/Thinkbox/Deadline10/pythonAPIs/vXiJchfTd6HrfrxRHxsOCw==:/opt/Thinkbox/Deadline10/bin/python3:/opt/Thinkbox/Deadline10/bin/python3/lib:/opt/Thinkbox/Deadline10/bin/python3/lib/site-packages:/opt/Thinkbox/Deadline10/lib/python3/lib/python310.zip:/opt/Thinkbox/Deadline10/lib/python3/lib/python3.10:/opt/Thinkbox/Deadline10/lib/python3/lib/python3.10/lib-dynload:/opt/Thinkbox/Deadline10/lib/python3/lib/python3.10/site-packages:/opt/Thinkbox/Deadline10/bin/
2024-05-17 15:03:40:  0: INFO: Setting Process Environment Variable MAGICK_CONFIGURE_PATH to /var/lib/Thinkbox/Deadline10/workers/ip-10-128-24-251/Draft
2024-05-17 15:03:40:  0: INFO: Setting Process Environment Variable LD_LIBRARY_PATH to /opt/Thinkbox/Deadline10/bin/python/lib:/var/lib/Thinkbox/Deadline10/workers/ip-10-128-24-251/Draft
2024-05-17 15:03:40:  0: CheckPathMapping: Swapped "P:\test\awsportal" with "/mnt/Data/elysiumprojectseffd94c1274df72bf35b367f8ddd5957/test\awsportal"
2024-05-17 15:03:40:  0: INFO: Creating the output directory "/mnt/Data/elysiumprojectseffd94c1274df72bf35b367f8ddd5957/test/awsportal"
2024-05-17 15:03:40:  0: Encountered an error while executing plugin command of type 'Initialize Plugin'
2024-05-17 15:03:42:  Sending kill command to process tree with root process 'deadlinesandbox.exe' with process id 4682
2024-05-17 15:03:44:  Scheduler Thread - Render Thread 0 threw a major error: 
2024-05-17 15:03:44:  >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2024-05-17 15:03:44:  Exception Details
2024-05-17 15:03:44:  RenderPluginException -- Initialize: Error: Failed to create output directory "/mnt/Data/elysiumprojectseffd94c1274df72bf35b367f8ddd5957/test/awsportal". The path may be invalid or permissions may not be sufficient.
2024-05-17 15:03:44:  RenderPluginException.Cause: JobError (2)
2024-05-17 15:03:44:  RenderPluginException.Level: Major (1)
2024-05-17 15:03:44:  RenderPluginException.HasSlaveLog: True
2024-05-17 15:03:44:  RenderPluginException.SlaveLogFileName: /var/log/Thinkbox/Deadline10/deadlineslave_renderthread_0-ip-10-128-24-251-0000.log
2024-05-17 15:03:44:  Exception.TargetSite: Deadline.Slaves.Messaging.PluginResponseMemento d(Deadline.Net.DeadlineMessage, System.Threading.CancellationToken)
2024-05-17 15:03:44:  Exception.Data: ( )
2024-05-17 15:03:44:  Exception.Source: deadline
2024-05-17 15:03:44:  Exception.HResult: -2146233088
2024-05-17 15:03:44:    Exception.StackTrace: 
2024-05-17 15:03:44:     at Deadline.Plugins.SandboxedPlugin.d(DeadlineMessage bgx, CancellationToken bgy
2024-05-17 15:03:44:     at Deadline.Plugins.SandboxedPlugin.Initialize(Job job, CancellationToken cancellationToken
2024-05-17 15:03:44:     at Deadline.Slaves.SlaveRenderThread.e(String ake, Job akf, CancellationToken akg
2024-05-17 15:03:44:     at Deadline.Slaves.SlaveRenderThread.b(TaskLogWriter aka, CancellationToken akb)
2024-05-17 15:03:44:  <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

Running `ls -ls /mnt` on the EC2 workers outputs this:

Connection Accepted.Command exited with code: 0
Command Stdout: total 4
0 d--------- 1 ec2-user ec2-user 81920 May 17 14:49 Data
4 drwxrwxrwx 6 ec2-user ec2-user  4096 May 17 14:59 dtu

And running `ls /mnt/Data` fails:

Failure: Command exited with code: 2
Command Stdout: total 0

Command Stderr: ls: reading directory /mnt/Data: Bad address
 (System.Exception)

Asset Server seems quite happy in the logs:

1715958548.715086 2024-05-17 17:09:08,715 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserverlib\share_util.py:refresh_shares:104] [root] [137156] [Dummy-1] [INFO] Refreshing shares list.
1715958548.720086 2024-05-17 17:09:08,720 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserverlib\share_util.py:refresh_shares:111] [root] [137156] [Dummy-1] [INFO] Share: Path: \\elysium\projects\ Id: elysiumprojectseffd94c1274df72bf35b367f8ddd5957
1715958548.721086 2024-05-17 17:09:08,721 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserverlib\share_util.py:refresh_shares:111] [root] [137156] [Dummy-1] [INFO] Share: Path: \\elysium\Assets\ Id: elysiumAssets0365ed9f9512882829c36fc20ae3dcc1
1715958550.141956 2024-05-17 17:09:10,141 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserver.py:get_and_set_ip_address:89] [root] [137156] [Dummy-1] [INFO] IPAddress set to 10.88.49.55

As does AWS Portal Link, which reports Tunnel connected.

No other logs I’ve found show any errors that could help me.

Hello, any news on this issue?

Bumping this, I’d really like to get it up and running.

Hi.

You posted:
Running `ls -ls /mnt` on the EC2 workers outputs this:

Connection Accepted.Command exited with code: 0
Command Stdout: total 4
0 d--------- 1 ec2-user ec2-user 81920 May 17 14:49 Data
4 drwxrwxrwx 6 ec2-user ec2-user  4096 May 17 14:59 dtu

IIRC, that /mnt/Data directory should not be 000 – I think it is supposed to have the same perms as the dtu directory.
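
If it helps, here is a minimal sketch of comparing those permission bits, assuming you can run a quick Python 3 script on the instance (the paths are the ones from your ls output):

import os
import stat

# Compare the permission bits of the cache mount with the healthy dtu directory.
# Run on the EC2 instance; paths taken from the ls output above.
for path in ("/mnt/Data", "/mnt/dtu"):
    mode = os.stat(path).st_mode
    print(path, oct(stat.S_IMODE(mode)))  # expect 0o777 for dtu; 0o0 would match the d--------- above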

  1. Did you use the correct AWS credentials (access and secret access keys) for the AWSPortal IAM user?

  2. Are you able to list the bucket using those credentials (for example, with a quick check like the sketch below)?
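
A minimal sketch of such a check with boto3, assuming the same access keys the AWS Portal login used; the keys and bucket name below are placeholders to substitute:

import boto3

# Placeholders -- use the AWSPortal IAM user's keys and your actual cache bucket name.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...REPLACE",
    aws_secret_access_key="REPLACE",
)
resp = s3.list_objects_v2(Bucket="aws-portal-cache-51fd459f-5aba-4939-xxx", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])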

Thank you for the insight. The AWSPortal user was created by Deadline and does have permissions on the buckets:

                "arn:aws:s3::*:awsportal*",
                "arn:aws:s3::*:stack*",
                "arn:aws:s3::*:aws-portal-cache*",
                "arn:aws:s3::*:logs-for-aws-portal-cache*",
                "arn:aws:s3::*:logs-for-stack*"

My cache bucket is aws-portal-cache-51fd459f-5aba-4939-xxx
I believe these are the correct credentials as they have been in use recently.
I ran AWS_ACCESS_KEY_ID=XXXXXX AWS_SECRET_ACCESS_KEY=XXXXXXX aws s3 ls s3://aws-portal-cache-51fd459f-5aba-4939-xxx and I do find my test file.

OK. I think you listed just the resources, not the actions (e.g. allow “s3:GetObject”, “s3:PutObject”, “s3:ListBucket”), but since you wrote that you were able to list the bucket and see the file, that’s good.
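
If you want to double-check the actions as well, one option is IAM’s policy simulator via boto3. This is just a sketch; the ARNs below are hypothetical and it assumes whoever runs it is allowed to call iam:SimulatePrincipalPolicy:

import boto3

iam = boto3.client("iam")

# Hypothetical ARNs -- substitute the real AWSPortal IAM user and cache bucket.
user_arn = "arn:aws:iam::123456789012:user/awsportal-user"
bucket_arn = "arn:aws:s3:::aws-portal-cache-51fd459f-5aba-4939-xxx"

# ListBucket applies to the bucket itself; Get/PutObject apply to objects in it.
checks = [
    ("s3:ListBucket", bucket_arn),
    ("s3:GetObject", bucket_arn + "/*"),
    ("s3:PutObject", bucket_arn + "/*"),
]
for action, resource in checks:
    resp = iam.simulate_principal_policy(
        PolicySourceArn=user_arn,
        ActionNames=[action],
        ResourceArns=[resource],
    )
    print(action, "->", resp["EvaluationResults"][0]["EvalDecision"])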

One thing comes to mind – when you tested the AWS CLI, were you logged into the EC2 instance as ec2-user, or did you list it from your local box? I’m just trying to determine whether the AWS access keys you used were the same ones used in the AWS Portal login UI (in case you had checked “remember credentials”). I think that credential stays saved unless you explicitly log out of the AWS Portal – in case you accidentally logged in using a different IAM user.

I believe this is the correct user, as I redid the install and I can see this key being used actively in the IAM portal. Also, I can perform actions like cleaning the bucket from the interface.

The reason I was asking is that you sometimes see permission weirdness on the mount point when the shared filesystem is unreadable/inaccessible, so it gets mounted with 000 or ??? (question marks).

Did you open a support ticket? It may be easier for them to check if your IAM roles have the correct policies attached.

I’ve noticed the AWSPortalAssetServerUser IAM user is not being used, so I updated the credentials on my AWSPortalAssetServer with new ones from this user, using update_credentials.py.
On the AWS Worker, I ran aws s3api get-object --bucket aws-portal-cache-51fd459f-*** --key test/test.txt /dev/stdout from the Monitor and got valid output:

Connection Accepted.Command exited with code: 0
Command Stdout: hello world{
    "AcceptRanges": "bytes",
    "LastModified": "Tue, 23 Jul 2024 11:38:07 GMT",
    "ContentLength": 11,
    "ETag": "\"5eb63bbbe01eeed093cb22bb8f5acdc3\"",
    "ContentType": "text/plain",
    "ServerSideEncryption": "AES256",
    "Metadata": {}
}

So access to the bucket is OK, but the bucket stays empty even when I launch a job.
I’ve tried adding admin policies to my AWSPortal users as a test, to make sure it’s not a permission issue, but it hasn’t helped.
I’ve also noticed that the AWSPortalAssetServerUser I provided for my Asset Server is not being used (last activity 66 days ago).

Am I supposed to be able to browse any files in the Asset Server’s root directories from the spot instances? Or does Deadline need to copy them to the bucket first?

I haven’t opened an AWS support ticket, as I don’t have a plan that includes technical support.

I remember now that we had a similar case where the Deadline version installed on the AWS workers was too old, which broke the sync. I pulled my hair out over it for weeks (I don’t remember if we used a stock image).
It’s one possibility. We had to install a newer version and save the image.

All of our Deadline infrastructure, as well as the image, is on 10.3.2.1.

Am I supposed to be able to browse any files in the Asset Server’s root directories from the spot instances? Or does Deadline need to copy them to the bucket first?

It’s been a minute since we’ve used AWS cloud rendering (we can’t run the H20.x + Redshift versions with the Linux AMIs), but…

On the EC2 instance, IIRC, the bucket is mounted at /mnt/Data. From your post, you should be able to ls /mnt/Data/elysiumprojectseffd94c1274df72bf35b367f8ddd5957/, which should be the “mirror” of on-prem, where \\elysium\projects\ is the share and test\awsportal are the files/folders. My main concern is that it looks like the bucket is not being correctly mounted on the EC2 instance.
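
As a rough illustration only (this is not Deadline’s actual code), the CheckPathMapping swap seen in the worker log above amounts to a prefix replacement plus separator normalization, something like:

# Illustrative sketch of the path swap from the worker log; the share ID comes
# from the Asset Server log and P:\ is the on-prem drive letter.
def map_to_portal_path(windows_path: str) -> str:
    share_prefix = "P:\\"
    mount_prefix = "/mnt/Data/elysiumprojectseffd94c1274df72bf35b367f8ddd5957/"
    if windows_path.startswith(share_prefix):
        tail = windows_path[len(share_prefix):]
        return mount_prefix + tail.replace("\\", "/")
    return windows_path

print(map_to_portal_path(r"P:\test\awsportal"))
# -> /mnt/Data/elysiumprojectseffd94c1274df72bf35b367f8ddd5957/test/awsportal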

I believe that when the Job is started on a Worker, the assets are synced to the bucket; IIRC it happens before any JobPreload.py scripts. Usually you can see it in the Worker’s job log output – something like starting S3 Backed Cache. I think there is a command to pre-cache the job, deadlinecommand -AWSPortalPrecacheJob <Job ID(s)>, so you can try that and see if the files show up in the bucket.
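
For example, a rough sketch of driving that from Python, assuming deadlinecommand is on the PATH; the job ID and bucket name below are placeholders:

import subprocess
import boto3

job_id = "REPLACE_WITH_JOB_ID"                       # e.g. copied from the Monitor
bucket = "aws-portal-cache-51fd459f-5aba-4939-xxx"   # your cache bucket

# Ask the Asset Server to push the job's input files up to the cache bucket.
subprocess.run(["deadlinecommand", "-AWSPortalPrecacheJob", job_id], check=True)

# Then see whether any objects actually appeared in the bucket.
s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket=bucket, MaxKeys=20)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])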

Let’s at least double check if the AWS Portal is able to communicate with the RCS.

  1. Start the Remote Connection Server (RCS)
  2. Launch the AWS Portal
  3. Create the AWS Portal Infrastructure
  4. Start a spot fleet of 1 EC2 instance and submit a test job.

Check / verify connection:
https://docs.thinkboxsoftware.com/products/deadline/10.3/1_User%20Manual/manual/aws-portal-troubleshooting/verify-connection-to-rcs.html#aws-portal-verify-connection-to-rcs-ref-label

and then run that pre-cache command: deadlinecommand -AWSPortalPrecacheJob <Job ID(s)>

Thanks for helping me troubleshoot!
So there’s no problem with the RCS connection: Wed Jul 24 10:26:02 UTC 2024 -- Connection to the RCS established.
I can see the worker in Monitor and run commands.

Running deadlinecommand -AWSPortalPrecacheJob 66a0d77b0959151ce065fee0

Connection Accepted.Command exited with code: 0
Command Stdout: Preparing to send precache request to Asset Server: 10.88.49.55:4000
Failed to push files for job 66a0d77b0959151ce065fee0

Here are the Asset Server startup logs:

1721820805.603858 2024-07-24 13:33:25,603 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserver.py:serve:527] [root] [138120] [Dummy-1] [INFO] Stopping Asset Server.
1721820807.047885 2024-07-24 13:33:27,047 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserverservice.py:main:30] [root] [104360] [Dummy-1] [INFO] Starting Credential Update Check.
1721820807.048886 2024-07-24 13:33:27,048 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserverlib\credentials.py:try_update_credentials:161] [root] [104360] [Dummy-1] [INFO] No credential update.
1721820807.048886 2024-07-24 13:33:27,048 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserverservice.py:main:33] [root] [104360] [Dummy-1] [INFO] Finished Credential Update Check.
1721820807.048886 2024-07-24 13:33:27,048 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserver.py:test_credentials:41] [root] [104360] [Dummy-1] [INFO] Attempting to load credentials.
1721820807.048886 2024-07-24 13:33:27,048 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserver.py:test_credentials:44] [root] [104360] [Dummy-1] [INFO] Finished loading credentials.
1721820807.048886 2024-07-24 13:33:27,048 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserverservice.py:main:41] [root] [104360] [Dummy-1] [INFO] Finished Credential Test.
1721820808.177599 2024-07-24 13:33:28,177 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserverservice.py:main:69] [root] [104360] [Dummy-1] [INFO] Mapping additional drives in the service account...
1721820809.479121 2024-07-24 13:33:29,479 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserverservice.py:main:73] [root] [104360] [Dummy-1] [INFO] The following letter drives are mapped in the service account: C:\ P:\ 
1721820809.479121 2024-07-24 13:33:29,479 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserver.py:serve:477] [root] [104360] [Dummy-1] [INFO] Asset Server starting up.
1721820810.638389 2024-07-24 13:33:30,638 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserverlib\share_util.py:refresh_shares:104] [root] [104360] [Dummy-1] [INFO] Refreshing shares list.
1721820810.640394 2024-07-24 13:33:30,640 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserverlib\share_util.py:refresh_shares:111] [root] [104360] [Dummy-1] [INFO] Share: Path: P:\ Id: P45fb528f0ad05bcc56680f4dffded1ec
1721820810.645382 2024-07-24 13:33:30,645 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserver.py:serve:498] [root] [104360] [Dummy-1] [INFO] Asset Server Startup Complete!

When changing the root directories in the Asset Server settings, I also always get an unknown error.
My project files are on a drive mounted at P:\... for the network path \\elysium\projects. The Asset Server log shows both the P: and C: drives.
Here are the logs after refreshing the path (root directory set to P:\):


1721820868.559165 2024-07-24 13:34:28,559 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserver.py:GetFile:109] [root] [104360] [ThreadPoolExecutor-0_0] [INFO] GetFile called for path: P45fb528f0ad05bcc56680f4dffded1ec/awsportalassetserver1ymwwpkbtemp
1721820868.633166 2024-07-24 13:34:28,633 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\s3backedcachelib\aws_s3_util.py:upload_file:187] [root] [104360] [ThreadPoolExecutor-0_0] [INFO] Created metadata for filename: {'filename': '//elysium.8849.studio/projects/awsportalassetserver1ymwwpkbtemp'}
1721820868.633166 2024-07-24 13:34:28,633 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\s3backedcachelib\aws_s3_util.py:upload_file:196] [root] [104360] [ThreadPoolExecutor-0_0] [INFO] About to upload //elysium.8849.studio/projects/awsportalassetserver1ymwwpkbtemp to bucket aws-portal-cache-51fd459f-5aba-4939-b4a6-ec2b33ae89bd at c7bdeb3270454896a1c437b16148be2d
1721820868.854863 2024-07-24 13:34:28,854 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\s3backedcachelib\aws_s3_util.py:upload_file:223] [root] [104360] [ThreadPoolExecutor-0_0] [INFO] Completed upload to aws-portal-cache-51fd459f-5aba-4939-b4a6-ec2b33ae89bd at c7bdeb3270454896a1c437b16148be2d in 0.22 s (uploaded 0 byte file in 0.22 s)
1721820868.854863 2024-07-24 13:34:28,854 [C:\Program Files (x86)\Thinkbox\AWSPortalAssetServer\awsportalassetserver.py:GetFile:135] [root] [104360] [ThreadPoolExecutor-0_0] [INFO] Uploaded file at //elysium.8849.studio/projects/awsportalassetserver1ymwwpkbtemp to fulfill P45fb528f0ad05bcc56680f4dffded1ec/awsportalassetserver1ymwwpkbtemp

The job is a simple Draft job converting EXR to MOV in P:\...

0-byte files have also been uploaded to S3 and to the root of my P share.
So the Asset Server does have access to S3 and can write files to both S3 and the P share (as shown by those 0-byte, presumably test, files).
The infrastructure has access to my RCS, and the EC2 workers have the rights to the S3 bucket.
But it seems the pre-cache command doesn’t work. How does Deadline know the input file paths? I know the job’s properties contain the output path, but the input is plugin-dependent, no?
