AWS G5 compatible or not with Redshift? (log attached)

Hello gang, I'm almost there.
I created a custom AMI with Linux and my latest Redshift and, according to the log below, that's the closest I've ever been to a rendered frame.
I'm limited to a quota of 8 and cheap instances are never available, but I managed to test with a g5.xlarge, which has 1 GPU (right?) and should be compatible with Redshift (compute capability > 7.0). Still, the log says no GPU was found. Not sure where to go from here, so any light on the matter would be very helpful.
Thanks!
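For reference on the hardware side: a g5.xlarge does carry a single NVIDIA A10G, which is compute capability 8.6, so the instance type itself clears Redshift's 7.0 requirement. Whether the OS can even see the card, independent of any driver, can be checked from a shell on the instance; a minimal check, assuming lspci is available on the image:

# Lists the A10G on the PCI bus even when no NVIDIA driver is installed
lspci | grep -i nvidia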

2023-08-19 07:35:23:  0: STDOUT: RLM License Search Path=/home/ec2-user/redshift:/etc/opt/maxon/rlm
2023-08-19 07:35:23:  0: STDOUT: Detected env variable REDSHIFT_PATHOVERRIDE_FILE. Loading path override data from file: /var/lib/Thinkbox/Deadline10/workers/ip-10-128-39-198/jobsData/64dcc494fa524f3484c3f6d3/RSMapping_temp5ULmK0/RSMapping.txt
2023-08-19 07:35:23:  0: STDOUT: Loading Redshift procedural extensions...
2023-08-19 07:35:23:  0: STDOUT: 	From path: /usr/redshift/procedurals/
2023-08-19 07:35:23:  0: STDOUT: 	Done!
2023-08-19 07:35:23:  0: STDOUT:  
2023-08-19 07:35:23:  0: STDOUT: Preparing compute platforms
2023-08-19 07:35:23:  0: STDOUT: 	Could not load the CUDA core library from /usr/redshift/bin/libredshift-core-cuda.so
2023-08-19 07:35:23:  0: STDOUT: 			dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2023-08-19 07:35:23:  0: STDOUT: 	Found CPU compute library in /usr/redshift/bin/libredshift-core-cpu.so
2023-08-19 07:35:23:  0: STDOUT: 	Done
2023-08-19 07:35:23:  0: STDOUT: Determining peer-to-peer capability (NVLink or PCIe)
2023-08-19 07:35:23:  0: STDOUT: 	Done
2023-08-19 07:35:23:  0: STDOUT: PostFX initialized
2023-08-19 07:35:23:  0: STDOUT: Loading: /mnt/Data/D_active_jobsAWS_test9d41df76868091aa94b562b425dcd728/hou19.5/render/rs/0010_v001/adx0010_v001_main.1120.rs
2023-08-19 07:35:23:  Port Forwarder (redshift:5054): Client connected to port forwarder.
2023-08-19 07:35:23:  Port Forwarder (redshift:7054): Client connected to port forwarder.
2023-08-19 07:35:24:  Port Forwarder (redshift:5054): Client connected to port forwarder.
2023-08-19 07:35:24:  Port Forwarder (redshift:7054): Client connected to port forwarder.
2023-08-19 07:35:25:  0: STDOUT: License for redshift-core 2023.12 valid until Dec 08 2023
2023-08-19 07:35:25:  0: STDOUT: Detected change in GPU device selection
2023-08-19 07:35:25:  0: STDOUT: No devices available
2023-08-19 07:35:27:  0: STDOUT: PostFX shut down
2023-08-19 07:35:27:  0: STDOUT: Shutdown GPU Devices...
2023-08-19 07:35:27:  0: STDOUT: 	Devices shut down ok
2023-08-19 07:35:27:  0: STDOUT: Shutdown Rendering Sub-Systems...
2023-08-19 07:35:27:  0: STDOUT: License returned     
2023-08-19 07:35:27:  0: STDOUT: 	Finished Shutting down Rendering Sub-Systems
2023-08-19 07:35:27:  0: INFO: Process exit code: 1
2023-08-19 07:35:27:  0: INFO: Sending EndTaskRequest to S3BackedCacheClient.
2023-08-19 07:35:27:  0: DEBUG: Request:
2023-08-19 07:35:27:  0: DEBUG: 	JobId: 64dcc494fa524f3484c3f6d3
2023-08-19 07:35:27:  0: Done executing plugin command of type 'Render Task'
2023-08-19 07:35:27:  0: Executing plugin command of type 'End Job'
2023-08-19 07:35:27:  0: INFO: Sending EndTaskRequest to S3BackedCacheClient.
2023-08-19 07:35:27:  0: DEBUG: Request:
2023-08-19 07:35:27:  0: DEBUG: 	JobId: 64dcc494fa524f3484c3f6d3
2023-08-19 07:35:27:  0: Done executing plugin command of type 'End Job'
2023-08-19 07:35:29:  Sending kill command to process tree with root process 'deadlinesandbox.exe' with process id 3472
2023-08-19 07:35:30:  Scheduler Thread - Render Thread 0 threw a major error: 
2023-08-19 07:35:30:  >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2023-08-19 07:35:30:  Exception Details
2023-08-19 07:35:30:  RenderPluginException -- Error: Renderer returned non-zero error code, 1. Check the log for more information.
2023-08-19 07:35:30:     at Deadline.Plugins.PluginWrapper.RenderTasks(Task task, String& outMessage, AbortLevel& abortLevel)
2023-08-19 07:35:30:  RenderPluginException.Cause: JobError (2)
2023-08-19 07:35:30:  RenderPluginException.Level: Major (1)
2023-08-19 07:35:30:  RenderPluginException.HasSlaveLog: True
2023-08-19 07:35:30:  RenderPluginException.SlaveLogFileName: /var/log/Thinkbox/Deadline10/deadlineslave_renderthread_0-ip-10-128-39-198-0000.log
2023-08-19 07:35:30:  Exception.TargetSite: Deadline.Slaves.Messaging.PluginResponseMemento d(Deadline.Net.DeadlineMessage, System.Threading.CancellationToken)
2023-08-19 07:35:30:  Exception.Data: ( )
2023-08-19 07:35:30:  Exception.Source: deadline
2023-08-19 07:35:30:  Exception.HResult: -2146233088
2023-08-19 07:35:30:    Exception.StackTrace: 
2023-08-19 07:35:30:     at Deadline.Plugins.SandboxedPlugin.d(DeadlineMessage bgt, CancellationToken bgu
2023-08-19 07:35:30:     at Deadline.Plugins.SandboxedPlugin.RenderTask(Task task, CancellationToken cancellationToken
2023-08-19 07:35:30:     at Deadline.Slaves.SlaveRenderThread.c(TaskLogWriter ajy, CancellationToken ajz)
2023-08-19 07:35:30:  <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

I used the default machine type, a t2.micro, to make the custom AMI.
Started the instance from the Thinkbox Linux + Deadline template, then installed Redshift as per the documentation ('sudo sh ./installationFile.run'); the install itself seemed to go fine.
… I also did not install any NVIDIA drivers…

Tried installing the driver again but wasn't successful. I followed the AWS docs on the NVIDIA driver install but got loads of warnings during installation.
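For reference, the procedure I was following is roughly the GRID-driver install from the AWS docs for G-series instances on Amazon Linux. This is only a sketch from memory, so treat the bucket path and exact commands as assumptions and check the current AWS documentation:

# Build tools + kernel headers, needed to compile the NVIDIA kernel module
sudo yum install -y gcc kernel-devel-$(uname -r)

# AWS publishes a GRID driver package for G-series instances in this bucket
# (path as I remember it from the docs -- verify before relying on it)
aws s3 cp --recursive s3://ec2-linux-nvidia-drivers/latest/ .

chmod +x NVIDIA-Linux-x86_64*.run
sudo /bin/sh ./NVIDIA-Linux-x86_64*.run
# (the docs also suggest rebooting after the install)

# The verification step from the same docs
nvidia-smi -q | head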

That looks like it’s missing GPU drivers, given this error:

STDOUT: 	Could not load the CUDA core library from /usr/redshift/bin/libredshift-core-cuda.so
STDOUT: 			dlerror: libcuda.so.1: cannot open shared object file: No such file or directory

It looks like you're using AWS Portal; if you use one of our AMIs with Redshift on it, it should behave better. It could be that installing Redshift on a t2 without a GPU caused the drivers to be skipped during installation.
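You can confirm this on the instance itself. libcuda.so.1 is installed by the NVIDIA driver, not by Redshift, so a quick hedged check (assuming the usual loader setup) is:

# If nothing shows up here, the driver isn't installed
# (or its libraries aren't on the loader path)
ldconfig -p | grep -i libcuda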

I started a new G5 instance from the t2.micro image and installed the NVIDIA driver. The installation finished successfully but, as I mentioned, threw loads of warnings at the very end of the process. Something related to vulture and a modular X.Org release… I should have taken screen grabs.
Anyway, I tested the driver installation following the AWS docs and the test was OK.
So I start Deadline Launcher and Monitor, create the infrastructure, and add a G5 fleet, but the fleet wouldn't fulfill. I'm thinking I'll just focus on making a single instance work 'locally': manually copy the RS and texture files across to the G5 and render locally via the command line. Hoping once that's cleared and out of the way the rest should follow suit.
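Something like this is what I have in mind for that manual test once the driver is sorted. The -gpu flag is from memory, so check the Redshift docs or run redshiftCmdLine without arguments for the exact option list:

# Render one of the exported .rs files with the standalone renderer that ships
# with Redshift (installed under /usr/redshift here, as in the log above);
# the scene path is wherever the copied files end up on the G5
/usr/redshift/bin/redshiftCmdLine /path/to/adx0010_v001_main.1120.rs -gpu 0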
Will try Google as well, because AWS is getting on my nerves with the quotas and paperwork.