Refresh period for Deadline custom plugin sync to spot instance?

Hi all,

I have a custom plugin that I use to test out modifications for Houdini + Redshift renders.

If I update e.g. custom_hrender-dl.py (located in our Deadline repo) while the infrastructure and spot fleet have already started, how long does it take for the updated file to transfer over?

Or is there a way to force it to refresh?

I’ve waited more than 10 minutes, and it doesn’t update on the cloud instance(s). When I look in /var/lib/Thinkbox/Deadline10/workers/ip-ec2-local-ipv4/plugins/myjobid/, it’s not the updated script. I’m not sure where it’s syncing from, since the file in the repo is updated.
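One quick way to confirm the worker is holding a stale copy is to compare checksums of the repo file and the synced file. A minimal sketch — the paths here are demo placeholders, so substitute your actual repo path and the worker path (/var/lib/Thinkbox/Deadline10/workers/<host>/plugins/<jobid>/):

```shell
# Paths are placeholders -- point these at the repo copy and the worker copy.
REPO_FILE=/tmp/demo_repo_plugin.py
WORKER_FILE=/tmp/demo_worker_plugin.py

# Demo setup so the snippet runs on its own; skip this when using real files.
printf 'print("new version")\n' > "$REPO_FILE"
printf 'print("old version")\n' > "$WORKER_FILE"

if [ "$(md5sum < "$REPO_FILE")" = "$(md5sum < "$WORKER_FILE")" ]; then
    echo "worker copy is in sync"
else
    echo "worker copy is stale"
fi
```

If the checksums differ after the sync should have happened, something between the repo and the worker (e.g. a cache) is serving the old file.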

Any thoughts?

thanks

Did you try restarting the worker? This might force it.

I did try restarting the worker on the node but that didn’t seem to refresh it.

This is an AWS Portal machine, right? There’s an nginx cache living on the “Gateway” EC2 instance that’s likely holding onto your old file, and it only refreshes every 60 minutes.

If I were you, I’d swap the file in /var/lib/Thinkbox/Deadline10/workers/ip-ec2-local-ipv4/plugins/myjobid/ out for your updated code, assuming this is a single Worker you’re using for testing.

If you’re comfortable, you can clear out the nginx cache with these steps:

Remote into the Gateway instance, then:

sudo su
service nginx stop
cd /var/cache/nginx

There should be multiple two-character folders in the directory. These all need to be removed:

rm -r /var/cache/nginx/*
service nginx start

The nginx cache directory will re-populate from the repository once the service has been restarted.
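The steps above can also be collected into a small dry-run script that prints each command instead of executing it, so you can review the sequence before running it for real as root on the Gateway (the cache path is the default one from the steps above):

```shell
# Dry-run wrapper: prints each command instead of executing it.
# Set DRY_RUN=0 (and run as root on the Gateway) to actually purge the cache.
DRY_RUN=1

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

run service nginx stop
run rm -r /var/cache/nginx/*   # removes the two-character cache folders
run service nginx start
```

This is just a safety sketch; running the three commands by hand as in the steps above works the same way.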


Or stop and start the infrastructure, which will fix any trouble created by messing with nginx on the Gateway.

If you get stuck, cut a ticket to awsthinkbox.zendesk.com and we should be able to help out!


Thanks Justin.

Yes, it was a single worker (AWS Portal machine) that I was using for testing.

I tried directly modifying the file that was in /var/lib/Thinkbox/Deadline10/workers/ip-ec2-local-ipv4/plugins/myjobid/ but the file would get replaced with the old version when it loaded the next task.

I’ll try the nginx cache purge on the Gateway instance the next time I can test. If that doesn’t work, I’ll open a ticket.

thanks,
Janice


Yup, that worked! I just got a chance to try this out, and purging the nginx cache did the trick. The updated .py file was synced to the worker node.

thanks again
