At our studio we have a local repo/farm running DL 10.0 (which runs on Python 2.7).
I am setting up DL10.2 to co-exist with DL10.0, which includes two different pulse server versions.
I unfortunately found out the hard way that the 10.2 Pulse servers can still auto-configure and update the 10.0 clients (by setting up auto-configure rules in the 10.2 repo using a ... ruleset).
To keep this from happening again, I changed the auto-configure port on the 10.2 Pulse server from the default 17001 (which DL 10.0 is set up to use) to 27001. This should stop the 10.0 and 10.2 Pulses from competing with each other.
I will also make sure any Worker machines using DL 10.2 use this 27001 auto-configuration port, which should keep the 10.0 and 10.2 Workers/servers/repos separated.
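For anyone curious why the port change is enough to separate the two farms: UDP/TCP traffic sent to one port is simply never seen by a listener on another port. Here's a minimal, self-contained Python sketch (hypothetical port numbers matching the setup above, not actual Deadline code) showing that a datagram pushed to the 27001 listener never reaches the 17001 listener:

```python
import socket

# Illustration only: two listeners standing in for the 10.0 and 10.2
# auto-configuration ports. Moving the 10.2 Pulse to 27001 means its
# pushes can never reach anything still listening on 17001.
OLD_PORT, NEW_PORT = 17001, 27001  # DL 10.0 default vs. the 10.2 override

old = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
old.bind(("127.0.0.1", OLD_PORT))
old.settimeout(0.2)

new = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
new.bind(("127.0.0.1", NEW_PORT))
new.settimeout(0.2)

# The "10.2 Pulse" pushes a config only to the 10.2 port.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"10.2-autoconf", ("127.0.0.1", NEW_PORT))

data, _ = new.recvfrom(1024)
print(data.decode())  # the 27001 listener receives it

try:
    old.recvfrom(1024)
    print("17001 listener also got it (unexpected)")
except socket.timeout:
    print("17001 listener heard nothing")  # as expected
```

The same logic is why the Workers must be pointed at 27001 too: a 10.2 Worker left on 17001 would still hear the 10.0 Pulse.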
One issue I've been running into, though, seems to be DL 10.2 Pulse service caching (the primary Pulse service runs on a Linux server).
In case you're wondering, the ruleset I've made does get applied to the Worker machine I'm testing with (confirmed in the Worker startup logs). The issue, it seems, is that the Pulse server is not sending out the latest information from the repo.
I know the cache takes around 10 minutes to update. However, in my tests, if I change the repo location in an auto-configuration ruleset, restart the primary Pulse machine (the machine and/or the Pulse service), and finally restart the Worker machine, Pulse comes back up still using the cached data, and the Worker does not get updated on restart. Only after the 10 minutes does the Pulse service actually re-read the repo changes (no reboot/restart required) and send that info to any Workers that restart.
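Deadline's internals aren't public, so this is only a guess, but the fact that a full service restart doesn't clear the cache suggests the cached data (and its timestamp) live on disk rather than in memory. A sketch of that pattern, with made-up names and a TTL matching the observed ~10 minutes, reproduces the behaviour:

```python
import json, os, tempfile, time

CACHE_TTL = 600  # seconds -- matches the ~10 minute refresh observed

def read_rules_cached(cache_path, fetch_from_repo):
    """Return auto-config rules, re-reading the repo only when the
    on-disk cache is older than CACHE_TTL. Because both the data and
    its timestamp persist on disk, restarting the process does NOT
    invalidate the cache -- only the TTL expiring does."""
    if os.path.exists(cache_path):
        age = time.time() - os.path.getmtime(cache_path)
        if age < CACHE_TTL:
            with open(cache_path) as f:
                return json.load(f)  # stale data survives restarts
    rules = fetch_from_repo()        # cache expired: re-read the repo
    with open(cache_path, "w") as f:
        json.dump(rules, f)
    return rules

# Demo: the second call still returns the old value even though the
# "repo" changed, mimicking what a Pulse restart looks like here.
path = os.path.join(tempfile.gettempdir(), "autoconf_cache_demo.json")
if os.path.exists(path):
    os.remove(path)
first = read_rules_cached(path, lambda: {"repo": "/mnt/repo10.2-old"})
second = read_rules_cached(path, lambda: {"repo": "/mnt/repo10.2-new"})
print(first, second)  # both show the old repo path
```

If Pulse does something like this, the fix would be deleting whatever cache file it writes (or an official re-cache command), which is exactly what I'm asking about below.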
So no matter what, it seems like you have to endure the 10 minutes.
Is there any way to force Pulse to re-cache the information from the repo?