I need to transfer repository settings (pools/groups/limits, users/user groups, and possibly some others) from one repository to another. I’m guessing the simplest way to do this would be to dump them from one database and import them into the other, but I wanted to check whether that would be enough to do the job, or whether something else would need to be touched for the clients on the second farm to pick them up properly.
See attached for two shell scripts. Replace “HOSTNAME” with your MongoDB hostname, and replace “deadline6db” with the name of your DB. The collections of interest are “DeadlineSettings”, “LimitGroups”, and “UserInfo”. The export script is also a handy way to run an ad-hoc backup of your DB, since it can be run against a live database.
[DISCLAIMER - the “--upsert” flag in the import script will OVERWRITE matching documents in the collection it is IMPORTED into. Also, collection names, etc. may well change between MAJOR versions, so if you’re reading this forum post in two years’ time, don’t assume it is still correct]
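For anyone who can’t grab the attachments, the scripts presumably boil down to something like the sketch below. This is a hypothetical reconstruction, not the actual attached scripts: the hostname and DB name are the placeholders from the post, and the commands are echoed as a dry run so nothing touches a real database until you remove the `echo`. The `mongoexport`/`mongoimport` flags themselves are the standard ones.

```shell
#!/bin/sh
# Hypothetical reconstruction of the attached export/import scripts.
# HOSTNAME and deadline6db are placeholders, as in the post above.
HOST="HOSTNAME"
DB="deadline6db"
COLLECTIONS="DeadlineSettings LimitGroups UserInfo"

# Export each collection to JSON (safe to run against a live DB):
for COLL in $COLLECTIONS; do
  echo mongoexport --host "$HOST" --db "$DB" \
       --collection "$COLL" --out "$COLL.json"
done

# Import into the target DB; --upsert replaces matching documents:
for COLL in $COLLECTIONS; do
  echo mongoimport --host "$HOST" --db "$DB" \
       --collection "$COLL" --file "$COLL.json" --upsert
done
# Drop the leading 'echo' on each command to actually run them.
```

Run the export half on the source farm’s DB host, copy the resulting JSON files across, and run the import half against the target DB.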
Thanks Mike, it’s good to know that I can just manually dump and import Mongo documents. I’ve got one more slightly more involved question though…
I’m in a situation where I would like to do a partial transfer of the “deadline_network_settings” document in the DeadlineSettings collection (i.e. omitting some location-specific settings). I noticed there’s a field in there named “NetworkSettingsVersionHash”, which I imagine is used by the clients to provide a fast-path for verifying their settings are up-to-date without querying and diffing the entire document. If this is the case, I’m not sure it’s a good idea to transfer the value of this field, but I’m wondering if there’s a way I can manually recalculate the hash based on the merged settings after the transfer.
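For reference, the partial export I have in mind would look something like the following. The `_id` value in the query is my guess at how the document is keyed; check your own DeadlineSettings collection before relying on it, and note the command is echoed as a dry run.

```shell
#!/bin/sh
HOST="HOSTNAME"
DB="deadline6db"
# The query key below is an assumption -- confirm how the document is
# actually keyed in your own DeadlineSettings collection first.
QUERY='{"_id": "deadline_network_settings"}'

# Dry run: remove 'echo' to actually export.
echo mongoexport --host "$HOST" --db "$DB" \
     --collection DeadlineSettings \
     --query "$QUERY" --out network_settings.json

# network_settings.json could then be hand-edited to drop the
# location-specific fields before importing with mongoimport --upsert.
```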
At first glance the value seems safe to transfer between databases, since it will almost certainly not match the value already in the target DB, so clients would notice the change. However, because the settings update would only be partial, the hash stored on one side may never match what the other side computes. If a slave sees a hash it doesn’t recognise, pulls the settings down again, and then recalculates its own hash, that hash will be computed against the transferred settings minus the omitted location-specific fields. The transferred “NetworkSettingsVersionHash”, by contrast, would only ever be valid against the full source settings (which, again, differ by location).
Conversely, if the “NetworkSettingsVersionHash” were not transferred, the clients would never re-read the new settings at all, since the stored hash would still match the previous version. And even if they did re-read them (say, on a slave restart), the hash they calculated would still never match the value stored in the database.
So, to recap: is there a way for me to recalculate the stored hash from the document’s contents, so that re-reading and caching of the updated settings works properly? Or maybe I’m reading too much into this field… maybe it isn’t even used anymore…