Hi,
I know the idea has been thrown around for a while, but I think it's really important moving forward…
HA Pulse clustering, and the ability for multiple Pulses to co-exist: this would give better redundancy (e.g. for Power Management), but it would also allow satellite studios connected over a VPN tunnel to run multiple Pulses that share their cached information with each other. Then, when my US office wants to see the UK farm, it just looks at its local Pulse server, which is the only machine querying the UK Pulse server for up-to-date information. It also means US users get a fast refresh, since the UK farm information is retrieved from their local US Pulse server.
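Just to make the idea concrete, here's a rough sketch of what I mean by the local Pulse acting as a cache. This is purely illustrative, not Deadline's actual API: `FarmInfoProxy`, `fetch_remote`, and the returned fields are all made up.

```python
import time

# Hypothetical sketch of the local-Pulse-as-proxy idea: the local server
# caches the remote farm's state and refreshes it at most once per TTL,
# so local users always get a fast response.
class FarmInfoProxy:
    def __init__(self, fetch_remote, ttl_seconds=10.0):
        self.fetch_remote = fetch_remote  # e.g. an HTTP call to the UK Pulse
        self.ttl = ttl_seconds
        self._cache = None
        self._fetched_at = 0.0

    def get_farm_info(self):
        now = time.time()
        if self._cache is None or now - self._fetched_at > self.ttl:
            # Only this proxy ever queries the remote Pulse directly;
            # everyone else reads the cached copy.
            self._cache = self.fetch_remote()
            self._fetched_at = now
        return self._cache
```

Every US user hitting `get_farm_info()` gets the cached copy, and only one machine ever talks across the VPN.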
Mike
I'll let Ryan jump in with more details, but the plans for 6 include a database backend option, and we are currently reviewing high-performance database backends.
Pulse would change slightly, with the intent to use it as a proxy cache for remote access, as you are describing. There are a number of ideas on how this might work, but as we prototype the new backend we'll definitely be looking to you and other testers to evaluate the performance!
cb
As Chris mentioned, we are looking at database backends. Specifically, we’re looking at document databases, such as CouchDB or MongoDB (maybe we’ll even support both if that makes sense, giving end users the option). These databases should give us many of the things we’re looking for out of the box:
- Performance
- Scalability
- Flexibility
- Reliability
The first two are obvious, since we will no longer be using the file system as the main communication channel. We've read that a decent system can support 50,000 concurrent connections out of the box, and multiple servers can be used to scale the backend even further.
Flexibility comes from the fact that there is no schema, and everything is stored as a document (similar to how the file system currently works). This should mean that tables don’t need to be updated when rolling out new versions of Deadline.
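To illustrate the schemaless idea with plain Python dicts (no particular database driver; the field names below are invented for illustration, not Deadline's actual job format):

```python
# A job stored as a schemaless document: just a nested structure.
# Field names here are invented for illustration, not Deadline's real format.
job_v5 = {
    "JobId": "example001",
    "Name": "test_render",
    "Frames": "1-100",
}

# A newer Deadline version can simply start writing extra fields;
# no table migration is needed because there is no fixed schema.
job_v6 = dict(job_v5)
job_v6["ConcurrentTasks"] = 4

# Old and new documents live side by side in the same collection.
jobs = [job_v5, job_v6]
```

That's the upgrade story in a nutshell: old documents stay valid as-is, and new code just reads the extra fields when they're present.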
Reliability comes from the ability to set up redundant servers on the backend. We had been tossing around ideas for how multiple Pulses could achieve this, but that should no longer be necessary.
Pulse would no longer be a proxy for the Deadline applications. It would still be optional, and used for server duties like Power Management. I’m not sure if it would be necessary to use Pulse as a proxy cache for remote locations, but that’s something that would be evaluated.
I should note that none of this is set in stone. We’re keeping it pretty hush-hush at the moment as we work on the prototype, but we will definitely be asking for alpha testers when we begin testing!
Cheers,
- Ryan
Sounds really interesting!
A DB backend is the way to go.
If Pulse is no longer used as a proxy cache, then it will be quite lightweight. I like the idea of having a backup Power Management Pulse cluster, though, to protect all my expensive hardware from overheating.
The remote location setup is still an issue; I would like to speed it up so that remote artists get an experience equivalent to artists sitting local to the farm. It may be a pipe dream, but I think it's possible.
Thanks,
Mike
I agree - I would love for local and remote artists to have a similar experience, for a variety of reasons.
cheers!
cb
Would we be able to use Python etc. to interface with this database directly? That would be nice as well. Would you still store the XML as a document, or would you break the current XML format up into a database schema? For example, could we pull a frame range down from a job directly, or would we have to pull down the .job file as a document and then parse the XML as we currently do?
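To show the difference I mean: today we parse the .job XML ourselves, whereas a document database could hand back just the field we ask for. Everything below is assumed for illustration (the "Frames" element name, the MongoDB-style query, the `db.Jobs` collection), not anything confirmed for Deadline 6.

```python
import xml.etree.ElementTree as ET

# Today: pull down the whole .job document and parse the XML ourselves.
# (The element name "Frames" is assumed for illustration.)
job_xml = "<Job><Frames>1-100</Frames></Job>"
frames = ET.fromstring(job_xml).findtext("Frames")

# With a document database, a driver like pymongo could instead project
# out just the field we want in one query (hypothetical, assuming MongoDB
# and this field name):
#   frames = db.Jobs.find_one({"_id": job_id}, {"Frames": 1})["Frames"]
```

The second form would save both the transfer of the full document and the client-side parse.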