After using Deadline for a bit, it's clear to me that the UI/UX for job monitoring could be much better, especially since things can get very complex with lots of machines, multiple pools, groups, statuses…
Wouldn't it be nice if we could see it all in a more intuitive way? Let's face it, we are far past the 80s spreadsheet approach to life, and watching what stock market applications do to speed up interaction with the user makes me think the same ideas could be applied to Deadline. Maybe soon?
Play the video and see how the user drills down into the various stocks (boxes executing a job?) and then drills down further to see the stock news, charts, etc…
And the selection/filtering mechanisms are great too; they could map onto something like pools? Groups?
Actually, I absolutely want to make a Slave visualizer like Stocktouch. I hadn't even seen it before, but it's super close to what I wanted to implement. My plan is to do it in my off time as just a fun little project once we allow scriptable custom panels (which I don't think are going to land soon). A grid of red/orange/green Slaves would make it way easier to pick one out at a glance.
As far as actually implementing this in core, the dev team have a laundry list of demands they're trying to get through, so we're going to need you to rally some big prospect behind the idea before we can justify the man hours on this guy. Folks are still loving the Qt panels and themes at this point, so we're not getting a lot of great UX requests lately.
I do love the UI in these, and it's super exciting (I'm a big fan of interesting designs). I just don't know where we'll find the time. The core guys have been souping up the Linux integration in 8.1, which is going to kick all kinds of ass on the IT angle. It's just kind of the polar opposite of the UI.
Thanks Edwin for the honest answer, really appreciate it.
Maybe you can help by pointing me in the right direction with regard to the API for querying the Deadline database… it is something I would happily do in-house.
Do you reckon all the API bits are there to access all the necessary data?
I know we helped him get software access to the data while he was working on this.
It is not what you asked for, but I figured it might be an inspiration…
There are a huge number of different ways to grab data. Troy's used the web service, which makes data access easy but isn't as close to the data as I'd prefer (everything gets serialized and sent over a TCP stream), which doesn't feel like the right approach for a desktop app.
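If you want to kick the tires on the web service route first, a raw HTTP GET is about as simple as it gets. Rough sketch below; the port (8082) and the /api/jobs path are the usual defaults, but treat them as assumptions and check the REST docs for your setup:

```python
# Quick poke at the web service over plain HTTP. The port (8082) and the
# /api/jobs path are assumptions -- confirm against your web service config.
import requests

resp = requests.get("http://localhost:8082/api/jobs")
resp.raise_for_status()
for job in resp.json():
    # Key names depend on the API version; inspect one payload first.
    print(job.get("Props", {}).get("Name"), job.get("Stat"))
```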
Deadline's core API, the one we use day in, day out, is powered by Python.NET. It's CPython, but it must be run via a Deadline app, e.g. DeadlineCommand ExecuteScript MyMonitor.py. From there, if you want to get particularly crafty, you can import your PyQt or ours and start hacking on things. As I said before, there's some work that needs to be done to support custom panels (we don't remember docking status and some other key pieces), so making this behave consistently within the Monitor is going to be a challenge.
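For a sense of the shape of such a script, here's a bare-bones sketch. The RepositoryUtils call and the Job properties are the usual scripting API names, but verify them against the docs for your Deadline version:

```python
# MyMonitor.py -- run it through a Deadline app, e.g.:
#   deadlinecommand -ExecuteScript MyMonitor.py
# Sketch only: treat GetJobs and the Job properties as assumptions and
# double-check them against the scripting API docs for your version.
from Deadline.Scripting import RepositoryUtils

def __main__(*args):
    # True asks the repository for fresh data instead of a cached copy.
    for job in RepositoryUtils.GetJobs(True):
        print("%s  %-10s  %s" % (job.JobId, job.JobStatus, job.JobName))
```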
I agree with your view: using a web service may be easy, but it's a bit too far from the current state of the database. I fancy using the API, but running it inside a Deadline app is something I would prefer to avoid if possible.
I will have a look at the documentation, but I wonder if there is any other method? Maybe direct communication with your abstraction layer over the database? Sockets?
We are doing something similar, both for better reporting and for queries to Deadline.
I ended up using Flask to create a REST API that we use to query custom information from the Deadline backend. flask.pocoo.org/
One example of why we use this: we store show- and shot-specific information in the extra info fields, like episode, shot_code, etc.
We can then ask the db for only the documents that match that search criteria, which ends up responding in microseconds, versus getting all the jobs and parsing through that data.
We only use it for read access, and the Flask framework queues incoming requests, so we do not put any extra load on the db. A stripped-down version of that endpoint is sketched below.
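For anyone wanting to try the same thing, it looks roughly like this. The collection and field names ("Jobs", "Props.ExDic", "Stat") are assumptions based on our setup; inspect your own Deadline MongoDB to confirm the schema for your version:

```python
# Read-only Flask endpoint that filters jobs by extra-info values in Mongo.
# All collection/field names below are assumptions -- check your own schema.
from flask import Flask, jsonify, request
from pymongo import MongoClient

app = Flask(__name__)
db = MongoClient("mongodb://deadline-db:27017")["deadline8db"]  # assumed db name

@app.route("/jobs")
def jobs_by_extra_info():
    # e.g. GET /jobs?shot_code=ep101_sh010
    # Builds a query against the extra-info dictionary so Mongo does the
    # filtering, instead of fetching every job and parsing client-side.
    query = {"Props.ExDic.%s" % key: value for key, value in request.args.items()}
    docs = db["Jobs"].find(query, {"Props.Name": 1, "Stat": 1})
    return jsonify(jobs=[{"id": str(d["_id"]),
                          "name": d.get("Props", {}).get("Name"),
                          "status": d.get("Stat")} for d in docs])

if __name__ == "__main__":
    app.run(port=5000)
```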
We are also starting to push stuff into an ELK stack so we can get some realtime reports.
Oh boy. Well, just to be that guy: we don’t support this method, so when we change the database design for jobs and such, it can break your pipeline when you go to 9.0.
As slow as the web API might be at times (that's relative for everyone; I find it pretty quick), we do make sure that the abstraction remains as compatible as possible. TBH, if you need fast lookups, I'd throw an HTTP cache in front of it like varnish-cache.org/ (if you feel like getting really intense). I've played with the nginx caching features for personal projects and that works great. We don't yet support ETags or HEAD HTTP requests in any of our APIs or the Proxy though, so you'd have to just set and trust the different endpoints to update every X seconds.
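If a full caching proxy is overkill, even a dumb time-based cache on the client side gets you that "trust it for X seconds" behaviour. A minimal sketch, assuming the default web service port and a jobs endpoint:

```python
# Client-side TTL cache over the web service, since there are no ETags to
# revalidate against. Host, port, and endpoint path are assumptions.
import time
import requests

_cache = {}   # url -> (fetched_at, payload)
TTL = 30.0    # seconds to trust a cached response before re-fetching

def get_cached(url):
    now = time.time()
    hit = _cache.get(url)
    if hit and now - hit[0] < TTL:
        return hit[1]           # still fresh, skip the round trip
    payload = requests.get(url).json()
    _cache[url] = (now, payload)
    return payload

jobs = get_cached("http://deadline-webservice:8082/api/jobs")
print(len(jobs), "jobs")
```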