Remote/cloud-hosted logging/reporting analysis system for Deadline clients. I have a design schema showing visually how this could work.
Multi-Region rendering support for animation for 3dsMax & Maya. Other 3D apps? C++ re-write from MXS?
DBR for MR for 3dsMax & Maya?
Make satellite office rendering / monitor viewing a reality. No slowdown on monitor refresh!
Path Mapping for the 3dsMax plugin - the ability to reroute all 3dsMax external references. Very important for cloud. Path mapping is required for all plugins where possible, supporting 3rd party file paths such as VRay LC/GI and Mental Ray FG file paths. Important for file path handling in a cloud environment in the future.
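A minimal sketch of the kind of prefix-based rerouting meant here, assuming a simple substitution table (the server names, drive letters and cloud mount points below are invented, not anything Deadline ships):

```python
import ntpath

# Example mapping: studio file-server prefixes -> cloud storage prefixes (invented values).
PATH_MAP = {
    r"\\fileserver\projects": "/mnt/cloud/projects",
    r"P:\textures": "/mnt/cloud/textures",
}

def map_path(path):
    """Return the path with any known studio prefix swapped for its cloud equivalent."""
    normalized = ntpath.normpath(path)
    for src, dst in PATH_MAP.items():
        src_norm = ntpath.normpath(src)
        if normalized.lower().startswith(src_norm.lower()):
            remainder = normalized[len(src_norm):].lstrip("\\/")
            return dst.rstrip("/") + "/" + remainder.replace("\\", "/")
    return path  # unknown prefixes are left untouched

print(map_path(r"\\fileserver\projects\shot010\maps\wall_diffuse.exr"))
# -> /mnt/cloud/projects/shot010/maps/wall_diffuse.exr
```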
Better report viewing / debugging - colour coded, a more powerful search field, and filters for "INFO:", "ERROR:" or "WARNING:".
IDE for Deadline API scripting - PTVS - see my forum thread prototype (project templates, etc.): viewtopic.php?f=156&t=9980&start=10#p43746. Essentially, a way to externally execute Deadline scripts in VS, but in the context of the Deadline environment.
Repo Stats - inject into a standalone studio DB. Support injecting older v4/v5 stats into the DB?
Make the Maya submission UI as good as the SMTD one! Win more Maya studios?
Submit a DBR job from within the SMTD interface?
Custom Asset Transfer plugin for Deadline. Handles compressing/transferring data around & sending compressed renders back to the studio file server, using different file transfer protocols - SFTP, rsync, Aspera UDP, etc. Upload once only to cloud storage via checksum.
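The "upload once only" part could be as simple as a checksum manifest kept alongside the cloud storage. A hedged sketch, where the manifest file and the upload() call are placeholders for whichever transfer backend is used:

```python
import hashlib
import json
import os

MANIFEST = "uploaded_manifest.json"  # hypothetical record of checksums already in cloud storage

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for block in iter(lambda: handle.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

def upload(path):
    # Placeholder for the actual transfer call (SFTP, rsync, Aspera, ...).
    print("would transfer:", path)

def sync_assets(paths):
    seen = {}
    if os.path.exists(MANIFEST):
        with open(MANIFEST) as handle:
            seen = json.load(handle)
    for path in paths:
        checksum = sha256_of(path)
        if checksum in seen:
            print("skipping, already uploaded:", path)
            continue
        upload(path)
        seen[checksum] = os.path.basename(path)
    with open(MANIFEST, "w") as handle:
        json.dump(seen, handle, indent=2)
```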
Transfer Jobs - remove the dependency on having to be submitted into a dummy Deadline repository.
Missing Map on Frame=0 issue in 3dsMax (an ADSK bug), which might get sorted via SPARKS, or via a Deadline lighting hack?
Tile Assembler - better support 32bit TIF and other image formats/compressions it has always failed on
Session Path injection is NOT working for VRay, but is OK for Scanline & Mental Ray!
Fail-safe on high RAM usage - it kills slave.exe big time. Proactively monitor RAM usage and provide a fail-safe setting.
FumeFX - include the output path in the SMTD submission & report sim job progress in the 'task render status'.
Concept of a scanning folder job type for automated submission. Scriptable in current architecture?
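It should be scriptable today; here is a hedged sketch of a watch-folder daemon that polls a drop directory and hands new scenes to deadlinecommand. The paths, frame range and info-file keys below are illustrative assumptions, not a tested submission:

```python
import os
import subprocess
import tempfile
import time

WATCH_DIR = r"\\fileserver\deadline_dropbox"  # hypothetical drop folder
DEADLINE_COMMAND = r"C:\Program Files\Thinkbox\Deadline6\bin\deadlinecommand.exe"  # adjust to your install

def submit(scene_path):
    # Write minimal job info / plugin info files and hand them to deadlinecommand.
    job_info = {"Plugin": "3dsmax", "Name": os.path.basename(scene_path), "Frames": "0-100"}
    plugin_info = {"SceneFile": scene_path, "Version": "2014"}
    info_files = []
    for settings in (job_info, plugin_info):
        handle, path = tempfile.mkstemp(suffix=".job")
        with os.fdopen(handle, "w") as out:
            out.writelines("{0}={1}\n".format(key, value) for key, value in settings.items())
        info_files.append(path)
    subprocess.call([DEADLINE_COMMAND] + info_files)

def watch(poll_seconds=30):
    seen = set()
    while True:
        for name in os.listdir(WATCH_DIR):
            path = os.path.join(WATCH_DIR, name)
            if path not in seen and name.lower().endswith(".max"):
                submit(path)
                seen.add(path)
        time.sleep(poll_seconds)
```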
client submission UI script helper dialog needed back into v6?
Pop-up handler ability to execute functions and not just press buttons
New mobile/tablet apps needed. Various bugs have been reported by me and need fixing - maintenance pack? (Android submission date bug, % complete rounding-up issue, Pulse crash if too many jobs in the queue.)
Pulse redundancy for PM (WoL / IPMI) & Balancer redundancy req. for VMX
SMART farming. AI based decisions. Now we have a DB…many possibilities…
Missing functionality from v5.2 - "Suspend ALL Active Jobs", "Resume Previously Suspended Jobs".
Missing v5.2 feature - where is the Pulse "Repository Disk Space Monitoring" warning if disk space < x MB, & the email notification for a low disk space warning?
Script / interface to quickly dump out and display the currently selected job's properties - plugin & job info.
Deadline "HARDWARE" architecture white-paper. Advice on the best config setups for small/medium/large studios.
Industry certification / hardware certification / 4-hour initial Deadline config setup for new clients.
MongoDB sharding support. Ability to build Deadline mongoDB HA cluster in the cloud
Solve the slow Monitor refresh for remote offices by placing cloud-based MongoDB shards around the world?
On OSX, launcher app should only run in system tray and not as an external app via launcher bar!
I really miss having the Deadline Scripting API resource on your website for quick lookup. Would it be possible to add this information back in, but also keep the newer doxygen manual system? A brief google shows that doxygen can be output to HTML, and various plugins are available?
de-couple Deadline. Allow all or part of monitor app to be embedded in another app. In 3dsMax extended viewport, inside of Nuke, Maya? - QT plays nice now?
Image Frame Buffer in the Monitor - streaming of rendered images into a buffer? Playback? A replacement for RV?
ftrack event plugin. Add FTrack support to Deadline. Competitor to Shotgun?
tactic event plugin. Add Tactic support to Deadline. Competitor to Shotgun?
5th Kind unveiled Core 4.0 event plugin. Competitor to Shotgun?
Deadline needs to be memory-aware when assigning tasks. This is especially important for cloud rendering, where over-provisioning resources costs more money. Deadline should be able to automatically review the tasks processed so far and the RAM levels they used, and proactively dial the slave instances/hardware requirements up or down to meet the requirements of the job. The Foundry have something similar cooking called 'heterogeneous compute', which basically means using all the compute devices in your machine simultaneously - shipping off parts to compute on the GPU and others on the CPU in a sensible, sustainable fashion. In the future, I see render farm management software needing to consider all aspects of a machine's hardware and assist in best utilising it for optimal processing, scaled up by X number of local & remote devices.
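A hedged sketch of the "dial it up or down" half of that idea: given the peak RAM observed on completed tasks, pick the smallest instance size that still fits with some headroom. The instance catalogue and numbers are invented inputs, not Deadline or cloud-provider API calls:

```python
# Hypothetical cloud instance catalogue: (name, RAM in GB), cheapest first.
INSTANCE_TYPES = [("small", 8), ("medium", 16), ("large", 32), ("xlarge", 64)]

def recommend_instance(task_peak_ram_gb, headroom=1.25):
    """Pick the smallest instance type that fits the worst task seen so far, with headroom."""
    if not task_peak_ram_gb:
        return INSTANCE_TYPES[-1][0]  # no history yet: stay conservative
    needed = max(task_peak_ram_gb) * headroom
    for name, ram in INSTANCE_TYPES:
        if ram >= needed:
            return name
    return INSTANCE_TYPES[-1][0]

print(recommend_instance([5.2, 6.8, 7.1]))  # -> "medium" (~8.9 GB needed with headroom)
```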
LDAP/AD security integration for the new user groups security model in Deadline. Allow studios to hook up to it.
SMS / Growl support for Deadline
more interactive “customise event plugin” dialog. Allow scripting API UI elements in event panel & RC job plugin properties tab (“3dsMax” tab - extra UI features)
Web Pulse crashes if you have more than, say, 800+ jobs in the queue and keep hitting "refresh" on the mobile app.
Render calculator via py script.
Different platform installers for the flexlm license server should be provided, especially on OSX. Nightmare!
Could the dependency on MONO ever be removed? I’ve noticed that people seem to discount Deadline based on this dependency. #justsaying
Customisable, template-driven notification system for email, iOS and Android notification of jobs. The current email notification system is a black box. For example, there should be an option to include log reports as an email attachment with job-failed notifications. Implement iOS and Android push notification systems with "user options" to allow individual users to enter their iPhone/Android device details.
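To make the request concrete, a minimal sketch of a template-driven failure email with the log attached, assuming plain SMTP; the hostnames, addresses and job fields are invented, and this is not how Deadline's own notifier works:

```python
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from string import Template

# Studio-editable template; all field names and addresses here are placeholder values.
BODY_TEMPLATE = Template("Job '$name' ($plugin) failed on $slave.\nError reports: $errors\n")

def send_failure_mail(job, log_text, to_addr, smtp_host="mail.studio.local"):
    message = MIMEMultipart()
    message["Subject"] = "[Deadline] FAILED: " + job["name"]
    message["From"] = "deadline@studio.local"
    message["To"] = to_addr
    message.attach(MIMEText(BODY_TEMPLATE.substitute(job)))
    log_attachment = MIMEText(log_text)
    log_attachment.add_header("Content-Disposition", "attachment", filename="error_report.log")
    message.attach(log_attachment)
    smtplib.SMTP(smtp_host).sendmail(message["From"], [to_addr], message.as_string())

# Usage (invented values):
# send_failure_mail({"name": "shot010_beauty", "plugin": "3dsmax", "slave": "render-07",
#                    "errors": 3}, open("job_log.txt").read(), "artist@studio.local")
```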
Py packager script ready to send to support - essentially an archive-job script, but including log reports, all local repo settings, and DB info? Security issue if you grab repo settings?
Add Hiero support. The Foundry already ship Rush submission py scripts - perhaps they would also kindly ship yours? On OSX, under /Applications/Hiero1.7v2/Hiero1.7v2.app/Contents/Plugins/site-packages/hiero/examples there are 2 example py scripts, rush_render_auto_submit.py & rush_render_start_irush.py, which show how to submit to a Rush queue from Hiero.
Ability to 'visually' categorise jobs in the Monitor queue (via job data columns, including the Extra Infos), i.e. "Extra Info 1" = Project, "Extra Info 2" = Sequence, "Extra Info 3" = Shot. A TreeView in the Deadline Monitor, folding up similar jobs by category. Could also sort by job plugin type, submitting machine, etc. Also need to show this visually in the new node dependency graph as well?
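A hedged sketch of the folding logic, assuming the Extra Info columns hold project/sequence/shot (the job dicts and key names are stand-ins for whatever the Monitor actually exposes):

```python
from collections import defaultdict

def tree():
    return defaultdict(tree)

def group_jobs(jobs):
    """Fold a flat job list into Project > Sequence > Shot, driven by the Extra Info columns."""
    root = tree()
    for job in jobs:
        root[job["extra0"]][job["extra1"]].setdefault(job["extra2"], []).append(job["name"])
    return root

jobs = [
    {"name": "beauty_v03", "extra0": "ProjectX", "extra1": "seq010", "extra2": "sh0020"},
    {"name": "zdepth_v03", "extra0": "ProjectX", "extra1": "seq010", "extra2": "sh0020"},
]
print(dict(group_jobs(jobs)["ProjectX"]["seq010"]))
# -> {'sh0020': ['beauty_v03', 'zdepth_v03']}
```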
KL / Fabric Engine - integration. Nah. Too premature to worry about at the moment?
OSX Mavericks potentially drops QuickTime MOV in favour of AV Foundation. No more ProRes? MOV defunct?
Screen-scraper / image attachment function for email notifications of error reports / log reports. Overhaul the entire notification system - template driven.
Unicode support for all languages.
Shotgun Pipeline Toolkit integration into Deadline Launcher - tie in Deadline/Shotgun more? - launch apps from SPT global config?
Flexible instant messenger integration into launcher app? Ability to message another Deadline user? “FYI - all users - attention, farm restarting…” style.
Ability via the launcher to postpone any LOCAL Deadline Slave scheduled start-ups, with a global override via super-user mode. Build a flexible slave startup schedule - "when machine idle", "scheduled time". Provide variables to control what is identified as the "idle" state, i.e. [pseudo code]: "CPU idle time for all processes called [3dsmax.exe]" + "keyboard/mouse inactive time > 10 min" + "custom studio extra 5 mins" = "start up the Deadline Slave, as the user is obviously in a meeting and isn't rendering locally".
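A hedged, Windows-only sketch of that idle test, assuming psutil is available; the thresholds mirror the invented numbers in the pseudo code, and the actual Slave launch call is left as a placeholder:

```python
import ctypes
import psutil  # third-party; used here to read per-process CPU load

def user_idle_seconds():
    """Seconds since the last keyboard/mouse input (Windows-only, via GetLastInputInfo)."""
    class LASTINPUTINFO(ctypes.Structure):
        _fields_ = [("cbSize", ctypes.c_uint), ("dwTime", ctypes.c_uint)]
    info = LASTINPUTINFO()
    info.cbSize = ctypes.sizeof(info)
    ctypes.windll.user32.GetLastInputInfo(ctypes.byref(info))
    return (ctypes.windll.kernel32.GetTickCount() - info.dwTime) / 1000.0

def max_cpu_of(process_name="3dsmax.exe"):
    """Highest CPU percentage among local processes with the given name (0 if none running)."""
    return max((proc.cpu_percent(interval=1.0)
                for proc in psutil.process_iter(["name"])
                if (proc.info["name"] or "").lower() == process_name),
               default=0.0)

def should_start_slave(idle_minutes=10, extra_minutes=5, cpu_threshold=5.0):
    return (max_cpu_of() < cpu_threshold and
            user_idle_seconds() > (idle_minutes + extra_minutes) * 60)

if should_start_slave():
    print("machine looks idle - start the local Slave here")  # placeholder for the real launch call
```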
Alienbrain - Deadline integration?
VFX Showrunner - Deadline integration?
TheaRender - add support (various ‘live’ plugins for various 3D apps incl. 3dsMax)
A client asked for a Python/Maya version of the custom submission tutorial on the TBS website to be created.
Thanks for the awesome list, Mike! I’ve placed items 2, 5, 6, 12, 23, 30, 31, and 39 into my Feature Requests list for VMX. As we start looking at various feature areas, I’ll post questions to the VMX Alpha board seeking input. Please jump in and provide details on any ideas that you have.
A lot of good ideas here, and many of them are already on the todo list. See my comments on some of them below:
Bobo is looking into supporting animation for 3dsmax. In order to support other apps like Maya, we’ll need to write a tile selection tool like the one Bobo developed for Max.
We’d like to support Mental Ray as well, but we’ll probably hold off a bit until we get some feedback on the VRay feature.
The goal is to continue to improve the Monitor’s performance in remote situations. I think where the current Monitor has issues is when mass-changing jobs or slaves.
Definitely want to be able to path map anything that’s included in the job’s plugin info properties.
08: Ideally, we’d like to build a little script editor into the Monitor (a new panel). That way, it’s not tied to any platform or IDE.
09: Event plugins! Everyone wants it done differently. Currently, stats are injected into MongoDB though.
What’s missing that would make it better?
I think we want to keep the DBR jobs in a separate UI like they currently are. So much of what’s in SMTD wouldn’t apply to DBR jobs, and having a dedicated UI makes it cleaner and simpler to use.
How could you see this working? Edwin here has suggested something like a submission queue (similar to a printer queue system).
The tile assembler application is essentially EOL, with Draft being its replacement. Have you had a chance to see if Draft properly handles 32 bit tifs?
I think we need to move to a proper “installer” here, mainly due to the requirement for elevated privileges in most cases. Making it a requirement to run the Monitor as Admin to use this seemed a bit weird.
Yeah, we haven’t done much (if any) mobile dev in a while. We’re currently working on a RESTful API for pulse, which could improve mobile usability. I think if and when we take another look at mobile, we’ll want to write a new tool that uses unity or another cross-platform toolkit so that we’re no longer maintaining 3 different mobile app code bases.
The housecleaning option in the tools menu does the limit stub check. There also isn’t a trash bin to empty anymore. The Suspend/Resume ALL feature was prone to mistakes, so we took it out for now. If there is demand for it, we’ll probably add it back in. Just didn’t have time during 6.0 dev to get it in there properly.
The problem with this one is that it only worked when studios ran pulse on their repository machine. Now that there is a database and repository component (both of which can be on separate machines), we weren’t sure how helpful this feature would be in most cases.
Yup, we definitely want to support this.
We’d love to upload the html to our site, but our website provider makes this a real pain because it lacks any mass upload option. We can generate the html pages just fine, but we would have to manually upload each page to the site. We do go through this process already with the online docs though…
We think a “slots” based system would be the solution here.
Can you elaborate more on this?
Edwin was working on something here in his spare time, but now he doesn’t have spare time anymore…
Would require us to rewrite everything in C++. Not out of the question, but it would be a ways off.
Fair enough. Wise move. FYI: VRay v3 goes into public beta today with an updated license/cost model - unified render-node licenses. If you own a VRay 3.0 render-node license for 3dsMax, it will also be able to render Maya / VRay Standalone as well.
I was thinking here of the v5 "satellite mode", which was OK but never really delivered on the idea that a remote office could view your local farm within the remote user's Monitor app. We just provided a VM for the remote users to RDP into to control the Monitor. I'm looking for a solution to this issue overall.
Coolio. Path Mapping the plugin job file is great. But my concern here is how well the likes of 3dsMax react to file paths being replaced after the file has been opened…
Not sure here…let me explain… See: viewtopic.php?f=156&t=9980&start=10#p43746. Thinkbox providing a directory of parsed/exported py files or a site-package as part of the compile/build process means studios can choose which IDE they use, which of course might be platform specific. The Deadline Monitor supporting a script editor is good, as it ensures support across all platforms, newer versions of Deadline, updated API functions, etc. However, will this Thinkbox script editor have built-in IntelliSense, stack traces and break-points from v1.0? Sounds like quite a lot of dev time to make this happen? I see this a bit like the Python script editor inside of Nuke: good for one-liners and small scripts, but for anything bigger that might also require a 3rd party py library, my guess is that most Nuke py developers use the IDE of their choice. With the new py support in 3dsMax 2014 or py scripting in Maya, I'm pretty sure scripters use their own choice of IDE. I would throw it out there that Bobo would probably use a different IDE for all his MXS development work if it were possible to remotely execute within the 3dsMax environment. So, yes, a built-in Deadline script editor sounds great, but if it means waiting X months/years for it to be fully realised, compared to hooking into known good IDEs…I know what I would choose. [Caveat: as long as the current issues with remote-executing in the Deadline environment for GUI submission scripts, event plugins, etc. could be solved and the API py files could be created.]
I was referring to a TransferJob submission effectively using deadlinecommand.exe to submit the job, thereby requiring it to see / network-path resolve the remote repository. For our Aspera transfers to our remote office, I had to set up dummy repository directories to make this work. Now with a DB, if you are submitting the data file (3dsMax scene) with the job, do we still have a dependency? Ideally, I was looking at completely decoupling any dependency so that transfer jobs could be packaged up and transferred to anything - a bit like the new "archive job" and re-import situation. Perhaps I misunderstand your comment? Would a print queue, where jobs are just thrown into a queue and then subsequently transferred/dealt with by a managed process, solve the above issue?
RESTful API - cool. How about a maintenance release for the mobile apps as an interim solution, as I identified quite a few little glitches/bugs a while ago now? Not sure if Unity is the best way to go. We used these tools to deliver to multiple devices and keep one codebase: sencha.com/products/touch/ and phonegap.com/
I'd say it's really important for any studio submitting their data files with their jobs into the repository - you could easily fill up your repository file server. So monitoring and an email warning is a good safeguard.
Oh dear! There must be some kind of work-around for mass upload? A feature request to the website provider? Sounds crazy!
“slots”? sounds interesting. Could you explain a bit more?
So, in the past I've been at an artist's desk when the question comes up: how long will it take to render if I submit it to X of our X-spec machines? Obviously, telling them to get their calculator out and walking off to make a coffee tends not to go down well as a reaction. So: a general-purpose calculator script for users to cross-reference potential render times against their own farm machines (assuming they have made use of the time multiplier feature to identify the speed of machines). Probably just a py script that someone needs to code one day and share.
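A hedged sketch of what such a script could boil down to, assuming each machine has a relative speed multiplier (all numbers below are invented):

```python
def estimate_render_hours(seconds_per_frame, frame_count, machine_multipliers):
    """machine_multipliers: relative speed per machine (1.0 = the benchmark reference machine)."""
    total_reference_seconds = seconds_per_frame * frame_count
    farm_speed = sum(machine_multipliers)  # speed-weighted count of machines rendering in parallel
    return total_reference_seconds / farm_speed / 3600.0

# Example: a 240 s/frame benchmark, 500 frames, ten machines at mixed speeds.
print(round(estimate_render_hours(240, 500, [1.0] * 6 + [1.5] * 4), 1), "hours")  # -> 2.8 hours
```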
Shame. The latest Foundry FLT installer (Inno Setup) installs both RLM and flexlm (as they slowly migrate away from flexlm), combined with a wrapper utility that starts/stops/re-reads the server and dumps out a diagnostic log. It's really nice and the kind of thing I would like to see across all 3 platforms. I wonder how much time you guys spend having to support clients' license server setup issues? A possible time saver in the future, combined with consistency of license server setups on clients' networks?
The goal for the Monitor in D6 was to make the need for a satellite mode unnecessary, and we think we can still achieve that. There are very few file-based operations left in the Monitor, and those that are still there are now optimized. As I mentioned before, the remaining areas for improvement are batch options that I’m guessing involve too much back and forth with the remote database. We already have some clients that use the D6 monitor in remote situations, and for the most part it works quite well for them.
Out of curiosity, is this request based on your experience with D5 or D6?
Yeah, I see what you're getting at. There is no way we could write a fully fleshed-out IDE for Deadline, but I would have to imagine we could write something that works pretty well for things like testing render plugins, event plugins, pre/post job/task scripts, etc. Imagine a UI where you could edit your event script and then press a "Complete Job" button to test it. Sure, you won't have nice things like auto-complete or IntelliSense, but being able to directly test these things has some upside to it.
I guess we could expose a command line option to test these sorts of things as well, so that external IDEs could be set up to use them. Our concern with writing IDE-specific plugins is that they would become another external component that we'd have to maintain, and we'd probably need plugins for VS, Eclipse, SciTE (which I love to use), etc.
I don’t think we can really get around the requirement of having the submission process see the remote repository. Even though we use a database to store most things, any auxiliary files submitted with the job are still copied to the repository. I thought you meant removing the requirement of submitting to a local repository first before transferring it to the remote one, which a submission queue could essentially replace.
Honestly, we just don't have any spare resources for the mobile apps at this time. When we do have some resources available, I personally would rather start from scratch and use the RESTful API along with a single code base (using Unity or whatever) than try to patch 3 different code bases. It's definitely something we need to have an internal discussion on.
But there is a good chance that Pulse won’t be running on the repository server machine. They also might be using an alternative auxiliary file location so their job files wouldn’t even be on the repository machine. That’s really the main reason we removed it, because only under specific circumstances did it actually work.
How do studios deal with this potential problem in general for asset servers? There must be some general purpose disk monitoring software out there?
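There are plenty of off-the-shelf options (Nagios, Zabbix, plain cron scripts); a minimal hedged sketch of the cron-script route, with placeholder paths, addresses and threshold:

```python
import shutil
import smtplib
from email.mime.text import MIMEText

REPO_PATH = r"\\fileserver\DeadlineRepository"  # hypothetical repository share
THRESHOLD_GB = 50

def check_repository_space():
    free_gb = shutil.disk_usage(REPO_PATH).free / (1024 ** 3)
    if free_gb < THRESHOLD_GB:
        message = MIMEText("Repository volume has only %.1f GB free." % free_gb)
        message["Subject"] = "[Deadline] Low repository disk space"
        message["From"], message["To"] = "monitor@studio.local", "admins@studio.local"
        smtplib.SMTP("mail.studio.local").sendmail(message["From"], [message["To"]],
                                                   message.as_string())

check_repository_space()
```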
Nope, and we’ve already discussed this with their support team. We might need a separate server for our product documentation that’s more flexible. It’s definitely something we’ve talked about internally on more than one occasion.
The idea is that a slot is an arbitrary unit that can represent available RAM, cores, diskspace, etc. For example, a slot could be 1 core and 2 gigs of RAM. Each slave would have a slot count, and each job would indicate how many slots they require. For example, a QT job might only need 1 slot, and a 3D render might require 8 slots. The slaves would then try to fill up their slots as best as possible. If a job goes over its slot requirement, it could be failed, or it could increase the slot requirement as appropriate.
Not only should this system achieve what you’re looking for, but it would also allow a single slave to process more than one job at a time.
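For what it's worth, a hedged sketch of that packing behaviour; the names and slot counts are illustrative only, not how the eventual Deadline feature works:

```python
def fill_slots(slave_slots, queued_jobs):
    """queued_jobs: list of (job_name, slots_required). Returns the jobs this slave picks up."""
    taken, remaining = [], slave_slots
    for name, needed in queued_jobs:
        if needed <= remaining:
            taken.append(name)
            remaining -= needed
    return taken

# A 16-slot slave could run one 8-slot 3D render plus a couple of 1-slot QT jobs at once.
print(fill_slots(16, [("3d_render", 8), ("qt_encode", 1), ("qt_encode_2", 1), ("sim", 12)]))
# -> ['3d_render', 'qt_encode', 'qt_encode_2']  (the 12-slot sim no longer fits)
```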
OK, cool. My feedback/experience is based on v5. I need to give v6 a test (not quite sure how, though…!)
Yeah, that sounds awesome. The console already prints out any messages/errors when I manually test an event plugin firing, so we already have some of the functionality!
A command line option would be good as well for external connection. OK, I accept the overhead of maintaining external plugins, so how about this: you already create/dump out/auto-build the py API script files, which effectively contain the deadlinecommand.exe methods, for Laszlo and others to use for API access. If you could also dump out all the Deadline scripting/plugin/event API methods as py files at build time, then these could be added to ANY IDE's search path to allow IntelliSense to work for the user. This way it's IDE agnostic, automated as part of the Deadline build process, and whenever a new API function is added or edited, it just updates the particular scripting/plugin/event API files at build time.
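Roughly the kind of build step I mean - a hedged sketch that walks a Python module and writes out empty functions with the real names and argument lists, which any IDE can then index for completion ("Deadline.Scripting" below is just a stand-in module name, not a real importable path):

```python
import importlib
import inspect

def write_stub(module_name, out_path):
    """Dump empty defs with real names/signatures so an IDE can index them for auto-completion."""
    module = importlib.import_module(module_name)
    with open(out_path, "w") as out:
        for name, func in inspect.getmembers(module, inspect.isfunction):
            try:
                signature = str(inspect.signature(func))
            except (TypeError, ValueError):
                signature = "(*args, **kwargs)"
            out.write("def %s%s:\n    pass\n\n" % (name, signature))

# write_stub("Deadline.Scripting", "deadline_scripting_stub.py")  # module name is a stand-in
```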
OK. I guess TransferJobs could be submitted back into the repository they reside in locally, and then a magical event plugin works this out and shifts the data to a chosen network directory path. I think I'll park this conversation as it doesn't directly affect me anymore!
Fair enough. The big question: should mobile/tablet-based apps have control/write access to a farm remotely…? Win or complete FAIL? Who knows.
Fair enough. Studios should take care of this themselves. We did.
Ah, a separate server could be useful to allow you to build the docs and then auto ftp upload to this server as it’s under your control.
"Slots" - sounds good. Probably wise to consider all elements of a computer's hardware for potential inclusion in the slot architecture, i.e. GPU, network I/O capability. Could be very interesting…it needs to be effective, but simple enough for people to understand and use in practice!
Then you could submit, say, a VRayRT job and say it requires 4.0 GPU compute units, or maybe you have a Nuke job that requires 20 NET units and 20 CPU units. Then it could go: OK, it can render the VRayRT job since it only requires GPU units, and since that job uses 0 CPU units, this slave can render 3 simultaneous Nuke tasks.
You might have a slower slave that is:
RAM : 16.0U
DISK : 6.0U
GPU: 2.0U
CPU: 40.0U
NET: 100.0U
Then it couldn’t render the GPU job since it requires a minimum of 4.0 GPU units but it could render 2 simultaneous Nuke tasks.
Another option you could set for a job would be MAX. So you could set RAM to 16U for a render but set CPU to MAX UNITS. Then you would know it uses the full CPU whatever is available.
I also feel like maybe there should be a way to have quantity in addition to units. So you would have 16U of RAM available, but maybe you have 3x 3.0U GPUs available. That way it would know you don't have the capability to render one 9.0U-minimum job, only 3x 3.0U jobs. Not sure how that would work.
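One hedged way the quantity idea could work: pool RAM/CPU/NET as totals, but keep GPUs as a list of discrete cards so a single job's GPU requirement has to fit on one card (all figures below are invented):

```python
# Hypothetical slave description: pooled resources are single numbers, GPUs stay as a list
# of physical cards so one task can never span two of them.
SLAVE = {"RAM": 16.0, "CPU": 40.0, "NET": 100.0, "GPU": [3.0, 3.0, 3.0]}

def can_run(slave, job):
    """job: pooled unit requirements, plus an optional 'GPU' minimum for a single card."""
    for resource, needed in job.items():
        if resource == "GPU":
            if not any(card >= needed for card in slave["GPU"]):
                return False  # no single card is big enough
        elif needed == "MAX":
            continue  # the job simply takes whatever is free of this resource
        elif slave.get(resource, 0.0) < needed:
            return False
    return True

print(can_run(SLAVE, {"GPU": 9.0}))                 # False: 9U can't be spread over three 3U cards
print(can_run(SLAVE, {"GPU": 3.0, "RAM": 8.0}))     # True
print(can_run(SLAVE, {"RAM": 16.0, "CPU": "MAX"}))  # True
```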
FYI.
Blur Studio's Arsenal has "slots" in its architecture, but this only handles the number of CPU cores per slave, the idea being that empty/unused cores can be used for processing other jobs. In Deadline, this is accomplished with "multiple slaves" and "processor affinity", which is more flexible.
I've also added ideas #47-#59 to the top of this thread as well.