AWS Thinkbox Discussion Forums

GreenButton

Interesting announcement during Siggraph:

greenbutton.com/Applications/VRay
greenbuttonwebstorage.blob.core … aSheet.pdf

For studios with few or no technical staff, this will probably be a very attractive way to get rendering working in the cloud when they need an extra boost of rendering resources.

With regard to the PDF (2nd link), their process of converting to vrscene before sending to an external/cloud solution was exactly what I was thinking for our setup for VRay rendering. However, I had plans to do the conversion as a ‘special’ local processing job, which also collected the external resources (textures, proxies) and handled the upload ~ maybe using Aspera tech to assist with the transfer, as well as keeping assets in sync with the copy in persistent storage in the cloud.

This custom plugin could then have support added later for other renderers, technologies, etc… i.e. modularise the connection to the public cloud.
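As a rough sketch of what that ‘special’ local processing job could look like (Python; scanning for `file="..."` parameters is the usual way .vrscene files reference textures and proxies, but the manifest shape and any sync step here are purely hypothetical):

```python
# Sketch of the 'special' local processing job idea: after the scene is
# converted to a .vrscene, scan it for external file references (textures,
# proxies) and build an upload manifest. The file="..." pattern follows the
# .vrscene convention for BitmapBuffer/GeomMeshFile paths; the input path
# and what you do with the manifest are hypothetical.
import os
import re

FILE_REF = re.compile(r'file\s*=\s*"([^"]+)"')

def collect_assets(vrscene_path):
    """Return the set of external files a .vrscene depends on."""
    assets = set()
    with open(vrscene_path, "r", errors="ignore") as f:
        for line in f:
            for match in FILE_REF.finditer(line):
                path = match.group(1)
                if os.path.isfile(path):
                    assets.add(os.path.abspath(path))
    return assets

def build_manifest(assets):
    """Pair each asset with size + mtime so the uploader can skip files
    already present and unchanged in persistent cloud storage."""
    return {p: (os.path.getsize(p), os.path.getmtime(p)) for p in assets}

if __name__ == "__main__":
    manifest = build_manifest(collect_assets("shot_010.vrscene"))  # hypothetical scene
    for path, (size, mtime) in sorted(manifest.items()):
        print(path, size, mtime)
```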

All these 3rd-party offerings like GreenButton fail to offer a slick solution for off-loading the data & asset file uploads from the submitting machine… thereby tying up the artist’s workstation.

i will look at this. met the CEO today, nice guy. however their service [last i checked] is incredibly expensive. on the plus side, our VMX demos were -very- well received and we got a lot of people excited about it ;-p
cb

I was Customer #1 for 3ds Max. I even helped write/debug their submission script. It’s a really nice, smooth process. They also do bit-level difference work on the asset uploads so that you only upload the delta of your data, not the entire file. It’s just SOOOOOOOoooooooooooooooooooo expensive.
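For anyone curious what that delta upload amounts to, here’s a toy illustration (GreenButton’s actual implementation isn’t public; real tools like rsync use rolling checksums so an insert doesn’t shift every later block, but fixed-size blocks show the idea):

```python
# Hypothetical illustration of block-level delta uploads: hash fixed-size
# blocks of the local file, compare against the block hashes the server
# already holds, and upload only the blocks that changed.
import hashlib

BLOCK = 64 * 1024  # 64 KB blocks

def block_hashes(path):
    """SHA-1 of each fixed-size block of the file, in order."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            hashes.append(hashlib.sha1(chunk).hexdigest())
    return hashes

def changed_blocks(path, remote_hashes):
    """Indices of blocks that differ from (or extend past) the remote copy;
    only these blocks need to go over the wire."""
    local = block_hashes(path)
    return [i for i, h in enumerate(local)
            if i >= len(remote_hashes) or h != remote_hashes[i]]
```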

You pay a huge premium for their asset management system, and RenderTitan etc. have pretty decent asset management systems now. Their sales rep also split a taxi with me heading home from Siggraph (completely randomly) and was kind enough to pay for all of it. :wink: And like Chris said, the founders are all super nice. They stopped by our studio, were really supportive, and seemed interested. Ultimately, though, they were just prohibitively expensive compared to building a farm. We were going to spend $62,000 per year on GreenButton based on our capacity. You can buy a damn good farm for $62,500.

yeah, i think it makes more sense for greenbutton to be another cloud provider in our deadline VMX product.

cb

In some ways you already are a cloud provider for GreenButton. GreenButton runs on Azure; they just add an automated asset management API to upload assets. Slap on an asset sync system and Deadline is GreenButton+.

And in the case of 3ds Max I wrote the maxscript that provides the file list. :stuck_out_tongue: And that was based partially on bobo’s comments on CGTalk. So you could just go to the source. :wink:

I reckon I/we can build a better asset management system than GreenButton’s in the future ~ one which is more flexible and integrated directly into Deadline.

Due to this expense, I don’t see GreenButton, like all other cloud providers, as a replacement for a studio’s local farm. I see it for only 2 reasons:

(i) rapid scaling up to meet very short turn-around (aka management took on too much work in too short a period of time to deliver / or there was a mini disaster in production?)
(ii) disaster recovery. What if all my local farm resources go bang? Well, a VM-wrapped environment is going to be less complicated and more flexible than just an interface to 1 provider for getting back up and running again. Who knows how they might charge in the future… (I want to be able to jump ship rapidly).

Disclaimer:
Small studios with no farm will still be attracted if they simply don’t have the CapEx budget to release for a farm hardware purchase. Hence cloud is attractive even if more expensive in the long run. I’m talking about companies that need a farm solution yesterday and don’t know if the company will still be around in a year’s time. (Normal pay-back RoI on a chassis of top-end blades is about 3 years.)

Interestingly, if you could ‘reliably’ predict ahead of time how much RAM a render needs, and allocate the minimum-spec cloud instance to process the subsequent job, then cloud providers actually do become rather attractive over a 12-month period. Well, they did when I analysed our numbers. However, this was based on the above being possible, and on the concept of us splitting our jobs between those that need to stay local (30%) and the longer, lead-time-based jobs (70%) being shifted to an external solution. I never got as far as trying the concept of running up the first instance of a longer lead-time job on a high-spec cloud instance, then analysing the RAM/CPU usage and automatically reacting to this additional info by, say, down-scaling the VM instance spec, thereby saving cash $$. In theory, using something like a Deadline event plugin to react to the job finishing frames and talk via API to the cloud should all be possible :slight_smile:
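A minimal sketch of that event plugin idea, following Deadline’s documented event-listener layout; the TaskPeakRamUsage property, the instance-type table, and the cloud call are all assumptions (exact property names vary by Deadline version, and the provider API is whatever you’re targeting):

```python
# Sketch: when a probe job finishes, read the peak RAM its tasks reported
# and pick the smallest cloud instance type that fits, for the remaining
# frames. The listener skeleton is the standard Deadline event plugin
# shape; the rest is hypothetical.
from Deadline.Events import DeadlineEventListener
from Deadline.Scripting import RepositoryUtils

# Smallest-first list of (RAM in GB, hypothetical instance type name).
INSTANCE_TYPES = [(8, "small"), (16, "medium"), (32, "large"), (64, "xlarge")]

def GetDeadlineEventListener():
    return RamProfileListener()

def CleanupDeadlineEventListener(listener):
    listener.Cleanup()

class RamProfileListener(DeadlineEventListener):
    def __init__(self):
        self.OnJobFinishedCallback += self.OnJobFinished

    def Cleanup(self):
        del self.OnJobFinishedCallback

    def OnJobFinished(self, job):
        tasks = RepositoryUtils.GetJobTasks(job, True)
        # TaskPeakRamUsage (bytes) is assumed; check your Deadline
        # version's scripting reference for the actual property name.
        peak_gb = max(t.TaskPeakRamUsage
                      for t in tasks.TaskCollectionTasks) / (1024.0 ** 3)
        for ram_gb, instance_type in INSTANCE_TYPES:
            if ram_gb >= peak_gb * 1.2:  # 20% headroom
                self.LogInfo("Job %s peaked at %.1f GB; using '%s'"
                             % (job.JobName, peak_gb, instance_type))
                # request_instance_type(instance_type)  # hypothetical cloud API call
                break
```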

Mike

One important factor you have to take into account with cloud render solutions is that the traditional “cloud” services such as Amazon and Azure are both pretty slow per core. I found an Intel 3930K 6-core machine to be roughly equivalent to ~24 Amazon ECUs. I see they just released 2nd-gen EC2 nodes; I haven’t benchmarked them yet, but 25 ECUs per Double-XL node would be perfect for a render slave. But yeah, scaling on a per-project basis, if you want to wrap the capacity into each job’s bid, can be very attractive. We just need gig-E internet and it would be great.
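As a quick back-of-envelope on those figures (the hourly node price below is a made-up placeholder, not a quoted EC2 rate):

```python
# Back-of-envelope using the thread's figure of ~24 ECUs per 3930K-class
# workstation. The hourly price is hypothetical.
ECU_PER_WORKSTATION = 24.0
ecu_per_node = 25.0          # e.g. the Double-XL class mentioned above
price_per_node_hour = 1.00   # placeholder $/hour, not a real quote

workstations_per_node = ecu_per_node / ECU_PER_WORKSTATION
cost_per_workstation_hour = price_per_node_hour / workstations_per_node
print("%.2f workstation-equivalents per node, $%.2f per workstation-hour"
      % (workstations_per_node, cost_per_workstation_hour))
```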

Agreed on ‘current’ cloud CPU power vs. cost. However, I’m looking towards the future / a much longer view, where cost is going to be driven down and CPU/RAM is going up :slight_smile:

That’s where a smart data transport mechanism like Aspera (UDP) comes into play and would really help here. With less than a 100 Mbps connection, I can shift A LOT of data around reliably. However, Aspera is too expensive for most clients.

Going slightly off topic, I don’t think I ever mentioned the technique Framestore used to get themselves extra render power when they had to extend the original deadline for delivery of Gravity from Nov 2012 to earlier this year. They effectively (this has been done a few times before over the years) contacted the other Soho VFX houses and used some of their nodes to assist. However, the smart thing is they didn’t re-image a load of machines to get them working in their specific pipeline. They just wrapped up a small VM image, ran it on the other companies’ nodes, and sent image data back via dark fibre.

In effect, VMX :slight_smile:

Nick Nakadate actually was a couple of hours out from pulling the trigger on renting our capacity this spring for a project. Thanks to all the universities using Deadline, the user controls are pretty good now for ‘sharing’ render capacity while keeping it secure. :slight_smile: I do like the idea of just sending a small VM, though. I really need to look into this OpenStack business for internal VMX stuff.

great thread - i’ll make sure james has a read.

mike - i think if there ever was an opportunity for you + thinkbox in some capacity, this is it. let’s talk this week

cb

Good points all. I had the opportunity to talk with a Senior Manager from Amazon Web Services. He said that in their considerable experience with cloud-based services, generally the larger the (customer’s) company, the less likely Software-as-a-Service (SaaS) is to be viable. This is for the obvious reason that larger companies have more sophisticated pipelines that SaaS models cannot accommodate. Not only that, the VMX (self-serve) model lets companies use commodity prices for cloud services, or even negotiate pricing directly with cloud service providers. That said, I think there is room for both the SaaS and VMX (self-serve) approaches. And I’m sure that with a little work we can make it easy for smaller shops to take advantage of public cloud rendering with VMX.

However, based on feedback from Siggraph, I’m estimating that the fastest adoption for VMX will relate to private cloud usage within companies (even smaller ones). The removal of the OS barrier alone that virtual machines provide is enough to make VMX attractive, but there are many other advantages as well. Plus there are no, or at least substantially fewer, issues surrounding assets since a private cloud presumably has direct local access to the asset store. And, as others have mentioned, it’s typically more cost effective to buy hardware for a private cloud or classic farm than to rent compute cycles on a public cloud. That may well change in time. Large demand spikes are the possible exception, where the usage will be short-term and high scalability is required.

Just to throw out one more facet I saw this last month:
microsoft.com/en-us/server-c … -pack.aspx

Run Azure locally and then just transfer your VMs to the Azure cloud. So theoretically you could go from local to infinity on the same platform, if I understand the pitch correctly. The downside is that Azure is even slower than EC2 and therefore a little more expensive. But it could be worth the cost difference if the convenience is there.

I’ve said before I like the idea of TB running a “market” and acting as a “broker” for render jobs. If you can securely dispatch VMs based on a market pricing scheme, you could skim off a broker fee and we’d still have something cheaper than EC2. If EC2 is cheaper, of course, then Deadline would submit there; but if someone across town is offering their farm for the next 48 hours at 1/3 the price, they make a little money off their downtime (beer money for the IT dept at least), and you get faster transfers than going to a far-away cloud center. The cost of rendering goes way down, but TB actually makes more money because they suddenly have a trading platform.
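A toy sketch of that broker selection, under obviously made-up offer shapes, rates, and fee:

```python
# Toy sketch of the 'broker' idea: given a list of capacity offers from
# idle farms, pick the cheapest effective rate after the broker's skim,
# falling back to the public-cloud baseline. All numbers are hypothetical.
BROKER_FEE = 0.10  # 10% skim on peer offers

def pick_provider(offers, public_cloud_rate):
    """offers: list of (name, rate_per_node_hour) tuples."""
    best_name, best_rate = "public-cloud", public_cloud_rate
    for name, rate in offers:
        effective = rate * (1 + BROKER_FEE)
        if effective < best_rate:
            best_name, best_rate = name, effective
    return best_name, best_rate

print(pick_provider([("farm-across-town", 0.30)], public_cloud_rate=1.00))
```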

Agreed. It would be great to get some revenue off of our farm for the price of electricity and bandwidth. By distributing work across farms you could also spread the bandwidth impact if someone like us only has 100 Mbps down: saturate our pipe, and then, if the job has massive upload over fiber, shift to another facility that might have more capacity/bandwidth open.
