Usually when I see a difference between my local render and the AWS Arnold render, it is because some out-of-project texture file is skulking around. That doesn't seem to be the case here. Attached are some examples.
It appears that my skydome texture is not being used properly, but it is working in some capacity: there is no other light in this scene, and if it hadn't been uploaded at all the frame would be black.
Does anyone have any ideas why this discrepancy is occurring?
Do you know if there is a version mismatch between the version of Arnold you rendered with (or submitted from) locally and the version installed on the AWS instance? Are you able to send the job report?
My local Maya version is 2018.3, and it appears the latest is 2018.4; however, I do not know what versions of Maya or Arnold the current AWS image has installed.
Here is the render report for the frames. I have my packet size set to 1 so I could see what was happening per frame.
I just updated Maya to 2018.4 in hopes that would fix it. No such luck.
I've attached the render log set to "info". The only references I can see to an Arnold version number are the Arnold Core version, 5.1.1.1, and MtoA 3.1.1.1.
I believe Charles meant to say that the AWS version is running Arnold 5.0.1.1:
2018-11-08 17:59:46: 0: STDOUT: 00:00:00 537MB | Arnold 5.0.1.1 [8b0ed0d7] linux clang-4.0.0 oiio-1.7.15 osl-1.9.0 vdb-4.0.0 clm-1.0.3.513 rlm-12.2.2 2017/07/27 14:36:23
whereas your local log shows that you’re running 5.1.1.1:
00:00:00 1102MB | Arnold 5.1.1.1 [3849b993] windows icc-17.0.2 oiio-1.7.17 osl-1.9.0 vdb-4.0.0 clm-1.0.3.513 rlm-12.2.2 2018/06/26 21:12:06
So this version difference could potentially be affecting what you're seeing, since the scene was presumably created with 5.1.1.1 but rendered with 5.0.1.1.
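If it helps, here is a rough Python sketch for pulling the Arnold version string out of a render log, so you can compare the local and AWS logs quickly (the two file paths below are just placeholders):

import re

# Placeholder paths - point these at your local render log and the Deadline task report.
LOCAL_LOG = "local_render.log"
AWS_LOG = "aws_task_report.log"

def arnold_version(log_path):
    # Arnold logs a line like "Arnold 5.1.1.1 [3849b993] windows icc-17.0.2 ..."
    pattern = re.compile(r"Arnold (\d+\.\d+\.\d+\.\d+)")
    with open(log_path, errors="ignore") as f:
        for line in f:
            match = pattern.search(line)
            if match:
                return match.group(1)
    return None

print("local:", arnold_version(LOCAL_LOG))
print("aws:  ", arnold_version(AWS_LOG))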
Thank you Morgan, I see the impact that could have on the rendered image. Arnold rendering licensed through the AWS Marketplace is the only way I have access to batch rendering with Arnold. Because of that, I am at the mercy of the images made available by Thinkbox when I start a Spot Fleet, so I have no control over which point version I use for rendering through Deadline.
How can I troubleshoot to see if the version discrepancy is the culprit?
You definitely will not want to be on Maya Update 4 as we do not have any AMIs with that version available. Our latest is Update 3.
We do have an image running Maya Update 3 with MtoA 3.0.0.2. Go to EC2 > AMIs > Public Images and filter on "Deadline Slave Base Image Linux" to see them. You can grab the AMI ID and launch one of those.
To get an exact version match you would need to customize one of our AMIs and install that version of MtoA.
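If you prefer not to dig through the console, a minimal boto3 sketch (assuming your AWS credentials and region are already configured) can list those same public AMIs by name:

import boto3

# Assumes credentials/region are already set up (e.g. via environment or config files).
ec2 = boto3.client("ec2")

# Same name filter as the console: "Deadline Slave Base Image Linux"
response = ec2.describe_images(
    ExecutableUsers=["all"],  # public images only
    Filters=[{"Name": "name", "Values": ["Deadline Slave Base Image Linux*"]}],
)

for image in sorted(response["Images"], key=lambda i: i["CreationDate"], reverse=True):
    print(image["ImageId"], image["Name"])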
Whoops… I may have gone ahead and updated out of desperation.
Trying that now… hopefully the fact that I just updated won't exacerbate the situation.
It is a noise texture running through a few remaps and a color composite, all things that are native to Maya. They manipulate the roughness, color, and IOR of the specular material attributes. That particular object (the skirt) does not have a bump associated with it. The only item that does have a bump is not really viewable in that frame of the render.
Thank you both for all your help. I just got an order for 4 animations, and I am quite concerned about how my renders are gonna turn out.
Can you point me to the AMI you believe will work best for me?
I am trying this one:
Deadline Slave Base Image Linux 10.0.19.0 with Maya 2018_3_Update and V-Ray 36004 and Arnold 3.0.0.2 2018-08-08T185429Z ID:ami-0e8105e7e40dabef7
But the node will not pick up the job. I can tell from the limit panel that it attempts to check out an Arnold license, but a moment later drops it and restarts the process.
Are you able to connect to the slave log to see if it is outputting any errors? It sounds like there is an issue with communication between the slave and the license forwarder.
Make sure that the certs were uploaded to the S3 bucket that corresponds with your infrastructure's stack ID. You can try removing the limits to see if the job picks up and then fails due to a license error. Do one at a time, Arnold then Maya, to narrow down which product could be causing the issue.
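As a quick sanity check, something like this boto3 sketch will list what is actually sitting in that bucket (the bucket name below is a placeholder; substitute the one created for your stack ID):

import boto3

# Placeholder - substitute the cert bucket created for your infrastructure's stack ID.
BUCKET = "your-awsportal-cert-bucket"

s3 = boto3.client("s3")
response = s3.list_objects_v2(Bucket=BUCKET)

for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])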
I don't know how to do that with Spot Fleets, but I'll give it a go if you can point me to some documentation.
I started a traditional Maya 2018 Arnold AMI from my infrastructure and it licensed okay and started rendering, although with the original render issues.
I spun up the AMI I referred to earlier: without limits it attempted to pick up the render and then failed. After I put the Maya limit on, it never showed it picking up a task; however, I could see the limit count report one in use and then quickly drop back to 0. Same results with the Arnold limit as well.
If supported in your installed version, you should be able to right-click one of your AWS Slaves (in the Slave Panel) and connect to the slave log. It will pull up a log stream.
If you cannot connect to the slave log, we do have a document for SSHing into the slave.
Good news on all fronts, Charles. It appears that AMI works with licensing just fine, and that version is kicking out renders that match my local renders.
I'm gonna let it chew through a full render before I breathe easy, but it looks like things are smooth now.
"It is a noise texture running through a few remaps and a color composite, all things that are native to Maya. They manipulate the roughness, color, and IOR of the specular material attributes."
I understand all of those things in isolation but not exactly what you’re doing. I shall leave the art to you!
I am happy you've got things working, but I'm not clear on exactly what fixed this… Was it just a matter of matching the Maya versions?
Yes, I think there was a major change to how Arnold handles specular reflection tracing between the 2018 release and the 2018.3/.4 releases I was rendering with. Once I spun up a node with an updated Arnold renderer, everything seemed to jibe just fine.