Deadline + V-Ray Render Elements Problems

I am having two problems with rendering via 3ds Max/V-Ray + Deadline.

I am working on an animation that is several thousand frames long with lots of particles, so I have to uncheck “Restart renderer between frames”. With that unchecked and the V-Ray VFB switched off, the Render Elements end up totally messed up: V-Ray saves its elements under the wrong filenames. Sometimes you get a reflection in your ZDepth, sometimes your MultiMatte is saved under the ZDepth name. Local rendering is fine; this only happens when I submit to Deadline (which shows a warning that unchecking “Restart renderer between frames” is not a good idea when using V-Ray).

I hoped the solution might be to turn the V-Ray VFB back on. That sort of works, but the Render Elements are no longer written into their subfolders: the subfolders are created but stay empty, and the beauty and all the elements end up in the root folder. How can I make this work? Copying those files by hand (several thousand of them) takes about an hour and is quite tiresome…
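
(In the meantime, a small script can at least automate the manual copying. Below is a minimal Python sketch, assuming the empty per-element subfolders are named after the elements and that each element name appears somewhere in its filenames; the output path is a placeholder, not the actual job setting. Beauty frames that match no subfolder name are left in the root.)

```python
import os
import shutil

# Hypothetical output root -- replace with the actual job output folder.
OUTPUT_ROOT = r"\\fileserver\renders\shot010"

def sort_render_elements(root):
    """Move element files from the output root into the matching subfolders."""
    subfolders = [d for d in os.listdir(root)
                  if os.path.isdir(os.path.join(root, d))]
    for name in os.listdir(root):
        src = os.path.join(root, name)
        if not os.path.isfile(src):
            continue
        # Move the file into the first subfolder whose name appears in the filename.
        for folder in subfolders:
            if folder.lower() in name.lower():
                shutil.move(src, os.path.join(root, folder, name))
                break

if __name__ == "__main__":
    sort_render_elements(OUTPUT_ROOT)
```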

Thanks in advance,
Daniel

You shouldn’t have any issues rendering particles with ‘restart renderer between frames’ on.

I thought so too, but then I realized that every machine starts calculating the Particle Flow events from frame 0, so render times skyrocket…

I would cache the particles to disc.

There is also an option in Deadline, ‘Enforce Sequential Rendering (recommended for Particle Rendering)’, on the Job tab of the Submit to Deadline dialog.

I thought about that: it takes hours and creates tons of data. I can’t do that every time I submit a preview.

That doesn’t change anything, as the renderer is still restarted between each frame. That option is in general quite mysterious: Deadline then renders blocks of frames in a seemingly random order.

Nevertheless, thanks for your suggestions.

Put the Task Chunk size up too.

This is one of those jobs that might be quicker to render on a single machine, since the actual render time is less than the particle calculation time. In that case, set the task chunk size to the duration of your sequence and let one machine render the whole thing. I’m sure there’s a horribly complicated way to work out the optimal split for a farm on a job like this. It may well be that one machine renders 0, 10, 20, 30, 40 and another renders 1, 11, 21, 31, 41, etc., but unfortunately, without caching the particles before you submit, they will all have to calculate every frame at some point.

The particles are still being recalculated between frames no matter how large the chunk size is. I really wonder whether Deadline is handling particles as it should; I never had these problems using Backburner.

To be clear: it works perfectly fine when “Restart renderer between frames” is turned off. There is no particle calculation from frame 0. But then it messes up my Render Elements (as described above).

Daniel,

Could you send over a job report for us to look over? At worst we can get the command used so it can be tested from the command line.

Cheers,

Thanks for looking into this. I sent you an email with a job report.
Daniel

Any progress here? Did you get my job report? I am asking because I am about to manually copy thousands of frames to the correct folders, which will take several hours :slight_smile: :confused:

Hello Daniel,

I don’t seem to see the job report. Could you verify that you sent it to support@thinkboxsoftware.com? Thanks.