Blender fire cache files not being sent with job

The cache files for our Blender jobs do not get sent along with the job, so the cached data doesn’t show up in the render. Is this a feature that needs to be added, or is there a configuration option somewhere?

Hello Brad,

Usually cache files are something that is calculated beforehand, saved to a network location, and then referenced from there. We don’t generally copy them with the scene; they are treated like any other asset in the scene.

So how do I render fire and particle systems then? We tried before and it didn’t work.

Hello Brad,

You would usually need to have the caches pre-calculated and available in a network location so the render nodes can access them.

Is there any way you can create a script that runs when you submit a Blender job that sends the cache files? Blender searches for the cache files in the same directory where the .blend file is located. It looks for a folder called blendcache_filename, where filename is the name of the .blend file.
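If someone does script this, the cache folder is at least easy to locate. A minimal sketch, assuming the naming convention described above; the helper name is mine, not part of Blender or Deadline:

```python
import os

def expected_cache_dir(blend_path):
    """Return the folder Blender checks for point caches by default:
    'blendcache_<name>' sitting next to the .blend file
    (illustrative helper, not a real Blender or Deadline API)."""
    directory, blend_file = os.path.split(blend_path)
    name, _ext = os.path.splitext(blend_file)
    return os.path.join(directory, "blendcache_" + name)

# a submission script could copy this folder to a network share
# alongside the .blend file before the job is submitted
print(expected_cache_dir("/projects/shot01/fire.blend"))
# → /projects/shot01/blendcache_fire
```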

Well, I don’t know much about the Blender API, but I do know Deadline.

I took a quick look and our integrated submitter ("[repo]\DeadlineRepository7\submission\Blender\Main\SubmitBlenderToDeadline.py") is one of the simple guys that stores some information and runs the Monitor submitter ("[repo]/scripts/Submission/BlenderSubmission.py").

The problem here is that the main submission window is outside of the Blender API so you can’t query that information when a user clicks the ‘submit’ button.

It looks like this is going to be a laborious problem to solve. We could prompt when the script is run, I suppose… Users aren’t able to save those out themselves?

Hello,

I’ve been trying to render some water from Blender using Deadline… is there any way to do this?
When you say it has to be pre-calculated, does that mean the water needs to be baked on a single computer (without the power of the render farm) and then put in the same folder or something?

Sorry if my questions are self-explanatory. This is a very important issue for me at the moment, as it means I will abandon certain ideas completely if they can’t be achieved.

Fluid and dynamics simulations are history-dependent, meaning that the state of the system at the next moment in time (the next frame) is built upon information about the moment in time just before it. This chain-link nature of simulation means that it typically cannot be spread across multiple machines in an efficient manner. So the common approach is to bake the simulation on a single computer and then save the simulation cache to the network in a place where all the render nodes can access it. Then the cache can be loaded by each Slave that needs it for rendering purposes.
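Here’s the history dependence as a toy sketch (a stand-in integrator, not real Blender physics): frame N can’t be computed without first computing frames 0 through N-1, so the bake is inherently serial.

```python
def step(state, dt=0.1):
    # toy projectile integrator: the next frame's state is a function
    # of the previous frame's state (history dependence)
    pos, vel = state
    return (pos + vel * dt, vel - 9.8 * dt)

def bake(frames, initial=(0.0, 0.0)):
    cache = []
    state = initial
    for _frame in range(frames):     # must run frame by frame, in order
        state = step(state)
        cache.append(state)          # write this frame's cache entry
    return cache

# one machine bakes the whole history; render nodes only read `cache`
cache = bake(24)
```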

It is sometimes the case that the render time for a given frame is much longer than the simulation time for a frame. If the simulator writes out cache data on a per-frame basis, then the render Job can be frame-dependent on the simulation Job. So while only one machine is generating the simulation cache, other machines can start rendering as soon as simulation data is available for a given frame, which means greater parallelism in the processing pipeline for a given shot.
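To see the gain from frame dependency, here’s a rough back-of-the-envelope model; the costs and the function are made up for illustration, not Deadline’s actual scheduler. One machine sims serially while render nodes pick up each frame as soon as its cache exists.

```python
def pipeline_finish(frames, sim_cost=1.0, render_cost=4.0, nodes=4):
    """Toy model: one sim machine writes per-frame caches; render
    nodes start a frame as soon as its cache file is available."""
    node_free = [0.0] * nodes
    for f in range(frames):
        cache_ready = (f + 1) * sim_cost        # serial bake progress
        i = node_free.index(min(node_free))     # next free render node
        start = max(cache_ready, node_free[i])
        node_free[i] = start + render_cost
    return max(node_free)

# frame-dependent rendering overlaps simulation and rendering:
print(pipeline_finish(8))                # 12.0
# versus waiting for the whole bake before any rendering starts:
print(8 * 1.0 + (8 / 4) * 4.0)           # 16.0
```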

OK, thank you for that reply. Sounds like this is a problem that hasn’t yet been overcome.
What about 3ds Max? Does that program have a workflow for fluids that doesn’t require a single computer to do the bulk of the processing, thereby making the whole point of using a render farm moot? It actually kind of blows me away that this problem hasn’t been solved.

What about dynamics? For instance, with simple rigid body simulations, such as a sphere falling onto a plane, it seems like when I send to Deadline, the object that is supposed to fall never does, and just hangs in place for the duration of the rendered frames. Does this require some sort of baking process as well, or can Deadline handle it?

thanks for your help,

Hugo

The problem is inherent to the mathematics of any history-dependent system (fluids, rigid body dynamics, etc.). I have seen Siggraph talks about simulators that can identify disconnected chunks of a simulation and compute those chunks as separate processes until they interact with other chunks, but those were academic projects and the approaches weren’t really suitable for commercial products. That said, I haven’t followed the state of commercial simulators, so maybe others know of recent advances.

Some systems don’t require a cache for rendering. This usually means that the application must process the simulation history up to the point of the frame being rendered. This can be very inefficient since it means that the simulation computation is being duplicated by several machines, so in most cases it’s best to pre-compute the simulation on one machine and save the results into a cache.
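The duplication adds up quickly. If every render task replays the simulation from frame 0, the total simulation work is quadratic in the frame count, versus linear for a single bake pass (toy arithmetic, not measured numbers):

```python
def sim_steps_without_cache(frames):
    # each of the `frames` render tasks replays history from frame 0,
    # so the frame-f task does f + 1 simulation steps
    return sum(f + 1 for f in range(frames))

def sim_steps_with_cache(frames):
    # a single bake pass computes each frame exactly once
    return frames

print(sim_steps_without_cache(100))  # 5050 steps across the farm
print(sim_steps_with_cache(100))     # 100 steps on one machine
```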

In cases where the dynamics objects appear frozen in renders, it means there is a configuration error of some kind. Either the simulation has not been set to compute automatically (in the manner of the previous paragraph), or the application is expecting a simulation cache file and cannot find it, so it applies no motion to the objects.

James responded here while I was writing this, but I’m going to post mine anyway. :smiley:

Simulation is a hard problem. It’s sort of like playing pool: if every machine is responsible for one ball, how do you know on frame 5 that your ball is going to be hit, without knowing what happened to the balls you can’t see?

Houdini is the only product I know that can do it, and it’s pretty genius. The idea is that they break the environment into a 3D grid, and each machine gets a grid cube to work on. When an object passes from one grid cell to the next, the object is actually handed from one machine to the other. It’s great, and it can work in Deadline with some fiddling (using HQueue).
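The core idea, as I understand it, can be sketched as a spatial partition. The cell size and the ownership rule here are made up for illustration, not Houdini’s actual scheme:

```python
def cell_for(pos, cell_size=10.0):
    # map a 3D position to the grid cell that contains it
    return tuple(int(c // cell_size) for c in pos)

def owner_of(cell, machines=4):
    # made-up ownership rule: hash the cell onto a machine index;
    # in a real system this mapping would balance load
    return hash(cell) % machines

# when an object crosses a cell boundary, responsibility for it
# migrates to whichever machine owns the new cell
a = cell_for((9.9, 0.0, 0.0))    # (0, 0, 0)
b = cell_for((10.1, 0.0, 0.0))   # (1, 0, 0)
print(a, "->", b)
```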

Your sphere problem, though… that doesn’t sound right. Can you send your .blend file so I can play around with it a bit?