I am meshing a RealFlow BIN sequence containing roughly 50 million particles using a Zhu/Bridson mesh. The resulting mesh has more than 20 million polygons, which seems to be why our machines need 2-3 hours per frame to render it, and unfortunately we cannot afford these rendering times. It seems that meshes of this size are too much for our machines' memory (24 GB), resulting in these long render times.
Therefore, I was wondering what can be done to optimize the mesh, and since I am still new to Frost, I hope you guys can help me.
My current mesh uses a Particle Size -> Radius of 2.2 and Meshing Quality set to 1.4 (Relative to Max. Radius). Vertex Refinement is set to 10 (Render) and 0 (Viewport).
My Blend Radius is at 2.0, using Low Density Trimming with a Threshold of 1.0 and a Strength of 10.
Anyway, the mesh did render in acceptable times with Particle Size -> Radius 3.0, but that wasn't detailed enough. In fact, there are areas in the mesh that could use even more detail than what I get from 2.2, but anything smaller was even worse rendering-wise. There is a character running over a sea surface, so the areas around the character could use more resolution, while other areas aren't very interesting.
Do you have any suggestions for how I might improve my rendering times without losing too much detail? Is there a critical setting I might have used accidentally?
I don’t think that 20 million polygons would be a problem for a 24GB machine.
A few questions and things you could explore:
* What renderer are you using?
* Are there any settings in the renderer that might need tweaking?
* Have you tried rendering the Frost mesh without any materials to make sure the shading is not what's slowing things down?
* Have you tried rendering that mesh in Scanline without any shaders / raytracing to see how much memory it uses and how slow it is?
* If you create a primitive object (say, a box with the max. number of segments) and TurboSmooth it to produce 20M polygons at render time, does it challenge your machine? (See the sketch after this list.)
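For the last test, here is a hedged sketch (Python via 3ds Max's pymxs module, so it assumes a running Max session; the segment and iteration counts are just illustrative) of building a roughly 20M-triangle test object:

```python
from pymxs import runtime as rt

# A 160-segment box has 12 * 160^2 = ~307K triangles; three TurboSmooth
# iterations multiply the face count by 4^3 = 64, giving ~19.7M triangles.
box = rt.Box(lengthsegs=160, widthsegs=160, heightsegs=160)
smooth = rt.TurboSmooth(iterations=3)
rt.addModifier(box, smooth)
```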
Frost simply creates a single large trimesh and passes it to the renderer. If you can render any geometry with that polycount in reasonable time, you should be able to render Frost, too.
At this point I suspect your shading/rendering setup to be the culprit, until proven otherwise.
The post is not about blaming Frost for my rendering not working, but about asking whether there are any suggestions for making the mesh less heavy without just increasing the Particle Size.
OK, I checked the scene using a VRay standard material, Scanline… everything. It ate up all the memory and therefore took ages to render.
Then I created a KCM Selection + Delete modifier and killed some particles out of the PRT Loaders until I had a much thinner "slice" of water to render. This reduced my render times from crazy to approx. 10 minutes per frame on the local machine, and now I hope the farm does the same.
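The thinning idea, as a minimal sketch (plain Python/NumPy, not actual Magma/KCM code; the depth threshold is hypothetical): keep only particles near the surface and delete the rest, so Frost has far less volume to mesh.

```python
import numpy as np

def thin_slice(positions, surface_z, keep_depth=3.0):
    """Return a mask keeping particles within keep_depth units of the surface."""
    depth = surface_z - positions[:, 2]   # how far each particle sits below the surface
    return depth <= keep_depth            # True = keep, False = delete

positions = np.random.rand(1000, 3) * 10.0        # stand-in particle data
kept = positions[thin_slice(positions, surface_z=10.0)]
```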
Frost rocks, but I am seriously wondering how I managed to do my job without Krakatoa all those years. Krakatoa is one of the most versatile pieces of software in FX.
Thanks for testing this out. I will try to run some tests here to see what memory usage I get from 20M faces.
I know you weren’t complaining about how Frost works, I was just surprised that the mesh would use that much memory and wanted to make sure there weren’t other factors (renderer settings, shading etc.) that would affect the render time. Your experiments point in the direction of the TriMesh itself, so I will investigate.
As I stated yesterday, I can render my scene in less than 10 minutes… but only if I render a single frame in the Max viewport.
As soon as I render the exact same frame in Range mode (e.g. frames 151-152) using the same scene on the same machine, the Frost object is created, then it says "Rendering Image…", the counter ticks up, and nothing happens. If you let it run, it can take four hours or more to render a single frame. If I change Particle Size -> Radius to a larger value, it renders again.
I do not understand what is going on there. How can a frame that takes 10 minutes to render take over an hour, just by changing the Time Output from Single to Range?
First thing to try: Switch from Relative to Absolute meshing to ensure that you are always producing the same mesh resolution regardless of particle radius.
See if this change affects the outcome.
Once again, please post the name of the renderer you are using.
Also, if possible, test with Scanline before moving to VRay or whatever. We don't want a 3rd party renderer's influence while debugging, since it could very easily be a renderer issue and not a Frost issue.
I am using VRay; yesterday's Scanline tests didn't render anything, so I stayed with VRay. After some further testing, I dared to touch VRay's Dynamic Memory Limit. It was set to 400 MB (I guess the default), and I found that 400 was also the default back in the days when dual-core machines were fancy stuff. I changed it to 1000 and now it feels like the issues are gone.
What I still do not understand is why Max's Time Output parameter in the Render Setup behaves differently when set to Single than when set to Range or Frames. With Single, the problem just wasn't there.
One last question that is more connected to the original theme of this thread: what is the use of Randomize Radius by Particle ID? I will test whether it helps my mesh and how it looks once I have solved my mysteries, but why would I want different particle scales per ID? What is the idea behind the concept?
There are two sides of the answer, so let’s forget about the ID for a second and look at the Randomization part.
Normally, you set your Radius to a constant value (we use 5.0 as default because it matches the default Shape size in PFlow which is Diameter of 10.0) and all your particles get the same size. If you are meshing a PFlow or another system that can vary the particle size, you can simply enable “Use Radius Channel” and the incoming particle size variations will be respected.
But if you are meshing a Teapot's vertices, you have no way of specifying a random variation in the mesh itself, so we give you the option to vary the size DOWN from the Radius you have specified by up to 99%. This makes the Radius value a kind of "Max. Radius" value, and all particles will have a Radius less than or equal to it.
In the case of a Mesh Vertex or a PRT Volume’s particle, there is no ID channel, so the Index (order) of the incoming vertices or particles will determine what the random value will look like. This means that if you feed in 10 vertices from a mesh and 10 vertices from another mesh and 10 particles from a PRT Volume into 3 Frost objects with the same settings, the size of the particles will be identical for the particles with the same Index. This of course depends on the Random Seed, but with the same Variation % and Seed values in 3 Frosts, you will get identical variations!
Now we come to the ID channel. What if you want to apply the same approach to a Particle Flow system? Particles move from Event to Event, are born and die, and the order of the particles can change at any point. So if we used the Index (order) of the particles, there would be heavy flickering between frames, because the random Radius variations would jump from particle to particle as the indices change! This is where the ID channel (known as Born ID in Particle Flow) comes in. When a particle is born in PFlow, Thinking Particles, Naiad, RealFlow etc., it is assigned a unique ID and keeps it for its whole life. In fact, once a particle dies in PFlow, its ID is never reused (so it is possible to run out of IDs over time with high Birth Rates!). BUT the good news is that the Radius Variation can now be linked to the ID of the particle and be traced throughout the animation, keeping the variation constant between frames!
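To illustrate (a minimal sketch in plain Python, not Frost's actual implementation): seeding a random generator per ID makes the radius a pure function of the ID, so it stays stable no matter how the particle order changes between frames. The constants are made up for the example.

```python
import random

MAX_RADIUS = 2.2   # the Radius spinner acts as a maximum
VARIATION = 0.5    # vary the size DOWN by up to 50%
SEED = 12345       # the Random Seed value

def radius_for(particle_id):
    """Map a particle ID to a radius in [MAX_RADIUS * (1 - VARIATION), MAX_RADIUS].
    The same ID always yields the same radius, regardless of particle order."""
    rng = random.Random(SEED * 0x9E3779B1 + particle_id)  # per-ID deterministic seed
    return MAX_RADIUS * (1.0 - VARIATION * rng.random())

# Frame 1 might see IDs in order [3, 1, 2]; frame 2 in order [1, 2, 3].
# Either way, each ID gets the same radius, so there is no flicker:
frame1 = {pid: radius_for(pid) for pid in [3, 1, 2]}
frame2 = {pid: radius_for(pid) for pid in [1, 2, 3]}
assert frame1 == frame2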
So when we say that something is “By ID”, it means we look for the ID channel and assign a value to the particle based on it, ensuring it is consistent over time. In a PRT Loader for example, there is an option “Load Every Nth Particle” which simply goes along the file and skips the rest of the particles. But if the count is changing between frames, a particle shown on frame 1 might not be loaded on frame 2. So we also have a “Load Every Nth By ID” which looks at the ID channel and loads the SAME particles on every frame, always skipping the exact same ones, thus producing an animation that is flicker-free and consistent.
Whenever an option that contains “By ID” in its description is selected and there is no ID channel, we revert back to using the order of the particles (their Index). This does not produce consistent results between frames, but at least it does not error out… The Krakatoa PRT Birth and Krakatoa PRT Update operators in PFlow only work correctly if there is a valid ID channel, but they will fall back to using Indices (and warn you about it) if there is no ID channel in the PRT sequence.
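In pseudo-form (a tiny sketch in plain Python, not the PRT Loader's actual code), the difference between the two modes and the Index fallback looks like this:

```python
def load_every_nth(particles, n, ids=None):
    """Keep every Nth particle. With an ID channel, the SAME particles
    survive on every frame; without one, we fall back to load order
    (Index), which may flicker as the count changes between frames."""
    if ids is not None:                                  # "Load Every Nth By ID"
        return [p for p, pid in zip(particles, ids) if pid % n == 0]
    return [p for i, p in enumerate(particles) if i % n == 0]  # Index fallback
```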
For reducing the face count: first make sure the Radius is large enough to remove any air bubbles inside the fluid volume. I think you already did this.
Next, decrease the Mesh Resolution. Note that there are separate Resolution controls for the viewport and render; you'll want to change both of them. If your meshing resolution is… (see the sketch after this list)
* Relative to Max. Radius: decrease the spinner
* Absolute Spacing: increase the spinner
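A rough illustration of why the two spinners move in opposite directions (this assumes the Relative mode derives the sampling spacing from the Max Radius divided by the quality value, which is an assumption about Frost's internals, not something confirmed here):

```python
max_radius = 2.2                # Particle Size -> Radius
quality = 1.4                   # "Relative to Max. Radius" spinner
spacing = max_radius / quality  # ~1.57 units between grid samples (assumed formula)

# A larger spacing means a coarser, lighter mesh. In Relative mode you
# get there by DECREASING the quality spinner; in Absolute Spacing mode
# you set the spacing directly, so you INCREASE the spinner instead.
```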
You could also use some other tool to remove unnecessary faces, such as those on the seabed.
Definitely, if you’re using Frost you should install at least the free version of Krakatoa. Tools like the PRT Loader, Magma, and the Delete modifier are essential for how I use Frost.
This is surprising to me. Did the render succeed, but Frost did not appear in it?
Are you using different values for the Viewport and Render resolution (in Frost’s Meshing Quality rollout)? I wonder if the Frost mesh is actually much bigger than 20 M faces.
It’s there to add some variation in the droplet size, in case a constant radius is boring or conspicuous. It’s “by Particle ID” for the reason Bobo described above. Unfortunately I don’t think it will help with the problem you described.
Changing this value to something bigger, like 4000 or more, is one of the first things I do when setting up a scene with VRay. A low limit can really be a massive slowdown: VRay starts to load and unload geometry data during the rendering process, and this can take quite a while. I'm not sure, though, how it behaves with a single high-poly object. Anyway, I've never had an issue with higher limits.