Frost

This is a repost from the old Advisory Board:

Hey guys, we are going to be coming out with an alpha of ‘FROST’ in the coming weeks. It’s a plug-in for Max, and I’ll post more soon. Just a heads up: I’m going to need an NDA from you guys :wink: sorry!

cb

October 28, 2010 | Chris Bond

Ooo…shivering with excitement … Haha, ok lame, but I haven’t had much sleep lately. Chris, do you have an NDA online or did you want to send it out…you might as well get a “global” one for everything we are going to discuss and hear about here.

  • Chris

November 4, 2010 | Chris Harvey

it’s coming!

cb

November 4, 2010 | Thinkbox Software

The first build of Frost is ready for download on the Advisory Board Download Page.

Frost is a particle mesh generator for 3ds Max. It links to 3ds Max particle objects or can directly load particle files such as PRT particle files, RealFlow BIN files, and CSV particle files. Documentation is included in the installer, and if you have any questions, email us at support@thinkboxsoftware.com.

For licensing, email sales@thinkboxsoftware.com with your hostname and MAC address and we will get you your beta license.

Post your feedback here, or you can email us at support@thinkboxsoftware.com. We are excited to hear what you think of our newest project!

December 14, 2010 | Thinkbox Software

So the license is based on hostname and MAC, not on flexLM?

EDIT: Oh, I see, it is LMTOOLS. Can we just use our existing license file with DL, Krak, Awake, etc., and just add a new feature line?

December 14, 2010 | Chad Capeland

Yes, Frost uses the same license file as Deadline and friends, so you can just add a new feature line.

December 14, 2010 | Paul

Under the hood, is each particle just “painting” a sphere to a level set array, and the result is meshed?

December 14, 2010 | Chad Capeland

Yes, that’s a good description. Union of Spheres uses exactly a sphere, while Metaballs and Zhu/Bridson use different spherical kernels.
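For anyone following along, the “painting” Paul confirms can be sketched in a few lines. This is only a toy illustration of the union-of-spheres idea (the function name, grid layout, and numbers are made up for the example, not Frost’s actual implementation): each voxel keeps the smallest signed distance to any particle’s sphere, and the zero crossing of that grid is what a mesher like marching cubes would then surface.

```python
import numpy as np

def paint_union_of_spheres(positions, radii, grid_shape, voxel_size):
    """Paint particles into a signed distance grid (negative = inside)."""
    phi = np.full(grid_shape, np.inf)  # start far outside everything
    # Voxel-center coordinates along each axis.
    zs, ys, xs = (np.arange(n) * voxel_size + voxel_size / 2 for n in grid_shape)
    Z, Y, X = np.meshgrid(zs, ys, xs, indexing="ij")
    for (px, py, pz), r in zip(positions, radii):
        d = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + (Z - pz) ** 2) - r
        phi = np.minimum(phi, d)  # union: keep the closest surface
    return phi

# Two overlapping unit spheres on a 16^3 grid of 0.25-unit voxels.
phi = paint_union_of_spheres([(2.0, 2.0, 2.0), (3.0, 2.0, 2.0)],
                             [1.0, 1.0], (16, 16, 16), 0.25)
inside = phi < 0  # the mesh would be extracted where phi crosses zero
```

Metaballs-style kernels would blend contributions (e.g. by summing a smooth falloff) instead of taking a hard minimum, which is roughly the distinction Paul describes between the modes.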

December 14, 2010 | Paul

So the difference is in the painting, not in some filtering applied to the level set, and not some filtering applied to the mesh?

The interpolated data, like mapping and color, are those applied to additional channels in the array? Or are those applied to the mesh afterward?

December 14, 2010 | Chad Capeland

Most of the difference comes from the painting, but we apply some level set filtering in Zhu/Bridson mode. Currently we do no filtering on the output mesh.

The interpolated channel data is applied to the mesh afterward. We have a pure level set implementation which writes to channels in the array, but this is not included in Frost.

December 14, 2010 | Paul

So would you be able to do any level set array I/O, like to/from a texturemap?

Assuming you are interpolating the per-particle values to the mesh, could you do normal mapping from the particle normals? Actually, getting any channel as a per-pixel map might be interesting… I guess we could see how horrible high spatial density meshing is.

Since the scale channel is a vector, could you use orientation and scale to make ellipsoids instead of spheres?

When is Frost getting rolled into Awake? :slight_smile:

Should we start posting questions somewhere else?

December 14, 2010 | Chad Capeland

“Should we start posting questions somewhere else?”

We are preparing the support.thinkboxsoftware.com forums for that kind of discussion because the Advisory Board software is still a bit primitive.

The level set meshing you have in mind might be a bit out of scope since there is a separate tool for that as mentioned already. This one was meant mostly for particle meshing. But you can keep on asking :smiley:

December 14, 2010 | bobo petrov

“Assuming you are interpolating the per-particle values to the mesh, could you do normal mapping from the particle normals? Actually, getting any channel as a per-pixel map might be interesting…”

You should be able to map any channel as a per-pixel map. Assuming Frost’s input is a “PRT Loader” object, would this scenario work for you:

  • Add a “Krakatoa Channels” modifier to the PRT Loader,
  • In the modifier, select the channel you want mapped as input (e.g. “Normal”) and pipe it to “Mapping10”,
  • Make a standard material 100% self illuminated and apply it to your Frost object,
  • Apply a “Vertex Color” to the diffuse slot, and set the channel to “10”.

I just tested it with particles that had a custom “Normal” channel and it rendered the interpolated data from the particles. Also, for normal mapping, could you do a similar thing, but use a “Normal Bump” in the bump slot with the “Vertex Color”? I haven’t tested that, but it seems like it might work.

December 14, 2010 | Conrad Wiebe

Similarly, you can create a Box, convert to PRT Volume, uncheck Jittered, add a KCM, load a 3D map as input and output its Mono value into a custom Radius channel of type float16[1]. Pick this PRT Volume as the source of the Frost and enable Metaballs with Use Radius Channel checked. The result will be the 3D texture sampled according to the PRT Volume grid and converted to a blob mesh. This is of course not exactly the same as controlling a level set with a texture (we have done that on projects using the dedicated LS Mesher), but it is similar…

December 14, 2010 | bobo petrov

So would you be able to do any level set array I/O, like to/from a texturemap?

So you have a level set stored in a texturemap, and you want to create a mesh from it? And you want to create a level set from the particle data? Like Bobo said, we have a separate tool for working with level sets. However, combining them together was on our wish list, so this seems possible. Technically it is very reasonable because both plugins use the same engine.

I’d like to hear more about what you have in mind. Is there some format you like for working with level set data?

Since the scale channel is a vector, could you use orientation and scale to make ellipsoids instead of spheres?

Currently we only use spheres, but ellipsoids are in the works! We’re planning to add a meshing mode that uses ellipsoids – we should expose the orientation and scale like you described.

December 14, 2010 | Paul

Aliasing the data as a mapping channel would work but it is applied per vertex, so it wouldn’t be any more useful than explicit normals.

The PRT Volume idea sounds like a hilarious workaround… I like it. I’ll have to try looping this a few times… mesh to points to mesh to points.

We’re not really thinking about level set I/O as a file format, just suggesting that you let us input 3D maps from Max, like cellular or Perlin or a Darktree. This would let us mask off or otherwise modify the voxels before the meshing happened. Afterburn style high frequency detail could be added too.

The per pixel normal map idea would be the reverse, but I can see how that would be problematic unless you cached out the voxels or sampled the particles in a procedural map. Both sound bad.

Back to the post-meshing particle sampling… could you make a Vertex Paint style modifier that could do per vertex particle interpolation for arbitrary meshes? Like let’s say I put a push or relax or noise or turbosmooth on the mesh output of Frost, then wanted to resample the particle channels back to the vertices?

  • Chad

December 14, 2010 | Chad Capeland

We’re not really thinking about level set I/O as a file format, just suggesting that you let us input 3D maps from Max, like cellular or Perlin or a Darktree. This would let us mask off or otherwise modify the voxels before the meshing happened. Afterburn style high frequency detail could be added too.

Such level set operations would be useful for sure. I’m not sure how the interface would work… Let’s say I want to mask off a flat wall. How do I create the map? How do I position it in space?

The per pixel normal map idea would be the reverse, but I can see how that would be problematic unless you cached out the voxels or sampled the particles in a procedural map. Both sound bad.

I think that’s what we would need to do. Maybe it’s not so bad in practice?

Back to the post-meshing particle sampling… could you make a Vertex Paint style modifier that could do per vertex particle interpolation for arbitrary meshes? Like let’s say I put a push or relax or noise or turbosmooth on the mesh output of Frost, then wanted to resample the particle channels back to the vertices?

Yes, that seems very reasonable. Do you envision a modifier that points at a Frost node to grab its vertex channel data?

There’s a bit of a complication for arbitrary meshes, because a vertex could be outside all of the particle spheres. Currently we handle this by setting the vertex’s channel value to zero, but I think we would need to add other options.

December 15, 2010 | Paul

Such level set operations would be useful for sure. I’m not sure how the interface would work… Let’s say I want to mask off a flat wall. How do I create the map? How do I position it in space?

That’s the beauty. It’s not your problem. :slight_smile: Seeing as how any map could be used, we could rely on a map of our own making, or use any of the built in maps. You would just provide a map that is the “painted voxels” and we would process it with Linear or Darktree or Gradient Ramp or whatever. The result would be used to generate the mesh. For placement, if we worked in world or object space XYZ, that would be pretty straightforward, but I suppose you could supply the normalized voxel coordinates as UVW, too. Of course, the particles themselves have mapping…
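To make the masking idea concrete, here is a toy sketch (purely hypothetical, not Frost’s API) of what “process the painted voxels through a map before meshing” could mean: evaluate any scalar 3D map at each voxel’s world position and push rejected voxels outside the surface, so the extracted mesh ends up clipped by the map.

```python
def mask_voxel(signed_distance, world_pos, mask, threshold=0.5, push=1e6):
    """Keep the voxel's signed distance only where the map accepts it."""
    # Below the threshold, force the voxel far outside the surface so
    # the mesher never finds a zero crossing there.
    return signed_distance if mask(world_pos) >= threshold else push

# Stand-in "map": mask off everything behind a flat wall at x == 0.
wall = lambda p: 1.0 if p[0] > 0.0 else 0.0

print(mask_voxel(-0.3, (2.0, 0.0, 0.0), wall))   # kept inside: -0.3
print(mask_voxel(-0.3, (-2.0, 0.0, 0.0), wall))  # masked off: 1000000.0
```

In practice the `mask` callable would be the Cellular, Perlin, or Darktree map sampled in world, object, or normalized-voxel UVW space, per the placement options discussed above.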

I think that’s what we would need to do. Maybe it’s not so bad in practice?

Well, you would need to do particle sampling in the ShadeContext, since you can’t average the normals in the voxel array. That would probably be a very slow map. Alternatively you could cache the normal sampling per face (as you do the sampling to make the mesh, not at render time) to a giant bitmap and assign mapping to the vertices to do the lookup.

Hmm… Now that I think about it, it’s really just the difference between per-vertex data and texture maps. Beyond normal mapping, it would give you color or any other data on a higher sampling rate than the mesh itself. So you get smaller meshes but good render detail/sampling. When you think about it that way, it doesn’t sound so special-case.

But like I said, maybe generating the level set and mesh at absurdly high spatial density isn’t so bad in practice either.

Yes, that seems very reasonable. Do you envision a modifier that points at a Frost node to grab its vertex channel data?

No, not to get the actual vertex data. We don’t need Frost to actually make a mesh at all, just do the sampling. The Frost node could store the list of source data objects and other settings, though. Or you could just build all of that into the modifier. The idea is that the modifier would do all the same things that Frost does post-meshing with particle sampling to vertex colors/channels.

There’s a bit of a complication for arbitrary meshes, because a vertex could be outside all of the particle spheres. Currently we handle this by setting the vertex’s channel value to zero, but I think we would need to add other options.

Or you could leave the data unchanged for those vertices.

December 15, 2010 | Chad Capeland

We don’t get any progress for the meshing itself. I can see the progress for loading the particles, but the actual meshing gives no feedback. I can see the massive memory deltas and one CPU core being pegged, but other than that I have no idea what’s going on.

When using a PRT Loader, the viewport PRTs are not used. Maybe those should be used for the viewport Frost mesh, and the render PRTs for the render Frost mesh.

When directly loading PRTs, having some sort of PRT Loader-style viewport count, or an every-Nth option, would be helpful. We can change the voxel size for viewport/render but not the particle inputs.

I don’t like the spinner for the Res. I personally think it should be “the voxel size will be this big” as opposed to the reciprocal, “there should be this many voxels in each unit of space.”

Regarding the Frost modifier idea, the Vert Refine would be very cool to have in that, too.

December 15, 2010 | Chad Capeland

We don’t get any progress for the meshing itself. I can see the progress for loading the particles, but the actual meshing gives no feedback. I can see the massive memory deltas and one CPU core being pegged, but other than that I have no idea what’s going on.

Just to make sure we are on the same page: I assume you’re talking about the “Frost: x % completed. Press [Esc] to cancel.” that appears in the status panel.

Did you notice what percentages appear? I believe the meshing takes place from 5 to 95 %. After 95 % we move the mesh into 3ds Max. Does it take a long time on a specific percentage? That will help us track down the problem.

When directly loading PRTs, having some sort of PRT Loader-style viewport count, or an every-Nth option, would be helpful. We can change the voxel size for viewport/render but not the particle inputs.

Sure thing. Would a “Viewport % of Particles” control suffice?

I don’t like the spinner for the Res. I personally think it should be “the voxel size will be this big” as opposed to the reciprocal, “there should be this many voxels in each unit of space.”

I don’t really like it either… I think the advantage of the current system is that it’s easy to choose reasonable values. The disadvantages are that it’s difficult to control if the particle sizes change over time, it’s difficult to choose exactly what you want, and it may be unintuitive.

I’d like to hear other people’s thoughts on this.

Regarding the Frost modifier idea, the Vert Refine would be very cool to have in that, too.

That would move the verts toward the surface?

December 15, 2010 | Paul

We don’t get any progress for the meshing itself.

Are you talking about the rendering progress?

I don’t like the spinner for the Res. I personally think it should be “the voxel size will be this big” as opposed to the reciprocal, “there should be this many voxels in each unit of space.”

While I am not in love with it either, it does not do that. It is RELATIVE to the largest particle size and defines the number of samples to perform. Having an absolute voxel size value as you requested would be a great OPTION, IMHO.

December 15, 2010 | bobo petrov

No, haven’t gotten to rendering yet. Just viewport. I will check the progress more closely tomorrow.

So the resolution is relative to particle size? Ah. That makes it independent of the scene scale. Ok. Then yeah, it makes sense but the option for absolute size would be good.

Viewport % would be fine, but you need “first N” as well as “every Nth”. First lets me check density; Nth lets me check coverage.

Yes, having the vertices move to the surface in a modifier would be great. You could edit the mesh like with relax or vertex weld or optimize and then have the resulting vertices snap back.

December 15, 2010 | Chad Capeland

Would be neat if the SignedDistance could be written back to the particles. Guess that could be done via KCM if we could read in the voxel array as a texturemap.

December 16, 2010 | Chad Capeland

Regarding the long wait between the % completed and seeing a mesh, it seems to be related to the geometry being passed to Max. If I’m in Tetrahedra mode, I end up making several tens of millions of tetrahedra, and it’s super slow with no progress indicated. But doing Union of Spheres, I end up with a much smaller mesh, and the speed is better. There’s still a small period between the progress finishing and the mesh showing up, but it’s small enough that it’s easy to miss.

December 16, 2010 | Chad Capeland

Would be neat if the SignedDistance could be written back to the particles. Guess that could be done via KCM if we could read in the voxel array as a texturemap.

I agree that would be nice. Unfortunately we only have good distance data near the mesh surface, and currently we can’t efficiently get it anywhere else. For this same reason, I’m reluctant to expose the internal “painted voxel” field.

Regarding the long wait between the % completed and seeing a mesh, it seems to be related to the geometry being passed to Max. If I’m in Tetrahedra mode, I end up making several tens of millions of tetrahedra, and it’s super slow with no progress indicated. But doing Union of Spheres, I end up with a much smaller mesh, and the speed is better. There’s still a small period between the progress finishing and the mesh showing up, but it’s small enough that it’s easy to miss.

Thanks for checking. We’ve seen similar delays after the meshing is done, but I’m not sure how to improve it (aside from using a smaller mesh).

December 16, 2010 | Paul

Actually, Tetrahedron might not be the best default. It’s very slow except with very low particle counts. Where it crosses Union of Spheres on the speed vs count vs volume, I don’t know, but I suspect Union of Spheres would win for most production cases. Opinions?

WooHoo! Page 2!

December 16, 2010 | Chad Capeland

Frost operates with some very distinct steps… Getting the particles, painting the voxels, filtering, meshing, and refining the mesh. Maybe some others I don’t know about.

But just tweaking the vertex refine iterations is causing the PRT Loader to read in the particles off the network again. Really annoying. Can we cache that somehow?

December 16, 2010 | Chad Capeland

Actually, Tetrahedron might not be the best default. It’s very slow except with very low particle counts.

This really depends on the POV. You are in a very specific position, pushing the limits of Frost with millions of particles in impossible conditions. It does not mean that Joe Regular User will do the same. So the factory default of showing tetras instead of any form of iso-surfaces kind of makes sense in a large number of everyday cases where Frost will simply replace the BlobMesher / pWrapper.

That being said, Frost features exactly the same set of Presets options as the Krakatoa PRT Loader, so you can simply switch it to Union Of Spheres, save a Preset with your favorite settings (or a subset thereof) and then select that preset and hit the Default button. Each newly created Frost object on that machine will use your custom defaults, so everybody can live happily ever after :smiley:

December 16, 2010 | bobo petrov

I’m testing it using Krakatoa PRTs, yes. The particle counts are probably not comparable. I’m thinking of the use case where someone would want a viewport mesh proxy of a PRT Loader, and currently that’s not very good with tetrahedra. Maybe if it was using the viewport counts from the PRT Loader it wouldn’t be so painful.

December 16, 2010 | Chad Capeland

Actually, Tetrahedron might not be the best default. It’s very slow except with very low particle counts. Where it crosses Union of Spheres on the speed vs count vs volume, I don’t know, but I suspect Union of Spheres would win for most production cases. Opinions?

Yes, probably. I imagine the most common case is a volume filled with touching particles, where union of spheres will win. The problem is if the particles are disconnected, either because of the particle distribution or because the radius is set too low – tetrahedra should be better in this worst case. Until a month ago, the default was 20-sided spheres instead of tetrahedra, so I probably see them in a much more sympathetic light.

It looks like the code for tetrahedra could use some improvements. What kind of speed difference are you getting?

But just tweaking the vertex refine iterations is causing the PRT Loader to read in the particles off the network again. Really annoying. Can we cache that somehow?

For sure. We will fix this.

Maybe if it was using the viewport counts from the PRT Loader it wouldn’t be so painful.

Something I like about the current system is that I can turn off the PRT Loader’s viewport display while Frost continues to get particles.

The next build of Frost will have a “% of particles” control, like the PRT Loader. I hope this will help.

December 16, 2010 | Paul

Yes, probably. I imagine the most common case is a volume filled with touching particles

Right. What’s the use case for Frost when the particles are sparse?

Something I like about the current system is that I can turn off the PRT Loader’s viewport display while Frost continues to get particles.

I’m not sure I follow. How do you know if the mesh is fitting the particles if you can’t see them?

Something that might also be handy is having PRT Loader-style v/r checkboxes for the various objects. So you could have two PRT Loaders, one that is a proxy of the other, and only evaluate one in the viewport and the other at render time. Just a possible idea.

December 17, 2010 | Chad Capeland

Here are some interesting benchmark results regarding the whole Tetra vs. Union Of Spheres defaults discussion:

I used a simple synthetic test:
  • Create a Standard PFlow,
  • Invert the Speed, set Divergence to 30 degrees,
  • Emit a given amount of particles from frame 0 to frame 30,
  • Convert to Frost using default settings (Tetrahedron) and compare with Union Of Spheres.

In my first run, I created 100K particles over the 30 frames.
The results were as follows:
  • Tetrahedron: 15.127 sec.
  • UoS Radius 1: 94.836 sec.
  • UoS Radius 5: 7.998 sec.
  • UoS Radius 10: 5.805 sec.

In this simple case, Tetra beats UoS at default settings (Radius 1), but is 2 to 3 times slower compared to UoS with Radii 5 and 10.

Then I increased the particle count to 1MP:

  • Tetra: 152.801 sec.
  • UoS Radius 1: 373.225 sec.
  • UoS Radius 5: 59.958 sec.
  • UoS Radius 10: 52.135 sec.

Again notice that while Tetra is still over 2 times faster than UoS at default Radius 1, it is up to 3 times slower than UoS at higher Radii.
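Double-checking the ratios quoted above against the 1MP timings (a quick back-of-the-envelope computation, nothing more):

```python
# Timings in seconds, copied from the 1MP run above.
timings = {"Tetra": 152.801, "UoS_R1": 373.225, "UoS_R10": 52.135}

tetra_speedup_at_r1 = timings["UoS_R1"] / timings["Tetra"]     # Tetra vs UoS Radius 1
tetra_slowdown_at_r10 = timings["Tetra"] / timings["UoS_R10"]  # Tetra vs UoS Radius 10

print(round(tetra_speedup_at_r1, 2), round(tetra_slowdown_at_r10, 2))  # 2.44 2.93
```

So Tetra is about 2.4 times faster than UoS at Radius 1 and about 2.9 times slower at Radius 10, matching the “over 2 times faster / up to 3 times slower” summary.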

For comparison, MetaBalls with Radius 10 took 83.908 sec.

Then I reduced the Speed of the PFlow from 300.0 to 30.0 and tested UoS with Radius 1 again - it took 47.346 sec. because the particles were clumped closer together, producing less surface to mesh.

As you can see, in a relatively typical 1MP case, having Tetrahedron as default makes sense as long as the default Radius is 1.0.
So I question whether a default radius of 1.0 makes sense - changing the factory defaults to “Union Of Spheres” and Radius 5 would produce faster previews from typical particle systems - note that the default particle size in a Standard Flow is 10 units (Radius of 5!).

The reason we added Sphere and then Tetra preview was that loading a fluid simulation from Flood:Spray, RealFlow or other similar sources typically resulted in unconnected droplets either due to initial scale settings or too low Radius settings. Despite being a relatively advanced user, I always forgot to tweak the settings BEFORE picking source objects.

Our goal here is to make the initial creation of a Frost mesh as painless as possible for new and even advanced users, and if they had to wait half a minute before being able to tweak anything, it would hurt the overall impression.

Food for thought…

December 18, 2010 | bobo petrov

Seems like the small default radii would also be compounding the issue with the relative meshing resolution. Tinier blobs make more surface and make for denser meshing. A double whammy.

Crazy idea (but I have used something like it with our in-house voxel mesher)… Could you display the sampling data on a plane? Maybe 3 planes in XY, XZ, YZ, intersecting at the midpoint of the bounding box, that showed the voxel data as 2D textures formed by the intersection of the planes with the volume? If not 2D textures, perhaps vertex colors on dense meshes? In the case of the latter, you’d get a clear indication of the meshing resolution too. But the planes would clearly show the particle density, radius, blending, sample rate, etc.

Back to the subsampling… If you implemented a Nth % density, you could scale the radius to match. So if you loaded 10% of the particles, you would multiply the radius by ~2.154 to compensate. Fewer particles to sample and potentially less surface area to mesh (assuming they would have formed a connected volume initially). But if you have the current setup where the resolution of the meshing is based on the radius, then you’d have fewer faces in the end anyway.
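The ~2.154 factor Chad mentions is just the cube root of the count ratio: keeping a fraction f of the particles and scaling each radius by (1/f)^(1/3) preserves the total sphere volume. A minimal sketch of that compensation (the function name is made up for the example):

```python
def compensated_radius(radius, fraction_kept):
    """Scale the radius so total sphere volume survives particle culling."""
    # N spheres of radius r have the same total volume as N*f spheres of
    # radius r * (1/f)^(1/3), since sphere volume scales with r^3.
    return radius * (1.0 / fraction_kept) ** (1.0 / 3.0)

# Keeping 10% of the particles multiplies the radius by ~2.154.
print(round(compensated_radius(1.0, 0.10), 3))  # 2.154
```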

Just noticed that Frost calculates when hidden and no dependent objects are visible. Expected?

December 20, 2010 | Chad Capeland

Right. What’s the use case for Frost when the particles are sparse?

I’m thinking of things like isolated drops that are part of a fluid sim, rain streaks, and sheets up to a few particles thick. Most of our internal use was sparse by this standard. I’d guess most external use would be dense but it’s not obvious to me.

I’m not sure I follow. How do you know if the mesh is fitting the particles if you can’t see them?

Turning off the PRT Loader saves some overhead. I’m used to loading files directly using the “Particle Files” rollout, so I don’t miss the viewport points. Obviously you find them useful – I wonder if Frost should provide the option to show its own viewport points?

December 20, 2010 | Paul

I’m thinking of things like isolated drops that are part of a fluid sim, rain streaks, and sheets up to a few particles thick. Most of our internal use was sparse by this standard. I’d guess most external use would be dense but it’s not obvious to me.

Ah, right, since the main mass of fluid is done in level set form already for you. And as Bobo said, some fluid stuff is all spray, not slosh. Yeah, we’ll have to get more users on this to see what the good balance is.

I figured the direct PRT loading was just a time saver (and for folks not using Krakatoa?). We’ll most likely ALWAYS use PRT Loaders, since we would be able to do culling and KCMs that way (even if only to tweak the radius). If you’re getting good data from Surf or whatnot, then yeah, I can see where direct loading would be a nice clicksaver.

Another crazy idea… Viewport culling? Assuming you were meshing only for the active view, you could evaluate only the particles and voxels inside the view frustum. Bad for rendering, good for interactive feedback.

December 20, 2010 | Chad Capeland

Thanks for your benchmarks!

The reason we added Sphere and then Tetra preview was that loading a fluid simulation from Flood:Spray, RealFlow or other similar sources typically resulted in unconnected droplets either due to initial scale settings or too low Radius settings. Despite being a relatively advanced user, I always forgot to tweak the settings BEFORE picking source objects.

This is my biggest frustration with Frost. Before I could add a particle file, I would need to change the meshing mode (from spheres to Zhu/Bridson) and increase the radius to some scene-dependent value. If I missed either of these things, or set the radius too low, I’d need to ctrl-alt-delete and shut down 3ds Max. Definitely not a good initial experience… Changing the default shape from spheres to tetrahedra and adding the presets system fixed this problem for me, but I very much appreciate that it will not work for everyone.

So are both of you in favor of changing the default to Union of Spheres with a larger radius? I see BlobMesh uses 20 by default.

December 20, 2010 | Paul

So are both of you in favor of changing the default to Union of Spheres with a larger radius? I see BlobMesh uses 20 by default.

I think it would be worth trying to see how people feel about it as factory default.
The good thing about Frost meshing is we can hit Esc at any point to stop the process, so it is probably much less of a problem than it was back in PRT Mesher days, where we actually had to kill Max to get out of a wrong setup…

December 20, 2010 | bobo petrov

Is it possible for the progress to indicate which Frost is processing? I have a few in my scene and I can’t tell which one I’m canceling.

December 20, 2010 | Chad Capeland

Sure thing. I also wonder about:

  • if you cancel one Frost object, cancel all others too
  • after you cancel, don’t mesh again until you either “Force Viewport Update” or change the current time

December 20, 2010 | Paul

after you cancel, don’t mesh again until you either “Force Viewport Update” or change the current time

Some Max objects have the options to update Always, Manually or at Render Time only. When you cancel via Esc, they incorrectly switch to Manual instead of Render only. 10 years of complaining has not moved Autodesk to fix that. :wink:

If we go that route, some indication that the Frost is in a “manual” state would be good, possibly in the Info area that might come if I can get my Particle/Mesh/Time stats wish implemented. Or a color icon (like the one for channel detection) that changes color to tell the user what will happen: green for Auto-Update on, yellow for Auto-Update off, and red when Disable Viewport Meshing is checked… It could have a tooltip explaining what the color means…

December 20, 2010 | bobo petrov

Could you display the sampling data on a plane? … But the planes would clearly show the particle density, radius, blending, sample rate, etc.

I like it! This would be a nice help while debugging… I assume you’re imagining something like a heat map, with a colored square drawn for each voxel? I wonder how you visualize the sampling rate – is it from the size of the colored squares?

Back to the subsampling… If you implemented a Nth % density, you could scale the radius to match. So if you loaded 10% of the particles, you would multiply the radius by ~2.154 to compensate. Fewer particles to sample and potentially less surface area to mesh (assuming they would have formed a connected volume initially). But if you have the current setup where the resolution of the meshing is based on the radius, then you’d have fewer faces in the end anyway.

Yes, something like that seems wise, considering how unintuitive the effect of reducing the particle count can be…

Fitting the correct radius for a dense particle set seems to be a common problem. I wonder if we should detect that situation and always suggest a radius somehow?

Just noticed that Frost calculates when hidden and no dependent objects are visible. Expected?

Yes, but it’s not good. I’d like to fix this.

December 21, 2010 | Paul

I was thinking just the normalized signed distance, but if you have RGB, you could map each channel to something different. Not sure what would make sense. Is the voxel density locked to the meshing density?

Calculating the local density might be hard but doing global density, comparing total count vs bounding box volume might give you the guesstimate you need.
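A back-of-the-envelope version of that global-density guesstimate might look like the following. The overlap factor is purely hypothetical (Frost exposes no such heuristic); the point is just that bounding box volume over particle count gives an average spacing to base a suggested radius on.

```python
def suggest_radius(particle_count, bbox_min, bbox_max, overlap=0.62):
    """Guess a starting radius from global particle density."""
    dims = [hi - lo for lo, hi in zip(bbox_min, bbox_max)]
    volume = dims[0] * dims[1] * dims[2]
    # Cube root of volume-per-particle ~ average inter-particle spacing.
    spacing = (volume / max(particle_count, 1)) ** (1.0 / 3.0)
    return overlap * spacing  # overlap > 0.5 so neighboring spheres touch

# 1M particles in a 100x100x100 box: ~1 unit apart, so suggest ~0.62.
r = suggest_radius(1_000_000, (0, 0, 0), (100, 100, 100))
```

It would badly overestimate the radius for clumped distributions (most of the box empty), which is exactly why a local-density estimate would be harder but better.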

When you calculate the signed distance, does the accuracy fall off or do you just stop after a certain distance?

December 21, 2010 | Chad Capeland

How do you set the license path automatically? Chad left a few jobs in the queue over the break but only his machine is rendering because the other machines can’t find the licenses. I can’t find any reg key or file that points to our license server.

December 23, 2010 | Ben Lipman

You should be able to set the license path using the FRANTIC_LICENSE_FILE registry value. The location of this value depends on your operating system:

  • On Windows XP, there is one registry entry per machine:
    HKEY_LOCAL_MACHINE\SOFTWARE\FLEXlm License Manager\FRANTIC_LICENSE_FILE

  • On Windows Vista and Windows 7, there is a registry entry per user:
    HKEY_CURRENT_USER\Software\FLEXlm License Manager\FRANTIC_LICENSE_FILE

Please let us know if this problem continues!

December 23, 2010 | Paul

I’m getting a rinse-and-repeat setup where Frost takes as long to generate as the autosave interval, so after every autosave, Frost reprocesses. Is there any way to prevent Frost from having to do so after a save?

December 29, 2010 | Chad Capeland

As far as I know, this is caused by the save function changing the current time internally in order to save the scene state at frame 0. When the saving finishes, the time gets restored and the scene gets updated. At least this is what Oleg told me about why PFlow does the reprocessing after save. Not sure if anything could be done about it, but I will leave it to Paul to look at…

December 30, 2010 | bobo petrov

When you calculate the signed distance, does the accuracy fall off or do you just stop after a certain distance?

We just stop.

I’m getting a rinse and repeat setup where Frost takes as long to generate as the autosave interval is, and after autosave, Frost reprocesses. Is there any way to prevent Frost from having to do so after a save?

Yikes! We’ll look into this. My experience matches what Bobo described (3ds Max switches to time 0 when it saves).

If you are working with data that you can share freely, I would like to see your scene so we can look for performance problems (perhaps a frame of particle data and your Frost settings?). I assume this is usually impossible, but I’m always interested in new test data.

January 3, 2011 | Paul

In theory, since the autosave timer is exposed to the SDK/MAXScript, Frost could compare its processing time to the Autosave interval and if the last update took longer than, say, 50% of the save interval, call autosave.resettimer() to postpone the next autosave to the number of minutes specified in the user settings.

Not sure if this is a good idea, but it might be…

What could be even better is a custom callback function that one could define as a global in MAXScript, e.g. fn FrostPostUpdateCallback =… which would be called each time Frost finishes updating. This way, ANY custom calls could be implemented by a facility to run things like the autosave postponing call, print stats to a log file or do a number of useful things we cannot even dream of right now… A matching FrostPreUpdateCallback() would also be useful to initialize timers etc.

January 3, 2011 | bobo petrov

How about a “disable during save” option? When enabled, Frost will simply create an empty mesh while the scene is saving. I’m guessing we can almost always get away with this. If it works well during testing then we can polish it later.

I’ll probably do something like Bobo suggested too, because this sounds very frustrating.

January 7, 2011 | Paul

Paul, can you detect when the saving happens (based on the notifications) and cache the current viewport TriMesh so when the scene returns from saving, you would just display the cached one instead of rebuilding it? You could indeed output an empty TriMesh when asked to save - not sure if this should even be an option, it should be the default behavior. There is no explicit geometry saved from Frost to the scene, right?

January 7, 2011 | bobo petrov


“There is no explicit geometry saved from Frost to the scene, right?”
Right.

I thought there would be problems if, for example, you use a Frost mesh as a Particle Flow “Position Object”, but it seems to work fine. For now I’ll return an empty mesh without any option (and cache the current mesh).

January 7, 2011 | Paul