Currently the voxel size control is based on the size of the largest particle in the frame. I’m intending to add an explicit voxel length control, and I’d like to hear your thoughts on it.
To set the voxel length explicitly, you enter the desired voxel size and check “Override voxel size”. The Voxel Size is divided by the Meshing Res control. For example, if the Voxel Size is 2 and the Meshing Res is 1, the effective voxel size will be 2 (the same as the Voxel Size control). If the Voxel Size is 4 and the Meshing Res is 2, then the effective voxel size will be 4/2 = 2.
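To spell out the arithmetic, here is a minimal sketch (the parameter names just mirror the UI labels; they aren’t actual SDK identifiers):

```python
def effective_voxel_size(voxel_size, meshing_res):
    """Effective voxel length when "Override voxel size" is checked:
    the entered Voxel Size divided by the Meshing Res control."""
    return voxel_size / meshing_res

# The examples from above:
print(effective_voxel_size(2.0, 1.0))  # 2.0 -- same as the Voxel Size control
print(effective_voxel_size(4.0, 2.0))  # 4.0 / 2.0 = 2.0
```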
So, much like PRT Volume? Based on what I just learned from a conversation with Bobo the other day, I think this would be a great feature. As I gather it, creating more voxels requires more memory, while subdividing requires much less. Would that be the case here as well?
Also, I think explicit control over the voxel size would give you more consistent control over the look, or does that not come into play? I think I understand where you are going with this: when the largest particle in the frame becomes the smallest particle on the next frame, the Randomize Radius result will stay consistent.
Actually, with the PRT Volume method you set “Spacing”, so the viewport value is typically higher than the render one. The same scale is used for Voxel Rendering, where you set the Size, not the Resolution.
And since Krakatoa is already released, I could argue that, for consistency, that approach should be used. Though I may also have been the one arguing for how Krakatoa handles this; I forget. Anyway, you should be able to use the same numbers in PRT Volume as in Frost, I think. I don’t think they should have different scales for measuring the same thing (sampling voxel size/density).
Besides, you are using Radius as the size of the particle in Frost, not Grain (or whatever the reciprocal of Radius would be called).
So someone could think “I’ve got a PRT Loader occupying a 100x300x1000 volume” or “My particles are set to be 2 units in size”, and convert that to Frost scaling by saying “I want faces in my mesh that are about 2 units square” or “I want each voxel face to contain 5 particles on average”, without having to do any weird mental conversions.
So in this example, just looking at the left side of the pictures (and not the spinners), I would expect that if the Voxel Size was 1, then the Viewport was either .5 or .125 and the Render was either .25 or .015625. But if I think back to how PRT Volume works, I’d then just assume it was .5 and .25.
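Just to spell out the two readings I’m comparing (purely illustrative; I’m assuming spinner values of 2 for viewport and 4 for render, since those would reproduce the numbers above):

```python
# Two possible readings of a subdivision value s applied to a base Voxel Size.
def spacing_linear(voxel_size, s):
    # s divides the voxel edge length directly
    return voxel_size / s

def spacing_cubed(voxel_size, s):
    # s applied per axis and compounded, i.e. divided by s cubed
    return voxel_size / s ** 3

for s in (2, 4):
    print(s, spacing_linear(1.0, s), spacing_cubed(1.0, s))
# 2 -> 0.5  or 0.125
# 4 -> 0.25 or 0.015625
```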
Is the sampling array uniform? Is the data compressed in any way in the caching? Is the intention to always have the meshing voxels and the sampling voxels be the same size?
Is the sampling array uniform? Yes (except for the vertex refinement passes).
Is the data compressed in any way in the caching? No.
Is the intention to always have the meshing voxels and the sampling voxels be the same size? I think I would like to decouple them.
Feel free to break this off into a new thread if need be.
So a large volume with uneven density distribution will result in large regions of “zeroed” voxels? So without some sort of octree or something, we should consider monitoring our memory and potentially breaking our particles up into chunks? But I guess there’s no way to weld the edges of sampling volumes currently…
Actually, the only thing cached seems to be the particles themselves. Kernel splatting and meshing (what are the correct Frost terms?) are always done together.
Seems like you might want to be able to tweak the density of the mesh without affecting the surface itself. For viewport/proxy purposes, or for LOD based on occlusion/distance/speed/whatever.
New questions… Currently the particle data that affects the mesh is limited to Position and Radius, right? Nothing else? Velocity doesn’t affect the surface; it’s just passed along to the renderer.
We mentioned having a scale/normal vector for ellipsoidal or superellipsoidal particles, but what about density? Should a particle with a very high density influence the surface as much as one with a low density? Like if, instead of painting spheres of 1.0 into the voxel array, you painted spheres of 0.1 and thresholded the result for the surface? As a bonus, we could use negative density to have particles subtract from the surface. Think particle level sets or metaball modeling.
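Roughly what I’m picturing, as a toy sketch (not Frost’s actual splatting code; the array handling and names are made up for illustration):

```python
import numpy as np

def splat_particles(positions, radii, densities, grid_shape, voxel_size, threshold=1.0):
    """Toy density splat: each particle paints its 'density' value (instead of
    1.0) into the voxel array, and the surface is wherever the accumulated
    value crosses the threshold. Negative densities carve material away."""
    grid = np.zeros(grid_shape, dtype=np.float32)
    # world-space coordinates of the voxel centers, shape (3, nx, ny, nz)
    centers = (np.indices(grid_shape).astype(np.float32) + 0.5) * voxel_size
    for p, r, d in zip(positions, radii, densities):
        dist2 = sum((centers[a] - p[a]) ** 2 for a in range(3))
        inside = dist2 <= r * r
        grid[inside] += d          # 0.1 adds weakly, -1.0 subtracts
    return grid >= threshold       # boolean "inside the surface" mask
```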
I tried using a KCM to set the radius based on a texture map, but since the Frost surface was only sampling at the particle positions, and not interpolating the map between them, I couldn’t have a map influence the meshing (except by having a particle spatial density higher than the Frost voxel density). Blobmesh2 has a means of inserting a map for the meshing. Would it be possible for Frost to read in a map, sampling at the positions of the voxels? Would it be possible to have Frost expose its voxel map to other texture maps? Even if, for now, it only worked at meshing time (meaning the Frost voxel array was flushed after meshing, so it wouldn’t be available for rendering). It would be excellent if Frost could cache the voxel array for the render (it’s not THAT much memory, especially on systems used to running Krakatoa). You’d get wet maps that way, as well as the ability to have KCMs do inter-particle-system sampling.
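A sketch of the distinction I mean, with a stand-in procedural map (none of this is real Frost/3ds Max API; sample_map is just a placeholder):

```python
import numpy as np

def sample_map(point):
    """Stand-in for evaluating a 3D texture map at a world-space point."""
    x, y, z = point
    return 0.5 + 0.5 * np.sin(0.1 * x) * np.cos(0.1 * y)

# Sampling only at particle positions: the map can only modulate per-particle
# values (e.g. Radius), so any detail finer than the particle spacing is lost.
def radii_from_map(particle_positions, base_radius):
    return [base_radius * sample_map(p) for p in particle_positions]

# Sampling at voxel centers instead: the map modulates the density field
# itself, at the resolution of the meshing voxels.
def modulate_density(grid, voxel_size):
    for index in np.ndindex(grid.shape):
        center = tuple((i + 0.5) * voxel_size for i in index)
        grid[index] *= sample_map(center)
    return grid
```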
A side effect of this: modifiers like Relax, which are based on vertex/edge lengths, will be more consistent when changing radius parameters. Right now I’m hopping up and down the stack to compensate.
Not your fault: why can’t 3ds Max let you pin multiple modifier panels at a time, like Fusion? Annoying.
Sorry, I misread your question. When I said the sampling array is uniform, I meant that we use a constant voxel length. We use a limited amount of memory to store density samples, and I think the maximum is under 100 MB. As far as I know the mesh is pretty much always the limiting factor. (Please let us know if you need to break up your particles! That would indicate a problem in Frost.)
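For a rough sense of scale, back-of-the-envelope only (this is not our actual allocation strategy):

```python
def density_grid_memory_mb(nx, ny, nz, bytes_per_sample=4):
    """Memory for a uniform grid of float32 density samples."""
    return nx * ny * nz * bytes_per_sample / (1024 ** 2)

# A 256^3 block of float samples is about 64 MB, which is in the
# ballpark of the "under 100 MB" cap mentioned above.
print(density_grid_memory_mb(256, 256, 256))  # 64.0
```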
Correct, we don’t keep a density “cache” after the mesh is built. The terms “splatting” and “meshing” are fine. (Internally I think we call splatting “density sampling”.)
I think it would only apply to metaballs, but yes, this seems reasonable. I’ll add it to the wish list. I wonder if negative radii should similarly remove from the surface?
We’ve got this on our wish list. I’ll add this to our notes.
I guess I was thinking that the particles themselves would use density as a sort of weighting factor in the other modes. In Zhu-Bridson and Anisotropic at least, you would weight lower density particles less so that they would affect the other particles less in the filtering. So you would weight by density and distance, not just distance.
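Something like this toy weighted filter is what I mean (a generic falloff kernel, not the actual Zhu-Bridson implementation; the names are illustrative):

```python
import numpy as np

def kernel(d, h):
    """Simple smooth falloff with support radius h."""
    x = np.clip(1.0 - (d / h) ** 2, 0.0, None)
    return x ** 3

def filtered_position(query, positions, densities, h):
    """Weight neighbors by distance *and* density, so a low-density particle
    pulls the filtered result around less than a high-density one."""
    dists = np.linalg.norm(positions - query, axis=1)   # positions: (N, 3)
    w = kernel(dists, h) * densities                    # distance x density weight
    if w.sum() == 0.0:
        return query
    return (w[:, None] * positions).sum(axis=0) / w.sum()
```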
The negative radii idea sounds interesting. Does it “just work” in the various methods? I wouldn’t expect it to do the same thing as density weighting, but it might be interesting anyway.
Actually there is an exception to this: zero-radius particles affect the position smoothing in Anisotropic mode. I should probably change that for the sake of consistency.
Interesting… Gets into what I was talking about with the density/mass weighting. Like maybe you don’t want a particle to splat (so you set it to zero radius) but you do want it to affect the particle smoothing. Having only the two inputs (position and radius) can be a bit limiting.
It wasn’t obvious to me at first glance what they do, even though I do know what they do.
I would suggest “Relative To Max. Radius” and “Absolute Spacing” (the words “Relative” and “Absolute” help me “feel” the difference). Or something along those lines.
Just a suggestion. I like the controls layout though…
I agree; internally my brain will translate it to relative/absolute or adaptive/fixed, so I’d use either of those pairs of terms. Actually, the dual usage of “Subdivision” and “Spacing” seems odd. Is the Subdivision actually a multiplier on the number of divisions, as opposed to a relative spacing multiplier? I forget how the current system works.
Vertex Refinement isn’t an integer anymore? How do we set the iterations?
We were thinking that setting the number may not be useful – that people will tend to either crank it up or leave it at zero. With a checkbox it either iterates to convergence or not at all.
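In other words, something along these lines (just a sketch of the idea, not the actual refinement code; refine_step is a placeholder for one refinement pass):

```python
def refine_to_convergence(vertices, refine_step, tol=1e-4, max_iters=50):
    """Run refinement passes until the largest vertex displacement drops
    below tol, with a safety cap instead of a user-facing iteration count."""
    for _ in range(max_iters):
        vertices, max_move = refine_step(vertices)
        if max_move < tol:
            break
    return vertices
```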
It was just a speed thing. Like 10 iterations was taking a long time, while 2 wasn’t. But maybe the time difference isn’t too bad, especially if convergence happens at a fairly low iteration count.
EDIT: Yeah, it’s not that much slower. Checkbox is probably fine.
EDIT 2: Whoops. In my current setup, changing from 2 iterations to 20 caused a 20% higher update time. Seems like a pretty big hit.