Well, I know there's like a 2% chance of this happening, but I thought I would ask anyway…
1: Particle sum radius craps out on a scene with 16.5 million particles. It would be great if it could handle more.
2: Please allow a variable radius. It would be SO useful to be able to plug something into this parameter.
We've got some really nice-looking blobmesh processing going on, but not having this control is hurting us on a current job.
Thanks.
P.S. I will buy you guys all puppies if you can add this feature.
Is this sampled from a grid? That is, splat the particles to voxels and then do a weighted sum of the voxels in a region around the sample point? Or is it actually sampling all of the points?
Craps out with an error, or is it just too slow to work? It's using a KD-Tree underneath, which is somewhat of a memory hog. I'm not sure there's much to be done about it without a serious rethinking of the data structure. Could you get the same or a similar effect by just using a random sampling of the heavy particle set? There are also tools in Stoke/Ember that let you splat particle data to a grid, which is another option to consider depending on your needs.
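To make the random-sampling suggestion concrete, here is a minimal sketch in Python/NumPy/SciPy rather than in Magma itself; the channel names, the 10% keep fraction, and the rescale are illustrative assumptions, not how the tool works internally. The idea is to build the KD-Tree on a random subset of the heavy particle set and scale the radius sums back up:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Stand-ins for the real data: positions plus one scalar channel to sum (e.g. Density).
positions = rng.uniform(-10.0, 10.0, size=(1_000_000, 3))
density = rng.uniform(0.0, 1.0, size=1_000_000)

keep_fraction = 0.1                                # illustrative: keep ~10% of the particles
mask = rng.random(len(positions)) < keep_fraction
tree = cKDTree(positions[mask])                    # much smaller tree than the full set
kept_density = density[mask]

def sum_radius(query_points, radius):
    """Approximate 'sum the channel over neighbours within radius',
    rescaled to compensate for the subsampling."""
    out = np.empty(len(query_points))
    for i, idx in enumerate(tree.query_ball_point(query_points, radius)):
        out[i] = kept_density[idx].sum() / keep_fraction
    return out

queries = rng.uniform(-10.0, 10.0, size=(5, 3))
print(sum_radius(queries, radius=1.5))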
For a lot of cases, splatting to a grid would get you "enough" precision.
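For what it's worth, here's a rough sketch of the grid idea in plain NumPy (not Stoke/Ember's actual splatting): trilinearly splat the particle channel onto a voxel grid once, then a "sum within radius" at any sample point just sums the voxels whose centers fall inside the radius. The grid origin, voxel size, and dimensions are made up for the example.

```python
import numpy as np

def splat_to_grid(positions, values, origin, voxel_size, dims):
    """Trilinearly splat per-particle values onto a voxel grid."""
    grid = np.zeros(dims)
    p = (positions - origin) / voxel_size                     # continuous voxel coordinates
    base = np.clip(np.floor(p).astype(int), 0, np.array(dims) - 2)
    frac = p - base
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((frac[:, 0] if dx else 1.0 - frac[:, 0]) *
                     (frac[:, 1] if dy else 1.0 - frac[:, 1]) *
                     (frac[:, 2] if dz else 1.0 - frac[:, 2]))
                np.add.at(grid, (base[:, 0] + dx, base[:, 1] + dy, base[:, 2] + dz), w * values)
    return grid

def sum_radius_from_grid(grid, origin, voxel_size, query, radius):
    """Sum the splatted values over all voxels whose centers lie within
    `radius` of the query point -- the gridded stand-in for a particle sum."""
    idx = np.stack(np.meshgrid(*[np.arange(n) for n in grid.shape], indexing="ij"), axis=-1)
    centers = origin + (idx + 0.5) * voxel_size
    dist2 = np.sum((centers - np.asarray(query)) ** 2, axis=-1)
    return grid[dist2 <= radius * radius].sum()

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(100_000, 3))        # stand-in particle positions
val = rng.uniform(0.0, 1.0, size=100_000)              # stand-in channel, e.g. Density
grid = splat_to_grid(pos, val, origin=np.zeros(3), voxel_size=0.5, dims=(21, 21, 21))
print(sum_radius_from_grid(grid, np.zeros(3), 0.5, query=(5.0, 5.0, 5.0), radius=1.5))
```

The precision then comes down to the voxel size: fine enough voxels get you close to the per-particle answer, and memory scales with the grid rather than with 16.5 million points.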
I sort of like the random approach, but wouldn't you really need to do it based on IDs in order to prevent (obvious) flickering? In that case, maybe just add a filter channel to the input, like "Selection" or something, so we could do a modulo on the ID and set the Selection channel based on that?
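Purely to illustrate the ID-modulo idea (the "Selection" channel here is just a NumPy array, and `keep_every_n` is a made-up parameter, not an actual Magma input): because the kept set depends only on the persistent IDs, it is identical from frame to frame, so no flicker.

```python
import numpy as np

def selection_from_id(particle_ids, keep_every_n):
    """Set Selection = 1.0 where ID % keep_every_n == 0, else 0.0."""
    particle_ids = np.asarray(particle_ids)
    return (particle_ids % keep_every_n == 0).astype(float)

ids = np.arange(1_000_000)                              # persistent per-particle IDs
selection = selection_from_id(ids, keep_every_n=10)     # keep roughly 10% each frame
kept = ids[selection > 0.5]
print(len(kept), "particles kept, identical on every frame")
```

One caveat: a straight modulo can bias the result if IDs correlate with emission order; hashing the ID before the modulo would randomize which particles get kept while still staying stable across frames.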