NParticlesAverage

Is there an efficiency gain for doing N closest particles average instead of the average in a certain sized spatial neighborhood? I’m thinking if there is a large difference in spatial density, you would get weird results. Like if you are doing the 8 closest particles, but you’ve got a solitary cluster of 5 points way over there, all of those 5 will sample 3 more particles from far away. Even if you weight those 3 less, it seems like a waste. But maybe it’s just faster that way?
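Just to make the comparison concrete, here's a rough numpy/scipy sketch of the two sampling strategies (my own toy illustration with made-up data, not anything from Krakatoa): N-closest always returns exactly k samples no matter how far it has to reach, while a radius query returns whatever happens to fall inside the sphere.

```python
# Toy comparison of N-closest vs. fixed-radius averaging with a KD-tree.
# Positions and the "density" channel are random; this is not the KCM's internals.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
positions = rng.uniform(0.0, 10.0, size=(10_000, 3))  # particle positions
density = rng.uniform(0.0, 1.0, size=10_000)          # a scalar channel to average

tree = cKDTree(positions)
query_point = np.array([5.0, 5.0, 5.0])

# N-closest: always exactly k samples, however far away they are.
dists, idx = tree.query(query_point, k=8)
avg_knn = density[idx].mean()

# Fixed radius: whatever falls inside the sphere (possibly zero or thousands).
idx_ball = tree.query_ball_point(query_point, r=0.5)
avg_ball = density[idx_ball].mean() if idx_ball else 0.0

print(avg_knn, avg_ball, len(idx_ball))
```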

It would also be cool if we could get operations other than average. Minimum, Maximum, Sum, Median, etc. Of course, Gradient, Divergence, and Curl would also be cool. :slight_smile:

From the release notes, it would seem that the search results are cached? Meaning that if we wanted to sample many values from the particles, it wouldn’t be re-doing the search each time?

  • Chad

Ack. Yeah, N-closest is really hard to use with typical Krakatoa scenes, since partitioning (and viewport proxying) assume that particle weighting is based on spatial density. So when you have 15,000 particles in the view but render 150,000,000 particles, then need to double the density to 300,000,000 to fill in some of the graininess, you'll get three VERY different results from sampling N-closest. Sampling the average in a spherical or conical region would yield predictable results, though.
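To put a rough number on that, here's a toy sketch (uniform random points in a box, nothing like a real Krakatoa scene) showing how the distance to the 8th-closest particle shrinks as the count goes up, so the same N samples an ever smaller region:

```python
# Why N-closest depends on particle count: the distance to the k-th neighbor
# shrinks roughly like (k/n)^(1/3), so the sampled region changes between the
# viewport proxy and the render counts. Toy counts here, not 150M particles.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
center = np.array([0.5, 0.5, 0.5])

for n in (15_000, 150_000, 300_000):       # stand-ins for proxy vs. render counts
    pts = rng.uniform(0.0, 1.0, size=(n, 3))
    d, _ = cKDTree(pts).query(center, k=8)
    print(n, d[-1])                        # distance to the 8th-closest particle
```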

  • Chad

I agree, this should be in the final product.

The outputs from any single node are cached, which means you can re-use the queried position as many times as you want. It also means the node does the search once and then computes the average for each channel specified in the list, so it's not searching the particles multiple times.
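In other words, something along these lines (a sketch of the "search once, average many channels" behaviour; the channel names and data are made up, and this is not the node's actual implementation):

```python
# One KD-tree query, reused for every requested channel instead of re-searching.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
n = 50_000
positions = rng.uniform(0.0, 10.0, size=(n, 3))
channels = {
    "Density":  rng.uniform(0.0, 1.0, size=n),
    "Color":    rng.uniform(0.0, 1.0, size=(n, 3)),
    "Velocity": rng.normal(0.0, 1.0, size=(n, 3)),
}

tree = cKDTree(positions)
_, idx = tree.query(np.array([5.0, 5.0, 5.0]), k=8)   # one search...

# ...reused for every channel in the list.
averages = {name: data[idx].mean(axis=0) for name, data in channels.items()}
print(averages)
```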

This isn’t too much harder to do; it just requires potentially unbounded space to process. I foresee complaints about it locking up Max with a particularly bad choice of radius or particle distribution.

Hmmm… While sampling a predictably sized neighborhood is very important to the stuff I’ve done with voxels, I could see a compromise where you sample N particles but return, in addition to the average, the distance to the Nth particle, so we can evaluate not just the sum of the values but also the sum divided by the volume.
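Something like this sketch (my own illustration with a made-up "mass" channel and a simple enclosing-sphere volume, not a proposal for the node's exact math):

```python
# Take the N closest particles, but also report the distance to the Nth one so
# the caller can turn the sum into a per-volume quantity.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
positions = rng.uniform(0.0, 10.0, size=(20_000, 3))
mass = rng.uniform(0.0, 1.0, size=20_000)

tree = cKDTree(positions)
dists, idx = tree.query(np.array([5.0, 5.0, 5.0]), k=8)

total = mass[idx].sum()
r_n = dists[-1]                              # distance to the Nth particle
volume = (4.0 / 3.0) * np.pi * r_n ** 3      # sphere that just encloses them
print(total / volume)                        # sum divided by the sampled volume
```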

Also, it might be efficient to sample at most N particles in volume V, thereby preventing runaway particle counts. So you want to know the average value inside a sphere of radius R, but you only need to sample N particles max, because you don’t need a lot of accuracy, and the values being sampled have low variance. The N particles would need to be randomly selected, though, not just the N closest.
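Roughly along these lines (again just an illustration with random data, assuming a plain radius search followed by random subsampling):

```python
# "At most N particles inside radius R": do the radius search, then randomly
# subsample if too many came back, so the cost stays bounded.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
positions = rng.uniform(0.0, 10.0, size=(200_000, 3))
density = rng.uniform(0.0, 1.0, size=200_000)

tree = cKDTree(positions)
idx = np.array(tree.query_ball_point(np.array([5.0, 5.0, 5.0]), r=1.0))

N = 8
if idx.size > N:
    idx = rng.choice(idx, size=N, replace=False)  # random pick, not the N closest

avg = density[idx].mean() if idx.size else 0.0
print(avg, idx.size)
```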

Sorry, this sampling stuff is all new to me in the context of a KCM. I’m trying to figure out what’s good enough for interactive work with huge datasets, saving us from having to process with something like Box#3 or TP.

  • Chad

I agree. This should be an output provided by the node.

An interesting idea. I’ll likely add it as a preferred option over the straight-up volume approach to see if it’s usable. I do plan on adding the purely volume-based approach too, though.
