Any number raised to the power of 0.0 is 1.0.
So (1.0+distance) raised to a falloffPower of 0.0 will also be 1.0 regardless of the distance, and 1.0/1.0 is always 1.0.
So the “Option” is enabled when falloffPower != 0.
When falloffPower is 0.0, every particle’s data will be summed with Weight of 1.0.
When falloffPower is 1.0, the Weight will be 1.0/(1.0+distance) = linear falloff.
falloffPower values above 1.0 produce a faster falloff, while values between 0.0 and 1.0 produce a slower one.
At the end, you have to divide the accumulated value by the TotalWeight to get the average value.
Of course, if falloffPower is 0.0, the sum will be unweighted. In that case the channel output of the node will contain the real sum, and TotalWeight will contain a floating point number equal to the NumParticles output, so dividing one by the other will produce the unweighted average value.
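Putting that together, here is a minimal Python sketch of the weighting logic (the names are mine for illustration, not the actual Magma/Krakatoa internals):

```python
# A minimal sketch of the falloff weighting described above.
def weighted_average(values, distances, falloffPower):
    total = 0.0
    totalWeight = 0.0
    for v, d in zip(values, distances):
        w = 1.0 / (1.0 + d) ** falloffPower  # falloffPower == 0.0 gives w == 1.0 for any distance
        total += v * w
        totalWeight += w
    # Dividing the accumulated value by TotalWeight yields the average;
    # with falloffPower == 0.0, totalWeight equals the particle count,
    # so this reduces to the plain unweighted mean.
    return total / totalWeight
```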
At some point in the past, there were some issues with this operator. The weight was incorrect if more than one channel was sampled. But I am quite sure it is working ok in the last few builds.
Btw, build 2.1.6 is coming along well, with fixes for all the problems you reported; it should hopefully be out early next year.
Fantastic, thanks!
As for the new build - I think I have found another bug. To keep things clear, I’ll open a new thread.
In case you did not know, you have made the most powerful pointcloud data processor.
All the serious software out there failed at the task of recoloring a 250 million point cloud.
Krakatoa thrives.
can you say that over and over again when the press is listening? ;-p
in all seriousness, let us know how we can make it better for managing pointcloud datasets. this is a personal interest of mine and a goal of the company - to make great tools to manage massive pointsets [particles, lidar, or otherwise]
I’d suggest focusing on the ‘creative’ side of pointcloud data processing for VFX artists first, since the ‘engineering’ and analysis methods are already out there.
There are a lot of pointcloud processing packages out there, and all of them have serious flaws and shortcomings. The ridiculous thing is that exchanging data between them is very difficult, even though most of the formats are simple ASCII or very simple binary formats. Good in/out compatibility is key, and it can be achieved quite easily.
Off the top of my head (ordered roughly by highest value for lowest effort):
1. ASCII importer
Add a simple but efficient ASCII data parser/importer that covers any and all ASCII pointcloud formats.
To do that, you only need to pull a few lines into a table (the same way you do in a data viewer) and allow mapping columns to various channels (see the sketch after this list).
This can be done in other ways (if you geek out and do a bit of simple scripting), but it would be a great addition.
The current importer is a bit limited in this respect, and it would be very easy to add.
2. Add export to any ASCII format in the same way.
Both of these could even be done as a separate utility, so that people w/out Max/Krakatoa/Maya could use them too.
3. Other formats
Add import/export support for the most common binary and other formats, including .PLY (both ASCII and binary flavors), .OBJ and .Bin (Photosynth).
4. Partitioning
Add static dataset partitioning (not via seeding, but a few simple ways to split a huge static dataset into partitions) - currently this is possible only as a manual process.
5. Resampling
Add pointcloud data resampling algorithms (at least grid-based first)
6. Normals
Add normals calculation w/out geometry - most pointcloud processing software already has this, and the algorithms are well established. Would be cool to have it here too.
7. Coloring
Some kind of Photoshop-like color/property editing would be cool. Maybe via vertex object proxies and vertex painting in Max? A few packages allow for coloring, but their tools are very primitive and inefficient, and again, i/o issues make them not really usable.
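To illustrate item 1, the ‘geek out with a bit of scripting’ route might look something like this (a rough Python sketch; the function and mapping format are made up just to show the idea):

```python
# Map columns of an arbitrary ASCII point file to named channels.
def load_ascii_points(path, mapping):
    """mapping: {channel_name: column_indices}, e.g. {"Position": (0, 1, 2)}."""
    channels = {name: [] for name in mapping}
    with open(path) as f:
        for line in f:
            fields = line.replace(",", " ").split()  # tolerate comma- or space-delimited rows
            if not fields:
                continue
            for name, cols in mapping.items():
                channels[name].append(tuple(float(fields[c]) for c in cols))
    return channels

# e.g. columns 0-2 -> Position, column 3 -> Intensity:
# pts = load_ascii_points("scan.xyz", {"Position": (0, 1, 2), "Intensity": (3,)})
```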
Adding to that, at the top of the list - it would be awesome if your current parser also supported scientific E notation.
At the moment it seems it does not, and therefore I have to process all my datasets manually with a Python script first to get rid of all lines using this kind of notation.
The parser won’t take values like that, unfortunately.
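For reference, the preprocessing pass I mentioned boils down to something like this (a minimal sketch; the file names are placeholders):

```python
# Rewrite every E-notation value (e.g. 1.25e-03) as a plain decimal.
import re

SCI = re.compile(r"[-+]?\d*\.?\d+[eE][-+]?\d+")

def expand_notation(line):
    return SCI.sub(lambda m: format(float(m.group()), "f"), line)

with open("cloud_raw.txt") as src, open("cloud_plain.txt", "w") as dst:
    for line in src:
        dst.write(expand_notation(line))
```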
I copied the line you posted into a new text file and loaded it with a PRT Loader without problems.
I tested using both 2.1.3 and our internal build (the upcoming 2.1.6).
I also tested reading it directly into Frost and it loaded correctly, too.
Thank you VERY MUCH for your feedback!
Below are some comments about these ideas:
This can already be done using Magma - any data found in a text file without a header creates a new DataN channel, where N is a number. thinkboxsoftware.com/krak-csv-file-format/
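For example, a headerless file like this (made-up values):

```
1.0,2.0,3.0,0.5
1.5,2.5,3.5,0.7
```

comes in with one numbered DataN channel per column (see the linked page for the exact naming), which Magma can then rewire - e.g. building Position from the first three columns and mapping the fourth to Density.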
But I agree we could provide easier-to-use tools. We just feel that Magma gives people who do NOT script more flexibility, as you can recalculate and rewire the incoming DataN channels in ways that would not be possible without scripting expressions for the proposed datasheet tool.
In the past, we regarded ASCII files as the worst-case scenario, so we just write our own flavor of CSV out of Krakatoa. I agree that if we want Krakatoa to be used as a hub for reprocessing data and sending it out to the various other point cloud apps, support for both their binary and ASCII formats would be necessary.
This is a very good idea and it could be implemented easily as a MAXScript tool. I will add it to the Wishlist.
This is part of what Ember is all about, esp. the Grid part. Stay tuned!
Our original plans were to allow the renderer to produce normals at render time similar to how we implemented the particle multiplication in Krakatoa SR. Ember will also be able to produce normals from point clouds without reference geometry because it already supports Gradients from any channel. So the technology is floating around in various forms. The big question for us is what parts should be in Krakatoa and what parts should be in Ember.
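As a side note for anyone following along, the standard geometry-free approach (not necessarily what Ember will do) is local PCA: take each point’s neighborhood and use the principal direction with the smallest variance as the normal. A rough Python sketch:

```python
# Local-PCA normal estimation - a generic sketch, not Ember's implementation.
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    """points: (N, 3) float array; returns (N, 3) unit normals (sign is ambiguous)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)  # k nearest neighbors per point (includes the point itself)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nbhd = points[nbrs] - points[nbrs].mean(axis=0)
        # The eigenvector of the neighborhood covariance with the smallest
        # eigenvalue approximates the surface normal at this point.
        eigvals, eigvecs = np.linalg.eigh(nbhd.T @ nbhd)
        normals[i] = eigvecs[:, 0]
    return normals
```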
Once again, Ember might provide the tools for painting values on grids that could easily be applied to particle clouds. Sounds like we should think about a future bundle of Krakatoa, Ember and Frost targeted at point cloud/LIDAR editing…