In the Impressions thread a couple of people expressed interest in point cloud filtering and editing features. This is currently our biggest pain point, and an area where all of the existing software I’ve tried is significantly lacking in one way or another. Most of our projects require some amount of editing, so this is a big problem, and one that keeps getting worse as data volumes increase. We don’t, however, do very many traditional AEC projects, so I’m not sure how typical this is of other users.
In the context of Sequoia, its architecture seems well suited to addressing some of these issues, and since “point cloud processing” is featured prominently in the description I’m hoping some of these features can make it onto the roadmap. I also think point cloud segmentation fits nicely alongside Sequoia’s role as a mesh generator, since cleaner point clouds will always produce better meshes. Similarly, different types of objects often require different meshing algorithms and/or settings to produce the best results. Since these objects inevitably occur in the same scenes, having an efficient way to segment them out is important if the goal is to generate the best possible mesh.
Below is a description of what my ideal point cloud segmentation software would look like. Note that I specifically used segmentation rather than editing. Too many applications simply allow you to delete points, effectively giving you a very crude segmentation between “points to keep” and “points to discard”. Although it never reached its full potential, the original Pointools concept of allowing users to segment points into arbitrary user-defined classes (layers in their terminology) is extremely powerful. Depending on the specific task, points could be assigned to classes based on their properties, e.g. “outliers” that are too far from their neighbors. Points could also be assigned to logical groups such as “wall”, “floor”, or “cables”, based on what they represent. Once classified, each of these groups can be treated in an appropriate way for visualization, meshing, etc. (ignore outliers, mesh walls and floors with spacing X, mesh cables with spacing Y, and so on).
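To make that concrete, here is a rough sketch (in Python, with class names and colors that are purely illustrative and not anything Sequoia exposes) of the kind of per-point class storage I have in mind: each point carries a small class ID, and a separate table maps IDs to user-defined names, colors, and visibility.

```python
import numpy as np

# Hypothetical per-point classification storage: each point keeps a small
# integer class ID, and a table maps IDs to user-defined names and display
# settings. Names, colors, and IDs here are illustrative only.
classes = {
    0: {"name": "unclassified", "color": (128, 128, 128), "visible": True},
    1: {"name": "wall",         "color": (0, 0, 255),     "visible": True},
    2: {"name": "floor",        "color": (0, 255, 0),     "visible": True},
    3: {"name": "outlier",      "color": (255, 0, 0),     "visible": False},
}

points = np.random.rand(100_000, 3)                 # XYZ coordinates
class_ids = np.zeros(len(points), dtype=np.uint8)   # everything starts unclassified

def assign(selection_mask, class_id):
    """Assign every currently selected point to the given class."""
    class_ids[selection_mask] = class_id

def visible_mask():
    """Boolean mask of points whose class is currently visible (e.g. hide outliers)."""
    visible_ids = [cid for cid, c in classes.items() if c["visible"]]
    return np.isin(class_ids, visible_ids)
```

Everything downstream (display colors, meshing presets, export) would then just key off the class ID.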
I’ll assert that, no matter how sophisticated, no automatic segmentation method is ever going to be 100% effective. Therefore, having a core set of efficient manual and user-guided segmentation tools is critical, and much more important than trying to develop or implement the latest experimental automated segmentation algorithms. To support this assertion I’ll point to the medical imaging community. Not only are the stakes and complexities there often greater than just about anywhere else segmentation is used extensively, but their research and development efforts are also often funded at much higher levels than they are in other fields. Although they use automatic segmentation where it is effective, you’ll find that the majority of their production tools rely heavily on manual and user-guided methods. Human operators are amazingly good at identifying patterns and acting on them. Good segmentation software should not strive to replace them; it should strive to make them more effective and efficient.
At the most basic level, manual segmentation requires being able to navigate to a specific area of a point cloud, select a group of points, assign them to a class, and be moving on to the next area within a few seconds. This implies a fast viewport and good navigation. I think Sequoia is already pretty good in these areas and, with some of the improvements suggested elsewhere, could be even better. For the selection itself, having polygon, freehand, and potentially brush selection tools would be great. Often it isn’t possible to select the desired points in one action, so having the selection tools operate in multiple modes (Replace Selection, Add to Selection, Remove from Selection), similar to Photoshop, is very helpful. Assigning points to classes could probably be done effectively in a few different ways. The number of unique classes in use on a project typically isn’t very large, so something like a pie menu or similar palette of classes might work well. Having an active class that a hotkey assigns selected points to would probably also work. In order to quickly assess which area to move to next, it must be possible to visually identify the classification status of points. It should be possible to assign materials to classes so that, for example, unclassified points are gray, wall points are blue, floor points are green, cable points are orange, and outliers are hidden. In many cases during the initial segmentation pass it might be most useful to temporarily hide all classified points so that the focus can remain on the points that still need attention. Since these visualization states may need to change many times throughout the process, adjusting class visualization should be quick and easy.
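For the selection modes specifically, the behavior I’m describing is nothing more than set operations on a per-point mask; a minimal sketch (function and mode names are mine, not Sequoia’s):

```python
import numpy as np

# Photoshop-style selection modes as set operations on a boolean per-point
# mask. `new` is whatever the polygon/freehand/brush tool returned for the
# current action.
def combine_selection(current, new, mode):
    if mode == "replace":
        return new.copy()
    if mode == "add":
        return current | new         # union with the existing selection
    if mode == "remove":
        return current & ~new        # subtract from the existing selection
    raise ValueError(f"unknown selection mode: {mode}")

selection = np.zeros(10, dtype=bool)
brush_hit = np.zeros(10, dtype=bool)
brush_hit[2:5] = True                # points hit by the current brush stroke
selection = combine_selection(selection, brush_hit, "add")
```

A hotkey could then simply push the active selection into the active class, e.g. `assign(selection, active_class_id)` from the earlier sketch.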
When doing manual segmentation it is often necessary to temporarily hide everything except the small area you’re working on. The existing Region of Interest object is effective at doing this, but its UI makes it better suited to marking out regions more permanently. Again, I want to focus on one area, make some changes, and move on within a few seconds. The clipping view also has potential for stepping through a point cloud and making changes in a more systematic way. I like the way it is defined as Near Distance + Range. Turning on Prevent Camera Orbit in Ortho Views, switching to one of the preset views, and adjusting the clipping settings provides a reasonable way to step through the point cloud while still being able to zoom in and pan around. It would be even better if you could have a similar constrained view but with the orthographic camera in an arbitrary orientation. It would also be great to have something like a Step button that would adjust the Near Distance in fixed increments. For example, you could set the Range to 1 and use a step size of 0.8 to step through Near Distance values of 0, 0.8, 1.6, 2.4… This would give you the ability to systematically work through the data in slices while keeping a small amount of overlap between them. The big limitation that I see with this camera clipping method is that it only works for a single viewport. It would be ideal to have two viewports open: one with a constrained orthographic view perpendicular to the data slice and another with an unconstrained camera that could orbit around the same slice.
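The stepping behavior I have in mind is simple to express; a small sketch, assuming a Near Distance + Range clipping model like the one Sequoia already exposes (the function name and values are just for illustration):

```python
# Step an orthographic clipping view through the data in overlapping slices.
# With Range = 1.0 and a step of 0.8 the Near Distance visits 0, 0.8, 1.6,
# 2.4, ... and consecutive slices overlap by 0.2.
def slice_steps(total_depth, range_=1.0, step=0.8):
    near = 0.0
    while near < total_depth:
        yield near, near + range_    # (near distance, far distance) of this slice
        near += step

for near, far in slice_steps(total_depth=4.0):
    print(f"review points between {near:.1f} and {far:.1f}")
```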
After the manual tools, my next priority would be simple automatic tools that will usually give the expected result. Examples include: Select by Color/Luminance, and Select by Neighborhood Properties (e.g. identify outliers as points with few neighbors, or redundant points as points with many neighbors). A lot of the algorithms you find in the Point Cloud Library and CGAL are good candidates for this stage.
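As an illustration of the neighborhood-property idea, here is a rough sketch using a k-d tree (SciPy here, though PCL’s radius and statistical outlier removal filters do much the same job); the radius and thresholds are placeholders that would need tuning per data set:

```python
import numpy as np
from scipy.spatial import cKDTree

# "Select by neighborhood properties": points with very few neighbors within
# a radius are flagged as outliers, points with very many as redundant.
def neighbor_counts(points, radius):
    tree = cKDTree(points)
    # query_ball_point returns the indices of neighbors within `radius`;
    # subtract 1 so a point does not count itself.
    return np.array([len(n) - 1 for n in tree.query_ball_point(points, radius)])

points = np.random.rand(50_000, 3)
counts = neighbor_counts(points, radius=0.02)

outlier_mask   = counts < 2      # too isolated: candidate "outlier" class
redundant_mask = counts > 100    # oversampled: candidate for thinning
```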
Once the basics are implemented and working well there are countless other algorithms that could provide additional value, including various automated classifiers that would be more suitable for running against large data sets using a Deadline farm. Certainly the airborne community has been well served for years by ground and spike removal classifiers similar to MCC-LIDAR (geosciences.univ-rennes1.fr/ … rticle1284). Classifiers that are better suited to ground-based lidar also show a lot of promise, not only for classifying vegetation but as general purpose tools. Of course there are also various types of region growing algorithms that can bridge the gap between manual and automated tools.
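For completeness, the region growing idea can be sketched in a few lines as well; this version grows purely on point-to-point distance from an operator-picked seed (real tools would also compare normals, color, etc., and the radius here is a placeholder):

```python
import numpy as np
from collections import deque
from scipy.spatial import cKDTree

# User-guided region growing: starting from a seed point picked by the
# operator, expand the selection to all points reachable through neighbors
# within `radius` of each other.
def grow_region(points, seed_index, radius=0.05):
    tree = cKDTree(points)
    selected = np.zeros(len(points), dtype=bool)
    selected[seed_index] = True
    queue = deque([seed_index])
    while queue:
        idx = queue.popleft()
        for n in tree.query_ball_point(points[idx], radius):
            if not selected[n]:
                selected[n] = True
                queue.append(n)
    return selected
```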
Obviously I have some strong opinions about this stuff, and this post could easily have been much longer. I also realize, however, that my experience is necessarily limited, so I’m really curious to hear what other testers think about the utility of tools like this for their work and what kinds of things would be most useful for them.