Point Cloud Segmentation (aka Editing)

In the Impressions thread a couple of people expressed interest in point cloud filtering and editing features. This is currently our biggest pain point, and an area where all of the existing software I’ve tried is significantly lacking in one way or another. Most of our projects require some amount of editing, so this is a big problem, and one that continues to get worse as data volumes keep increasing. We don’t, however, do very many traditional AEC projects, so I’m not sure how typical this is of other users.

In the context of Sequoia, its architecture seems well suited to addressing some of these issues, and since “point cloud processing” is featured prominently in the description I’m hoping some of these features can make it onto the roadmap. I also think point cloud segmentation fits nicely alongside Sequoia’s role as a mesh generator, since cleaner point clouds will always produce better meshes. Similarly, different types of objects often require different meshing algorithms and/or settings to produce the best results. Since these objects inevitably occur in the same scenes, having an efficient way to segment them out is important if the goal is to generate the best possible mesh.

Below is a description of what my ideal point cloud segmentation software would look like. Note that I specifically used segmentation rather than editing. Too many applications simply allow you to delete points, effectively giving you a very crude segmentation between “points to keep” and “points to discard”. Although it never reached its full potential, the original Pointools concept of allowing users to segment points into arbitrary user-defined classes (layers in their terminology) is extremely powerful. Depending on the specific task, points could be assigned to classes based on their properties, e.g. “outliers” that are too far from their neighbors. Points could also be assigned to logical groups such as “wall” or “floor” or “cables”, based on what they represent. Once classified, each of these groups can be treated in an appropriate way for visualization, meshing, etc. (ignore outliers, mesh walls and floors with spacing X, mesh cables with spacing Y, etc.).
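
To make that concrete, here’s a tiny sketch of the kind of per-class lookup I have in mind (all of the class names and settings below are invented for illustration, nothing Sequoia-specific):

```python
# Tiny sketch of per-class treatment. Every class name and setting here
# is invented for illustration; "spacing" stands in for a meshing radius.
CLASS_SETTINGS = {
    "outlier": {"visible": False, "mesh": False},
    "wall":    {"visible": True,  "mesh": True,  "spacing": 0.05},
    "floor":   {"visible": True,  "mesh": True,  "spacing": 0.05},
    "cables":  {"visible": True,  "mesh": True,  "spacing": 0.01},
}

def treat(class_name, setting):
    """Look up how points of a given class should be handled."""
    return CLASS_SETTINGS[class_name].get(setting, False)

assert treat("outlier", "visible") is False   # ignore outliers
assert treat("cables", "spacing") == 0.01     # mesh cables with spacing Y
```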

I’ll assert that, no matter how sophisticated, no automatic segmentation method is ever going to be 100% effective. Therefore, having a core set of efficient manual and user-guided segmentation tools is critical, and much more important than trying to develop or implement the latest experimental automated segmentation algorithms. To support this assertion I’ll point to the medical imaging community. Not only are the stakes and complexities often greater there than just about anywhere else segmentation is used extensively, but their research and development efforts are also often funded at much higher levels than they are in other fields. Although they use automatic segmentation where it is effective, you’ll find that the majority of their production tools rely heavily on manual and user-guided methods. Human operators are amazingly good at identifying patterns and acting on them. Good segmentation software should not strive to replace them; it should strive to make them more effective and efficient.

At the most basic level, manual segmentation requires being able to navigate to a specific area of a point cloud, select a group of points, assign them to a class, and be navigating to the next area within a few seconds. This implies a fast viewport and good navigation. I think Sequoia is already pretty good in these areas and, with some of the improvements suggested elsewhere, could be even better.

For the selection itself, having polygon, freehand, and potentially brush selection tools would be great. Often it isn’t possible to select the desired points in one action, so having the selection tools operate in multiple modes (Replace Selection, Add to Selection, Remove from Selection), similar to Photoshop, is very helpful.

Assigning points to classes could probably be done effectively in a few different ways. The number of unique classes being used on a project typically isn’t very large, so something like a pie menu or similar palette of classes might work well. Having an active class that a hotkey would assign selected points to would probably also work.

In order to quickly assess which area to move to next it must be possible to visually identify the classification status of points. It should be possible to assign materials to classes so that, for example, unclassified points are gray, wall points are blue, floor points are green, cable points are orange, and outliers are hidden. In many cases during the initial segmentation pass it might be most useful to temporarily hide all classified points so that the focus can remain on the points that still need attention. As these visualization states may need to change many times throughout the process, adjusting class visualization should be easy.
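
To pin down what I mean by selection modes, here’s a minimal sketch of the set semantics (the mode names mirror Photoshop’s; none of this is existing Sequoia behavior):

```python
# Minimal sketch of Photoshop-style selection modes as set operations on
# point IDs. The mode names are illustrative, not existing Sequoia features.
def apply_selection(current: set, picked: set, mode: str) -> set:
    if mode == "replace":        # Replace Selection
        return set(picked)
    if mode == "add":            # Add to Selection
        return current | picked
    if mode == "remove":         # Remove from Selection
        return current - picked
    raise ValueError(f"unknown selection mode: {mode}")

selection = apply_selection(set(), {1, 2, 3}, "replace")
selection = apply_selection(selection, {3, 4}, "add")      # {1, 2, 3, 4}
selection = apply_selection(selection, {2}, "remove")      # {1, 3, 4}
```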

When doing manual segmentation it is often necessary to temporarily hide everything except the small area you’re working on. The existing Region of Interest object is effective at doing this, but its UI makes it better suited to marking out regions more permanently. Again, I want to focus on one area, make some changes, and move on within a few seconds.

The clipping view also has potential for stepping through a point cloud and making changes in a more systematic way. I like the way it is defined as Near Distance + Range. Turning on Prevent Camera Orbit in Ortho Views, switching to one of the preset views, and adjusting the clipping settings provides a reasonable way to step through the point cloud while still being able to zoom in and pan around. It would be even better if you could have a similar constrained view but with the orthographic camera in an arbitrary orientation. It would also be great to have something like a Step button that would adjust the Near Distance in fixed increments. For example, you could set the Range to 1 and use a step size of 0.8 to step through Near Distance values of 0, 0.8, 1.6, 2.4… This would give you the ability to systematically work through the data in slices while keeping a small amount of overlap between them.

The big limitation that I see with this camera clipping method is that it only works for a single viewport. It would be ideal to have two viewports open: one with a constrained orthographic view perpendicular to the data slice, and another with an unconstrained camera that could orbit around the same slice.
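
The Step behavior I’m imagining is trivial in code terms. A minimal sketch, just to pin down the overlap arithmetic (the names are mine, not Sequoia’s):

```python
# Minimal sketch of the "Step" idea: walk a clipping slab through the
# cloud in fixed increments. Choosing step < range_ leaves
# (range_ - step) of overlap between consecutive slices.
def clip_slices(total_depth, range_=1.0, step=0.8):
    """Yield (near, far) clipping distances covering the full depth."""
    near = 0.0
    while near < total_depth:
        yield near, near + range_
        near += step

for near, far in clip_slices(4.0):
    print(f"near={near:.1f}  far={far:.1f}")
# near=0.0 far=1.0, near=0.8 far=1.8, near=1.6 far=2.6, ...
```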

After the manual tools my next priority would be simple automatic tools that will usually give the expected result. Examples include:

- Select by Color/Luminance.
- Select by neighborhood properties (e.g. identify outliers as points with few neighbors, identify redundant points as points with many neighbors).

A lot of the algorithms you find in the Point Cloud Library and CGAL are good candidates for this stage.
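
For the neighborhood-property case the core operation is just counting neighbors within a radius. A minimal sketch with SciPy (the thresholds are invented and would need tuning per data set; PCL’s RadiusOutlierRemoval is a production version of the same idea):

```python
import numpy as np
from scipy.spatial import cKDTree

# Minimal sketch of neighborhood-based classification: points with few
# neighbors within `radius` are flagged as outliers, points with many as
# redundant. All thresholds here are made up for illustration.
def classify_by_density(points, radius=0.05, min_neighbors=3, max_neighbors=50):
    tree = cKDTree(points)
    # Neighbor count within radius (includes the point itself).
    counts = np.array([len(n) for n in tree.query_ball_point(points, radius)])
    labels = np.full(len(points), "ok", dtype=object)
    labels[counts <= min_neighbors] = "outlier"
    labels[counts >= max_neighbors] = "redundant"
    return labels

pts = np.random.rand(10000, 3)
print(np.unique(classify_by_density(pts), return_counts=True))
```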

Once the basics are implemented and working well there are countless other algorithms that could provide additional value, including various automated classifiers that would be more suitable for running against large data sets using a Deadline farm. Certainly the airborne community has been well served for years by various ground and spike removal classifiers like MCC-LIDAR. Newer methods like CANUPO (geosciences.univ-rennes1.fr/ … rticle1284) that are better suited to ground-based lidar also show a lot of promise, not only for classifying vegetation but as general purpose tools. Of course there are also various types of region growing algorithms that can bridge the gap between manual and automated tools.
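
On region growing specifically: the skeleton is simple and all of the interesting work lives in the acceptance test. Here’s a bare-bones sketch that grows purely by proximity from a picked seed; a real tool would also compare normals or color at each hop:

```python
import numpy as np
from scipy.spatial import cKDTree
from collections import deque

# Bare-bones user-guided region growing: starting from a picked seed
# point, flood-fill outward through all neighbors within `radius`.
def grow_region(points, seed_index, radius=0.05):
    tree = cKDTree(points)
    selected = np.zeros(len(points), dtype=bool)
    queue = deque([seed_index])
    selected[seed_index] = True
    while queue:
        idx = queue.popleft()
        for nbr in tree.query_ball_point(points[idx], radius):
            if not selected[nbr]:
                selected[nbr] = True    # accept: proximity only, for now
                queue.append(nbr)
    return selected  # boolean mask of the grown region
```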

Obviously I have some strong opinions about this stuff, and this post could easily have been much longer. I also realize, however, that my experience is necessarily limited, so I’m really curious to hear what other testers think about the utility of tools like this for their work and what kinds of things would be most useful for them.

Good post :slight_smile: I too desire a bit more segmentation ability. I am glad you mentioned CANUPO and Pointools, as both methodologies are promising.

Yep, I wish we had the ability to segment by contrasting geometry, color, intensity, etc. in every point cloud processing program.

I am trying to wrap my mind around the process you described here. It seems like the process used by Bentley Descartes’ “model by section”, which can step a clip along an axis or path by a user-defined increment and lets you view/edit along the way from multiple simultaneous viewports.


I am not saying there is no need to include that type of tool in Sequoia just because it is already in Descartes. I am saying that it is a great tool, and possibly a good example of the process you described.

One of the last thoughts on my mind is that if we are able to use segmentation tools to break up the data in Sequoia, then how would that segmentation be retained after the meshing process is complete? For instance, I would like to separate the data into logical components like grass, trees, concrete, buildings, vehicles, and other parts, but I would not want the meshing process to join these objects back together again. I guess you could mesh each layer separately then unify it all back together again prior to export, but I loathe that workflow. Keeping them separate (somehow) without actually separating the meshing process would be better. If any of this is possible, the Hacksaw cells may complicate the parts by accidentally dividing them :frowning:

Anyway, this could be a great feature for Sequoia, as a well organized set of smaller meshes is much easier to handle than one gigantic mesh.

I haven’t used this in Descartes, but it sounds like exactly what I have in mind. You also find similar functionality in various airborne lidar software. Merrick’s MARS has something like this and I just found this video showing something similar in VRMesh:

https://youtu.be/5NceAb-wkLo?t=3m

If we could bake point attributes from point clouds to meshes, then a class attribute (grass, building, vehicle…) could be assigned to each vertex in the mesh based on the points that are closest to it.
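
The baking step itself could be as simple as a nearest-neighbor lookup. A minimal sketch, assuming the labeled cloud and mesh vertices are available as arrays (a more robust version would vote among the k nearest points rather than trusting the single closest one):

```python
import numpy as np
from scipy.spatial import cKDTree

# Minimal sketch of "baking" a class attribute from points to mesh
# vertices: each vertex takes the label of its nearest point. The array
# names and label values are illustrative.
def bake_labels(cloud_xyz, cloud_labels, mesh_vertices):
    tree = cKDTree(cloud_xyz)
    _, nearest = tree.query(mesh_vertices)   # index of closest cloud point
    return np.asarray(cloud_labels)[nearest]  # one label per mesh vertex
```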

With the right UI I think this could actually work pretty well. I agree though, I wouldn’t want the unifying to happen automatically without being able to control it. I don’t particularly like how the Simplify tool essentially does this now. To expand on your example, what I would really like to be able to do is set up a node graph something like this:

[node graph image: point cloud → segment into classes → mesh each class with its own settings → merge → export]

In practice a human would likely need to intervene and do a little editing between the mesh generation and merging steps. In general though, the more of those automated steps I can string together the happier I’m going to be, as long as I still have the option of splitting out the data from one of the intermediate steps if I need to.

Totally on board with your train of thought :stuck_out_tongue: The graphic is perfect, and the idea of separate exports for the individually meshed classifications is exactly what I was hoping for. Even taking it one step further and separating individual parts would be nice, but I am not sure how it would be accomplished easily. I think what I am trying to say is that being able to separate each vehicle or tree would be better than one mesh containing all vehicles or trees. In most CAD software I use, this organization is handled through levels or layers, but it is typically not contained within a single mesh. In other words, one large mesh that contained every car could have layers or identifiers to separate each car as car1, car2, car3, but I don’t think a mesh can contain that information :confused: Please correct me if I am wrong.

This problem sounds similar to the idea of baking in attributes like you talked about, but my gut tells me the current exportable mesh formats might be the limiting factor. I am happy to get just vertex color in the mesh, but I agree that having classification, intensity, and more would be a step forward.

Yep, Sequoia essentially already has the ability to do this. When you load a Hacksaw mesh, that mesh is really made up of a bunch of separate partitions stored in separate .xmesh files on disk. They’re just presented to the user as a single mesh object within the UI. It doesn’t seem too big of a leap to have each of those .xmesh objects represent a logical object rather than an arbitrary cube. You’re on your own, though, when it comes to loading the meshes into your CAD app in a sensible way. :wink:
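
As for splitting out individual objects like car1, car2, car3, the generic building block is Euclidean cluster extraction (PCL ships an implementation): any points closer together than some gap threshold get grouped into the same object. A rough sketch of the idea (the gap value is made up):

```python
import numpy as np
from scipy.spatial import cKDTree

# Rough sketch of Euclidean cluster extraction: points within `gap` of
# each other end up in the same cluster (car1, car2, ...). The threshold
# is made up; real tools (e.g. PCL) also bound cluster sizes.
def euclidean_clusters(points, gap=0.25):
    tree = cKDTree(points)
    labels = np.full(len(points), -1)
    current = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue                      # already assigned to a cluster
        stack = [i]
        labels[i] = current
        while stack:
            idx = stack.pop()
            for nbr in tree.query_ball_point(points[idx], gap):
                if labels[nbr] == -1:
                    labels[nbr] = current
                    stack.append(nbr)
        current += 1
    return labels  # cluster id per point
```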

I totally agree.
I am enjoying this post. Thank you jedfrechette and jcoco3 for posting.
It would be wonderful to have a Houdini for point clouds.

Yes, thank you very much for your suggestions and discussion!