Mesh quality analysis

Since the release of Beta 16, with its new operators for converting to single-sided meshes that conform to the source point cloud, we’ve been testing that functionality to evaluate whether it can replace our current mesh generation tools. This is very much a test of the tools as they exist today, so I have no doubt that the results will change as time goes on, and I look forward to watching them improve. Since this will be long I’m going to break it into multiple posts. This one covers background on our use cases and needs, the next will cover test results on a synthetic data set, followed by examples from real scan data.

Part 1: Background

Although we’re using meshes generated from lidar point clouds for a number of applications, recording props and sets for VFX has the most demanding requirements, so most of this evaluation is written in that context.

Our data collection is primarily done with tripod lidar scanners (mostly Faros). Data collection occurs under any number of conditions and we often don’t have much control over them. For example: we may need to scan while other people are working in the area; target objects may be made of materials that don’t scan well and we may not be able to treat them to improve data quality; we may not have access to all areas of a location; and the time available to collect data may be limited. As a result of these factors the point cloud data has the following characteristics. I would argue all lidar data shares these characteristics, but the above factors tend to exacerbate them.

  • The data is noisy and much of that noise is high frequency.
  • The data contains many outliers.
  • Point density has very high spatial variability.

To a certain extent these things can be mitigated by processing prior to meshing, but they can’t be completely eliminated, so any useful meshing algorithm must be able to cope with them. Most of the things we scan are composed of continuous surfaces, so an algorithm’s ability to accurately reconstruct those is most important. Thin or discontinuous features like cables and vegetation are special cases, so lower performance there is acceptable. Thin continuous surfaces, e.g. sheet metal scanned on both sides, are also a special case. Ideally the software would be smart enough to choose the best algorithm for point cloud regions with certain characteristics (we need good point cloud classification tools). Finally, compute time is cheaper than human time, so I will always choose long-running automated solutions over manual ones that require excessive human babysitting.

With those considerations in mind my ideal meshing algorithm would fulfill the following requirements.

  1. There must not be systematic deviations between the mesh and the point cloud.
  2. Continuous surfaces are reconstructed completely without holes, overlapping faces, or bad topology (Yes I know bad is relative).
  3. Point density variations due to collection procedures should have a minimal effect on meshes.
  4. Mesh resolution is limited by the point cloud resolution, not by parameters of the meshing algorithm. See number 3.
  5. Few user controls. I would like to be able to use the same settings for almost everything that we scan and have confidence that when I come back in the morning after sending my jobs to the farm the results will be usable. See number 4.
  6. Resilient to small misalignments between overlapping scans.
  7. Resilient to high frequency noise on surfaces.
  8. Resilient to outliers.
  9. Plausible reconstruction of “special” features that can’t be sampled adequately by the scanner.
  10. Good compute performance and scalability.

Part 2a: Synthetic Data Test

A reference mesh was modeled by hand to contain features that would be difficult to reconstruct from noisy point cloud data. These features include tabs, slots, and cylinders of various widths, as well as overlapping faces offset by various amounts to simulate the effect of scan misalignments. The source point cloud was created by randomly sampling the reference mesh at a density of 40,000 points per square meter, or a linear spacing of 0.005 m. Gaussian noise with a mean of 0 and a standard deviation of 0.003 m was then added to the point positions (partitioned evenly among the three Cartesian coordinates). Next, the point cloud was randomly subsampled to reduce its density in four circular regions. Finally, surface normals were recalculated. The source data and mesh results can be downloaded from:

dropbox.com/s/hls05swlrfdwc … 4.zip?dl=0
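
For anyone who wants to build a similar test cloud, the following is a minimal sketch of the procedure described above, assuming numpy and trimesh are available. The file names, the thinned region, and the keep fraction are illustrative placeholders, and re-estimating normals is left to whatever tool you normally use.

    import numpy as np
    import trimesh

    DENSITY = 40000       # points per square meter (0.005 m linear spacing)
    NOISE_STD = 0.003     # total noise standard deviation in meters

    mesh = trimesh.load('reference_mesh.obj')
    n_points = int(mesh.area * DENSITY)

    # Uniform random sampling of the reference surface
    points, _ = trimesh.sample.sample_surface(mesh, n_points)

    # Gaussian noise partitioned evenly among the three Cartesian axes,
    # so each component gets a standard deviation of 0.003 / sqrt(3)
    points = points + np.random.normal(0.0, NOISE_STD / np.sqrt(3), points.shape)

    # Randomly thin the cloud inside a circular region to create a low-density patch
    # (center, radius, and keep fraction are placeholders)
    center, radius, keep = np.array([0.5, 0.5]), 0.2, 0.25
    in_disc = np.linalg.norm(points[:, :2] - center, axis=1) < radius
    drop = in_disc & (np.random.rand(len(points)) > keep)
    points = points[~drop]

    # Surface normals would be re-estimated on the final cloud before meshing
    np.savetxt('synthetic_cloud.xyz', points)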

Meshing Tools

The following tools were used to compare meshing algorithms:

Screened Poisson Surface Reconstruction (Version 8.0) [SPSR]
cs.jhu.edu/~misha/Code/Poiss … ersion8.0/

Sequoia Zhu/Bridson (Version 0.1.15, Beta 17) [ZB]

Floating Scale Surface Reconstruction (Snapshot 20150916) [FSSR]
gris.tu-darmstadt.de/project … ace-recon/

Each tool’s default settings were used as much as possible and no simplification was performed. For SPSR an octree depth of 8 was used, which corresponds to a minimum edge length of 0.0086 m. An octree depth of 9 (0.0043 m) was also tested, but although it resolved sharp edges better, its surfaces exhibited noise on the order of the FSSR results. For the ZB reconstruction the radius was set to the suggested value of 0.0097 m, and the Cull Faces and Conform operators were applied. FSSR was run after assigning a scale of 0.005 to all points. The FSSR mesh was post-processed using the associated meshclean tool with default settings.
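
As an aside, SPSR’s minimum edge length is not set directly; it falls out of the octree depth and the size of the cube the reconstructor works in, roughly extent / 2^depth. A quick sanity check of the numbers quoted above, assuming a working cube of about 2.2 m (inferred from those numbers, not measured):

    # Approximate relationship between SPSR octree depth and minimum edge length:
    # the reconstructor subdivides a cube enclosing the (scaled) point cloud
    # 2**depth times, so the finest cell is roughly extent / 2**depth.
    def spsr_min_edge(extent_m, depth):
        return extent_m / 2 ** depth

    print(spsr_min_edge(2.2, 8))   # ~0.0086 m
    print(spsr_min_edge(2.2, 9))   # ~0.0043 m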

Results


Because SPSR always attempts to build a watertight mesh, it filled all gaps in the input cloud and extended the surface beyond the bounds of the point cloud. This behavior allows it to easily deal with variations in point density; however, it is not desirable when there are actual gaps in the surface, such as the slots in this data set. Although not done here, this can be mitigated in post-processing in a number of ways, most simply by selecting triangles with long edges and iteratively growing the selection until it reaches the bounds of the input data. SPSR performed very poorly on the thin tab and cylinders and produced a moderate amount of rounding on sharp edges.
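
For what it’s worth, that long-edge clean-up could look roughly like the sketch below, written here against trimesh. The edge-length threshold and the fixed number of growth iterations are placeholders; a production version would keep growing the selection until it reaches the bounds of the input cloud rather than stopping after a fixed count.

    import numpy as np
    import trimesh

    mesh = trimesh.load('spsr_mesh.ply')

    # Seed the selection with faces whose longest edge exceeds a threshold;
    # SPSR tends to invent such faces where it bridges real gaps in the data
    tri = mesh.triangles                                   # (n, 3, 3) vertex positions
    edge_len = np.linalg.norm(np.roll(tri, -1, axis=1) - tri, axis=2)
    selected = edge_len.max(axis=1) > 0.02                 # meters, tune per data set

    # Grow the selection across shared edges (fixed ring count for simplicity)
    adj = mesh.face_adjacency                              # (m, 2) face index pairs
    for _ in range(5):
        pair_hit = selected[adj[:, 0]] | selected[adj[:, 1]]
        np.logical_or.at(selected, adj[:, 0], pair_hit)
        np.logical_or.at(selected, adj[:, 1], pair_hit)

    # Drop the selected faces to reopen the incorrectly filled gaps
    mesh.update_faces(~selected)
    mesh.export('spsr_mesh_trimmed.ply')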

ZB produced the best result for the thin features and was the only method that produced a plausible result for the smallest cylinder. It also produced the best reconstruction of sharp convex edges. In contrast, the most obvious feature of the ZB mesh is its very poor reconstruction of sharp concave edges. Its reconstruction of only the uppermost surface when faced with overlapping data is also notable. The ZB algorithm is also very sensitive to variations in point density and started showing significant degradation when the density was reduced by only 50%.

The FSSR mesh was also badly affected by density variations, though I’m not sure this test accurately reflects its behavior with real scan data. For this test each point’s scale was fixed at 0.005; with real data this value would be larger in low-density regions, allowing the algorithm to adapt to the density variations better. Although the FSSR mesh is noisier, it did reconstruct sharp edges better than SPSR. It didn’t do as well on convex edges as ZB, but more importantly it exhibited the same performance for both convex and concave edges.

Part 2b: Synthetic Data Test Continued


The FSSR mesh contained the most holes; however, most of them are very small and could easily be filled automatically as a post-processing step. Using more appropriate scale values for the input points may also have helped with this. The ZB mesh contained the most intersecting faces by a significant margin and also had more degenerate faces than the others. All of the meshes had comparable numbers of disconnected faces (faces connected to fewer than 1,000 other faces). Overall, SPSR produced the best mesh topology and is the only algorithm tested that offers the ability to build meshes from quads rather than triangles.

----------- ------ ----- ------------ ---------- ------------------ ------------------ -----------------
Source      Faces  Holes Intersecting Degenerate Disconnected Faces Deviation Mean (m) Deviation Std (m)
----------- ------ ----- ------------ ---------- ------------------ ------------------ -----------------
Point Cloud NA     NA    NA           NA         NA                 0.000007           0.001719
SPSR        298824 0     0            4          608                0.000062           0.001284
ZB          559297 390   322          10         545                0.000472           0.000803
FSSR        784397 7604  6            4          435                0.000134           0.001372

Looking at the statistical deviations from the reference mesh, it is reassuring to see that the mean deviation of the source point cloud is close to 0 and the standard deviation is approximately equal to the noise added to each of the Cartesian coordinate components (0.003/sqrt(3) ≈ 0.00173 m), as is expected for geometry composed primarily of planes oriented perpendicular to the axes. Overall, SPSR had the best statistical fit, and it is interesting to note that it did this while using approximately half the triangles needed by the other methods. With a mean deviation of 0.000472 m, ZB had the largest systematic offset from the reference surface. By compressing the color map and setting negative deviations to gray, it can be seen that almost all of the faces on the smooth portions of the ZB mesh sit very slightly above the reference mesh.
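
For completeness, the deviation columns in the table could be reproduced with something along these lines, sketched here with trimesh. Note that trimesh reports points inside a mesh as positive, so the sign convention may need flipping depending on which direction you treat as “above” the reference, and the reference has to be watertight for the sign to be meaningful.

    import trimesh

    reference = trimesh.load('reference_mesh.obj')
    candidate = trimesh.load('zb_mesh.ply')

    # Sample the candidate mesh and measure the signed distance of each sample
    # to the reference surface
    samples, _ = trimesh.sample.sample_surface(candidate, 200000)
    dist = trimesh.proximity.signed_distance(reference, samples)

    print('mean deviation: %.6f m' % dist.mean())
    print('std deviation:  %.6f m' % dist.std())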

Summary

Right now SPSR still seems to fit our requirements best. It performs very well on continuous surfaces and produced the best statistical results of any of the candidates, and it does this while producing the simplest and cleanest mesh. On balance, I see its ability to gracefully adapt to varying data densities as a plus, even though that means post-processing to reopen incorrectly filled gaps. The fact that the user only needs to set one easily understood parameter (minimum face edge length) is also very nice. The biggest issue with SPSR is its inability to generate plausible reconstructions of small features that can’t be adequately sampled. These small features are where ZB really shines. However, ZB’s very poor performance on concave edges is a bit of a show stopper for general-purpose use. Its sensitivity to density variations also worries me, and although not demonstrated with this data set, its sensitivity to outliers is a pretty big handicap when working with real data (though, as I will show in the next post, this can occasionally be a good thing in specific situations).

This is the first time I’ve played with FSSR and I am intrigued by it. Although its mesh is quite messy and complex, it is accurate. I really like the fact that, in principle, the user doesn’t need to know anything about the input point cloud. I will definitely need to spend more time looking at ways to deal with some of its limitations. Although not tested specifically, it did appear to be significantly more efficient computationally than SPSR, as its authors claim; anecdotally, ZB is faster than FSSR, which is faster than SPSR. The difficulty of using SPSR on large data sets is definitely a challenge. In practice we just deal with this by meshing in chunks. I haven’t looked at it closely enough to know whether that is an inherent property of the algorithm or just due to the implementation we’re using.
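
For anyone curious, the chunking is nothing clever; it amounts to tiling the cloud’s footprint with an overlap margin, meshing each tile separately, and clipping each chunk mesh back to its tile before merging. A rough sketch (tile size and overlap are placeholders):

    import numpy as np

    def chunk_cloud(points, tile=10.0, overlap=0.5):
        """Yield (tile bounds, point subset) for each occupied XY tile of the cloud."""
        lo = points[:, :2].min(axis=0)
        hi = points[:, :2].max(axis=0)
        nx, ny = np.ceil((hi - lo) / tile).astype(int)
        for ix in range(nx):
            for iy in range(ny):
                tmin = lo + np.array([ix, iy]) * tile
                tmax = tmin + tile
                keep = np.all((points[:, :2] >= tmin - overlap) &
                              (points[:, :2] < tmax + overlap), axis=1)
                if keep.any():
                    yield (tmin, tmax), points[keep]

    # Each chunk is meshed independently (e.g. SPSR on the farm) and the result
    # is clipped back to (tmin, tmax) before the pieces are merged.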

Part 3a: Real-World Scan Data

Finally, let’s look at some real data, in this case collected with a Faro Focus 120. I want to thank apickel for doing the legwork to put this section together. The first example I want to look at is a small section of a point cloud containing a car and a few typical urban objects.


This data set presents a couple of challenges. First, the car’s specular surface yields a very noisy point cloud, especially when scanned at close range, and the black rubber of its tires also tends to produce noisy returns. There are also large variations in point density. The density is most obviously low in the circular patch below the scan position on the right side of the image and underneath the car, both areas that were only visible in scans taken some distance away. The bike rack in the lower right corner exhibits both problems, with small data gaps due to incomplete coverage and a significant number of outliers.

For this comparison I’m going to set the FSSR algorithm aside and focus solely on SPSR and ZB. SPSR was run using a minimum edge length of 0.006 m, and ZB was run using both Sequoia’s suggested radius of 0.0202 m and a smaller radius of 0.0101 m. It should be noted that the area shown here is part of a larger data set. SPSR was run on the entire data set and the region shown was clipped out of the full mesh. However, for performance reasons, and to give it the best opportunity to estimate a reasonable meshing radius, ZB was only run on the small section of point cloud shown. These differences account for the slightly different mesh bounds seen in the following images.


Running ZB with the suggested radius (0.02 m) produced a very soft mesh that also showed significant noise, most obviously near the bottom of the car doors. The poor performance on concave corners observed in the synthetic tests is also visible here where the tires sit on the pavement and around the base of the fire hydrant and bike rack. Decreasing the radius to 0.01 m reduces the problem with concave corners and allows ZB to resolve more detail, but it also produces a much noisier surface. SPSR was much more resistant to the noise in the point cloud while still being able to resolve fine details, like the gas tank cover and grill, that ZB had trouble with.

Using ZB with the larger radius of 0.02 m did a reasonable job of handling the varying point densities underneath the car, but it left some gaps along the curb below the scan position on the right side. As expected, using the smaller radius produced a mesh with many more holes, while SPSR was largely immune to these density variations.

Looking at the bike rack in the lower right-hand corner, we see that ZB did not bridge the small data gaps for either of the radii used. As with the car body, the ZB mesh was also quite sensitive to outliers and noise in this region. Looking at another view, we can see that ZB connected the ground to the bottom of the bike rack when the larger radius was used.


This view also shows that ZB with a radius of 0.01 m did a slightly better job of representing the small retaining chains on the caps of the fire hydrant. Perhaps not surprisingly, neither method does a particularly good job of representing discontinuous surfaces like the tree’s foliage and the shrub. ZB’s representation is arguably more faithful to the structure of the tree, but perhaps the blobby, melted-plastic SPSR version is more desirable if you just want something in the background to project textures onto.

Part 3b: Real-World Scan Data Continued

The last example I want to show is a case where ZB did amazingly well with very little good data.


The point cloud measured on this chain-link fence is extremely sparse, with no data on the lower half and only scattered points on the upper half. Although the point cloud hints at the chain-link structure, I would not consider it a complete representation by any stretch of the imagination. Nonetheless, after trying a few different radii, ZB did a surprisingly good job of reconstructing the diamond pattern wherever even small amounts of data were available. Even though the ZB radius was nearly three times larger than the minimum edge length used by SPSR, ZB clearly performed better in this situation.

Summary

These real-world examples (and others I’ve looked at but haven’t shown here) have largely confirmed what I saw in the synthetic tests. ZB does well in certain specific cases, such as pulling out small features that are poorly resolved by the point cloud. However, it is very sensitive to noise and doesn’t adapt well to varying point densities, and concave edges cause it major problems. ZB does have several parameters that can be adjusted to tune its results, so with some tweaking it may be possible to achieve better results than what I’ve shown here. In a production environment, though, we simply do not have time to test numerous parameter combinations for each unique region of a scene to find the one that provides the best balance between high detail, low noise, and good surface continuity; the ability to achieve good results with little user intervention is highly desirable. Given these observations, SPSR still fits our requirements better than ZB or anything else I’ve tested to date.
