Since the release of Beta 16 and its introduction of the operators for converting point clouds to single-sided meshes that conform to the source cloud, we’ve been testing that functionality to evaluate whether it can replace our current mesh generation tools. This is very much a test of the tools as they exist today, so I have no doubt the results will change as time goes on, and I look forward to watching them improve. Since this will be long I’m going to break it into multiple posts. This one covers background info about our use cases and needs; the next will cover test results on a synthetic data set, followed by examples from real scan data.
Part 1: Background
Although we’re using meshes generated from lidar point clouds for a number of applications, recording props and sets for VFX has the most demanding requirements, so most of this evaluation is written in that context.
Our data collection is primarily done with tripod lidar scanners (mostly Faros). Data collection occurs under any number of conditions and we often don’t have much control over them. For example:
- We may need to scan while other people are working in the area.
- Target objects may be made of materials that don’t scan well, and we may not be able to treat them to improve data quality.
- We may not have access to all areas of a location.
- The time available to collect data may be limited.

As a result of these factors the point cloud data has the following characteristics. I would argue all lidar data shares these characteristics, but the above factors tend to exacerbate them.
- The data is noisy and much of that noise is high frequency.
- The data contains many outliers.
- Point density has very high spatial variability.
To a certain extent these things can be mitigated by processing prior to meshing, but they can’t be completely eliminated, so any useful meshing algorithm must be able to cope with them. Most of the things we scan are composed of continuous surfaces, so an algorithm’s ability to accurately reconstruct those matters most. Thin or discontinuous features like cables and vegetation are special cases, so lower performance there is acceptable. Thin continuous surfaces, e.g. sheet metal scanned on both sides, are also a special case. Ideally the software would be smart enough to choose the best algorithm for point cloud regions with certain characteristics (we need good point cloud classification tools). Finally, compute time is cheaper than human time, so I will always choose long-running automated solutions over manual ones that require excessive human babysitting.
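For anyone curious what I mean by "processing prior to meshing", here is a minimal sketch of the kind of cleanup we run before handing a cloud to any mesher. It uses Open3D rather than the tool under test, the file names are placeholders, and the parameter values are illustrative only; real values depend on the scanner, range, and scene.

```python
# Minimal pre-meshing cleanup sketch using Open3D (not the tool being
# evaluated). File names and parameter values are illustrative only.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan_registered.ply")  # hypothetical input

# Statistical outlier removal: drop points whose mean distance to their
# neighbors is far from the local average. Helps with stray returns but
# does not remove dense, structured noise on surfaces.
pcd_clean, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=24,
                                                     std_ratio=2.0)

# Optional: voxel downsample to even out the extreme density variation
# between near-scanner and far-scanner regions. Voxel size is in the same
# units as the cloud (here assumed to be meters).
pcd_even = pcd_clean.voxel_down_sample(voxel_size=0.005)

o3d.io.write_point_cloud("scan_cleaned.ply", pcd_even)
```

Even with a pass like this, plenty of high-frequency noise and density variation survives, which is why the meshing step still has to tolerate it.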
With those considerations in mind, my ideal meshing algorithm would fulfill the following requirements.
1. There must not be systematic deviations between the mesh and the point cloud (a rough way to check this is sketched after this list).
2. Continuous surfaces are reconstructed completely, without holes, overlapping faces, or bad topology (yes, I know "bad" is relative).
3. Point density variations due to collection procedures should have a minimal effect on meshes.
4. Mesh resolution is limited by the point cloud resolution, not by parameters of the meshing algorithm. See number 3.
5. Few user controls. I would like to be able to use the same settings for almost everything we scan, and have confidence that when I come back in the morning after sending my jobs to the farm the results will be usable. See number 4.
6. Resilient to small misalignments between overlapping scans.
7. Resilient to high-frequency noise on surfaces.
8. Resilient to outliers.
9. Plausible reconstruction of "special" features that can’t be sampled adequately by the scanner.
10. Good compute performance and scalability.
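To make requirement 1 concrete, here is a sketch of how one might sanity-check a result mesh for systematic deviation from its source cloud, again using Open3D with placeholder file names (this is just how I would measure it, not anything the tool under test does): find the closest point on the mesh for every scan point, sign the distance by the local face normal, and look at the mean (bias) versus the spread (noise).

```python
# Sketch of a point-to-mesh deviation check with Open3D. File names are
# placeholders; sign convention depends on triangle winding, but only its
# consistency matters for detecting an overall bias.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan_cleaned.ply")
mesh = o3d.io.read_triangle_mesh("scan_mesh.ply")

# Build a raycasting scene so we can query closest points on the mesh.
scene = o3d.t.geometry.RaycastingScene()
scene.add_triangles(o3d.t.geometry.TriangleMesh.from_legacy(mesh))

pts = np.asarray(pcd.points, dtype=np.float32)
closest = scene.compute_closest_points(o3d.core.Tensor(pts))
closest_pts = closest["points"].numpy()

# Sign each distance by the normal of the triangle the closest point lies
# on, so a nonzero mean reveals a systematic offset rather than just noise.
tris = np.asarray(mesh.triangles)[closest["primitive_ids"].numpy()]
verts = np.asarray(mesh.vertices)
face_normals = np.cross(verts[tris[:, 1]] - verts[tris[:, 0]],
                        verts[tris[:, 2]] - verts[tris[:, 0]])
face_normals /= np.linalg.norm(face_normals, axis=1, keepdims=True)

offsets = np.einsum("ij,ij->i", pts - closest_pts, face_normals)

print("mean offset (bias):   %.4f" % offsets.mean())
print("std of offsets:       %.4f" % offsets.std())
print("95th pct |offset|:    %.4f" % np.percentile(np.abs(offsets), 95))
```

A mean offset near zero with a spread comparable to the scanner noise is what I'm hoping to see; a mean that sits well away from zero is the kind of systematic deviation that rules an algorithm out for us.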