So I spent the weekend working with Faro Scene and a collection of 7 data scans. After getting the scans aligned and optimized, my options for exporting over to Sequoia are either the .E57 or .XYZ format. I don’t see a way to transfer point normals, as .POD (the Pointools format) is the only format with a normals option, and it’s currently unsupported in Sequoia.
So is there a way to use the scanner position info from the Faro Scene file? I have those locations down to the millimeter, so if I can re-generate normals from them I should be good.
(Using Faro Scene 5.5.0.44203 and scanner is the Faro Focus 3d 330X model.)
I would expect E57 to be able to transfer the normals, because there is an extension for surface normal data published with the standard (libe57.org/E57_EXT_surface_normals.txt). E57 should also be able to transfer the scanner position info. From what you’re writing, it sounds like Faro hasn’t implemented normals support for E57?
The XYZ format would not retain the ScannerPosition info since it does not contain any metadata.
E57 supports metadata, including scanner positions. We support multiple scanner positions in a file, and tag each point with an index to its corresponding scanner position. So in theory you should be able to export an E57 out of Faro Scene, load in SQ, cache as SPRT, and you should see all the Scanner Positions in the UI. You would then add the Operator to generate Normals from Scanner Position, and resave as a new SPRT file to bake that info. You should be able to visualize the “Normals” as color in the Viewport to see if they make sense.
We are also looking into adding an option to simply place a helper at the apparent Scanner Position and pick it in the Operator in place of actual metadata, but that would only work with a single Scanner file, and is not in the current build yet.
Remember that the result is a “line of sight” vector as opposed to a true surface normal, but in most cases it should be good enough to remove the back side of the mesh.
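To make that concrete, here is roughly what the vector field looks like in code - a minimal numpy sketch of the idea, not the actual Sequoia implementation (the function and argument names are just for illustration):

```python
import numpy as np

def line_of_sight_normals(points, scanner_positions, scanner_index):
    """Build per-point 'line of sight' vectors pointing from each point back
    toward the scanner that captured it. These are NOT true surface normals,
    but are usually good enough for backface removal.

    points            -- (N, 3) array of point positions
    scanner_positions -- (S, 3) array of scanner positions
    scanner_index     -- (N,) array mapping each point to its scanner
    """
    los = scanner_positions[scanner_index] - points        # point -> scanner
    lengths = np.linalg.norm(los, axis=1, keepdims=True)
    lengths[lengths == 0.0] = 1.0                          # avoid divide-by-zero
    return los / lengths                                   # unit vectors
```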
Do you mean estimated point normals? As far as I know SCENE doesn’t ever estimate point normals, so it can’t export them. You’ll need to generate them in Sequoia as described above. I’d love to be proven wrong, but I’ve never been able to get normals out of SCENE, and their SDK documentation (developer.faro.com/) doesn’t indicate that the functionality is available anywhere.
Note I’m not sure that “Normal from Scanner” is actually working in the latest Sequoia build. I’m able to load e57 files (including from SCENE) and the Scanner Data is populated correctly in the Object Attributes. However, adding the Normal operator to the Point Loader doesn’t seem to have any effect, even after rebuilding the SPRT. The viewport rendering doesn’t change and Use Lighting is still disabled because of a Missing Normal Channel.
Did you try exporting the Point Loader to a new SPRT, then loading that? The Viewport Display rollout seems to have a problem seeing the newly generated channel when the system is live, but it should see it when it is coming from the resaved SPRT.
I just tested with 20 PTGs combined into a single SPRT - it contained 20 Scanner Positions and a ScannerIndex channel I could visualize (with a Scale of 0.05). Then I added the Normals From Scanner Position operator, resaved to a new SPRT, loaded in a new Point Loader and the “normals” were there.
However, I could successfully mesh both the “live” Point Loader with the operator, and the Point Loader with the resaved SPRT, and both culled the backfaces the same.
Exporting a new SPRT does save normals, but they seem to be buggy. With the new SPRT loaded, changing its Viewport color channel to Normal doesn’t have any effect. If I export the SPRT to a CSV file and import that, I can set its Viewport color channel, but the normals don’t seem to make much sense.
They are not buggy, they are just not actual surface normals. They are “scanner line of sight vectors”. Basically we reuse the Normal channel for storage (because currently the Mesher looks for the Normals channel by default to do the culling and there is no way to switch to another channel), but we put the vector connecting the point’s Position to the ScannerPosition in it instead. So this is a radial vector field pointing at the point in space the laser beam originated from. If a face of the mesh is pointing in the general direction of the scanner, it is considered “front”, and any face pointing away can be removed.
We found that for single-sided mesh creation, this field is plausible and much faster to generate. The real Normals Generation operator produces surface normals, but currently has no way of knowing the correct in and out rules. So we are considering using the ScannerPosition approach as a hint for the other Normal generation method in the future.
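In code terms, the culling rule boils down to a sign test between each face’s geometric normal and the stored line-of-sight vector - again just a numpy sketch of the idea, not the Mesher’s actual code:

```python
import numpy as np

def cull_backfaces(vertices, faces, los_normals):
    """Keep only faces whose geometric normal points in the general direction
    of the scanner (positive dot product with the stored line-of-sight vector).

    vertices    -- (V, 3) mesh vertex positions
    faces       -- (F, 3) vertex indices per triangle
    los_normals -- (V, 3) per-vertex line-of-sight vectors (see earlier sketch)
    """
    v0, v1, v2 = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    face_normals = np.cross(v1 - v0, v2 - v0)     # unnormalized is fine for a sign test
    # Average the per-vertex line-of-sight vectors over each face.
    face_los = (los_normals[faces[:, 0]] +
                los_normals[faces[:, 1]] +
                los_normals[faces[:, 2]]) / 3.0
    facing_scanner = np.einsum('ij,ij->i', face_normals, face_los) > 0.0
    return faces[facing_scanner]
```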
This is a known limitation - switching to ANY channel but Color requires a push of the Update Point Loader button, otherwise the display color channel does not get reloaded in the viewport system. It is logged, but I am not sure it will be fixed soon.
Ok, this naming is a little confusing, but I guess it’s just a temporary workaround? Since they aren’t surface normals, it also means that lighting of points doesn’t work properly with them.
Beyond how Sequoia is currently using them, I can see good applications for both surface normals and line-of-sight vectors, so I assume the ultimate plan is to have both stored in appropriately named attributes.
This is the Particle Normal Generator that works for organized and unorganized point clouds, right? I did notice that operator is quite slow, especially if you set the radius to a large value. For organized point clouds, e.g. points on a spherical grid like we’re talking about in this thread, would it be faster to do a grid search where you fit a plane to the points in something like a 3x3 grid neighborhood rather than using a spatial search radius?
You would want to have angle and/or edge-length cutoffs to ignore planes that are highly oblique to the point’s line of sight, but it should be easier to find sane defaults for those values than for the search radius. From experience, 85 degrees and 0.1 m work pretty well for almost all of the stuff we scan, regardless of what it is, which scanner we use, or which settings we choose. Choosing a good search radius, on the other hand, depends on each point cloud’s density, something that itself shows dramatic spatial variability, so it is much tougher to get right.
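To make the suggestion concrete, here’s a rough numpy sketch of the kind of grid-neighborhood fit I mean - the parameter names and the way the cutoffs are applied are just my own illustration, not anything Sequoia exposes:

```python
import numpy as np

def grid_normals(grid_points, scanner_pos, max_angle_deg=85.0, max_edge=0.1):
    """Estimate normals for an organized (row/column) scan by fitting a plane
    to each point's 3x3 grid neighborhood, then rejecting fits that are nearly
    edge-on to the scanner or whose neighbors are farther apart than max_edge.

    grid_points -- (rows, cols, 3) array of positions on the scan grid
    scanner_pos -- (3,) scanner position, used to orient normals and test obliqueness
    """
    rows, cols, _ = grid_points.shape
    normals = np.zeros_like(grid_points)
    cos_cutoff = np.cos(np.radians(max_angle_deg))
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            nb = grid_points[r - 1:r + 2, c - 1:c + 2].reshape(-1, 3)
            if np.max(np.linalg.norm(nb - grid_points[r, c], axis=1)) > max_edge:
                continue                            # neighborhood too spread out
            centered = nb - nb.mean(axis=0)
            # The right-singular vector with the smallest singular value is the plane normal.
            _, _, vt = np.linalg.svd(centered)
            n = vt[-1]
            los = scanner_pos - grid_points[r, c]
            los /= np.linalg.norm(los)
            if np.dot(n, los) < 0.0:
                n = -n                              # orient toward the scanner
            if np.dot(n, los) < cos_cutoff:
                continue                            # too oblique to the line of sight
            normals[r, c] = n
    return normals
```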
Ah-ha, that sounds promising - is there some trick to exporting that Scanner Data information? Or is there some preparation needed in Scene beforehand? My Object Attributes remain blank no matter what I do.
I didn’t do anything special in Scene. I am however using 5.4. I’ll try to get on our 5.5 test machine later today and see if it works there. It’s certainly possible that Faro broke something in the e57 export with the point release.
You could also open your e57 in CloudCompare and dig through the File Structure to find the data3d/pose to confirm that its rotation and translation values have been set correctly by Scene.
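If you’d rather script that check than click through CloudCompare, something along these lines should also work; I’m assuming the third-party pye57 library and its get_header() / translation / rotation_matrix interface here, and the filename is only a placeholder for your own export:

```python
# Assumption: uses the third-party pye57 library (pip install pye57); check your
# installed version's docs for the exact header attributes.
import pye57

e57 = pye57.E57("scene_export.e57")     # placeholder filename
for i in range(e57.scan_count):
    header = e57.get_header(i)
    print(f"scan {i}: translation = {header.translation}")
    print(f"          rotation_matrix =\n{header.rotation_matrix}")
```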
Stumbling around in Scene last night, I finally managed to get a correct e57 file export out, so at least I know it can be done.
The file size was VERY much larger (800 MB vs. 350 MB), but the scanner camera was immediately indexed in the Sequoia UI, even before I saved out the cache. The cull by normals then worked.
Only problem is I’m not sure what triggered the correct export. I was trying lots of random things, and transferring files between machines took about an hour or so, so I’d lost track of what I actually did by the time it worked. It may have been something to do with “fixing” the scan before export. I’ll try again today. I don’t know that it’s a case of doing anything “special” in Scene; it’s more like you have to do everything in Scene in a set methodical way which puts the data in the right frame of mind.
I’ve got a support call in with Faro. My “support request has been received” (five days ago) so I’m on the edge of my seat with excitement to see what happens next.
Interesting, did you generate scan and/or project point clouds in Scene? I’m wondering if they were getting exported rather than the actual scan data. That might account for the different file sizes, since I believe those processes will remove some of the redundant points. We don’t really ever use or generate those clouds in Scene, which could explain why I’ve never run into that issue.
Good luck with Faro Support, I would be interested to hear what they say. As an aside, last time I needed help from Thinkbox I got a response to my email with a solution in 6 minutes. You guys rock!
Faro support still hasn’t gotten back to me. Kinda sad really.
So it looks like if you export a scan cluster in Scene, it WILL export the camera position as well. Unfortunately this exports the entire point cloud (ignoring any clipping I’ve applied), but at least I can now backface cull meshes.
Heh, and now I’m back to the fundamental problem - this blobby style of generating meshes really doesn’t meet our needs. It’s really the elephant in the room here if you’re trying to assess what “v1.0” looks like.
Use case:
I’m scanning a film set, and with the lidar points I’m getting very high precision placement for the entire volume. If I passed a blobby version over to Layout with +/- a couple of centimeters in accuracy, I’d get an immediate kickback. I have the exact position of everything on the set, so my blobby approximations should quite rightly be rejected.
We were just testing VRMesh, and while much of it is clunky as heck, its “Wizard for Point Cloud to Mesh” is pretty neat. Man, I could do a photogrammetry photoshoot and get a one-sided mesh with textures, so it’s daft my lidar can’t provide the same.
There are enough SIGGRAPH papers out there on techniques for meshing one-sided point clouds … surely one of them could be implemented in Sequoia?
Just checking, are you conforming the single sided surface to the point cloud after removing the backside faces?
The operator is called “Conform To Particles” (which should be called “Conform To Points” in upcoming builds), and should be added after the “Cull Faces…” operator. The resulting single-sided mesh should then pass through the surface defined by the point cloud, not at a distance from it as the blob mesh does.
We have looked into the “single side Poisson Surface Reconstruction” methods other applications are using, and those have their own set of gotchas. We have on our ToDo list a PSR method implementation for post v1.0, but it does not seem to be a magic bullet.
I am now - that’s a helluva improvement! Sequoia’s not happy trying to simplify the mesh though; it gets stuck halfway. Are there any gotchas with the order of operators?
The optimization is built into the Mesher, but it is a node in the node graph. By default, it gets created when the Mesher is made, and will be applied before the Culling / Conforming operations. So you will be creating a blob mesh, optimizing it (which implicitly resamples relevant channels from the point cloud), then culling it and conforming it. However, you could reorder the graph by moving the Mesh Reduction operator to the top of the Operators list, and it should then be applied after the culling is performed. I have not tried that though.
If you can provide an example dataset / scene to our developers where the optimization and culling don’t work together, it would be very helpful for our debugging efforts!
Ah, I’ve pinned down the problem - I’m finding that mesh reduction is breaking when “Cull Faces by Normal Similarity” is turned on.
The internal progress of the simplifying_mesher step then hangs.