I’m new to the beta but excited by what I’ve seen so far. Unfortunately, we need to have some good segmentation tools, which I have dreamed up here…
So that’s a piece that was accidentally selected because it was hard to see, so we just drag it to the point cloud where it belongs
Oops, I meant “selected”
Thank you VERY much for the great feedback. We are looking into this, as it would fit nicely with some other development we have going on in the Sequoia core right now…
The idea of showing layer thumbnails like that is interesting, though it seems like it might get a little unwieldy with lots of segmented objects. How many different segmented objects do you think you would usually end up with?
I’m also wondering what the best way to update the thumbnail would be after changing the data with the drag and drop in step 6.
The thumbnails could be optional (you could toggle them on/off as needed). If the user gives them meaningful names, they could be used just like Layers in AutoCAD and elsewhere without looking at images. Looking at the viewport is often enough. If we do use thumbnails, we might highlight the segment in one color and still draw the rest of the points semi-transparent in another (e.g. some shade of gray) for a clearer visual reference.
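To make the highlight idea concrete, here is a minimal sketch of building per-point display colors from a segment mask - the names and colors are purely illustrative, not anything from Sequoia’s actual internals:

```python
import numpy as np

def build_display_colors(num_points, segment_mask,
                         highlight=(1.0, 0.55, 0.0, 1.0),   # opaque orange for the segment
                         context=(0.5, 0.5, 0.5, 0.25)):    # faint gray for everything else
    colors = np.empty((num_points, 4), dtype=np.float32)
    colors[:] = context                  # all points start as the semi-transparent context
    colors[segment_mask] = highlight     # points in the segment get the highlight color
    return colors

# e.g. 1M points, with the first 10k belonging to the segment being shown
mask = np.zeros(1_000_000, dtype=bool)
mask[:10_000] = True
rgba = build_display_colors(mask.size, mask)
```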
Updating the thumbnails would be similar to what we do with Bookmarks - they can be updated in a background thread by rerendering the relevant points, if necessary at LOD 0 for speed. At that size, a few hundred thousand points are typically enough. We don’t store Bookmarks in the SQ file, we refresh them dynamically on load. I suspect the same could be done with the SPRT file - if you load it and it contains segmentation, the thumbnails (if enabled) could be rebuilt on the fly, with a manual option to rebuild them at any time. The main problem would be picking a good viewpoint from which to render a meaningful representation. So they might turn into a form of in-SPRT Bookmarks with stored View TM… Kinda like the Scanner Positions we already store in the SPRT.
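Very roughly, the background refresh could look something like the sketch below - `load_points_at_lod` and `render_points` are hypothetical placeholders standing in for whatever the core actually exposes:

```python
from concurrent.futures import ThreadPoolExecutor

THUMB_SIZE = (128, 128)
_executor = ThreadPoolExecutor(max_workers=1)      # keep the UI thread free

def refresh_thumbnail(layer):
    # LOD 0 = coarsest level; a few hundred thousand points is plenty at this size.
    # load_points_at_lod() and render_points() are hypothetical placeholders.
    points = load_points_at_lod(layer.source_file, lod=0, max_points=300_000)
    image = render_points(points, view_tm=layer.view_tm, size=THUMB_SIZE)
    layer.thumbnail = image                         # swap in when the render completes

def queue_thumbnail_refresh(layer):
    return _executor.submit(refresh_thumbnail, layer)
```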
That being said, the thumbnails are just a UI-level design detail - we need to develop the core functionality first.
Right, in the case of step 6, you could just remember the camera position and rerender with the new data, since the added data was still within the original thumbnail’s view. However, what if the Main uncleaned scan was added back into the Untitled Cloud? Most of the new data would be outside the field of view of the original thumbnail. I suppose you could just use the view from when the layer was created and then leave it up to the user to define a new thumbnail view later if they care about it.
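One way to decide that automatically would be to check what fraction of the newly added points still falls inside the stored thumbnail view; if most of it lands outside, keep the layer-creation view and let the user redefine it later. A small sketch, with an illustrative 4x4 view-projection matrix and no claim about how Sequoia actually stores its View TM:

```python
import numpy as np

def fraction_in_view(points_xyz, view_proj_tm):
    n = points_xyz.shape[0]
    clip = np.hstack([points_xyz, np.ones((n, 1))]) @ view_proj_tm.T
    in_front = clip[:, 3] > 0                        # ignore points behind the camera
    ndc = np.zeros((n, 3))
    ndc[in_front] = clip[in_front, :3] / clip[in_front, 3:4]
    inside = in_front & np.all(np.abs(ndc) <= 1.0, axis=1)
    return inside.mean()

# e.g. if fraction_in_view(new_points, layer_view_proj) < 0.5, keep the old view.
```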
Very true.
Something I’m curious about when working with manual selections like this on large data sets, say 1 billion points, is how quickly after the loop is closed the selection can be completed, the display status of the points updated, and control returned to the user. For the software I’m using now the answer is several seconds even for much smaller data sets, but I don’t know if that’s just their implementation or if it’s a more general limitation of doing that kind of operation on a large number of points. Assuming it can’t be done instantly, maybe it would make sense to have a selection buffer similar to Mari’s paint buffer, so that at least you could define multiple selection areas and refine them from the same view before needing to wait for the actual selection to complete.
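For reference, the core work once the loop closes is basically: project every point to screen space, run a point-in-polygon test against the closed loop, and update the per-point selection state. A rough, non-optimized sketch of that operation (plain numpy plus matplotlib’s vectorized in-polygon test, certainly not anyone’s production code):

```python
import numpy as np
from matplotlib.path import Path

def apply_lasso(points_xyz, lasso_xy, view_proj_tm, selection_mask):
    n = points_xyz.shape[0]
    clip = np.hstack([points_xyz, np.ones((n, 1))]) @ view_proj_tm.T
    screen_xy = clip[:, :2] / clip[:, 3:4]               # x/y after perspective divide
    inside = Path(lasso_xy).contains_points(screen_xy)   # vectorized point-in-polygon
    selection_mask |= inside                              # add to the current selection
    return selection_mask
```

The test itself scales roughly with the number of points times the number of polygon vertices, so buffering several lassos mainly saves waiting repeatedly rather than reducing the total work.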
For those not familiar with Mari, it is a mesh texture painting application. The way it works is that you effectively paint onto a clear piece of glass in front of the camera. Then, as a separate step, your paint strokes get projected onto the mesh from the camera view. By default, the projection happens automatically when you start moving the camera, so it appears that you are painting directly onto the mesh. However, you can also set it up so that you need to manually apply the projection. To see how it works, you can check out this video starting at 12:25 and running for about 3 minutes: youtu.be/amdM173mg7U?t=12m25s
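Translated to selections, a Mari-style buffer might accumulate lassos in screen space for the current view and only touch the points (the slow part) when the buffer is flushed, e.g. just before the camera moves. A hypothetical sketch of that design:

```python
import numpy as np
from matplotlib.path import Path

class SelectionBuffer:
    def __init__(self):
        self._strokes = []                       # list of (polygon, subtract flag)

    def add_lasso(self, lasso_xy, subtract=False):
        # Cheap: just remember the screen-space polygon and its mode.
        self._strokes.append((Path(lasso_xy), subtract))

    def flush(self, screen_xy, selection_mask):
        # Expensive: apply all buffered lassos in one pass over the projected points.
        for path, subtract in self._strokes:
            inside = path.contains_points(screen_xy)
            if subtract:
                selection_mask &= ~inside
            else:
                selection_mask |= inside
        self._strokes.clear()
        return selection_mask
```

Flushing automatically before a camera move would keep the “painting on glass” feel while the heavy pass runs once instead of once per lasso.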
The number of layers you’ll have depends on the number of features in the scene. One could collect all pipes of the same type into a single layer or treat them separately. For our attraction BIM models, we want layers containing projection surfaces (which could be a rockwork wall), projector devices, vehicle tracks, infrastructure elements that might be ignored as we build the VFX, and maybe individual animatronic characters or automated weenies - so not down to the “bracket and bolt” level of subdivision, although someone else might easily want to divide to that level, a refinery scan for example. I wouldn’t worry about how many layers result, because some BIM models are going to actually have that many items to deal with, and when you are spending a quarter-billion dollars on a 25-year-lifespan physical environment, you have to pay your dues and deal with everything.
For a movie set, one might need to simply divide the object enough to create UV-able segments of a model that someone down the line will need to process and animate. For Avengers, for example, we divided scans into trees, floors, walls, columns, and vehicles (which might be further divided into tires, body, etc.). Each of these “Features” ends up being reflected in the bid cost to do the scan, and each piece must be remodeled for VFX use - the lidar data is rarely used as-is.
The thumbnail layer views can be generated from several viewpoints as a background task, and one could simply right-click on the thumbnail to switch to the viewpoint that means something to the user, e.g. an iso view or plan view, whatever works.
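As a sketch of what those background viewpoints might be, a handful of canonical directions (iso, plan, front) aimed at the layer’s bounding-box center would each give a view matrix for the thumbnail render - the naming and math here are illustrative only:

```python
import numpy as np

CANONICAL_DIRS = {
    "iso":   np.array([1.0, 1.0, 1.0]),
    "plan":  np.array([0.0, 0.0, 1.0]),
    "front": np.array([0.0, 1.0, 0.0]),
}

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    f = target - eye
    f = f / np.linalg.norm(f)                            # forward
    r = np.cross(f, up)
    if np.linalg.norm(r) < 1e-6:                         # looking straight along 'up'
        r = np.cross(f, np.array([0.0, 1.0, 0.0]))
    r = r / np.linalg.norm(r)                            # right
    u = np.cross(r, f)                                   # true up
    tm = np.eye(4)
    tm[0, :3], tm[1, :3], tm[2, :3] = r, u, -f
    tm[:3, 3] = tm[:3, :3] @ (-eye)                      # translation of the view matrix
    return tm

def thumbnail_views(bbox_min, bbox_max):
    center = (bbox_min + bbox_max) / 2.0
    radius = np.linalg.norm(bbox_max - bbox_min)         # back the camera off far enough
    return {name: look_at(center + d / np.linalg.norm(d) * radius * 1.5, center)
            for name, d in CANONICAL_DIRS.items()}
```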