Looking ahead

Here are a few things we still have planned:

Projection
• Improved alignment: we are working on automated and semi-automated techniques for aligning projection imagery.
• Projection texture baking: we are also exploring ways to bake the projection texture information onto the points and onto the mesh for later reuse in other applications.

Batch Processing
• We have always intended to support batch processing both from the UI and in a remote compute node scenario (e.g. command line processing on a farm). This is still a big to-do item for us. Initially this will likely mean performing format conversions of points and meshes, and performing batch meshing operations, for example partitioning the point cloud and meshing blocks of the data (a rough sketch of the block idea follows below).
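To make that block idea concrete, here is a minimal Python sketch of splitting a point cloud into axis-aligned cubic blocks so each block could be meshed as its own batch job. The array layout, block size, and function names are illustrative assumptions, not Sequoia's actual API or file handling.

    # Hypothetical sketch: group an (N, 3) array of XYZ points into cubic
    # blocks so each block can be meshed independently (e.g. as a farm job).
    import numpy as np

    def partition_into_blocks(points, block_size):
        """Return {block cell -> (M, 3) points} for axis-aligned cubic cells."""
        cells = np.floor(points / block_size).astype(np.int64)
        blocks = {}
        for idx, cell in enumerate(map(tuple, cells)):
            blocks.setdefault(cell, []).append(idx)
        return {cell: points[idxs] for cell, idxs in blocks.items()}

    if __name__ == "__main__":
        pts = np.random.rand(100000, 3) * 10.0    # stand-in for a loaded scan
        for cell, block in partition_into_blocks(pts, block_size=2.0).items():
            print(cell, block.shape)              # each block -> one meshing job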

Improved viewport display
• Our new .sprt format is under active development, as is the loading system that is built on it. You can expect us to improve the performance of Level of Detail loading and point display performance in general over the coming releases. We will have a new post talking about our internal point formats in more detail as well.

Document Explorer
• The multi-document interface is still a bit young right now, and with each improvement it gets closer to where we want it to be. For example, a close document option will be showing up soon!
• To aid in navigation and document management, a Document Explorer panel is planned. The features of this panel will be fleshed out over the coming weeks, and we hope it will cement the multi-document nature of Sequoia and show how that can improve your workflow.

We are expecting other improvements and features as well, but those will largely be dictated by you so please chime in with your thoughts as you use Sequoia.

Ian, I think Projection can reduce mesh holes.
The Projection tool can patch over occlusions in the point cloud and give a continuous mesh. Faro Scene has a feature called a “virtual scan” - this has you cut and paste an image file from a camera (e.g. a jpeg) and fit it to an area with a laser point cloud occlusion, like a window.
See this Virtual Scan tutorial by Eugene - youtube.com/watch?v=vlrJME7 … l3pR_ajmiw
The virtual scan only works on planes in Faro Scene.
If the projection texture is baked to the point cloud, it can wrap over the mesh holes and provide the right 3D shape for the hole filling. So it could create a virtual scan for planar and curved shapes.
This would be nice to have in Sequoia as meshing software - it could deliver speed and a minimal number of mesh holes.
I’m not sure if the baking will happen using xyz values or by matching RGB values - but it has the potential to create a new point cloud file format. :smiley:
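As a rough illustration of what baking projected colors onto points could involve, here is a Python sketch that projects each XYZ point through an assumed pinhole camera and samples the RGB pixel it lands on. The camera model, names, and parameters are all assumptions for illustration, not how Sequoia's projection baking is implemented.

    # Hypothetical sketch of baking image colors onto points via a pinhole
    # camera projection.
    import numpy as np

    def bake_point_colors(points, image, K, R, t):
        """points: (N, 3) world XYZ; image: (H, W, 3) uint8; K, R, t: assumed pinhole camera."""
        cam = points @ R.T + t                       # world -> camera space
        in_front = cam[:, 2] > 0                     # ignore points behind the camera
        uvw = cam @ K.T                              # camera -> homogeneous pixel coords
        z = np.where(uvw[:, 2:3] != 0.0, uvw[:, 2:3], 1.0)
        uv = uvw[:, :2] / z
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = image.shape[:2]
        hit = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        colors = np.zeros((len(points), 3), dtype=np.uint8)
        colors[hit] = image[v[hit], u[hit]]          # sample the projected pixel
        return colors, hit                           # per-point RGB plus coverage mask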

David, thanks for the link!

That is an interesting use of projection placement. We are intending to build out a primitive placement feature for quickly placing various primitives in the scene, to help with hole filling in particular. Would you want to then generate synthetic points from the primitives for later re-use? (A plane in this case.) Or is it better to simply use the primitives to create new mesh data stitched with the current mesh? Or both?

Having generated primitives opens up doors like the projection placement does.
If it’s possible to build the mesh from a series of layers, i.e. 1) point cloud 2) generated primitives 3) projection (virtual scan), it should at least be possible for a user to outline holes in a Sequoia mesh and get 2-sided hole filling.
The absence of a hole filling tool in Frost led to the need for sculpting work in tools like Meshmixer.
If the team is able to generate hole-free meshes by accurate projection placement (maybe color continuity can be a test for a hole, or shadow algorithms), then holes could be filled by Sequoia pre-emptively. That would be a huge time saver and would make meshing pretty automated.
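One simple, well-known way a tool could even find where the holes are is to look for boundary edges, i.e. edges used by exactly one triangle. A minimal Python sketch of that idea, purely generic and not tied to how Sequoia or Frost work:

    # Find candidate hole boundaries in a triangle mesh: edges that belong
    # to exactly one triangle lie on a hole or an open border.
    from collections import Counter

    def boundary_edges(faces):
        """faces: iterable of (i, j, k) vertex-index triangles."""
        counts = Counter()
        for i, j, k in faces:
            for a, b in ((i, j), (j, k), (k, i)):
                counts[tuple(sorted((a, b)))] += 1
        return [edge for edge, n in counts.items() if n == 1]

    # Example: a quad made of two triangles has 4 open (boundary) edges.
    print(boundary_edges([(0, 1, 2), (0, 2, 3)]))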

I’ll also comment on the use of primitives in the Revit post. One of the strengths of Zhu Bridson is its feature recognition - it could give a source for the many primitives that are in a scene. Even if it can only deliver architectural primitives, the usefulness is huge.

Indeed. Hole filling is high on the to-do list, but I am not sure at this stage when it will show up. The potential for having sculpting brushes and similar things to provide synthetic points sounds great to me, but primitive placement and similar tools are much more likely in the near term.

Until Sequoia can do the hole filling on its own, Autodesk Memento has algorithms to fill holes and delete particles in an automated manner. It makes a good post-processor to Sequoia’s output.

I want to put the texture-holding capability of VRML out there to be considered as a way to add resolution to 3D scenes. It definitely moves the Faro scanner into capturing very high definition virtual worlds. The VRML file spec might also help in extracting geometries out of the point cloud.
This thread has fascinating examples:
laserscanningforum.com/forum … =49&t=8209

It’s an Easter egg within Faro Scene since it requires multiple outside steps to create the VRML environment, but at the end of the day Scene does a good job of rendering it. Possibly some of the mesh formats in Sequoia will do the same thing, and it seems the image information in the spherical pano is worth pulling into the mesh as texture.

I like this workflow - it creates a lot of potential deliverables.

To take this thought a bit further, are you interested in generating environment domes, cubes, etc.? Or in creating VRML versions of the point cloud mesh?

Both are certainly possible. Right now we haven’t spent much time thinking about the environment surrounding the point cloud, which could certainly have been captured in imagery. Adding this kind of functionality would be an interesting extension of the image projection features we are working on right now.

We can certainly look at adding VRML support as well. It is likely not a good format for the generated triangle mesh, but I can see it being useful to have in the toolbox, in particular when texturing primitives (planes, cubes, cylinders, etc.). If there is a lot of interest in this format, we will add it!
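For anyone unfamiliar with the format, part of VRML's appeal for textured primitives is that it is plain text. Here is a small Python sketch that writes a VRML97 file containing a single image-textured box; the file and texture names are placeholders, and this is not tied to any Sequoia export path.

    # Write a minimal VRML97 (.wrl) file with one image-textured box.
    # Purely illustrative; names and paths are placeholders.
    def write_textured_box(path, texture_url, size=(2.0, 2.0, 2.0)):
        sx, sy, sz = size
        lines = [
            "#VRML V2.0 utf8",                       # required VRML97 header
            "Shape {",
            "  appearance Appearance {",
            '    texture ImageTexture { url "%s" }' % texture_url,
            "  }",
            "  geometry Box { size %g %g %g }" % (sx, sy, sz),
            "}",
        ]
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")

    write_textured_box("textured_box.wrl", "pano.jpg")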

One of the interesting new features of Scene 5.4 is the ability to create a view of the environment from any scan point.
With the Thinkbox market being mainly FX, the improved photo textures in VRML are a way of creating better looking videos. My own use of the Faro Video App here: youtube.com/channel/UCgu47X … lTxtnd_C-w all comes from interpolations from maybe 6 viewpoints.
The app renders to 1080p HD, but the photo texturing would be even better in VRML. The leap would be the ability to construct a sphere AND boxes (depending on the geometry projected onto) from what is captured in the scan/camera cycles and registered in Scene. Scene renders above 4K now. :smiley: To answer your question, Ian, I’d use matte painting practices as a guide and project outdoors to a sphere - which feels more immersive.
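For the sphere idea, the core of projecting an equirectangular pano onto an environment sphere is just the pixel-to-direction mapping. A generic Python sketch of that math, not tied to Scene's or Sequoia's internals:

    # Map a pixel in an equirectangular (spherical) pano to a unit direction
    # on the environment sphere. Generic math for illustration only.
    import math

    def pano_pixel_to_direction(u, v, width, height):
        """u, v: pixel coords in an equirectangular pano -> unit XYZ direction."""
        lon = (u / width) * 2.0 * math.pi - math.pi      # -pi .. pi around the sphere
        lat = math.pi / 2.0 - (v / height) * math.pi     # +pi/2 (top) .. -pi/2 (bottom)
        x = math.cos(lat) * math.cos(lon)
        y = math.sin(lat)
        z = math.cos(lat) * math.sin(lon)
        return (x, y, z)

    print(pano_pixel_to_direction(0, 0, 4096, 2048))     # top-left pixel points straight up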

Thanks for the feedback, David! That would be an interesting new addition to Sequoia. While we started with point meshing in mind, there seems to be growing interest in environment creation. For now, we are getting closer to getting the basics in place for an initial release. The advanced workflows are on the horizon. There seems to be a lot of interest and potential in the projection system and what that might bring to people’s 3D workflows.