Finally got my hands on Sequoia to try. At the moment I’m putting a proposal together for going to a location and trying to capture it fully in 3D.
So we’d shoot a series of 360-degree domes, loads of photos for photogrammetry, and hopefully we’ll get a LIDAR scanner as well. Probably a FARO, unless anyone can suggest a better bit of kit?
Is there a workflow yet for taking photos aligned in Agisoft PhotoScan into Sequoia? If you had the point cloud data from Agisoft, could you do a rough registration to align the two datasets, then use those photos for image projection onto the LIDAR-generated mesh in Sequoia?
I haven’t tried to do this in Sequoia but have done similar things in other software. Basically we used the lidar data to define scene geometry and control points. The control points were then used in Agisoft to constrain the photogrammetric reconstruction and place it in the same coordinate system as the lidar data. The reconstructed camera parameters were then exported from Agisoft and used to project the photos in another application.
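For the registration step described above, the usual approach with matched control points is a rigid (Kabsch) alignment. A minimal sketch, assuming you already have the same control points measured in both the photogrammetry and the lidar coordinate systems as NumPy arrays (the point values here are made up for illustration):

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src points onto dst
    (Kabsch algorithm). src, dst: (N, 3) arrays of matched control points."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # Covariance of the centred point sets
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection sneaking into the SVD solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical control points in the lidar coordinate system
lidar_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)

# The same points as the photogrammetry solve might see them:
# rotated 90 degrees about Z and shifted
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
photo_pts = lidar_pts @ Rz.T + np.array([5.0, 2.0, 1.0])

R, t = rigid_align(photo_pts, lidar_pts)
aligned = photo_pts @ R.T + t
print(np.allclose(aligned, lidar_pts))  # True
```

The recovered R and t can then be applied to the whole photogrammetry reconstruction (cameras included) to drop it into the lidar coordinate system.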
Being able to script both Agisoft (Python) and the application we were doing the projection in was pretty important, since you’re dealing with so many photos. The last couple of Sequoia builds introduced the ability to write tools in QML, so it might be possible to do this now. I haven’t dug into how much functionality is exposed via QML, and I don’t think there is any documentation yet, so I can’t help on details, but I’m sure one of the Thinkbox guys will be along shortly to comment.
Regarding scripting, we are in the process of refactoring Sequoia to expose as many of its features as possible to scripting.
However, performing registration is not planned as a feature of Sequoia v1.0, and scripting support will initially be limited to what is already possible through the UI. Of course, if some transform data were written to a text or JSON file by another application, it might be possible to read it and set transforms inside Sequoia to align objects, place cameras and projections, etc.
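To illustrate that kind of hand-off, here is a sketch of one application writing a transform to JSON and another reading it back. The file layout (object name plus a row-major 4x4 matrix) is entirely hypothetical, not any actual Sequoia format:

```python
import json

# Hypothetical exchange format: a named object with a row-major 4x4 transform.
transform = {
    "object": "lidar_mesh",
    "matrix": [
        [1.0, 0.0, 0.0, 12.5],   # identity rotation + translation column
        [0.0, 1.0, 0.0, -3.0],
        [0.0, 0.0, 1.0, 0.75],
        [0.0, 0.0, 0.0, 1.0],
    ],
}

# Writer side (e.g. an Agisoft Python script) dumps the transforms...
with open("transforms.json", "w") as f:
    json.dump([transform], f, indent=2)

# ...reader side (e.g. a future Sequoia-side script) loads and re-applies them.
with open("transforms.json") as f:
    loaded = json.load(f)

translation = [row[3] for row in loaded[0]["matrix"][:3]]
print(loaded[0]["object"], translation)  # lidar_mesh [12.5, -3.0, 0.75]
```

JSON keeps the format readable and trivially parseable on both sides, which is why it tends to get used for this kind of glue between packages.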
But let’s first wait and see how much will end up being exposed in v1.0…