Image Projection from Scanner Panorama

While the image projection tool appears to be pretty powerful, I am not a big fan of the need to capture separate images from a digital camera to provide high-resolution texture for the mesh. What follows is more of a question/feature request to extend its abilities. Would it be possible to use the spherical color panoramic image from a laser scanner to map high-resolution textures to the final mesh? This is a fairly common desire, expressed many times over on the laser scanning forum. Here is the most recent post on the subject: http://www.laserscanningforum.com/forum/viewtopic.php?f=69&t=8585

It would also be wonderful if the alignment were automatic in Sequoia, since the color image calibration can sometimes be derived from the scans. We have noticed this to be true with Faro scans imported into Autodesk Recap prior to colorization in Faro Scene. I can only guess that because Recap uses the Scene SDK it can properly align and colorize the point cloud without having to perform this task in Scene. It may be true for other manufacturers, but I don’t know.

If it is not possible, then some form of planar simplification or smart reduction could be utilized to reduce the number of mesh faces while retaining the original count of vertex colors. This may allow us to create a mesh from a near-full-resolution scan, then reduce the size of the mesh while still retaining the detail contained in the original color resolution. This would be similar to the workflow and mesh texture quality of photogrammetry, but with the advantages and accuracy of laser scanning.

Sorry for the ramblings, just throwing out an idea in the hope that it is heard by people with the imagination to make it happen :smiley:

The ‘Panoramic’ projection mode on the Texture Projection node should be able to accomplish this (be sure to set Horizontal FOV to 360 when using this mode), assuming I’m understanding what you are saying correctly. Have you tried it, or is there something else you feel is missing from the image projection toolkit to accomplish this?

Attempting to align image data from the color data in the point-set alone seems like it might be a little limited (i.e., reliant on the specific lighting conditions under which the photo and the scan were taken), but it might be good to have as an additional option down the line. We’d have to experiment and see if we can make it work.

Yes, but I am too ignorant to get it right :laughing:


I watched Ian’s video about 5 times, read the manual, and made a few attempts. This was the best I got, so I think I am missing something. It didn’t matter whether I set the image to Panoramic or Spherical; it was not even close, even after picking 4 points for alignment. Maybe I need more points…or more brain power.

I don’t think I explained this correctly the first time. What I meant to say is that the scanner already captures its own color panoramic image separate from the laser scan. That 360 degree spherical color pano alignment is already baked into the raw .fls scan (well, at least in the case of Faro scans). Using the Faro SDK, Autodesk Recap can take a scan straight from the scanner and apply the color to the points correctly without any manual alignment being performed by the user. That means the scanner already knows the location of each image it captured relative to the scan data.

What would be ideal is if Sequoia could also apply the color without having to pick points for alignment. Furthermore, if we could reduce the mesh and then automatically re-apply the original color pano from the scanner to the mesh as a texture, we would be in heaven :smiley: The final goal is a low-poly-count mesh with a high-pixel-count texture. Well, for some applications that would be the goal.

The scanner captures a 70 megapixel image from 85 individual 3 megapixel images, and this is what the process looks like from the perspective of the scanner:
https://www.youtube.com/watch?feature=player_embedded&v=Bmhw16kvkwI

This is what one of the 85 images looks like from the scanner:

And here is the scanner’s panoramic image as exported from Faro Scene:

I hope that made more sense, but I doubt it :confused:

Regarding your scene: Hmm, it’s not impossible that it’s a bug, but either way, something needs to change because we want this to be as easy as possible! Do you think it would be possible to send us your scene and dataset so we can see what’s up?

Regarding automatic alignment: Ah, I see, so you’re talking about actual position/orientation data from the scan itself. Yes, that indeed would be ideal (for point file formats that have that metadata, at least), though from what I understand not many formats capture that data (though apparently Faro does?). We are definitely looking at how to integrate that into our workflows for the formats that do have it, so assuming it’s there, we can probably make use of it.

I have a couple more questions:

I would like to make sure that I understand what you’re suggesting. Are you suggesting using the color information that is already baked into the point set and visible in the Sequoia viewport? (i.e., creating a texture from the point cloud’s color information?)

Does your software allow you to export your scans and images together in E57 file format?

I think Jonathan is asking for the ability to automatically reproject the spherical panorama captured by the scanner onto a mesh texture map. That way the texture resolution only depends on the resolution of the photos and is independent of the resolution of the underlying point cloud or mesh geometry. However, I think it would also be useful to be able to bake point attributes to a texture map. The photos may already have been baked into the point color, or photos might not be available, so using an intensity texture could be useful.
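For what it’s worth, here is a rough sketch of the lat-long lookup I am picturing, in Python. It assumes a Z-up frame with the panorama’s left edge at azimuth 0 and a known scanner position; the function name and conventions are just placeholders for illustration, not anything Sequoia actually exposes:

```python
import numpy as np

def direction_to_latlong_uv(points, scanner_origin):
    """Map 3D sample positions to (u, v) in an equirectangular (lat-long) panorama.

    points:         (N, 3) mesh sample positions in world space.
    scanner_origin: (3,) scanner position when the pano was captured.
    Assumes Z is up and the pano's left edge sits at azimuth 0.
    """
    d = points - scanner_origin
    d = d / np.linalg.norm(d, axis=1, keepdims=True)     # unit view directions
    azimuth = np.arctan2(d[:, 1], d[:, 0])               # -pi..pi around Z
    elevation = np.arcsin(np.clip(d[:, 2], -1.0, 1.0))   # -pi/2..pi/2
    u = (azimuth + np.pi) / (2.0 * np.pi)                # 0..1 across 360 degrees
    v = 0.5 - elevation / np.pi                          # 0 at zenith, 1 at nadir
    return np.column_stack([u, v])
```

Since the lookup depends only on view direction, the texture detail comes entirely from the panorama’s pixel count, no matter how far the mesh has been decimated.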

Faro’s SCENE software does export e57. The ‘garage.e57’ sample file from libe57.org was apparently provided by FARO and has both color and intensity point attributes as well as a sidecar panorama image file. The pose of the panorama is defined in the e57 header. I haven’t tried it, but I’m guessing that with the current tools and a minimum amount of pain that pose could be manually assigned to a Sequoia ImageProjection node to get it to match.
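If anyone wants to try that manual assignment, the e57 pose is stored as a unit quaternion (w, x, y, z) plus a translation, so turning it into something you could type into a transform is just the standard quaternion-to-matrix conversion. A quick sketch using scipy (the numeric values here are made up; you would read the real ones out of the images2D pose in the e57 XML header, and I don’t know offhand whether Sequoia wants a matrix, Euler angles, or something else):

```python
from scipy.spatial.transform import Rotation as R

# Placeholder pose values -- in practice, read these from the images2D/pose
# section of the e57 XML header (rotation is a unit quaternion w, x, y, z).
qw, qx, qy, qz = 0.7071, 0.0, 0.0, 0.7071
tx, ty, tz = 1.25, -3.40, 1.60

# scipy expects quaternions in (x, y, z, w) order.
rot = R.from_quat([qx, qy, qz, qw])

print("rotation matrix:\n", rot.as_matrix())
print("euler XYZ (degrees):", rot.as_euler("xyz", degrees=True))
print("translation:", (tx, ty, tz))
```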

Unfortunately, I don’t know how to export a similar e57 file from SCENE. Whenever I export e57 from SCENE I get baked point colors and/or intensity, but no panorama image file or pose info in the e57 header. The manual doesn’t seem to mention anything useful, but perhaps someone else knows the secret.

This feature is currently in development (there was a prototype available in the UI at one point, though it may have been disabled for the public releases).

Do you mean using a texture projection to modify the point set? Or do you mean adding additional (non-color/texture) data to the mesh?

EDIT: never mind, I think I understand. The idea is that you want to put the data from the point cloud into the mesh’s texture, using proximity to the individual particles, rather than an image, as the data lookup method? If so, then I think this will be a very good feature to have (for a number of reasons), and I like the idea of extending it to more than just color data.

Exactly.

Like I said, doing this with “intensity” rather than “color” is probably the most obvious use, but I would really like to be able to work with point clouds that have arbitrary attributes.

Even being able to bake geometric properties, e.g. the z value, could potentially be useful. If the mesh is ultimately going to end up in a DCC app, then there are easier ways to set up a material in that app which gives the desired rendering effect. However, if the mesh is going to end up back in some other point cloud or design app with a less well developed material system, baking out a texture map might be the only option.
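To make the “proximity lookup” idea concrete, here is roughly how I would expect such a bake to work, sketched as a nearest-neighbour query with a KD-tree. The function and argument names are invented for illustration and this is certainly not how Sequoia does it internally:

```python
import numpy as np
from scipy.spatial import cKDTree

def bake_point_attribute_to_texture(texel_positions, cloud_points, cloud_values,
                                    tex_width, tex_height):
    """For each texel, copy the attribute of the nearest point in the cloud.

    texel_positions: (H*W, 3) world-space position of each texel on the mesh
                     (the mesh surface sampled at texture resolution).
    cloud_points:    (N, 3) point cloud positions.
    cloud_values:    (N,) or (N, C) attribute to bake (color, intensity, z, ...).
    """
    tree = cKDTree(cloud_points)
    _, nearest = tree.query(texel_positions)        # index of the closest point
    baked = np.asarray(cloud_values)[nearest]
    channels = 1 if baked.ndim == 1 else baked.shape[1]
    return baked.reshape(tex_height, tex_width, channels)

# e.g. baking the z coordinate as a single-channel texture:
# z_tex = bake_point_attribute_to_texture(texels, points, points[:, 2], 1024, 1024)
```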

Whoa! Sorry I am late getting back to the party :laughing: Jed, I was hoping you would jump in and explain for me. I don’t speak exactly the same language, so you did a great job as an interpreter. I think you guys understand what I was trying to say now.

  1. Use the scanner’s panoramic image (which is typically a higher resolution than the scan) to create the texture.
    or
  2. Use the vertex color or intensity (baked into the scan at this point) to create an initial texture from a very high resolution mesh, save it for later, then use planar simplification to reduce the mesh and subsequently re-apply the initial texture.

I keep mentioning planar simplification (or some form of smart reduction) because, while Sequoia can create some beautiful and incredibly high resolution meshes, I often need a much lower resolution mesh with high texture detail to import into simpler programs.

Paul,
with regards to your question here

Would you like the .sprt file, the .e57 (with the problems Jed mentioned), or something else? I can also supply the original spherical panoramic imagery, but in terms of imagery this project may have been a little less than perfect. It was overcast with light rain, which caused some ugly anomalies in the imagery. Not really that important if all you want to do is troubleshoot my problem with alignment.

The .sprt file, the spherical image, as well as the .sq document you were using would be great.

All are on their way :stuck_out_tongue:

Hey there. Sorry for the delay, I finally had a chance to look at this. There was definitely a bug that was causing problems with the alignment (specifically bad calculations involving the image aspect ratio). I was able to successfully get a decent alignment on the scene once that issue was resolved (note that I used 5 alignment points, 4 at the corners of the building, and then one near the center for stability). The fix should appear in our next public build.

Excellent! That makes me feel a little better about my inability to get a proper alignment. Will give it another go on the next build :smiley:

Hey guys. I’m new here, but interested in how this turns out. We’ll be trying to project pano color images from Faro Scene on to geometry as well, so I’m curious to see how this progresses.

I should probably experiment a bit before asking, but… Does the Image projection just map the mesh UVs to the image space? Or does Sequoia try to layout non-overlapping UVs in 0-1 space and re-project the image into the mesh’s UV space?

If the image projection creates UVs in the space of the projected image, I’m guessing any workflow that uses more than one image projection would require you to assign which faces belong to which image, right?

Another consideration… The pano images exported from Scene are not actually 180° vertically. They crop out the bottom ~30°, so the image itself isn’t properly mapped to a lat-long sphere. This will need to be accounted for by adding some pixels at the bottom (changing the “Canvas Size” in Photoshop), or it might be possible to account for it entirely in Sequoia by adjusting the Image Settings > UV Offset / UV Scale parameters.
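If you would rather script that padding than do it by hand in Photoshop, something along these lines (using Pillow, and assuming the crop is entirely off the bottom with the top edge still at the zenith) should get the image back onto a full 2:1 lat-long canvas. The file names are just placeholders:

```python
from PIL import Image

def pad_pano_to_2_to_1(src_path, dst_path):
    """Pad a bottom-cropped spherical pano with transparent pixels so the
    result covers a full 2:1 (360 x 180 degree) lat-long canvas."""
    pano = Image.open(src_path).convert("RGBA")
    width, height = pano.size
    full_height = width // 2                    # 2:1 aspect for a full sphere
    canvas = Image.new("RGBA", (width, full_height), (0, 0, 0, 0))
    canvas.paste(pano, (0, 0))                  # keep the top edge at the zenith
    canvas.save(dst_path)

pad_pano_to_2_to_1("scene_pano_export.png", "scene_pano_padded.png")
```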

I’m super curious to see what you guys come up with. Please post results!

Regarding image projection:

Currently, the image projection generates a set of 0-1 UV coordinates for the mesh (or uses existing UV coordinates if they are provided), and then uses those to construct a single texture image for the mesh. Note that our UV generation workflow is still a work in progress, and might produce sub-optimal results at the moment.

While the idea of mapping the UVs directly into image space was considered, it is a somewhat trickier proposition for a number of reasons, and it can also produce incorrect results with spherical/panoramic projections, since image interpolation is no longer linear. That being said, manually mapping faces to specific projections might become part of the workflow at some point, depending on how well our current solution works. Currently, projection texture colors are assigned based on a priority system, where any location inside the near/far range of a higher-priority texture projection will always override a lower-priority one.
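To picture the override rule, here is a toy sketch of the idea in Python (the data layout is invented purely for illustration, not our actual implementation):

```python
def resolve_texel_color(sample_point, projections):
    """Return the color from the highest-priority projection whose near/far
    range contains the sample, or None if no projection covers it.

    projections: list of dicts with 'priority', 'near', 'far', plus
                 'distance_to' and 'color_at' callables (illustrative only).
    """
    covering = [p for p in projections
                if p["near"] <= p["distance_to"](sample_point) <= p["far"]]
    if not covering:
        return None
    best = max(covering, key=lambda p: p["priority"])
    return best["color_at"](sample_point)
```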

Of course, if multiple texture coordinates on a single mesh were more commonly supported, this would be another story…

Regarding the pano images:

Indeed, we might consider adding specific presets for known cases like this (it seems that cropping 30 degrees off the bottom is actually fairly common, though not universal). Our current recommendation is to fill the bottom section of the image with transparent pixels until the image is a 2:1 aspect ratio. It is also possible to achieve the same result by tweaking the V-scale/offset: for 30 degrees off the bottom, that would be a V-scale of 180 / 150 = 1.2, with the aspect ratio set to 2.0 (in general, for full panoramic images, the aspect ratio should always be 2.0).
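To generalize that arithmetic for other crops: the scale is the ratio of the full 180-degree vertical FOV to the covered portion, and an offset only comes into play if the top is cropped as well. A small helper (this is only the geometry; double-check the offset sign against the actual UV Offset convention in your build):

```python
def latlong_v_scale_offset(top_crop_deg=0.0, bottom_crop_deg=30.0):
    """V scale/offset for a lat-long pano missing some degrees off the top
    and/or bottom (a full sphere covers 180 degrees vertically)."""
    covered = 180.0 - top_crop_deg - bottom_crop_deg
    v_scale = 180.0 / covered               # e.g. 180 / 150 = 1.2
    v_offset = top_crop_deg / covered       # 0 when only the bottom is cropped
    return v_scale, v_offset

print(latlong_v_scale_offset())             # (1.2, 0.0) for 30 degrees off the bottom
```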