We will be posting more preview videos of various bits of functionality over the coming days and weeks. If there is something that you would like to see a video of, please let us know!
The first demonstration of the hacksaw meshing approach is posted on the site. This is the first stage of support for meshing very large datasets on any machine. Before this technique, Sequoia needed to load all the points into RAM to complete the meshing. This method carves your point set into chunks, processes each chunk into a mesh, and then reconstitutes the pieces into a final mesh as they are loaded back from disk.
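For readers curious about the general pattern, here is a minimal sketch of chunked, out-of-core meshing. Everything in it (the grid-based chunking, the mesh_chunk placeholder, the simple concatenation at the end) is an illustrative assumption, not Sequoia's actual implementation, which would also need to stitch chunk borders carefully:

    import os
    import pickle
    import tempfile

    def iter_chunks(points, cell_size):
        # Bucket points into spatial grid cells. (A real out-of-core
        # implementation would stream points from disk here too, rather
        # than holding the whole bucket table in RAM.)
        buckets = {}
        for x, y, z in points:
            key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
            buckets.setdefault(key, []).append((x, y, z))
        return buckets.items()

    def mesh_chunk(points):
        # Placeholder for a real surface reconstruction of one chunk.
        return {"vertices": list(points), "faces": []}

    def hacksaw_mesh(points, cell_size=10.0):
        # Stage 1: carve the point set into chunks, mesh each chunk, and
        # spill the partial meshes to disk instead of holding them in RAM.
        chunk_files = []
        for key, chunk in iter_chunks(points, cell_size):
            path = os.path.join(tempfile.gettempdir(), "chunk_%d_%d_%d.mesh" % key)
            with open(path, "wb") as f:
                pickle.dump(mesh_chunk(chunk), f)
            chunk_files.append(path)

        # Stage 2: reconstitute the final mesh by streaming the pieces back
        # from disk one at a time (real code would weld shared chunk borders).
        final = {"vertices": [], "faces": []}
        for path in chunk_files:
            with open(path, "rb") as f:
                piece = pickle.load(f)
            final["vertices"].extend(piece["vertices"])
            final["faces"].extend(piece["faces"])
            os.remove(path)
        return final

The point of the pattern is that only one chunk's points (and one partial mesh) ever need to be in memory at a time, which is what lets the approach scale past available RAM.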
I’d like to see a demo video on how to manipulate image projection.
Also, loading speed is getting better, but moving point clouds is slow…
I’m also not sure whether we can clip just the working area and then work with only the clipped view.
Can you try limiting the number of points you have loaded to a lower amount in the point loader? Check “Limit Amount” and lower the percentage, say to 25%, and try it there. If that works, you can then increase the limit of points you are working on.
I’ll let someone else from the dev team chime in with the details.
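(For intuition, the “Limit Amount” option amounts to subsampling the cloud before it hits the viewport. A minimal sketch, assuming the limit behaves like a uniform stride through the points; the real loader may well sample differently:)

    def limit_points(points, percentage=25):
        # Keep roughly `percentage` percent of the points by taking a
        # uniform stride, which preserves even spatial coverage.
        stride = max(1, round(100 / percentage))
        return points[::stride]  # at 25%, every 4th point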
I tried your suggestion, but it still has the same issues.
It looks like it caches first and then starts moving.
Once it starts moving the point clouds there are no problems, until I stop and move them again.
What format are your points stored in? Are you loading the converted SPRT file, or using one of the other supported formats?
Even with a fair number of points, the SPRT file loading should be responsive, as it drops the number of drawn points based on how many frames per second the viewport can draw. I would expect you to be able to draw ~150-160 million points in 4GB of GPU RAM. We have tested with various beefy cards, but not the K5000, so we might have run into an issue. Incidentally, we recently fixed a bug that only affected faster, higher-performing machines, so it can happen.
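(As a rough sanity check, 150-160 million points in 4GB works out to about 25-28 bytes per point, which is plausible for a position plus normal and color per vertex. The FPS-driven decimation described above is a common pattern; here is a minimal sketch of the idea, with a made-up target FPS and adjustment factors rather than Sequoia's actual numbers:)

    class PointBudget:
        # Adjust how many points to draw based on the last frame's speed.
        def __init__(self, total_points, target_fps=30.0):
            self.total = total_points
            self.target_fps = target_fps  # illustrative target
            self.budget = total_points    # start by trying to draw everything

        def update(self, last_frame_seconds):
            fps = 1.0 / max(last_frame_seconds, 1e-6)
            if fps < self.target_fps * 0.9:
                # Too slow: cut the drawn-point count sharply.
                self.budget = max(int(self.budget * 0.7), 10_000)
            elif fps > self.target_fps * 1.2:
                # Headroom: draw more points, up to the full set.
                self.budget = min(int(self.budget * 1.1), self.total)
            return self.budget

Each frame the renderer would then draw only `budget` points, e.g. a prefix of an LOD-ordered buffer, so the viewport stays interactive regardless of dataset size.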
If you are using SPRT and seeing lots of problems, it may essentially be a bug. Would you be willing to share the dataset with us so we can test internally? (We are happy to keep the dataset confidential; most of our internal test data is confidential as well.)