Looking at example03.cpp, I am wondering how KrakatoaSR is able to decide that the 1,000,000,000 particles do not need to be loaded if they are not within the view frustum of the current camera.
Is there an underlying bounds/frustum intersection test? Or does it mean that all 1,000,000,000 particles are loaded into memory first, and only then is the test of the overall bounds performed?
Is there some way to tell KrakatoaSR not to commence loading?
Currently the Krakatoa rendering engine does not do frustum clipping.
It is something that I could add directly to the loading engine; I can add it to our future features wish list. It could also be done on your end via the “particle_stream_interface” class if you are already using that class. If you’re not using that class, then it might be a pain to do.
The clipping operation is slightly more complicated than checking the camera’s view frustum alone. Particles that fall outside the camera frustum but inside the view frustum of any light usually can’t be removed either, because they can cast shadows onto particles that are within the camera frustum.
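To illustrate the idea, here is a minimal sketch in plain C++ (not part of the KrakatoaSR API; the Plane/Frustum types and the canCullParticle helper are made up for this example) of a combined camera-and-light culling test. A particle may only be dropped if it is outside the camera frustum and outside every light frustum:

#include <array>
#include <vector>

struct Plane { float nx, ny, nz, d; };    // nx*x + ny*y + nz*z + d >= 0 means "inside"
using Frustum = std::array<Plane, 6>;     // left, right, bottom, top, near, far

bool insideFrustum( const Frustum& f, float x, float y, float z, float margin = 0.0f ) {
    for( const Plane& p : f )
        if( p.nx * x + p.ny * y + p.nz * z + p.d < -margin )
            return false;
    return true;
}

// A particle may be culled only if it is invisible to the camera AND to every light.
bool canCullParticle( const Frustum& camera, const std::vector<Frustum>& lights,
                      float x, float y, float z ) {
    if( insideFrustum( camera, x, y, z ) )
        return false;
    for( const Frustum& light : lights )
        if( insideFrustum( light, x, y, z ) )
            return false;   // could still cast shadows onto visible particles
    return true;
}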
Additionally, the particle_stream_interface should provide the following:
float detail() const; // fractional value of how much the bounds occupies the NDC (camera, lights etc) space
This gives the developer some idea of how much particle information to provide. For example, if a stream is so far away from the view that it occupies only a couple of pixels, there is no point in providing the full detail of 100 million points to the renderer.
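Roughly what I have in mind (just a sketch; the Bounds struct, the matrix layout, and the detailEstimate name are mine, not anything in KrakatoaSR): project the stream’s bounding box into NDC and report what fraction of the [-1,1] x [-1,1] screen square its 2D bounds cover:

#include <algorithm>

struct Bounds { float min[3], max[3]; };

// viewProj is a row-major 4x4 view-projection matrix (camera or light).
float detailEstimate( const Bounds& b, const float viewProj[16] ) {
    float xmin = 1.f, xmax = -1.f, ymin = 1.f, ymax = -1.f;
    bool anyInFront = false;
    for( int i = 0; i < 8; ++i ) {      // the 8 corners of the bounding box
        const float p[3] = { (i & 1) ? b.max[0] : b.min[0],
                             (i & 2) ? b.max[1] : b.min[1],
                             (i & 4) ? b.max[2] : b.min[2] };
        float clip[4];
        for( int r = 0; r < 4; ++r )
            clip[r] = viewProj[4*r+0]*p[0] + viewProj[4*r+1]*p[1]
                    + viewProj[4*r+2]*p[2] + viewProj[4*r+3];
        if( clip[3] <= 0.f )
            continue;                   // corner is behind the view; skip it
        anyInFront = true;
        const float x = std::clamp( clip[0] / clip[3], -1.f, 1.f );
        const float y = std::clamp( clip[1] / clip[3], -1.f, 1.f );
        xmin = std::min( xmin, x ); xmax = std::max( xmax, x );
        ymin = std::min( ymin, y ); ymax = std::max( ymax, y );
    }
    if( !anyInFront || xmax <= xmin || ymax <= ymin )
        return 0.f;                     // bounds not visible from this view
    return ( (xmax - xmin) * (ymax - ymin) ) / 4.f;   // fraction of NDC covered
}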
How? Are you able to provide some pointers or an example?
Lights should be treated like cameras with their own view frusta. I think the renderer should then decide (after integrating every entity that has a view frustum) what the loading strategy and optimal spatial occupancy should be.
I believe this will allow Krakatoa to render even larger production FX scenes.
These are definitely all good ideas. Up until now we haven’t had the need to do loading based on view frustum or level of detail, because the particles always came from within a 3D application and were likely to fit in memory. However, we have done in-house benchmarks for out-of-core datasets that have been quite successful. Tools for handling very large data are something we will be adding in the future. You are right when you say that it would be very useful to use an adaptive loading strategy in the renderer when dealing with large data.
The reason “stream_bound” and “detail” aren’t included in the stream object is because of the nature of streams. It is true that this information is often available when loading from a file (as file metadata), but Krakatoa typically has no idea of the final bounds of a stream until it has been fully retrieved (for example, if someone has modified the stream with Magma, etc.). So, in our own plugins, to get the stream bounds we would have to scan through all the particles prior to actually loading them. That may be necessary for very large data sets, but the option isn’t currently provided.
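As a rough illustration of what that pre-scan amounts to on your side (just a sketch; the PositionSource concept and its nextPosition method are hypothetical, not part of our API), you would make one cheap pass over the positions, accumulate the bounds, then rewind the source for the real load:

#include <limits>

struct StreamBounds {
    float minv[3], maxv[3];
    StreamBounds() {
        for( int i = 0; i < 3; ++i ) {
            minv[i] =  std::numeric_limits<float>::max();
            maxv[i] = -std::numeric_limits<float>::max();
        }
    }
    void expand( const float p[3] ) {
        for( int i = 0; i < 3; ++i ) {
            if( p[i] < minv[i] ) minv[i] = p[i];
            if( p[i] > maxv[i] ) maxv[i] = p[i];
        }
    }
};

// PositionSource is a hypothetical concept: nextPosition(p) fills p with the next
// particle position and returns false when the source is exhausted.
template <class PositionSource>
StreamBounds preScanBounds( PositionSource& source ) {
    StreamBounds bounds;
    float p[3];
    while( source.nextPosition( p ) )
        bounds.expand( p );
    return bounds;   // the caller then rewinds/re-opens the source for the actual load
}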
The way the “particle_stream_interface” class works is that it allows the programmer to provide raw particles one-by-one to the renderer. It is used in cases where you want to provide particles directly from, say, a Houdini particle system without a file intermediate. The upside to using that class is that the particles don’t all have to be in memory at once, so you can do your own bounds clipping, frustum clipping, or LOD loading within the code. The downside is that it is a low-level interface, so if you’re loading particles from disk, you’d have to write your own file I/O.
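To give a rough idea of the pattern (a sketch only; the class and method names below are made up and do not match our header, but the structure is the point), an implementation keeps pulling from its own data source until a particle survives the visibility test, and only that particle gets handed back to the renderer from the stream’s “next particle” override:

#include <functional>
#include <utility>

struct Particle { float pos[3]; float density; };

class CullingAdapter {
public:
    using Source = std::function<bool( Particle& )>;         // false when the source is exhausted
    using Keep   = std::function<bool( const Particle& )>;   // true if the renderer needs it

    CullingAdapter( Source src, Keep keep )
        : m_src( std::move( src ) ), m_keep( std::move( keep ) ) {}

    // This is the body you would put inside the stream's "next particle" virtual.
    bool nextVisible( Particle& out ) {
        while( m_src( out ) ) {
            if( m_keep( out ) )
                return true;    // visible to camera or lights: hand it to the renderer
        }                       // otherwise drop it and keep pulling
        return false;           // source exhausted: the stream is done
    }

private:
    Source m_src;
    Keep   m_keep;
};

The Keep predicate is where your own frustum clipping or LOD decision would go, for example the camera-plus-lights test sketched earlier in this thread.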
One wishlist item that I’d like to add is a programmable particle stream “modifier” object. It would allow you to provide a custom function that is applied to each particle in a particle_stream object before it goes to the renderer (kind of like what Magma is for artists). That way you could cull on the fly, etc. It would just be a convenience, since all of this can already be done with the “particle_stream_interface” directly. Is that something you’d find useful?
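To make the idea concrete, it might look something like this (purely hypothetical, nothing like it exists in KrakatoaSR today): a user callback applied to each particle after it leaves the stream and before it reaches the renderer, which can edit channels in place or cull the particle by returning false:

#include <functional>
#include <vector>

struct ParticleRec { float pos[3]; float density; float color[3]; };

// Return true to keep the (possibly modified) particle, false to cull it.
using ParticleModifier = std::function<bool( ParticleRec& )>;

std::vector<ParticleRec> applyModifier( const std::vector<ParticleRec>& in,
                                        const ParticleModifier& mod ) {
    std::vector<ParticleRec> out;
    out.reserve( in.size() );
    for( ParticleRec p : in ) {          // copy so the modifier can edit freely
        if( mod( p ) )
            out.push_back( p );
    }
    return out;
}

// Example use: halve the density and drop particles below a threshold.
// auto kept = applyModifier( particles, []( ParticleRec& p ) {
//     p.density *= 0.5f;
//     return p.density > 1e-4f;
// } );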
It might help me if I had an idea of what you are doing with Krakatoa, if you don’t mind mentioning it.