We are preparing a first Alpha drop for you to play with soon.
In the meantime, we wanted to give you something to look forward to.
Both sides of the movie show the same 2 million particles rendered in Krakatoa 1.5 with the same lighting.
The only difference is the render mode. Simply flip a switch and…
wow… wow wow… I just fell off my chair… Amazing, sweet stuff! I think I will wait and try this version on the Tsunami/wave's foam cloud particles… cool stuff!
It is a bit early to talk about what the render times will be. We had to make it work first before looking into making it fast.
Right now, while still single-threaded and unoptimized, it is approximately 10 times slower than point rendering.
The good news: it is well suited to multi-threading, unlike point rendering, which is rather tricky to run in parallel.
And then there is the fact that voxel rendering scales differently than point rendering. Point rendering scales more or less linearly: 10 times more particles means roughly 10 times longer rendering. Voxel rendering, on the other hand, scales with the number of voxels to be processed, because encoding the particles into the voxels is very fast. So 10 times more particles at the same voxel size does not necessarily mean 10 times longer render times.
The one million particles in these tests took 3 seconds in point mode and around 30 seconds in voxel mode. Those times could come down with a larger voxel size or a smaller voxel filter radius, at a slight cost in quality.
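To illustrate that scaling argument with a toy sketch (this is not Krakatoa code; every name below is invented, and Python is used purely for readability):

```python
import numpy as np

def splat_particles(positions, densities, grid_res, bounds_min, bounds_max):
    """Phase 1: encode particles into the voxel grid. Cost is O(particles),
    but each particle does only a little work, so this phase is cheap."""
    grid = np.zeros((grid_res, grid_res, grid_res), dtype=np.float32)
    scale = grid_res / (bounds_max - bounds_min)  # world -> voxel units
    idx = np.clip(((positions - bounds_min) * scale).astype(int), 0, grid_res - 1)
    # np.add.at accumulates correctly even when many particles land in one voxel.
    np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), densities)
    return grid

def shade_voxels(grid):
    """Phase 2: filtering/lighting per voxel. Cost is O(voxels) and dominates
    the render time. Each voxel can be processed independently of the others,
    which is why this phase multi-threads easily, unlike point compositing,
    where the draw order matters."""
    return grid  # placeholder for the expensive per-voxel work
```

If the total time is roughly a·P for the splat plus b·V for the per-voxel work, with b·V dominating, then growing the particle count P tenfold at a fixed voxel count V barely moves the total, while point mode would slow down roughly tenfold.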
I will have to run some more benchmarks to see what happens when rendering 10 or 100 million particles using both methods. We hope to post an Alpha build this month so you can play with it yourself…
Completely understood; my curiosity always gets the best of me, lol.
Sorry for stating the obvious, but the “niche” renderer will no longer be a “niche” renderer. This just opened the door to a whole new realm of possibilities.
I know the whole point is to just flip a switch to populate the voxel array with density and color from the points, but will there be methods for intercepting the voxel data so we can apply maps to it? Or a way to skip the point-to-voxel conversion entirely and provide the renderer with our own pre-cached voxel array?
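To make the request concrete, here is the kind of interface being asked about, sketched as a Python stub. None of this exists in Krakatoa (that is exactly what the question is asking); every class, method, and name below is hypothetical:

```python
import numpy as np

class VoxelRendererStub:
    """Stand-in for the hypothetical hooks described above --
    not actual Krakatoa API."""

    def __init__(self):
        self.grid = None
        self.callback = None

    def set_voxel_grid(self, grid):
        # Hypothetical: bypass the point-to-voxel conversion entirely
        # by supplying a pre-cached voxel array.
        self.grid = grid

    def set_voxel_callback(self, fn):
        # Hypothetical: intercept each voxel after conversion,
        # e.g. to modulate its density with a map.
        self.callback = fn

    def apply_callback(self):
        if self.grid is not None and self.callback is not None:
            for ijk in np.ndindex(*self.grid.shape):
                self.grid[ijk] = self.callback(np.array(ijk), self.grid[ijk])

def density_map(voxel_index, density):
    """Example per-voxel map: a cheap procedural falloff."""
    return density * (0.5 + 0.5 * np.sin(0.1 * voxel_index.sum()))

renderer = VoxelRendererStub()
renderer.set_voxel_grid(np.random.rand(16, 16, 16).astype(np.float32))
renderer.set_voxel_callback(density_map)
renderer.apply_callback()
```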