Krakatoa can use a geometry object to cull points, and does so quickly by raytracing to see whether the ray hits the front or back side of the surface (or something like that; I'm generalizing).
And with the voxel rendering, we can convert geometry vertices to points, then sample those points in the voxel array.
Why not combine the two? Can you test a voxel for inside/outside based on the same code used for culling? And if it tests true for inside, could you not get the color/density/etc. from the object’s local space?
If you have an object that is 100x100x100 units in size (based on its bounding box) and your voxel size is 1, you'd only have to test at most 1 million voxels to determine whether they are inside. Suppose 40% of the voxels returned true; you'd then have to sample the object's material 400,000 times, which probably isn't bad (the multithreading helps a lot! Thanks!). No points needed.
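The culling pass described above can be sketched in a few lines: walk the bounding box in voxel-sized steps and keep only the voxels whose centers test "inside". Krakatoa's real test raytraces against the culling mesh; here an analytic sphere test stands in for it, and all the function names are illustrative, not actual Krakatoa code.

```python
def inside_sphere(p, center, radius):
    # Hypothetical stand-in for the raytraced inside/outside test
    # against a culling mesh.
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return dx * dx + dy * dy + dz * dz <= radius * radius

def cull_voxels(bbox_min, bbox_max, voxel_size, inside):
    """Return centers of voxels whose centers lie inside the culling volume."""
    kept = []
    x = bbox_min[0] + voxel_size / 2
    while x < bbox_max[0]:
        y = bbox_min[1] + voxel_size / 2
        while y < bbox_max[1]:
            z = bbox_min[2] + voxel_size / 2
            while z < bbox_max[2]:
                if inside((x, y, z)):
                    kept.append((x, y, z))  # material would be sampled here
                z += voxel_size
            y += voxel_size
        x += voxel_size
    return kept

# A 20x20x20 box with an inscribed sphere: roughly pi/6 (about 52%) of
# the 8000 voxel centers land inside, so only those need material samples.
kept = cull_voxels((0, 0, 0), (20, 20, 20), 1.0,
                   lambda p: inside_sphere(p, (10, 10, 10), 10.0))
frac = len(kept) / 8000
```

Scaling the same loop to the 100x100x100 example just means a million iterations of the inside test, which is exactly the "at most 1 million voxels" figure above.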
Definitely!
After talking to some people who know more about the code than I do (Darcy, and even Mark Wiebe, who was visiting for the holidays), it appears to be pretty straightforward. There are some special cases that are troublesome, though. For example, a plane is a completely valid culling mesh for explicit particles, but it would be a big no-no for voxels: every voxel on its back side would be considered "inside", resulting in an infinite number of voxels. So we'll have to ensure the geometry makes sense, or impose a voxel limit, or something… But in general, we like the idea.
That’s why I said you needed to specify a bounding volume. Otherwise, you’d have an infinite number of voxels to test!
A bounding box is probably the most straightforward to use.
If you didn’t specify a culling geometry, you could skip the culling phase entirely: just initialize the whole voxel array based on the voxel size and populate it with data from the material assigned to the bounding-box node.
You know how the Particle Culling has a distance-from-surface option? That could be useful for defining the voxel filling, too. If you select a sphere and voxelize it, a value around the voxel size would create a “surface” of voxels without filling out the volume. If you enter a distance threshold larger than the radius of the sphere, it would get filled up. If you animate the value, you could “build” the sphere from the surface inwards. If you pick a plane, it would “fill” only voxels that are within the distance, removing the need for a bounding box test. This would also allow an “Inside/Outside” switch, so you could populate voxels that are closer to the surface than the threshold but lie outside the volume rather than inside it…
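The distance-threshold idea above can be sketched the same way: a voxel is kept when its distance to the surface is below the threshold, on the chosen side. A small threshold gives a shell; raising it past the sphere's radius fills the volume; flipping the switch populates the outside instead. The sphere distance function is again a hypothetical stand-in for a real mesh distance query.

```python
def surface_distance(p, center, radius):
    # Signed distance to a sphere's surface: negative inside, positive outside.
    # Stand-in for a distance query against an arbitrary culling mesh.
    d = sum((p[i] - center[i]) ** 2 for i in range(3)) ** 0.5
    return d - radius

def fill_voxels(grid_pts, threshold, inside_mode=True):
    """Keep voxels within `threshold` of the surface, on the chosen side."""
    kept = []
    for p in grid_pts:
        sd = surface_distance(p, (10, 10, 10), 8.0)
        if inside_mode:
            keep = -threshold <= sd <= 0.0   # inside, near the surface
        else:
            keep = 0.0 < sd <= threshold     # outside shell
        if keep:
            kept.append(p)
    return kept

grid = [(x + 0.5, y + 0.5, z + 0.5)
        for x in range(20) for y in range(20) for z in range(20)]

shell = fill_voxels(grid, 1.0)                   # thin surface shell
solid = fill_voxels(grid, 10.0)                  # threshold > radius: filled
halo = fill_voxels(grid, 1.0, inside_mode=False) # "Outside" switch: outer shell
```

Animating the threshold from the voxel size up past the radius is what "builds the sphere from the surface inwards" in the description above.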
Then if you create a PFlow with 1000 particles, assign sphere shapes with varying radii and animate them nicely, you would get something very similar to the results of Afterburn…
If you assigned a material with a nice 3D map, like a DarkTree, you could get all sorts of things that are much better than AB (which only allows a single noise function for both color AND density).
It would also simplify the problem of the points all being the same size (zero!). Good for users not accustomed to building the complex point-spawning PFlow setups we’ve done with Krak <1.5.
I suspect this would need some sort of modifier, though. Otherwise, how would you define the “bounding geometry” and the “culling geometry”? The PRT Loader only lets you define one. If you had “bounding box” as the method for defining the voxels to test, and the culling rollout for, well, culling, then you wouldn’t need to change anything in the UI. Just add a checkbox that takes “empty” PRT Loaders and uses them as nodes to make the voxels.
But a “Krakatoa Voxelize” modifier that could be applied to any geometry… Hmmm…
Speaking of which, any plans to allow us to make Krakatoa modifiers?
My idea was a “dummy” modifier like the one we have for Cameras, which just provides some parameters Krakatoa can read but does nothing by itself.
It could have a switch to turn the object into voxels, or disable that and let its vertices render as points instead (the current behavior), and probably a None option so you can exclude an object from rendering completely without hiding it.
For PFlows, you would probably use a Mesher with our modifier on top.
As for modifiers, you can already write particle modifiers, even scripted ones. All the PRT Loader does is pass the particles through any deformers on the stack, so any simpleMod scripted modifier that changes only positions will do, as will its SDK equivalent. If you mean a more general modifier for modifying channels and such, that is still on the ToDo list; it likely won’t make 1.5, but possibly right after that.
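The stack behavior described above, reduced to a sketch: the loader just hands each particle position to every deformer on the stack in order, and a position-only "modifier" is nothing more than a function from a position to a new position. This is a Python analogy of the MAXScript/SDK mechanism; the names are illustrative, not the Krakatoa SDK.

```python
def apply_stack(positions, modifiers):
    """Run each particle position through the deformer stack, bottom to top."""
    out = []
    for p in positions:
        for mod in modifiers:
            p = mod(p)  # each deformer only remaps the position
        out.append(p)
    return out

def push_up(amount):
    # A trivial position-only deformer: offset every particle along Z,
    # the kind of thing a scripted simpleMod would express.
    return lambda p: (p[0], p[1], p[2] + amount)

particles = [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)]
deformed = apply_stack(particles, [push_up(5.0), push_up(-2.0)])
# Each particle ends up offset by the net +3 along Z.
```

The point of the analogy is that the loader never needs to know what a deformer does internally, which is why any position-only scripted modifier already works.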