Wishlist for Krakatoa 2.0

Hi, just wanted to throw out some ideas I would love to see in the distant future :slight_smile:

Krakatoa is such a great particle saving/loading/rendering system that it does not have to be limited to its own renderer. There are many proxy plugins (VRayScatter, iToo Forest, etc.), but none that work with particles.

  1. Why don't you allow the Krakatoa particles to be shapes and shape instances rendered with VRay, Scanline, etc.? I could then render millions of geometry particles much quicker than any other way. It would also allow me to modify and deform the particles, which no other system allows, and I could use the MagmaFlow to shade the particle geometry. This system could be a powerful particle “proxy” plugin that would not be limited to rendering points but real geometry. I think it would be a very powerful addition that would only make Krakatoa even more useful and attract an even wider audience.

  2. My second wish would be the manipulation of the visual output. Right now we are limited to single points and the color/lighting they are assigned. The “Large Dots” option in the viewport is already a way to display the particles bigger, but we can't render them like that. Why not give users the option of deciding themselves what look they want? Let us decide how large the points should be. Krakatoa could become increasingly popular with motion graphics artists, not just for realistic FX particle clouds of fire, smoke, and dust, but for abstract visual graphics. I would like options for controlling how each particle is rendered, by changing its size and shape. The difference from the first wish is that Krakatoa itself would render it. Expanding the one-dimensional point to two-dimensional “sprites” would widen the range of possible looks and allow for very quick motion graphics renders where geometry is not needed. It would change the way we look at Krakatoa and stop reducing this powerful system to just points.

I don't think I would be exaggerating when I say that you would double the number of customers you currently have for the software by adding these options. :open_mouth: :stuck_out_tongue: :stuck_out_tongue:

First, thanks for the input!

Interestingly enough, these wishes are not new; we have seen (and successfully ignored) them since before 1.0 hit the market :wink:

First, regarding shape rendering in 3rd party renderers: these (VRay, finalRender) can render millions of instances very easily, but if you throw millions of unique objects at them, it gets complicated. Similarly, Particle Flow has the option to send either one huge mesh, many smaller meshes, or one mesh per particle to the renderer, but we are still talking about millions and potentially billions of polygons, and that has always been an issue. It is not our job to solve this problem; we decided to circumvent it by not rendering polygons at all. You can use the caching part of Krakatoa to create PRT sequences, deform, cull, and MagmaFlow them using a PRT Loader, and bring them back into PFlow to assign whatever shapes you want. How you render that output, though, is not our concern; if somebody wants to fix PFlow to work with VRay and finalRender at the instanced-geometry level, then great, but it won't be us.
We have an in-house object that can load PRTs and do many things with them, including replacing each particle with a mesh shape, but the result is still one large TriMesh sent to the renderer, so that has the same implications as PFlow. Max does not provide a simple, standard way to do what some 3rd party renderers are good at (loading one object into memory and rendering it a billion times), and we haven't really investigated deeper. It might be possible and we will keep it in mind, but it is not in the short-term plan. That plan tends to get affected A LOT by production requirements, so if a movie comes along that requires the use of such scattering technology, things might change quite quickly :wink:

Regarding the Krakatoa particle shape: while we could theoretically draw a particle in any shape, there are complex implications for how volumetric lighting is calculated from such data. If you splat a circle instead of a point, does that circle represent a 2D shape or a sphere, and how does it affect the shapes behind it as light passes through? It is not impossible, but it is a problem that requires a good solution, because users won't want simple abstract splatting of shapes; they will ask for correct volumetric lighting too.

An alternative approach we have discussed is the replacement of each particle in a PRT file with MANY particles from another PRT file. This way, you could create “shapes” using particles and then distribute them using other particles in a hierarchical PRT Loader setup. For example, you could create a planet-shaped PRT with its own shading, continents, oceans, atmosphere, etc., all expressed in 100K particles. Then create a PRT with 1000 particles swirling around, and you end up with 100M particles rendered as a bunch of planets floating in space. Obviously, this is an easy way to get an explosion of particle counts, but it would be a nice approach to the problem, as each particle remains a point while you can cluster many particles together into shapes that can be driven, scaled, and otherwise affected by a master particle system.
Of course, the “shape” PRTs could be many, so each particle in the master system could pick a different shape from a list of sources based on the ID channel or some other particle channel etc. I personally am highly interested in such a system. Given the Voxel renderer which could use relatively few particles to produce quite solid-looking objects, that would be a step in the right direction.
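To make the hierarchical idea concrete, here is a rough sketch of the "replace each master particle with a shape PRT" transform. Everything here is illustrative: the function names and the (position, scale, quaternion) record are made up for the example, not Krakatoa's actual API.

```python
def cross(a, b):
    """3D cross product of two vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def quat_rotate(q, v):
    """Rotate vector v by a unit quaternion q = (w, x, y, z)."""
    w, qv = q[0], q[1:]
    t = tuple(2.0 * c for c in cross(qv, v))
    u = cross(qv, t)
    return tuple(v[i] + w * t[i] + u[i] for i in range(3))

def instance_shapes(masters, shape_points):
    """For each master particle (position, scale, orientation quat),
    emit the shape point cloud transformed into the master's frame."""
    out = []
    for pos, scale, quat in masters:
        for p in shape_points:
            r = quat_rotate(quat, p)
            out.append(tuple(pos[i] + scale * r[i] for i in range(3)))
    return out
```

With a 1000-particle master system and a 100K-particle shape PRT, this loop produces the 100M-point explosion described above, which is why you would want it evaluated lazily at render time rather than baked to disk.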

Once again, thanks for your input! Our current strategy is to improve the existing system (both the performance and the workflow) for those people who need what it does best right now.
Also, there is a much lower hanging fruit when it comes to expanding the market and we might go there first… :wink:

For the first one, you can do exactly that. Load the PRTs into PFlow, assign the shapes, and render. Everything that can be read by PFlow can be written by Krakatoa into a PRT. I've done this before, and it works REALLY well. I've even stored data to control Box#3 Shape Control output.

For the second one, see above.

As to directly rendering things with “size”, there's a LOT of optimization that would be lost in doing so. Right now, one particle only occludes another particle if they are directly collinear with the camera/light. If the particles had a size, that would no longer be true. Same with normals: right now you only need to define one normal per point, and because it's a point, one incident angle. Same with pretty much any channel. You want a 1:1 relationship of values to points.

Would something more flexible be better? Sure. Would sales still double if Prime Focus had to quadruple the price to recoup the cost of hiring a half dozen more Darcys? (Darcies?) Dunno.

  • Chad

Oh right, forgot about the other point options… We've discussed in the past both the “replace particle with PRT Loader/Volume instance based on ID” idea Bobo mentioned, plus the idea of “replace particle with a disk/sphere of many particles based on scale vector and orientation quat”, which is akin to how DoF works now; the same idea can be seen in other particle systems like Eyeon Fusion. I haven't tried the former, but have done the latter in Box#3; it DOES work, but could be faster if it were done by Krakatoa at render time.

Neither of those options outright breaks Krakatoa; they just introduce a new step in the rendering, somewhere between gathering all the particle data and sorting, which doesn't currently exist. DoF is done after shading and only while rasterizing, so it's the right idea, just in the wrong place.

Oh, and one last idea that has also been floated: overriding the DoF scaling so that you apply a gain, an offset, or some other data channel that sets the screenspace “spread” of the DoF particle. So it's not strictly based on the distance from camera, but could be driven by a data channel.

  • Chad

Ohhhh, I kind of like that idea! Will pass it by our resident Darcies… :slight_smile:

Really? I swear we've already discussed this… Back when I was saying it would be cool to encode connectivity IDs so we could make screenspace triangles. Just add two Int32[1] channels, or, if you wanted to make worldspace point-filled tetrahedra, a single Int32[3].
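For what it's worth, those extra channels are cheap at the storage level. A back-of-the-envelope sketch (the format strings below are illustrative; real PRT files declare named channels in a header, which this does not model):

```python
import struct

# Illustrative per-particle record layouts, little-endian.
POINT_FMT = "<3f"    # Position: float32[3]                        -> 12 bytes
TRI_FMT   = "<3f2i"  # + two Int32[1] IDs -> screenspace triangles  -> 20 bytes
TET_FMT   = "<3f3i"  # + one Int32[3]     -> point-filled tetrahedra -> 24 bytes

# Round-trip one "triangle" particle: a position plus two neighbor IDs.
record = struct.pack(TRI_FMT, 1.0, 2.0, 3.0, 41, 42)
x, y, z, id_a, id_b = struct.unpack(TRI_FMT, record)
```

So the triangle variant adds 8 bytes per particle and the tetrahedron variant 12; the real cost, as noted above, would be in the renderer, not the file.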

But even with just the existing DoF method, within the existing Krakatoa rendering steps, you could sample the Position channel and the camera position in a KCM and make non-photorealistic DoF. Log or curve or whatever. Maybe make the DoF one-way, so things nearer the camera blur, but things further from the camera than the focal plane stay perfectly sharp. Maybe sample a texture in the KCM in screen space and “paint” on your DoF? Or just blur the crap out of an undersampled portion of a PRT. Blur based on particle age, and you could emit little points at the start that swell into big circles as they rise up into the clouds.
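As a sketch of that one-way, non-photorealistic DoF: a KCM would express this with channel operators, so the Python below just shows the math, and both the function name and the log falloff are my own invention.

```python
import math

def oneway_dof_spread(particle_pos, camera_pos, focal_dist, gain=1.0):
    """Blur radius for a particle: nonzero only when the particle is
    nearer to the camera than the focal plane; everything at or beyond
    the focal plane stays perfectly sharp."""
    d = math.dist(particle_pos, camera_pos)
    if d >= focal_dist:
        return 0.0                            # far side: perfectly sharp
    return gain * math.log1p(focal_dist - d)  # near side: log falloff, not physical
```

Swap the log for a curve lookup, a screen-space texture sample, or the particle's Age channel, and you get the "painted DoF" and "swelling circles" variations described above.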

You could do elliptical blur too, by storing two scalars and rotating according to the orientation. Make one scalar small and the other large, orient to the velocity vector, and you get “image motion blur”. I know, we're getting crazy now…
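A sketch of that elliptical spread, assuming two spread scalars and a 2D screenspace velocity (the names are made up, and a real implementation would live in the rasterizer rather than scatter explicit sample points):

```python
import math

def elliptical_offsets(vel2d, along, across, samples=8):
    """Sample offsets on an ellipse whose major axis follows the velocity
    direction; a small `across` plus a large `along` gives a motion-blur
    streak instead of a round DoF disk."""
    vx, vy = vel2d
    length = math.hypot(vx, vy) or 1.0  # degenerate velocity: leave axes unrotated
    ux, uy = vx / length, vy / length
    offsets = []
    for i in range(samples):
        a = 2.0 * math.pi * i / samples
        ea, eb = along * math.cos(a), across * math.sin(a)
        # rotate the ellipse-local offset (ea, eb) into screen space
        offsets.append((ea * ux - eb * uy, ea * uy + eb * ux))
    return offsets
```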

  • Chad

Anyway, to summarize: Krakatoa is a point renderer AND a particle management toolkit. I think the point renderer should stay as such and just find more ways to render points. The particle management toolkit part allows you to do some crazy particle controls prior to meshing for other rendering systems. I know we'd never be able to use Box#2 as-is unless we had Krakatoa; the caching in Box#2 (and PFlow in general) is flat-out broken. Without Krakatoa, we wouldn't be attempting some of the crazy simulations we do (and eventually render in Brazil or whatnot). We use Krakatoa to make the shots manageable, and it works a treat for that.

  • Chad

That sounds uber-cool! Wow, I can imagine many interesting possibilities with that :slight_smile: LOL, not to mention Matthias would have a field day!

Somehow I remember all the other details, but not the “let’s control the size of the disk by a particle channel” portion. So now I logged it: Ticket #244 for your future reference.

Oh, right, I just assumed everything was entered in the wish log as “let’s control the X of Y by a particle channel”, just as we have the specular power, eccentricity, and our lovely lighting as emission… Yay channels!

You guys always blow my mind when you go into your explanations… fascinating, really…

Nah, Bobo explains stuff, but I just make it up so I sound smart.

  • Chad