Just a note that Beta 4 is making some nice progress in both speed and features.
First, we made voxel rendering a bit faster, especially in certain situations. In a production case that required us to render an otherwise nearly unrenderable FumeFX simulation, the render time went down from 2 hours to 22 minutes per frame. In other scenes, the results vary depending on the configuration of the filled voxels on the grid: the more empty voxels there are, the better the speedup, since the new method skips testing pixels that would hit an empty voxel on the current scan-plane.
Then we tackled matte objects, arguably the slowest part of Krakatoa. The results were nothing short of mind-boggling. In another real-world production scene with millions of polygons (and of course many millions of particles), the matte objects preparation time went down from 4m12s to 3.6 seconds. This is 70 times faster, for those of you counting.
I created a simpler benchmark scene with a box of 10M particles which loads, sorts and renders in around 12 seconds. (Note: we have temporarily turned off the multi-threading of the PRT Loaders due to some work being performed on top of them, so the final render times will be shorter.) I added 9 teapots with 64 segments each as matte objects between the camera and the box, or about 2.4M faces. With the old default method and Use Depth Maps on, it needed 58 seconds. With the old method and Use Depth Maps off (pure raytracing), it took 266 seconds. With the new method, it rendered in 16 seconds; in other words, the matte object calculations took about 4 seconds.
Finally, we have a really cool (or rather ice-cold!) new feature in the works which will hopefully blow your minds.
Stay tuned!
Upon re-examination I quickly discovered that the Meaning of Life, the Universe, and Everything is really not 42 but a… and henceforth I am awestruck by the discovery! All I can say is WOW, well done. I shall refrain from using any expletives!
A bit of a read, that one is… looks like I’ll be frying some brain cells trying to figure this one out.
…another peek over the edge into the caldera. This is crazy. So this will manipulate every aspect of the particle channel data (all of the channels listed, anyway)? Am I understanding this correctly? So, for example, if I have the Acceleration channel data and I would like to speed it up, I can multiply its current value by a scalar and pipe the resulting data back into the Acceleration channel? (Or literally send it to any other channel specified?)
Basically Box #3 for channel data?
I am probably off here (which is why I am asking), but can I drop this modifier on a PF Source and acquire an applicable data channel?
It would not work with Acceleration because Krakatoa does not currently use it for anything, but you could do that with the Velocity channel, so your motion blur could be exaggerated even if the particles wouldn’t actually go there on the next frame because their position is hard-coded in the PRT sequence.
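To make the idea concrete, here is a minimal sketch of what a "scale the Velocity channel" operation does to per-particle data. This is not Krakatoa's actual API; the dict-based particle representation, the `scale_channel` helper and the 2x factor are all illustrative assumptions.

```python
# Hypothetical sketch of a KCM-style channel operation (not Krakatoa code):
# multiply one named per-particle channel by a scalar, leaving other
# channels untouched.

def scale_channel(particles, channel, factor):
    """Multiply the named channel of every particle by a scalar."""
    for p in particles:
        p[channel] = [component * factor for component in p[channel]]
    return particles

particles = [
    {"Position": [0.0, 0.0, 0.0], "Velocity": [1.0, 0.0, 0.5]},
    {"Position": [1.0, 2.0, 3.0], "Velocity": [0.0, -2.0, 0.0]},
]

# Exaggerate motion blur: double the stored Velocity without touching
# Position -- the particles never actually move there on the next frame.
scale_channel(particles, "Velocity", 2.0)
```

The key point from the reply above: only the rendered motion-blur streak changes, because Position is hard-coded in the PRT sequence.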
In a way, it works similarly to Box #3, but it is not history-dependent (at this point), so you are working with the data of the current time. Thus, it is not designed to change the way particles move between frames, but it is designed to change the way particles look on a specific frame. And it has huge implications for the coloring of particles, since it allows you to completely redesign their appearance after they have been saved to PRT.
Each modifier manipulates one channel, but you can have any number of modifiers each modifying a different channel. You can also stack modifiers manipulating the same channel.
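The stacking behavior can be sketched like this. Everything here is illustrative, not Krakatoa's implementation: the Density channel, the two example modifiers and the stack-application order are assumptions for the sake of the example.

```python
# Hypothetical sketch of stacked channel modifiers: each one edits a
# single channel, and several can target the same channel in sequence.

def scale_density(p):
    p["Density"] = p["Density"] * 2.0   # first modifier on the stack
    return p

def clamp_density(p):
    p["Density"] = min(p["Density"], 1.0)   # second modifier, same channel
    return p

def apply_stack(particles, stack):
    """Apply modifiers in stack order, each seeing the previous result."""
    for modifier in stack:
        particles = [modifier(p) for p in particles]
    return particles

pts = apply_stack([{"Density": 0.3}, {"Density": 0.9}],
                  [scale_density, clamp_density])
```

Because each modifier only sees the current frame's data, the order on the stack is the only dependency between them.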
At this point, the modifier works with PRT Loaders only, but this is just the first proof of concept. It might do more in the future. But I wouldn’t expect it to ever work with PFlows because a PF Source is, surprisingly, not the source of particles, but its Events are being asked for particles by Krakatoa, and you cannot add modifiers to the Events easily.
Also, the KCM is WAY faster than Box #3 as far as I can tell and since it is history-independent, you can go to frame 1000 of your pre-saved PRT sequence and rework its channels in real time (about 1M particles per second in the single-threaded version, with most of the time going into drawing the points in the viewport). I will do some benchmarking and see how it compares to Box #3.
You can also load a BIN file and access any of its RealFlow channels and do magic with them directly, something you cannot easily do with other means.
Ok, I benchmarked 10M particles with Box #3 vs. KCM. Keeping in mind KCM is currently single-threaded (since it is only 2 weeks old and under construction), it is still about 2 times faster than a comparable Box #3 DataOp. Workflow-wise it is not even possible to compare the two since KCM works on already saved PRT files and thus removes any need for pre-rolling all the frames to reach the current frame, so it is pretty much instantaneous. On the flipside, KCM does not provide geometry access yet, so performing Geometry Lookup functions is not possible at this point (but is planned).
I updated the page with some more simple examples and the Box #3 benchmark results.
That’s a lot of documentation for an unfinished beta feature. You’ve got a knack for that sort of thing that I would love to learn. Kudos.
I only see “color” output in the examples and screen grabs. What output channels are available?
The color output - does that change the vertex color channel on the object? Meaning we could apply a Vertex Color-driven map at render time using the changes from the KCM?
This would make a nice sweet spot in functionality for us: fewer features than Box #3, but possibly more than Vertex Color-based maps. Speed comparisons between the KCM and texture map lookups will be difficult because the maps in Max are all over the map speed-wise, but for common things like Falloff, Output or ColorCorrect, should we expect one method to be faster than the other? Actually, don’t worry about it - I’ll be trying that as soon as we get Beta 4 in our hands.
You can read and write out ANY channel. In the viewport, Color makes the most sense since it is immediately visible. But you could easily push particles along normals by reading position, adding normal times some factor and outputting to Position.
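The read/modify/write pattern described above can be sketched in a few lines. Again, this is a hypothetical illustration, not Krakatoa's code: the particle dicts and the `push_along_normals` helper are assumptions.

```python
# Hypothetical sketch of "push particles along their normals":
# read Position, add Normal times some factor, output to Position.

def push_along_normals(particles, factor):
    for p in particles:
        p["Position"] = [x + n * factor
                         for x, n in zip(p["Position"], p["Normal"])]
    return particles

pts = [{"Position": [0.0, 0.0, 0.0], "Normal": [0.0, 0.0, 1.0]}]
push_along_normals(pts, 0.5)
```

The same pattern works for any input and output channel pair, which is the point being made: nothing restricts a modifier to Color.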
The Color channel is the one that Krakatoa renders, so yes, that’s equivalent to the Vertex Color channel of Max. There is no need to render it via Vertex Color Map because it renders straight as color in Krakatoa, unless you want to combine it with other maps.
We fixed the saving of the Lighting channel which was broken in previous builds so you can now read that, multiply by a Material or solid color or whatever, scale its value to tweak the intensity and basically tweak the lighting after the fact.
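As a sketch of the "tweak the lighting after the fact" idea: read the saved Lighting channel, multiply it by a solid color and a scalar intensity. The channel layout and the `relight` helper are illustrative assumptions, not Krakatoa's API.

```python
# Hypothetical sketch of post-hoc relighting via the Lighting channel:
# multiply the saved per-particle Lighting by a tint color and an
# intensity scale, after the PRT was written.

def relight(p, tint, intensity):
    p["Lighting"] = [light * t * intensity
                     for light, t in zip(p["Lighting"], tint)]
    return p

p = {"Lighting": [0.5, 0.5, 1.0]}
relight(p, tint=[1.0, 0.8, 0.8], intensity=2.0)
```

Since this runs on data already saved to disk, no re-simulation or re-lighting pass is needed to try a different look.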
We intend to move most of the PRT Loader functionality, like Culling, Normals Acquisition etc., into the Channel Editing pipeline. With the current implementation, we have already taken care of the “Copy Density Into Map #” and “Assume Density of 1” options - since you can do these with a KCM, we will likely remove them from the PRT Loader’s UI.
As for the documentation, it is a mirror of our internal Wiki - as we write stuff, we tend to document it in parallel, so we don’t forget details. Some of the features on the list are not done yet, but most of the UI and basic operators and inputs work.
We are still missing Trigonometry functions, Random number generators, Noise functions, Boolean Logic, Matrix operations, Material and Script evaluation, but we hope to add them in the following weeks. Then we will have to add ways to create kdtrees and do geometry lookups to grab data from arbitrary objects. Also we want to move the culling to the KCM so you can apply any logic to the culling process and also place it wherever you want on the stack, but we will have to wait and see if that will work well…
I was trying to figure out the raw speed of both, but in daily practice KCM will “feel” faster.
I also updated the page with yet another example - we have now all vector and trigonometry math there so I wrote a test with a sine wave along the Z axis of the particle system pushing particles along their normals. It exposes Frequency, Phase and Amplitude and I animated the phase from 0 to 10 to make the wave run along the system… Fun-fun-fun!
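The sine-wave example can be sketched as follows. This is a guess at the math from the description above, not the actual flow: the exposed Frequency, Phase and Amplitude parameters are taken from the text, while the particle layout and the `sine_push` helper are assumptions.

```python
# Hypothetical sketch of the sine-wave normal push: the offset
# magnitude is a sine wave along the Z axis of the particle system,
# and the particle is displaced along its normal by that amount.
import math

def sine_push(p, frequency, phase, amplitude):
    offset = amplitude * math.sin(frequency * p["Position"][2] + phase)
    p["Position"] = [x + n * offset
                     for x, n in zip(p["Position"], p["Normal"])]
    return p

particle = {"Position": [0.0, 0.0, math.pi / 2.0],
            "Normal": [0.0, 0.0, 1.0]}
sine_push(particle, frequency=1.0, phase=0.0, amplitude=2.0)
```

Animating the phase parameter, as described above, is what makes the wave run along the system from frame to frame.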
So this raises another silly question: how often does it evaluate - every tick, frame, second? Is it adjustable? Or, I guess more appropriately, what’s the time base? LOL, I imagine the sine on the teapot at ticks may look like a caffeine overdose…
Normally, Krakatoa updates its particle sources once a frame - after all, there is only one PRT file per frame the PRT Loader could supply.
So the Modifier will also update once per frame for rendering. Obviously, the sine wave would not support motion blur unless you are running Max’s Camera MPass MBlur, which advances time at sub-frames and pushes the particles accordingly; since the Phase input is keyframed using a regular controller, the particles get pushed differently on each sample.
When working interactively in the KCE with the Auto button checked, an update is performed whenever the flow changes: a variable is changed, or a node is added, deleted, connected or switched. So not too often, but often enough. If it is too slow, one can uncheck Auto and update manually by pressing the Update button. It also updates whenever the modifier stack needs updating, for example if you change PRT Loader parameters or advance the time slider.
I added yet another example with the same sine wave on the normal push and color blending.
Gotcha. I see this and get stuck thinking about Particle Flow, i.e. how you can adjust your scalar speed by tick, frame or second, among other things, especially when actually changing positions. I usually start off using too large a value and then ask myself, “Where did my particles go?” Having said that, the interest is merely in the timing of the effect, which is more or less what I was wondering.
So the KCM is modifying what data, exactly? In the case of Position and vertex color, could this not be applied to standard meshes? And for other channels, is that data available to us outside of the KCM in the form of other (custom) modifiers? Oh, I suppose we could have TWO KCMs: one to convert to Position/vertex color and one to convert back out. But that’s assuming we have blank channels to work with…