If I use the same .prt in multiple PRT Loaders (not instanced), will it reload the file over and over? Or is it smart enough to only load into memory once?
B.
For memory conservation reasons, each PRT Loader loads the file independently.
It cannot be smart, because the same PRT data can end up in memory quite differently depending on each PRT Loader's settings, the KCMs on its stack and especially culling. To do a smart load, the particle stream would have to be loaded into memory once, then duplicated for each object that uses it as it passes through all the modification stages. That could easily double the memory requirements; if the PRT contains half a billion particles, it would be a bad idea to load it once into a shared pool and then pass copies to whatever objects need it. In fact, if you have culling on and all particles of a PRT happen to be outside of the volume, no memory will be used at all, since all particles will be discarded during loading.
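To illustrate the point, here is a minimal sketch in Python (not Krakatoa code; the particle record, cull box and file contents are invented for illustration): because each loader applies its own culling while the file is being streamed in, a fully culled loader never materializes any particles, whereas a shared pre-load would have to hold the whole file in memory first.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Particle:
    position: Vec3
    density: float

def stream_particles(source: Iterable[Particle],
                     cull_min: Vec3, cull_max: Vec3) -> Iterator[Particle]:
    """Yield only the particles inside the cull box; culled particles never
    occupy loader memory because they are discarded while the stream is read."""
    for p in source:
        if all(cull_min[i] <= p.position[i] <= cull_max[i] for i in range(3)):
            yield p

def load_for_loader(source: Iterable[Particle],
                    cull_min: Vec3, cull_max: Vec3) -> List[Particle]:
    # Each "loader" re-reads the source with its own settings applied on the
    # fly, so a loader whose cull box misses every particle keeps nothing.
    return list(stream_particles(source, cull_min, cull_max))

if __name__ == "__main__":
    on_disk = [Particle((float(i), 0.0, 0.0), 1.0) for i in range(1000)]
    a = load_for_loader(on_disk, (0, -1, -1), (9, 1, 1))        # keeps 10
    b = load_for_loader(on_disk, (5000, -1, -1), (6000, 1, 1))  # keeps none
    print(len(a), len(b))  # 10 0
```

A shared pre-load, by contrast, would have to pull the raw file into memory before any loader-specific culling could run, which is exactly the half-billion-particle worst case described above.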
We can log a wish for a different behavior. For example, we could introduce a dedicated loader object which only loads particles and does not render or display them. It could then be referenced as the source by multiple uninstanced PRT Loaders. So if you have hundreds of PRT Loaders all using the same PRT, loading once and passing the data around would be a much better and faster setup than paying the reading overhead each time. But it wouldn't be the default, just one option for when it makes sense…
In the past, I’ve wished for a pre-render PCache which would load the PRTs off disk and store them in memory (compressed or decompressed?) before they are used by PRT Loaders or whatnot, basically doing inside Krakatoa and 3ds Max what we now do with a RAM drive. When you have a small number of particles, say less than 300M, you might have enough memory to store them both pre- and post-processed. This would reduce the I/O requirements tremendously.
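A rough sketch of what such a pre-render cache could look like (hypothetical class, not an existing Krakatoa feature): the raw, still-compressed bytes of each PRT file are kept in RAM keyed by path, so every subsequent load is served from memory instead of disk.

```python
import io
from typing import Dict

class RawFileCache:
    """Keeps the raw bytes of each particle file in RAM, keyed by path."""
    def __init__(self) -> None:
        self._blobs: Dict[str, bytes] = {}

    def open_stream(self, path: str) -> io.BytesIO:
        """Return a file-like object served from RAM; hit the disk only once."""
        if path not in self._blobs:
            with open(path, "rb") as f:          # the single disk read
                self._blobs[path] = f.read()
        return io.BytesIO(self._blobs[path])     # later loads parse from memory

    def memory_bytes(self) -> int:
        """Rough footprint of the cache, for deciding what still fits in RAM."""
        return sum(len(b) for b in self._blobs.values())
```

Caching decompressed data instead would make each loader's parsing faster but multiply the footprint, which is the compressed-vs-decompressed trade-off mentioned above.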
But you should understand that this requires a complete refactoring of everything Krakatoa does internally. Thankfully, v2.0 is just such a refactoring effort, so who knows… We will consider something along those lines.
Everything? Wouldn’t it just apply to PRT Loaders and PRT Birth?
What I mean is that Krakatoa doesn’t have a concept of caching streams to use at render time. It has a SINGLE huge stream where all the particles that will be rendered are collected AFTER all loading/acquisition, deforming, KCMs, culling and world space transforming have taken place. But a cache of PRTs would have to be in object space, unmodified, unaffected by KCMs and non-culled. There is no concept of such handling right now because, as I mentioned already, if your render contains 1MP after culling, the “pre-cache” could theoretically contain 1BP and eat up all your memory. So we never attempted to do that.
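A minimal sketch of that pipeline (Python, with invented names): each source is loaded, deformed, run through its KCMs, culled and transformed to world space, and only the surviving particles are appended to the single stream the renderer consumes. An object-space pre-cache would have to sit in front of this pipeline and hold the unmodified, non-culled particles, which is where the memory risk comes from.

```python
from typing import Callable, Iterable, Iterator, List

Particle = dict  # e.g. {"Position": (x, y, z), "Density": 1.0}

def process_source(load: Callable[[], Iterable[Particle]],
                   deform: Callable[[Particle], Particle],
                   kcms: List[Callable[[Particle], Particle]],
                   keep: Callable[[Particle], bool],
                   to_world: Callable[[Particle], Particle]) -> Iterator[Particle]:
    """One source's trip through the pipeline: load, deform, KCMs, cull,
    transform to world space.  Nothing object-space survives this function."""
    for p in load():
        p = deform(p)
        for kcm in kcms:
            p = kcm(p)
        if not keep(p):   # culling happens before the particle is ever stored
            continue
        yield to_world(p)

def build_render_stream(sources: List[dict]) -> List[Particle]:
    """The single world-space pool that the renderer actually sees."""
    stream: List[Particle] = []
    for src in sources:
        stream.extend(process_source(**src))
    return stream
```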
But with the split of renderer and “bridge” in 2.0, where Krakatoa The Renderer would be an external app called by Max via a bridge, the Max portion could theoretically do some sort of caching. The KCMs, deformations, world space transforms and culling could be applied while passing the cache to the renderer instead of once, explicitly, at loading time. Thus that caching could apply to anything, not only PRTs. But I am just musing; this has not been discussed in this form with Darcy, and I suspect it might be out of scope even for 2.0.
That’s why I feel a dedicated PRT Cache object would be useful: it would simply provide a memory stream to the real PRT Loaders and handle just the disk loading without any modifications, so it could be picked up by multiple PRT Loaders while performing just one load for all of them…
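A sketch of how such a shared source could behave (invented names; this object does not exist in Krakatoa today): the file is read and parsed exactly once, and each PRT Loader that references it applies its own culling and modifications while iterating, so the only per-loader cost is for the particles that loader actually keeps.

```python
from typing import Callable, Iterable, Iterator, List, Optional

Particle = dict

class SharedParticleSource:
    """Reads and parses a particle file exactly once; loaders only iterate it."""
    def __init__(self, path: str, reader: Callable[[str], Iterable[Particle]]):
        self._particles: List[Particle] = list(reader(path))  # the one load

    def __iter__(self) -> Iterator[Particle]:
        return iter(self._particles)

def loader_view(source: SharedParticleSource,
                keep: Callable[[Particle], bool],
                modify: Optional[Callable[[Particle], Particle]] = None
                ) -> Iterator[Particle]:
    """Each PRT Loader culls and modifies on the fly; only the particles it
    actually keeps are copied, and the shared pool stays untouched."""
    for p in source:
        if not keep(p):
            continue
        yield modify(dict(p)) if modify else p
```

This is essentially the bridge-side caching idea from the previous paragraph, expressed as a scene object rather than as a renderer-side cache.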
Another approach we were discussing is a SourceID channel. This would not speed up the loading of the same PRT file into multiple PRT Loaders, but it would allow changes to some channels, for example all shading channels, without a full reload. Say you have loaded particles from 10 sources and one of these sources gets a new material assigned. Instead of reloading all particles and recalculating all materials as in 1.5.x, we could run through the existing memory cache (the big pool we have now), find all particles matching the ID of the changed object and update them without touching the rest. Or if the object was moved, we could replace the Position channel with the new data, etc. This would allow you to load 100MP and tweak a subset of them via KCMs, modifiers and materials without needing to reload unless the PRT itself has changed.
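As a sketch of the SourceID idea (the channel name comes from the description above, everything else is hypothetical): each cached particle remembers the handle of the scene object it came from, so a material change on one source only requires re-shading that source's particles in place.

```python
from typing import Callable, List, Tuple

Particle = dict  # e.g. {"SourceID": 42, "Color": (r, g, b), ...}

def reshade_source(cache: List[Particle],
                   changed_source_id: int,
                   shade: Callable[[Particle], Tuple[float, float, float]]) -> int:
    """Re-evaluate shading only for particles that came from the changed
    source; everything else in the big pool is left alone."""
    touched = 0
    for p in cache:
        if p["SourceID"] == changed_source_id:
            p["Color"] = shade(p)   # updated in place, no reload from disk
            touched += 1
    return touched
```

A moved object would be handled the same way, except the Position channel (and anything downstream of it) would be rewritten instead of the shading channels.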
That sounds like an interesting idea. So, if I understand this correctly: if I want to modify a single PRT Loader/Volume in the stream of a group of PRT Loaders/Volumes that have been cached to a single Global PRT, I will be able to via SourceID. Is that correct?
Not exactly.
Let’s say you have a PFlow, two PRT Loaders, three PRT Volumes and some Geometry Vertices loaded and rendered. Currently, if you hit the Render button again with PCache off, all these particles will be reloaded from scratch. If you check PCache, all particles will render the same as in the previous render, except for the things that are allowed to change, which include Density and Lighting (if LCache is not checked). But you cannot change the material, the KCMs, the deformations or the culling of one PRT Loader without reloading them all.
If we were to introduce the SourceID channel, the NodeHandle of the PFlow Events, the PRT Loaders, PRT Volumes and Geometry objects (and any other particle sources like TPs, FumeFX etc.) would be stored in the SourceID channel. Now if you assigned a new material to one of the PRT Loaders, it would be possible to tell which of the particles cached in memory from the previous render test came from that PRT Loader. So we could run through those particles and call the shading code again to evaluate the new material just for them, without reloading them from disk and without touching any of the other particles unaffected by the change. Or, if you moved the PRT Loader and Use Node Transforms is checked, we would have to reload all particles coming from that PRT Loader and update their positions (plus anything downstream of that, like KCMs and culling). This might require sorting the particles in memory to bring the changed particles to the end of the stream, then replacing them with the new particles while keeping the unchanged particles untouched in memory. So if your PRT Loader accounted for 1M out of 100M particles, you wouldn’t have to wait for 100M to update, just for 1M.
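A minimal sketch of that replace-one-source step (hypothetical names): the unchanged particles keep their place at the front of the in-memory stream and only the changed source's particles are swapped for the freshly reloaded ones, so the cost scales with the 1M changed particles rather than the 100M total.

```python
from typing import Iterable, List

Particle = dict  # must carry a "SourceID" entry

def replace_source(cache: List[Particle],
                   changed_source_id: int,
                   reloaded: Iterable[Particle]) -> List[Particle]:
    """Swap out one source's particles for freshly reloaded ones.  A stable
    partition stands in for the 'sort the changed particles to the end of the
    stream' step described above; the unchanged particles are not touched."""
    unchanged = [p for p in cache if p["SourceID"] != changed_source_id]
    return unchanged + list(reloaded)
```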
This is just an idea. We have not committed to it, but it is on the list of possible 2.0 features.
Oookay, I see: a PCache enhancement, and a good idea. Waiting is fine for some things, like a good cheeseburger, but simulations and particle updates can be frustrating (especially for minor tweaks). I like how superhyperSam put it a little while back: “working in stereo”. Work on one machine, sim/update on the other… finish the sim/update, switch. Musical computers…