Any chance we can get a list of bug fixes and changes before it is released? I’m trying to plan some renders, and some key fixes could really help my schedule.
Also, which bug fixes are most important to you? I would like to take into account what is holding you back, instead of just what our production needs are.
The environment mapped voxels are the only thing we can’t work around at this point.
Omnis in voxels can be handled with 6 spots (albeit painfully)
Non-black background in voxels can be handled (just takes up a Fusion license)
The KSW pivot offset problem can be handled (just requires us to follow very specific rules)
The crashing on display problem can be handled (provided you remember to turn off the display before saving)
But…
The biggest problem that we can’t work around isn’t a bug, really. It’s the “I would like to save shading to a PRT” wishlist/request. It’s killing us on productivity here. Without being able to “composite” in render, we have to do crazy passes to be able to nest the Krakatoa renders later in Fusion. For example, how do you composite transparent Phong-shaded particles intermixed with transparent isotropic particles? When they are swirled together, it’s really complex, not just FG/BG. I’d love to be able to just do the composite/rasterize in a render, and get the lighting or shading from the particle data channels.
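To make that concrete, here’s a toy sketch (plain NumPy, nothing Krakatoa-specific, and every name in it is made up for illustration) of what I mean by compositing in the render once each particle carries a pre-shaded, premultiplied color:

```python
# Toy sketch: once each particle carries a pre-shaded, premultiplied RGBA --
# however it was shaded, Phong or isotropic -- intermixing them is just a
# depth sort and a back-to-front "over", with no FG/BG pass juggling.
import numpy as np

def composite_particles(px, py, depth, rgba, width, height):
    """Splat pre-shaded particles to their nearest pixel, farthest first."""
    image = np.zeros((height, width, 4), dtype=np.float32)
    for i in np.argsort(-depth):                    # back to front
        x, y = int(px[i]), int(py[i])
        if 0 <= x < width and 0 <= y < height:
            a = rgba[i, 3]
            # premultiplied "over"; the shading model that produced rgba[i]
            # is irrelevant at this point
            image[y, x, :3] = rgba[i, :3] + image[y, x, :3] * (1.0 - a)
            image[y, x, 3] = a + image[y, x, 3] * (1.0 - a)
    return image
```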
Chad
EDIT: I suppose I could work around the environment mapped voxels, too. A combination of unlit (but mapped) particles with an additive Fusion composite would do the trick.
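Numerically, that workaround is just an add. Something like this, with NumPy standing in for the Fusion comp (clamping only if you need a display-ready image):

```python
# Render the environment-mapped particles unlit as their own pass, then add
# that pass over the lit beauty. Addition is order-independent, so it can
# slot into the comp anywhere.
import numpy as np

def additive_merge(beauty, env_mapped_unlit):
    """Per-pixel add of the unlit, env-mapped pass over the beauty pass."""
    return beauty + env_mapped_unlit
```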
Thank you. Some things on there look really tantalizing…
“Unified internally the volumetric and additive modes into a single cohesive system in preparation for coming changes.”
“Added a new option to Matte Objects to output multiple image layers instead of a single image. The layers can be used to better integrate geometry inside of particle clouds by providing a background and foreground layer.”
“Added the ability to use a PRT Loader as the source of a Krakatoa File Birth and Krakatoa File Update operator. The render stream of the PRT Loader will be used to create and drive the particles.”
For the ParticleOStream, that only writes to disk, right? I can’t get it to open directly in a PRT Loader or be evaluated directly at render time, right? I suppose dumping to a RAM disk would get me some pretty decent speeds anyway.
I could write a new stream that generates particles by running a MaxScript… I’m not sure if it would have reasonable performance, but it would be kind of neat. Bobo seems happy enough to run a script that saves to disk and then loads with a PRT Loader. Are there potential benefits to a more automatic system that I can’t see?
If you can modify the PRT Loader with MaxScript, to have it grab the new PRT file or trigger the “update view cache”, then you can package that all up together and be done with it, I think. Not a big deal; I’d have to try it out for a bit to see how it works in practice.
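Roughly what I’m picturing for the “script writes, loader reloads” loop, as a sketch. write_prt is a made-up stand-in that dumps an .npz just so the sketch runs (the real thing would be the ParticleOStream or the exporter), and the RAM-disk path and sequence naming are only assumptions:

```python
import os
import numpy as np

CACHE_DIR = r"R:\prt_cache"   # assumed RAM-disk mount; any fast path works

def write_prt(path, channels):
    """Stand-in for the real writer (ParticleOStream); dumps .npz so this runs."""
    np.savez(path + ".npz", **channels)

def cache_frame(frame, count=10000):
    """Generate one frame's worth of particles and write it to the cache."""
    rng = np.random.default_rng(frame)
    channels = {
        "Position": rng.normal(0.0, 10.0, size=(count, 3)).astype(np.float32),
        "Velocity": rng.normal(0.0, 1.0, size=(count, 3)).astype(np.float32),
        "Color":    rng.random((count, 3)).astype(np.float32),
    }
    path = os.path.join(CACHE_DIR, "generated_{:04d}.prt".format(frame))
    write_prt(path, channels)
    # the Max-side script would then point the PRT Loader at this file (or
    # its sequence) and refresh the view cache
    return path

if __name__ == "__main__":
    os.makedirs(CACHE_DIR, exist_ok=True)
    for f in range(10):
        cache_frame(f)
```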
Chad
EDIT: Oh, and that’s the other thing… This writes out particles, as opposed to modifying existing ones. So I can’t make a MaxScript that flips the normals from a PRT stream or something clever like that, right? I mean, the new PFlow operator would be able to do almost anything I can think of right now, so I’m just thinking aloud.
Well, the moment Darcy wrote the ParticleOStream, I asked “but how about reading particles?” He said we will do it eventually. So we should be able to grab an existing particle source (anything Krakatoa would render) and get access to its particles. Modify them, then dump to disk. Or open a PRT file from disk (via a PRT Loader or directly via a stream), read one particle at a time, make changes, and save a new PRT. All that jazz.
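In other words, a read/modify/write pass would look something like this. The reader and writer below are stand-ins (.npz on disk, just so the sketch runs), not the actual stream API, but the shape of the operation is the point:

```python
import numpy as np

def read_prt(path):
    """Stand-in reader; the real thing would be the eventual particle istream."""
    with np.load(path + ".npz") as data:
        return {name: data[name] for name in data.files}

def write_prt(path, channels):
    """Stand-in writer; the real thing would be ParticleOStream / the exporter."""
    np.savez(path + ".npz", **channels)

def flip_normals(src_path, dst_path):
    """Read every channel, negate Normal if present, save a new particle file."""
    channels = read_prt(src_path)
    if "Normal" in channels:
        channels["Normal"] = -channels["Normal"]
    write_prt(dst_path, channels)
```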
I’m not sure what changes you are talking about, but exporting shading to color (or a special shading channel) would not only let you do some better effects, it would also let you optimize scenes: you could render the lighting or shading once, and then choose the raster size, filtering, depth of field, motion blur shutter, etc. later. You could also combine different smaller renders together, since you would only need position, shading, and velocity to make a render, not all the other channels like normals, lighting, etc., so memory consumption would be lower.
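To spell out the optimization, here’s a rough sketch of the split I’m imagining: shade once with all the heavy channels loaded, then re-rasterize as many times as you like from just Position, Color, and Velocity. The shading and splatting math below is deliberately dumb and entirely made up; it’s the data flow that matters:

```python
import numpy as np

def bake_shading(normal, light_dir, base_color):
    """One-time, expensive pass: Lambert-ish shading baked into per-particle Color.
    Normals and lights can be dropped from memory after this."""
    n_dot_l = np.clip(normal @ light_dir, 0.0, 1.0)
    return base_color * n_dot_l[:, None]

def rasterize(position, color, velocity, width, height, shutter=0.5):
    """Cheap re-render from the baked channels only (orthographic, 1px splats)."""
    image = np.zeros((height, width, 3), dtype=np.float32)
    # crude motion blur: smear each particle along its velocity over the shutter
    for t in np.linspace(0.0, shutter, 4):
        p = position + velocity * t
        x = np.clip(p[:, 0].astype(int), 0, width - 1)
        y = np.clip(p[:, 1].astype(int), 0, height - 1)
        np.add.at(image, (y, x), color / 4.0)
    return image
```

With that split, changing the raster size, filtering, or shutter only reruns the cheap second half, and combining several smaller particle sets is just running them through the same rasterize call.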