We are working on a prototype for render elements in Krakatoa, and I want to run it past you guys. I figure there are 3 render element categories:
- Shader Elements - These are the various sub-components of the shaders, such as Diffuse, Specular, Transmission, etc.
- Data Elements - These are passes that display data about particles, like Normals, Velocity, etc.
- Renderer Elements - These are passes that visualize parts of the rendering process. This is the most vague category, but it includes things like Raw Lighting (light reaching a particle before the shader is run), Self Shadow (extinction due to interaction with particles before light reaches a particle), and possibly many more.
Unlike Scanline or VRay, I don't think I will hard-code a specific render element object with a unique class ID per possible pass. Krakatoa is too flexible for that to ever be worthwhile, since we can have arbitrary data, etc. My current plan is to make 3 objects that will appear in the normal Render Elements list, one for each category I listed above. Each of these objects will have appropriate UI elements to select the specific data to render. For example, a Shader Element will have a dropdown to choose the shader components exposed by a specific shader; for Phong shading this would be Diffuse and Specular. The Renderer Elements would simply have a list of elements that are supported by Krakatoa. This is more or less a hard-coded list because it requires code to be written in the core of the renderer to support them.

The Data Elements are the most interesting, since they can support arbitrary data that Krakatoa doesn't know about at all. Currently you could render these channels by carefully crafting KCMs to assign Color, Emission, etc. and the correct lights. The proposed system will expose the Magma interface via the render element, drawing whatever the output of the KCM is for a given particle. I'm not sure what sort of compositing modes make sense for this particular type. Currently I am rendering the output with full alpha, but I'm sure people want the normal Density-based approach as well.
I wonder what Renderer Elements are possible… Could we get, say, a "Spatial Density" output that would show the number of particles per pixel? Same thing with the voxel renderer: output the number of particles in that voxel, or the number of particles sampled by that voxel in the filter?
Without layers, the Data Elements are going to be just first hit unfiltered, right? Might not be ideal, but the alternative would be to add layers and increase pixel samples, right?
With Shader Elements, would it be possible to evaluate more than one shader at a time? I mean, could you output Phong Specular and Schlick Scatter in the same render?
I can make this work if all particles render with both shaders. Having some particles render one shader and some render another is a tricky problem. It's pretty straightforward for particles (use an integer index to assign a shader), but really freaking hard in a voxel (it needs a float PER-Shader-PER-voxel to store the relative weighting of each shader). I guess that's not hard as much as it is prohibitively expensive.
That's definitely something that can be done. In fact, I would do that by using additive compositing and just rendering the Density channel. You would have to have all particles with a density of 1.0 and a global tweak of 1.0 (instead of the 5.0e-1 default). You need to be aware of how the filter would affect this too, but I'm sure you know that.
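(Worked out: with every particle at density 1.0, the global tweak at 1.0 and additive compositing, each pixel of the Density element is just the sum of the per-particle filter weights. With a nearest-neighbor drawing filter each particle contributes exactly 1.0 to a single pixel, so the value is a literal particle count; with a wider filter like Bilinear each particle's 1.0 gets spread over its footprint, so individual pixels blur even though each particle still adds up to 1.0 overall.)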
Yeah. This is something that no one can adequately explain to me. What the hell does a Normal pass for a semi-transparent set of objects look like? I personally think it's useless, but having it around seems to make some people happy. Multiple layers could be achieved somewhat like the shadows, I guess, but Max doesn't provide any way to do that within the Render Element framework. All we get to do is fill in a Bitmap object, which definitely doesn't support all the fancy stuff I was doing with OpenEXR images.
Shader per particle… Interesting… An index IS all you would need, but you'd need to provide additional channels for the parameters unless they were global; although if the parameters could map, you'd be able to re-use the channels. As for the voxel mode being expensive, aren't you only storing one 2D sampling plane at a time? Doesn't seem that expensive if that's the case…
I was assuming rendering twice, once for each shader; I hadn't considered the per-particle shader idea.
The Normals pass on semi-transparent objects is useless, but if you rendered out small slices, it wouldn't be so bad. The filtering in Z would only be about as large as the filtering in XY, and to some extent having filtered normals is OK when the point data is supposed to represent a continuous surface. The particle rendering already does render camera-aligned slices, so if each one could be output to a layer…? I just don't know how the renderer sets all that data. It's almost like you'd need to do it independently of the regular render output so you could save each slice as it rendered (as opposed to storing a huge buffer of hundreds of full-float images).
Hmmm… Makes me want to just have Krakatoa rendering in Fusion so I don’t have to worry about the rendering until I was compositing. Rendering API?
For the Data Elements, would you be able to get ID output as a 32-bit int (that's what it uses, right)?
Also for the Data Elements, could we save KCMs and add them to the list?
Could we "nest" KCMs? Meaning, could we output a KCM that converted position to RGB, and another that used that RGB output to compare to the camera TM and output a different RGB? Or would we have to have each KCM work from scratch?
What we did for Beta 3 (but did not expose in the UI of that build) was just a KCM implementation similar to the Global Overrides. We have removed that now and will host one MagmaFlow per Data Render Element in the Render Elements tab of the Renderer, as per Max specs.
These MagmaFlows can access only channels available to the Renderer. For example, to output any normal data, the Normal channel has to be in memory, either because Env. Reflections or Phong Shading is selected. We were playing with the idea of allowing you to force a channel into memory even when it is not directly required by the current settings, but we abandoned that. Thus, you cannot create intermediate data to pass between RE MagmaFlows; you can only load channels available to the Renderer at render time, perform calculations with them and output a COLOR value. So you will have to repeat the processing in each MagmaFlow if they share some code. But with Copy&Paste, BlackOps and flow saving, that might not be that bad.
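To make the "repeat the processing" point concrete with the Position-to-RGB example from above, here is the idea as plain MAXScript math (not actual Magma nodes, and the mappings are made up): since the second flow cannot read the first flow's output, it has to rebuild the Position-to-RGB step itself before doing its own camera-related work.

    -- "flow A": remap world position into a 0..1 color (illustrative mapping only)
    fn posToRGB p = (p + [100,100,100]) / 200.0
    -- "flow B": wants that RGB relative to the camera, so it repeats the same remap
    -- on the camera-space position instead of reading flow A's result
    fn flowB p camTM = posToRGB (p * inverse camTM)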
If you converted the ID channel to a Float32[3] color, you could probably dump it to a render element…
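For what it's worth, here is one way that conversion could work, written as plain MAXScript math rather than actual Magma nodes (and assuming non-negative 32-bit IDs): a Float32 stores integers exactly up to 2^24, so splitting the ID into its low and high 16-bit halves keeps every bit recoverable from two of the three channels.

    -- split a 32-bit ID into two exactly-representable 16-bit halves
    fn idToColor id =
    (
        local lo = mod id 65536    -- low 16 bits
        local hi = id / 65536      -- high 16 bits (integer division)
        point3 lo hi 0.0           -- third channel left unused
    )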
The Data Element sounds interesting. If I understand correctly, I could build a KCM, for instance, that grabs the Velocity channel data, manipulates the color by speed, and outputs a colorized velocity pass? Am I on base?
Yep, I forgot to remove the “Render Elements” node from the Schematic Flow (if you are still looking for that Easter Egg).
Velocity pass would be one example. You could also build things like Z-Depth (both real Z-depth and camera distance), Normal and Tangent passes in various spaces etc. Each pass has to draw its own “final pass” using the particles in memory, but you load and light the particles just once, so it is a lot faster to output 10 passes in one go. It is quite cool.
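To illustrate the difference between the two depth flavors (and the colorize-by-speed idea), here is the underlying math as plain MAXScript, not Magma nodes; cam and maxSpeed are whatever camera node and speed scale you care about:

    -- depth along the camera's view axis vs. straight-line distance to the camera
    fn realZDepth pos cam = abs ((pos * inverse cam.transform).z)
    fn camDistance pos cam = length (pos - cam.pos)
    -- grayscale velocity pass: brightness proportional to speed, clamped at maxSpeed
    fn speedColor vel maxSpeed = (point3 1 1 1) * (amin (length vel / maxSpeed) 1.0)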
Call me dumb (just trying to follow along without my mind exploding), but could you make a pass for reflection as well? Or is this already possible through other methods (I'm not exactly an expert with the KCM)? Is it also possible to have a motion vector pass? I'm not sure exactly how to explain this, but I know there is a Maya -> After Effects shader that passes a spectrum back to the plugin for post blur control. Dunno, long shot, but maybe there's also some other way of getting that information.
Here's a link to the mentioned plugin/shader: alamaison.fr/3d/lm_2DMV/lm_2DMV.htm
I'm guessing this is possible to replicate with the KCM as is, but as far as useful passes go, I thought it was worth mentioning.
Uhm, yes, velocity pass for post-process motion blur is quite common and could be done (kind of) using these Render Elements. Just keep in mind we are shading VOLUMES so what these passes will output will likely describe the surface of the volume and not what is going on inside. That’s the point of the whole thread actually - figuring out what people want/need. Btw, we sell a 2D post-processing package for Eyeon Fusion that has a Depth Motion Blur module that does both DOF and MBlur based on passes, so we have some experience with the use of some of these, as do the guys from Anatomical (Ben and Chad).
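For reference, one common way to encode a 2D motion vector in a color (a generic convention, not necessarily what lm_2DMV expects) is to remap each screen-space component from -maxDisp..maxDisp into 0..1 around a 0.5 midpoint, so zero motion comes out mid-grey. As plain MAXScript math:

    -- vel2D is the screen-space velocity in pixels per frame; maxDisp is the largest
    -- displacement the post tool is told to expect
    fn motionVectorColor vel2D maxDisp = (
        point3 (0.5 + vel2D.x / (2.0 * maxDisp)) (0.5 + vel2D.y / (2.0 * maxDisp)) 0.0
    )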
As for reflections, funny that you mention it. When it comes to the Environment reflections, I think it should be relatively easy to do as it is an additional illumination pass on top of the main pass (as seen in Voxel mode where it just renders in camera space in a separate go). But we have been looking at getting real reflections working (or at least we logged a bug/wishlist against it today), and there might be other ways to deal with reflecting scene geometry in particles in the future (eventually in v2.0). So stay tuned…
While MDBl (MotionDepthBlur) works reasonably well on a very sticky problem, the normal output from Krakatoa (sandstorms) doesn't work that well. The abnormal output (chameleons) works OK, but the cases where it works are so few that it's almost always more time-effective to render in camera. It IS possible to render depth slices using camera-aligned clipping objects, but it's not time-effective, just control-freak-effective.
Well, considering the common use cases (reflections on churning/falling water), reflections probably don't have to be super sharp. If a KCM could sample from a surface, we could just bake reflections to a proxy object generated from the particles and sample from that. Or, if we could sample from a second set of particles, we could evaluate reflections on a low-density proxy pointset and resample to the larger pointset.
I assume you are talking about creating the actual geometry reflections in another renderer, right? Meaning, it's pointless (har) to reflect a sphere in Krakatoa if the sphere doesn't have the intended shader that it does in the direct camera rays.
The Data Elements evaluate on one large chunk of points, right? So we can’t input “$.wirecolor as point3 / 256” and output a float16[3] and get a wirecolor pass, right?
Chad
EDIT: It seems to evaluate $ as whatever the currently selected object is, which is interesting, but I think it's not going to be helpful.
Since the Input node runs a generic MAXScript expression, $ is evaluated as the current selection and could cause problems if nothing is selected.
Particles have (right now) no knowledge of the object they came from, although we hope some day to have a source channel that would contain, say, the Node Handle of the object the particle originated from or something like that.
Then we would need a Property operator to grab per-particle a property of an object based on its Node Handle instead of a manually picked one, and you would be able to grab the wirecolor corresponding to the PFlow Event, PRT Loader, PRT Volume or whatever.
But this is not possible right now. Also, it is not possible to run a script per particle; the Input Script is run just once for all particles in the stream.
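For example, rather than relying on $, an Input Script can reference a node explicitly (the node name below is just a hypothetical placeholder); and since the script is evaluated only once, every particle in the stream gets the same value:

    -- explicit node reference instead of $; evaluated once for the whole stream
    (((getNodeByName "PRT_Loader01").wirecolor) as point3) / 255.0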
So there's no way for a KCM to query the object it is assigned to? Ignoring Global KCMs and Data Elements (though that's what I would eventually want), I can't have a BlackOp that queries something on a per-object basis (not per-particle) without exposing the object selection? I guess what I'm saying is: this hypothetical operator with an "Object" output type, could it be assigned to the "base object" or similar?
It might be possible in the future.
We will look into it if we come to the “Object” and “Property” workflow I mentioned before. So you could specify “Base Object” as the Object Input and the Property operator would access properties of the node the KCM is assigned to instead of a picked object…
Have to discuss this with Darcy first.
We are already doing something like this by allowing a Texture input to use the Diffuse Map of the Material assigned to the node the KCM is added to (if available).
Yeah, and I can totally see why this wouldn’t work for render elements, so we’re talking under the wrong topic, but it seems like sampling per-object (but not per-point) wouldn’t be too expensive, as you can see with the script inputs doing things like distance and such. Something to keep in the wishlist, I suppose.
Would it be possible to get a per-light rasterization as a render element? For point rendering, all the lighting is accumulated serially to the LCache and rasterized all at once. But if we could rasterize each lighting pass by itself, as the voxel renderer does, and then just accumulate the rasterized results, that could be a very useful render elements setup for us. The rasterization step would add render time, but it would in many cases save re-renders when a change in the settings of one light is required. It's possible to render each light pass now, but in some cases the PCache building isn't as fast as you might like, and you have to go through the step of accumulating the render output in Fusion or whatever afterward.
Currently we support a Lighting Render Element (see KrakatoaRendererElement) which saves the lighting contribution of the LAST light processed. Similarly, we support a Shadow element which does the same for the last light's attenuation. We looked into adding an option to select which light to output so one could add as many elements as there are lights, but there were some issues with that code and we reverted to last-light-only for now. In the long run, we have plans to support all lights.
So right now you would have to turn the PCache on and the LCache off, disable all lights but one, change the Lighting and Shadows render elements' names to reflect the name of the light, render, then disable that light and enable the next one, rename the elements' output again, render, and so on. A PITA, but doable (and could be somewhat automated via MAXScript).
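Roughly, such a MAXScript automation could look like the sketch below; the output path is a placeholder, the filtering assumes anything with an .on property is a light you care about, and you would still set the PCache on / LCache off (by hand or by script) as described above:

    sceneLights = for lt in lights where isProperty lt #on collect lt
    reMgr = maxOps.GetCurRenderElementMgr()
    for i = 1 to sceneLights.count do
    (
        for lt in sceneLights do lt.on = false       -- turn every light off...
        sceneLights[i].on = true                     -- ...except the current one
        -- point each render element's output file at a per-light name
        for e = 0 to (reMgr.NumRenderElements() - 1) do
        (
            reMgr.SetRenderElementFilename e ("C:\\passes\\" + sceneLights[i].name + "_element" + (e as string) + ".exr")
        )
        render()                                     -- one pass per light
    )
    for lt in sceneLights do lt.on = true            -- restore the lights afterwards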