Alembic support

I think it would be a nice feature to add Alembic support to KSR. Replacing the OBJs would be extremely efficient, especially when rendering with motion blur enabled, since the OBJs don’t carry vertex velocity.

Aliasing the velocity to another channel might be helpful, since they’re just matte objects, no?

Alembic support is definitely something we would consider. I know that .obj files are kind of nasty. Krakatoa SR does support an alternative, which is our own XMESH format, but I know not everyone’s using that.

A note about motion blur. To get mesh vertex motion, two (or more) meshes can be specified at differing shutter times. Currently our scene exporters don’t do this automatically, but it is supported. The scene file would look like this:

import KrakatoaSR
ri = KrakatoaSR.Ri()
#...
ri.FrameBegin( 1 )
ri.Format( 640, 480, 1 )
ri.Display( "render_output.exr", "file", "rgba" )
ri.Shutter( -0.00833333, 0.00833333 )  #30 fps, 180 degree shutter
ri.Projection( "perspective", "fov", 90 )
#...
ri.WorldBegin()
#...
ri.MotionBegin( -0.00833333, 0.00833333 )
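# two mesh samples with identical topology: the first is used as the shutter-open position, the second as the shutter-close position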
ri.MeshFile( "firstmesh.obj" )
ri.MeshFile( "secondmesh.obj" )
ri.MotionEnd()
#...
ri.WorldEnd()
ri.FrameEnd()

Hi Conrad, on the topic of motion blurred matte objects: while the Maya Exporter now supports exporting subframe OBJs for each motion blur segment, changing topology does not seem to be supported by KSR (please see attached). The mesh sequence in this test changes every frame (and possibly at the subframes too, since it’s interpolated from Naiad’s Maya plugin). What strategies can one use to get usable matte objects with changing topology? Will KSR allow topology changes with XMesh matte objects?

To get vertex motion from a mesh, Krakatoa SR needs two meshes of the same topology. There is no general way of generating vertex motion blur from two meshes when the mesh topology is changing.

Preferably, we would use vertex velocity to determine motion, but instead we chose to do it by differencing two (or more) meshes. I don’t like generating vertex motion from two meshes, but we do it this way because .OBJ mesh files do not support vertex velocity channels.
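Roughly, the differencing idea looks like this (a simplified plain-Python sketch, not the actual Krakatoa code; the function name and list-of-tuples vertex representation are just for illustration):

# Sketch: per-vertex velocity from two same-topology meshes, obtained by
# differencing their positions over the shutter interval.
def vertex_velocities( verts_open, verts_close, shutter_open, shutter_close ):
    dt = shutter_close - shutter_open
    return [ ( ( x1 - x0 ) / dt, ( y1 - y0 ) / dt, ( z1 - z0 ) / dt )
             for ( x0, y0, z0 ), ( x1, y1, z1 ) in zip( verts_open, verts_close ) ]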

The way we’ve always solved the inconsistent topology issue is to generate the two meshes within the same frame interval. This works because most mesh generators, when asked for a mesh anywhere inside a single frame interval (for example, frame 1.5 to 2.4999), will produce the same topology. They do this specifically for motion blur purposes.

Is there some shutter time interval in which you can generate your Naiad mesh where it will have consistent topology?

The other solution is to use a mesh format that supports vertex velocities. For example, our own XMESH format is already supported. XMESH stores vertex velocities, so when using XMESH meshes you would not need to specify multiple meshes in a motion block. However, at the moment this would only work when saving meshes from 3ds Max, as our XMESH savers for other 3D applications are not as fully featured.
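For comparison, a velocity-carrying mesh would not need a motion block at all in the scene file. Something along these lines (a sketch with a hypothetical file name, assuming ri.MeshFile() also accepts .xmesh paths):

#...
# a single XMESH sample; its stored vertex velocities drive the motion blur,
# so no MotionBegin/MotionEnd block is required (hypothetical file name)
ri.MeshFile( "matte_object.xmesh" )
#...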

Hope this helps. Please post more info on the issue and I’ll help any way I can.

In addition to what Conrad explained, our Frost particle mesher provides a dedicated mode (Frame Velocity Offset) to generate consistent topology within the shutter interval. Basically, it creates one mesh at the center of the interval. Then it calculates the influence of the particles closest to each vertex to deform that mesh into a plausible representation of the subframe mesh without changing topology. So XMesh is able to generate vertex velocities from this mode, and all renderers can get consistent topology for direct motion blur generation. Given that the whole purpose of this is to blur the resulting image, it works pretty well although the subframe mesh does not look exactly the same as the actual remeshed particles.

I am not sure whether Naiad provides this option; you should consult with Exotic Matter.
Note that we are also developing a stand-alone version of Frost that could be used to mesh Naiad particles in the future.

If it’s just for matte objects (and not for volume point generation), couldn’t you also allow loading a large number of mesh samples? For example, load 100 OBJs per frame and just weight their opacity? I suppose you’d have to group them according to the shutter timing, but it would at least let you support standard OBJs from pretty much any application.

In short, yes, we could technically do it without interpolation of meshes since Krakatoa does multi-pass motion blur, in which case we would not need consistent topology. However, given the way Krakatoa is set up, this would be impractical at the moment.

Given the user settings (a shutter interval and a user-defined number of motion blur samples), if the right number of meshes were specified at the right times within the shutter open and close, we wouldn’t need to interpolate the meshes. The first issue is that the interface is more general than that (since I copied RenderMan-style motion blocks) and allows meshes to be specified at arbitrary times. The other issue is that it is not obvious to the user at what times Krakatoa will actually render its passes; it doesn’t render at the beginning and end of the motion blur interval. For example, if you specified a shutter of -0.1 to 0.1 and two motion blur samples, the internal rendering engine would actually composite two renders at times -0.05 and 0.05.
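To make that pass timing concrete, the sample times in the example above fall at the centers of equal sub-intervals of the shutter. A tiny sketch of that calculation (illustrative only, not the engine’s actual code):

# Sketch: sample times at the centers of equal sub-intervals of the shutter.
def pass_times( shutter_open, shutter_close, num_samples ):
    width = ( shutter_close - shutter_open ) / float( num_samples )
    return [ shutter_open + width * ( i + 0.5 ) for i in range( num_samples ) ]

print( pass_times( -0.1, 0.1, 2 ) )   # approximately [-0.05, 0.05]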

Thus, we would have to come up with a completely different interface to make meshes with inconsistent topology usable. Such an interface would probably let the user specify the exact times of each pass in the multi-pass motion blurred image, and restrict anything animated to those times.

I don’t think I’m going to do that at this point, since it would require a lot of changes to Krakatoa’s core engine. If there is a real need for something along these lines, we can discuss it.

As far as I know, Naiad EMP meshes imported into Maya have a checkbox called “Experimental Blur” to enable their vertex velocity. The problem is that the topology of these meshes changes on whole frames. So the topology is consistent from frame 1.00 to 1.99.

I would like to render 10 or more particle segments, but I don’t want to export OBJs of my high-res mesh 10 times. So as a next step I think I will rewrite the Krakatoa Exporter to export the matte OBJs only for the first and the last substep.

Since we use Maya as our main 3D application, the XMESH format is not an option for us. Perhaps the Frost particle mesher could work for us. Is there a way to render both parts (Frost and Krakatoa) together? Is there already an open beta version we could try? Maybe it would be nice if Frost were able to generate an Alembic mesh sequence?

Beyond particle meshing with Naiad or Frost, I think exporting, for example, a high-res character (interacting with the water and the Krakatoa splash particles) via Alembic would make sense. That was my intention with my initial post.

Frost as a standalone mesher will be entering beta soon (within a week or two)! We are currently finalizing a few things and doing some initial QA on the Linux builds. In terms of possible output formats, those will be addressed during the beta program.

As for XMesh for Maya, we have started the XMesh plugin for Maya with support for both saving and loading XMesh files. While it isn’t where the 3ds Max plugin is right now, it is fairly functional and has been used in production. If you would like to try it out, let us know!

-Ian

Hi haggy,

Thank you so much for the information on the EMP mesh’s topology!
At the moment, I have changed the Maya Exporter to allow tagging individual matte objects to disable subframe sample exports as a (pretty dirty) workaround. In this setup, objects (such as an EMP mesh) tagged with the new “enableTopoChangeWorkaround” attribute output only one .obj for the current frame and repeat the ri.Mesh() line with it as many times as there are blur segments. Obviously, by doing so, the tagged matte object gets no motion blur.
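For clarity, this is roughly what the exported scene lines look like for a tagged object with two blur segments (a sketch with a hypothetical file name, following the ri.MeshFile() style of the earlier example):

ri.MotionBegin( -0.00833333, 0.00833333 )
# the same current-frame mesh is repeated for every blur segment, so the
# differenced vertex motion is zero and the object renders without blur
ri.MeshFile( "emp_mesh_currentframe.obj" )
ri.MeshFile( "emp_mesh_currentframe.obj" )
ri.MotionEnd()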

I have attached the modified script and a test render here in case anyone wants to see it.
HOWEVER, BE WARNED: these changes are based on the Maya Exporter version before the latest one. I haven’t had time to work them back into the latest version posted here: viewtopic.php?f=116&t=7327

I suppose the next step is to modify the “enableTopoChangeWorkaround” to export frame 1.00 and frame 1.99 instead of just the current frame.
KrakatoaExporterMEL_izlin_03222012.zip (1.8 MB)