
Deep Images

I’m pretty interested in using Deep Images http://www.deepimg.com/ in some form to solve the Krakatoa compositing problem (i.e. some particles in front of geometry, some partially behind). Do you guys have any experience with tools based on these ideas? What sort of support is out there in compositing packages like Nuke and Fusion?

What sort of techniques are you currently using with Krakatoa to handle aliasing around matte objects?

Has anyone had success with using the OccludedLayer render elements to deal with this?

Assuming the current rendering system? Meaning doing a straight “painter’s algorithm” to a fixed raster buffer? Hmm…

We’ve got a basic raymarcher in Fusion (though it is more advanced than what’s in those videos), and we could load a small, uniformly sampled volume set and have it running pretty easily. In the most common application, a view-aligned voxel array would be best, so you’d have something like a 4096 x 2048 x 16 array with emission and absorption fitting in under 2GB. That sounds like a pretty low number of slices, but with an approach like Deep Opacity Maps it might be feasible. For HD renders, or for lower-frequency renders, you could get more slices in less memory. The OpenEXR implementation you have for Deep Opacity Map shadows might work for saving out the actual render, at least for now. Of course, if you don’t mind using more memory, as with a CPU implementation, you could get more slices, but caching gets to be a pain pretty quickly. Not sure how much you’d gain in quality without some testing. There’s also something like Sony’s Field3D, which might let you save out a compact voxel format, but that would complicate the rendering process. The idea being that you aren’t drawing to the frame buffer, but just sampling into a voxel array and compressing the result?
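Rough numbers for that layout, assuming RGB emission plus RGB absorption stored as half floats (a back-of-the-envelope sketch, not a spec):

```python
# Rough memory estimate for a dense, view-aligned voxel slab storing
# RGB emission + RGB absorption as half floats (2 bytes per channel).
# Numbers are illustrative, matching the 4096 x 2048 x 16 example above.

def slab_bytes(width, height, slices, channels=6, bytes_per_channel=2):
    """Uncompressed size of a dense view-aligned voxel array."""
    return width * height * slices * channels * bytes_per_channel

gib = slab_bytes(4096, 2048, 16) / 2**30
print(f"4096 x 2048 x 16, 6 half-float channels: {gib:.2f} GiB")   # ~1.50 GiB

# An HD-resolution slab fits four times the slices in the same budget:
gib_hd = slab_bytes(1920, 1080, 64) / 2**30
print(f"1920 x 1080 x 64, 6 half-float channels: {gib_hd:.2f} GiB")  # ~1.48 GiB
```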

We’ve also done some work with pointcloud rendering in Fusion. Haven’t done PRTs, but we have pulled in CSV particle clouds. It’s about like what you see in 3ds Max viewports, where ~1 million points works OK, but it’s not able to handle Krakatoa-render-type counts. So either you have to resample to camera space or interpolate in the comp (which might not be so bad).
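For reference, a minimal sketch of that CSV path, assuming a hypothetical x, y, z, r, g, b column layout and decimating down to a viewport-friendly count:

```python
# Minimal sketch of pulling a CSV particle cloud into a comp-side point
# viewer, decimated to a count the viewport can handle (~1M points).
# The column layout (x, y, z, r, g, b) is assumed for illustration.

import csv
import random

def load_points(path, keep_max=1_000_000, seed=0):
    rng = random.Random(seed)
    points = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            x, y, z, r, g, b = map(float, row[:6])
            points.append((x, y, z, r, g, b))
    if len(points) > keep_max:
        points = rng.sample(points, keep_max)  # uniform random decimation
    return points
```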

Guess the question would also be, are we pulling the processed/sorted particles into the comp as either points or voxels or are we pulling the rendered result over?

Another question would also be, what are we compositing with? What are the other elements coming from, and could they also generate suitable images? I know Scanline can with RPFs, but I don’t know about others.

Regarding the implementation details, isn’t Pixar’s Deep Shadows approach patented? I wonder whether an implementation like the one they describe would be affected by this?

Marginally related… For those of you doing production volumetric data, how much of it is actually based on either color lookups of scalar data or on projected texture maps? If it’s either of those cases, the data load to do this would be a LOT lower. Instead of storing RGB for emission and RGB for absorption, you would just be storing one or more scalars like density or whatever. For FumeFX or the like, you’re already doing such a lookup, so it might be wasteful to go from scalar to color at render time and store that color, then have to manage that color in the comp. It would be MUCH easier to just pass the scalar values to the comp and do the color and integration there. I just don’t imagine that many Krakatoa renders are done from truly unique color data. Comments?
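To make that concrete, here is a hedged sketch of the comp-side lookup: the renderer only ships a density scalar, and a color ramp in the comp turns it into emission. The ramp values are placeholders.

```python
# Sketch of doing the scalar -> color lookup in the comp instead of baking
# RGB emission/absorption at render time. The density channel comes from the
# renderer; the color ramp lives in the comp and can be tweaked freely.

def lerp(a, b, t):
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def ramp_lookup(ramp, value):
    """ramp: sorted list of (position, (r, g, b)); value in [0, 1]."""
    if value <= ramp[0][0]:
        return ramp[0][1]
    for (p0, c0), (p1, c1) in zip(ramp, ramp[1:]):
        if value <= p1:
            t = (value - p0) / (p1 - p0)
            return lerp(c0, c1, t)
    return ramp[-1][1]

# Placeholder fire-style ramp: black -> orange -> warm white.
fire_ramp = [(0.0, (0.0, 0.0, 0.0)),
             (0.5, (1.0, 0.35, 0.05)),
             (1.0, (1.0, 0.9, 0.6))]

emission = ramp_lookup(fire_ramp, 0.72)  # density-driven emission color
```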

Nuke 6.3 will have support for Deep Compositing (there’s a vid from the masterclass on YouTube if I am not mistaken).

Generally we do use DeepComp here to some extent using in-house tools. What I’d be interested in most is actually getting all the fragments from a point rendering, as opposed to the whole volume. For volumetric renders you couldn’t skip cells or vary your step size anymore, since changing things in post might reveal parts that the renderer skipped, so that might add quite some overhead. For multiple fragments in a point render the overhead should be minimal, since the samples are generated anyway. So getting multi-fragment output including fragment weights would be great!
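As a rough illustration of what that multi-fragment output could look like per pixel (the layout and the flatten step are assumptions, not a proposal for a file format):

```python
# Sketch of per-pixel multi-fragment output: each pixel carries a list of
# (depth, premultiplied RGBA, weight) fragments, which the comp can re-sort,
# clip, or flatten after the fact. Field layout is illustrative.

def flatten(fragments):
    """Front-to-back 'over' of weighted fragments -> flat premultiplied RGBA."""
    out_r = out_g = out_b = out_a = 0.0
    for z, (r, g, b, a), w in sorted(fragments, key=lambda f: f[0]):
        r, g, b, a = r * w, g * w, b * w, a * w  # apply fragment weight
        t = 1.0 - out_a                          # remaining transmittance
        out_r += r * t
        out_g += g * t
        out_b += b * t
        out_a += a * t
    return out_r, out_g, out_b, out_a

pixel = [(12.0, (0.2, 0.1, 0.05, 0.3), 1.0),
         ( 4.5, (0.0, 0.3, 0.60, 0.5), 0.8)]
print(flatten(pixel))
```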

Regards,
Thorsten

Idea…

Could we supply a map to a future Krakatoa where we specified what depth each raster output buffer would be? I think we tried this a long while back, where we supplied a z-buffer image to the renderer and that’s where it set the culling for the objects. But in this case, the map would define the distances where the accumulations would happen, and by layering these maps together you could define which output buffer was used for each pixel at which depth.

It wouldn’t “help” compositing except that it would provide all of the results to the compositor that they needed. And if the compositor made the depth sampling maps itself and triggered a new Krakatoa render on Deadline, it would let the compositor make the decisions about what depth passes are needed.

I’m thinking the map could be just a float greyscale image, and if you accumulate the maps together, you could get the buffers you needed. Or you could use two channels to specify a start and end depth.
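A minimal sketch of how that routing could work, assuming a stack of per-pixel depth maps sorted near to far (names and layout are hypothetical):

```python
# Sketch of the map-driven slicing idea: a stack of float depth maps defines,
# per pixel, the boundaries between output buffers. Each particle splat is
# routed to the buffer whose depth interval contains its camera-space z.

def pick_buffer(boundary_maps, px, py, particle_z):
    """boundary_maps: list of 2D float images, sorted near to far.
    Returns the index of the output buffer this splat accumulates into."""
    for i, depth_map in enumerate(boundary_maps):
        if particle_z <= depth_map[py][px]:
            return i
    return len(boundary_maps)  # beyond the last boundary

# e.g. one boundary map (a matte object's zdepth) yields two buffers:
# index 0 = in front of the matte, index 1 = behind it.
```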

  • Chad

If I understand correctly, you propose using a series of float TexMaps to determine the cut off depths for rendering multiple images?

For example, supplying a zdepth render of the matte objects would create one image in front of the matte objects and one image behind.

How is this different from the layers in an .rpf image buffer? Most of the point cloud compositing, at a quick glance at the images in that paper, looks like a better viewer on the data. It lets you see the position of the pixels in 3D space rather than as a color channel. This reminds me of the unwrap UVW editors when they first appeared in the late nineties. There was UV mapping before then, but you couldn’t look at it except in the results.

I’m just skimming on this topic, so I might be missing something. We discussed this at SIGGRAPH two years ago, I believe. The concept is sound. But I would look at supporting the major compositing packages for guidance on implementation.

Ben.

I suppose my question is really: “Where should I look in the major compositing packages for stuff like this?”

I’m not seeing much in terms of robust standard methods in the comp packages. They don’t go much beyond a fog pass. They assume the object is opaque. This is something where the renderers need to provide more data before the comp packages, and compositing artists in general, would know what to do with it. It’s more likely to appear in Houdini or other 3D apps than in AE, Fusion, or Nuke, unless there is some camera that could record this type of footage.

B.

Correct. You could have a series of images, or a bunch of layers in one image. Whatever is convenient. The idea being that compositors are fairly good at making the images you would need, so they could generate the requests as images and have Krakatoa generate the various passes. The results just get composited together as per usual.

We tested this before when rendering lighting to emission was first enabled. We used geometry clipping, not map based clipping, but the results (rendered as half float) composited in Fusion to a very high degree of accuracy relative to a full 32-bit render in Krakatoa. The render was a bit slow, since we had to re-sort the particles for each clipping pass. If the sorting could be done all at once, then the particles could be binned based on their position vs the maps.

This would be a dead simple means of making good composites. It only covers Over-type alpha compositing where the object both occludes and is occluded by Krakatoa points, but that case comes up a LOT.
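The resulting comp is then just a stack of ordinary “over” operations, sketched here per pixel with made-up premultiplied RGBA values:

```python
# Sketch of the resulting comp: the Krakatoa slices and the geometry pass are
# simply stacked with 'over' operations, front-most layer first.
# Assumes premultiplied RGBA throughout; the sample values are placeholders.

def over(fg, bg):
    """Premultiplied 'over': fg + bg * (1 - fg.alpha), per pixel."""
    r1, g1, b1, a1 = fg
    r2, g2, b2, a2 = bg
    k = 1.0 - a1
    return (r1 + r2 * k, g1 + g2 * k, b1 + b2 * k, a1 + a2 * k)

particles_front = (0.10, 0.05, 0.02, 0.15)   # Krakatoa slice in front of geo
geometry        = (0.30, 0.30, 0.30, 1.00)   # opaque geometry pass
particles_back  = (0.05, 0.10, 0.20, 0.40)   # Krakatoa slice behind geo

pixel = over(particles_front, over(geometry, particles_back))
```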

Of course, the same approach could be done with bins generated inside Krakatoa from the actual geometry in the scene, not unlike what you do with the matte objects now, just done for more than one overlap. I just thought that the map idea would be easy to use with the compositing software.

If we’re trying to have Krakatoa export data to allow relighting/reshading in a compositing package, or to do stereoscopic correction of Krakatoa renders, then yeah, that’s a huge void, but I don’t think it’s a particularly common one, especially since lighting, shading, transforms, etc., CAN be done in Krakatoa. So maybe it’s enough to just have a nice means of getting the requests back into 3ds Max/Krakatoa. That’s just a pipeline problem.

  • Chad

I’m definitely looking at this as a solution to the compositing problem, as opposed to relighting/shading in post. With the OccludedLayer render element, you get a background layer in addition to the beauty pass. Your suggestion would allow for more layers AND also allow for alternative geometry renderers (via custom TexMaps).

How were you handling these problems in the past? Did you create culling geometry from depth images and do multiple renders?

Oh, I don’t think you’ll see it out of the box.

Like in Fusion, you can load an RPF and extract any layer you want. And last I checked (which was many versions ago) it did work as advertised, merging the FG and BG with coverage and depth correctly and without artifacts (except in the literal corner cases). The issue for pipelines was that few renderers could actually make the RPFs properly, and you ended up with a huge I/O problem anyway. Fusion will do depth compositing if you supply the Merge tool with the appropriate channels, and I THINK you can make those channels in Krakatoa (in theory, we can’t access coverage or weighting now, but we could if you added that, right?).

Making pointclouds or displaced camera projections can be an easy way to represent the color+depth data in a 3D comp package like Fusion or Nuke. But I’m not sure those are going to be useful for Krakatoa except for viewport proxy.

It’s possible we could make a PRT loader for Fusion, or do some voxel output using some Frost-like sampling. Either of those might be useful for some cases, but they aren’t out-of-the-box solutions.

Yes, that’s what I was doing. It’s tedious, but I know it works (and I know that none of the major compositing packages care about RGB absorption :slight_smile: but that can be fixed easily).

So yes, my suggestion would be to just handle more layers and to do them all in one render, so the user doesn’t have to manually queue up the culling objects.

The idea about the map based input was just an idea on how to allow a compositing package to provide culling/occlusion information without making geometry. Nuke and Fusion will let you send out FBX geometry, but if your comp is not based on 3D meshes, but on z-buffers, then the best way to get the composite to generate the requests may be to import float depth maps.

  • Chad

While I still think my idea for the depth slices would be easiest all around, another option to consider is rendering to a voxel array, or deep opacity maps, or Field3D or whatever. Let’s take the simple voxel case…

You define a voxel bounding box and a resolution. Then, after the particles are lit, you splat and accumulate them into the voxels. Assuming the particles are sorted along the longest axis of the voxel array, you can just stream them out, so you don’t need to have both a massive point set and voxel set in memory at the same time. You would populate the appropriate channels, just like you would when saving a PRT. So maybe you save out density, emission, scattering, and absorption. You don’t need position, just a transform matrix for the whole array. You might end up with 30 points in one voxel while another voxel gets none. When you finish splatting, the voxel array is optionally compressed. This will help with the empty voxels.
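A naive version of that splat/accumulate step might look like this (channel names follow the list above; nearest-voxel splatting and the density weighting of emission/absorption are illustrative choices, not how Krakatoa actually does it):

```python
# Naive splat/accumulate of streamed particles into a dense voxel grid.
# Only the voxel arrays need to stay resident; particles stream through.

import numpy as np

def splat(particles, grid_min, grid_max, resolution):
    """particles: iterable of dicts with 'position', 'density',
    'emission' (RGB), and 'absorption' (RGB). Accumulates into the
    nearest voxel only (no kernel or trilinear splatting)."""
    res = np.asarray(resolution)
    lo, hi = np.asarray(grid_min, float), np.asarray(grid_max, float)
    density = np.zeros(res, dtype=np.float32)
    emission = np.zeros((*res, 3), dtype=np.float32)
    absorption = np.zeros((*res, 3), dtype=np.float32)
    for p in particles:
        idx = ((np.asarray(p["position"]) - lo) / (hi - lo) * res).astype(int)
        if np.any(idx < 0) or np.any(idx >= res):
            continue  # particle falls outside the voxel bounding box
        i, j, k = idx
        density[i, j, k] += p["density"]
        emission[i, j, k] += np.asarray(p["emission"]) * p["density"]
        absorption[i, j, k] += np.asarray(p["absorption"]) * p["density"]
    return density, emission, absorption
```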

Now that this dataset is on disk, we can post-process it. We can convert it to deep opacity maps or deep shadow maps, blur it, mesh it, upsample, downsample, composite, clip, etc. Loaded into Fusion or Nuke, we can render depth of field, relight, composite with geometry, composite with other voxels or deep opacity maps or deep shadow maps. Back in Krakatoa, we could use the voxels to resample to a new point set, or we could use the voxels to render reflections/refractions with our favorite raytracer.
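As one example of such a post-process, a view-aligned extinction slab can be collapsed into accumulated opacity per slice, roughly the deep-opacity-map idea, using Beer-Lambert along the view axis (a sketch, with the depth axis assumed to be the last one):

```python
# Collapse a view-aligned extinction slab into opacity accumulated from the
# camera to each slice (deep-opacity-map style). 'extinction' is density
# times the absorption coefficient per voxel; axis 2 is the depth axis.

import numpy as np

def accumulated_opacity(extinction, step_size):
    """extinction: float array shaped (width, height, slices).
    Returns opacity in [0, 1] accumulated from the camera to each slice."""
    optical_depth = np.cumsum(extinction * step_size, axis=2)
    return 1.0 - np.exp(-optical_depth)
```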

Right now you’re probably thinking “but aren’t voxels huge?” Sure, but we could do clever things, like warp the voxel grid so it was packed into the view frustum, or make it hierarchical, or an octree, or we could have a Deadline job that post-processed the raw voxels into deep shadow maps (or deep opacity maps; I’m not sure where Pixar stands with this darn patent). I don’t think the size is really an issue, just like rendering a billion particles at a time isn’t that big of an issue anymore. But you gain a LOT of flexibility, especially for filtering and compositing.
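On the frustum-packing idea, a rough sketch of mapping camera-space positions into such a grid (the fov handling and linear slice spacing are just illustrative choices):

```python
# Sketch of packing the grid into the view frustum: camera-space positions
# map to (screen_x, screen_y, depth_slice) indices, so voxel resolution is
# spent where the camera is actually looking.

import math

def frustum_index(pos_cam, fov_deg, aspect, near, far, res):
    """pos_cam: (x, y, z) in camera space, looking down +z.
    res: (nx, ny, nz). Returns integer voxel indices, or None if outside."""
    x, y, z = pos_cam
    if z <= near or z >= far:
        return None
    half_w = math.tan(math.radians(fov_deg) * 0.5)  # horizontal fov
    sx = x / (z * half_w)                # [-1, 1] across the image width
    sy = y / (z * half_w / aspect)       # [-1, 1] across the image height
    if abs(sx) > 1.0 or abs(sy) > 1.0:
        return None
    t = (z - near) / (far - near)        # linear slice spacing; could be log
    nx, ny, nz = res
    return (int((sx * 0.5 + 0.5) * (nx - 1)),
            int((sy * 0.5 + 0.5) * (ny - 1)),
            int(t * (nz - 1)))
```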

And if ALL you do is the naive splatting (and we know Frost can splat!), we’d still consider that a pretty cool option that TDs could use with their voxel-centric pipelines.

  • Chad

Jumping into this conversation late…

Recently Pixar started offering free (as in no-charge) access to their “librix” library, which includes access to Pixar file formats such as .dtex deep images, .ptc pointcloud files, etc. The library can be linked to without triggering any licensing code. However, to get access to librix, you have to talk to Pixar’s RenderMan team directly.

Since Nuke 6.3 now supports .dtex out of the box, it would be very cool to have this data from Krakatoa as well.
