This image was made with DOF and a Bcam with an f-stop of 1.2. For whatever reason, the spinner stops at 1.2, even though lenses with larger apertures exist, albeit rare and expensive ones.
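For reference, the f-number is just focal length divided by aperture diameter, so a lower f-stop means a wider aperture and shallower depth of field. A quick sketch of the relationship (the 50mm focal length is an arbitrary example value, not anything from the scene above):

```python
# Sketch of the f-stop relationship: f_number = focal_length / aperture_diameter.
# A lower f-number means a physically wider aperture and a shallower depth of field.

def aperture_diameter(focal_length_mm: float, f_number: float) -> float:
    """Physical aperture diameter in mm for a given focal length and f-stop."""
    return focal_length_mm / f_number

print(aperture_diameter(50.0, 1.2))  # ~41.7mm -- very wide (and expensive) glass
print(aperture_diameter(50.0, 8.0))  # 6.25mm -- much deeper depth of field
```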
We’ve talked about using a point-based renderer before for the flexibility it gets us with animation. It’s by no means as efficient for things that don’t animate as simply raymarching through a volume, though.
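For contrast, this is roughly the kind of loop a volume raymarcher runs per pixel; a minimal sketch, where sample_density() is a hypothetical stand-in for a trilinear lookup into a voxel grid, and the step counts and extinction value are illustrative:

```python
import math

# Minimal front-to-back raymarching sketch through a density volume.
# sample_density() is a hypothetical trilinear lookup into a voxel grid.
def raymarch(origin, direction, sample_density, num_steps=128, step_size=0.05, extinction=4.0):
    color, transmittance = 0.0, 1.0
    for i in range(num_steps):
        # Advance along the ray and sample the volume at this point.
        p = [origin[k] + direction[k] * step_size * i for k in range(3)]
        density = sample_density(p)
        # Standard exponential absorption per step (Beer-Lambert).
        alpha = 1.0 - math.exp(-extinction * density * step_size)
        color += transmittance * alpha   # accumulate (greyscale emission here)
        transmittance *= 1.0 - alpha
        if transmittance < 0.01:         # early out once the ray is opaque
            break
    return color
```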
The problem is we were just waiting for a backbone application to build it in. Brazil seemed likely, but B2 has been delayed for so long and is way below our expectations on this front. We use Deadline, so MR has been price-prohibitive, and the rendering engine we currently use, VGMax, has been promising us a new version for about 4 years now. It’s just been a waiting game.
In-between particles CAN be generated, but “elegance” is in the eye of the beholder. It would be better to have Krakatoa scale particles by distance, but we can spawn and offset positions based on distance to camera and local density. It’s just not the most obvious thing, and it increased render times while decreasing flexibility to do things like move the camera.
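To make that concrete, here’s a rough sketch of the idea in Python. None of the names, distances, or counts are Krakatoa or Box 3 calls; they’re invented for illustration:

```python
import math
import random

# Sketch of distance-based densification: spawn extra particles near the camera,
# jittered within the local neighborhood, so close-up regions stay dense.
def densify(particles, camera_pos, near=10.0, far=100.0, max_extra=4, jitter=0.2):
    out = []
    for p in particles:
        out.append(p)
        dist = math.dist(p, camera_pos)
        # Closer particles get more in-between copies; beyond 'far' they get none.
        t = max(0.0, min(1.0, (far - dist) / (far - near)))
        for _ in range(int(t * max_extra)):
            out.append(tuple(c + random.uniform(-jitter, jitter) for c in p))
    return out
```

The flexibility loss mentioned above falls out of this directly: the spawn counts are baked against one camera position, so moving the camera means regenerating everything.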
The same thing happened when I tried to change the diffuse falloff. It works, but it’s slow and complex to replicate by hand something you’re used to the renderer handling by itself.
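By changing the diffuse falloff I mean reshaping the Lambert term yourself per particle; the exponent approach below is just one common way to do it, sketched with made-up parameter names:

```python
# Per-particle diffuse falloff, done manually instead of in the renderer.
# Raising the Lambert term N.L to a power reshapes the light-to-dark transition.
def diffuse(normal, light_dir, falloff=1.0):
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, n_dot_l) ** falloff  # falloff < 1 broadens, > 1 tightens
```

Trivial per sample, but evaluating it across tens of millions of particles every frame is where the render-time hit comes from.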
Oh, and of course closeups with voxels aren’t practical in general anyway. You can raymarch through a voxel that takes up 10% of the screen, and it looks lousy just because the detail isn’t there: at 1920 pixels across, that’s a single filtered density sample smeared over nearly 200 pixels. Better than disappearing, sure, but we don’t usually do closeups anyway.
Your VGMax pipeline is really cool, but how complex would it be to create a voxel shader for max?
The one for MR (http://www.shaders.moederogall.com) looks nice, but since neither the MR licensing nor the material-node integration is optimal, I failed to make use of it…
The particle-based volume has a really nice, “furry” look, so I think this is a great approach for some voxel problems…
We can skip VGMax entirely for some things. Segmentation is done mostly in Fusion now, and we can export the VGMax-compatible files directly from Fusion. At some point we would like to make tools that share data between 3ds max and Fusion, so that LUTs or filters in Fusion are duplicated in the Material Editor or in Box 3 sub-operators. We have the Box 3 SDK, so it’s easy for us to add our own filters there. We’re still fleshing out the pipeline for this sort of thing, seeing what works well, what works at all, and what is missing. So what we are doing is still, in our eyes, very primitive.
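As a sketch of what that sharing might look like, the simplest approach is a neutral LUT file on disk that a Fusion tool and a 3ds max script both read. The JSON layout below is invented for illustration; it isn’t an actual Fusion or Box 3 format:

```python
import json

# Hypothetical neutral 1D LUT interchange: one file on disk, read by both apps
# so the curve stays in sync. Layout is invented for illustration.
def write_lut(path, samples):
    """samples: list of (input, output) float pairs defining a transfer curve."""
    with open(path, "w") as f:
        json.dump({"type": "1d_lut", "samples": samples}, f)

def read_lut(path):
    with open(path) as f:
        return json.load(f)["samples"]

def apply_lut(value, samples):
    """Piecewise-linear evaluation of the LUT at 'value'."""
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        if x0 <= value <= x1:
            t = (value - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return samples[-1][1] if value > samples[-1][0] else samples[0][1]
```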
The “Nurf” look works well for some things. Bones, lungs, skin, etc. Speculars are needed for things like livers or kidneys. But we have that sorta working.
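The specular part is nothing exotic, just a Blinn-Phong-style highlight driven by gradient-derived normals; a minimal sketch, with illustrative parameters and the normals assumed to be computed elsewhere:

```python
# Blinn-Phong specular term for "wet" surfaces like livers or kidneys.
# In practice the normals would come from the density gradient of the volume;
# here they're just passed in. Parameters are illustrative.
def specular(normal, light_dir, view_dir, shininess=32.0):
    half = [l + v for l, v in zip(light_dir, view_dir)]
    length = sum(h * h for h in half) ** 0.5
    if length == 0.0:
        return 0.0
    half = [h / length for h in half]
    n_dot_h = max(0.0, sum(n * h for n, h in zip(normal, half)))
    return n_dot_h ** shininess
```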
Ben Rogall’s stuff is nice. We could have extended it with a lot of new code, but the MR licensing wasn’t something we wanted to get into. If we had, though, we’d probably have tried to hire him to work on our stuff.
Note to Frantic: I’ve asked numerous people working on Deadline for this before. Please allow a “state”-based system alongside the “task”-based system. Backburner, Fusion Clustering, and all the satellite rendering systems, as well as simulators like NanoHive, need “state”-based monitoring: a slave isn’t assigned a task, it is told to run an application until it is told to stop. The job runs continuously until some other condition is met, most likely the user suspending it. Priorities are maintained, so you can have Fusion Clustering running on all machines at a priority of 10, but if a new 3ds max render job comes in at a priority of 50, slaves in the 3ds max pool stop clustering and start doing task-based rendering. This setup would give Deadline users free MR rendering as well as support for numerous DC applications.
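A toy sketch of the preemption logic I mean. None of this is the Deadline API; it’s just the scheduling rule written out in Python:

```python
# Toy sketch of "state" jobs alongside "task" jobs: a slave runs a state job's
# process indefinitely, but a higher-priority job preempts it.
# Illustrative scheduling logic only, not the Deadline API.

class Job:
    def __init__(self, name, priority, kind):
        self.name, self.priority, self.kind = name, priority, kind  # kind: "state" or "task"

def pick_job(current, queued):
    """Return the job the slave should be running right now."""
    best = max(queued, key=lambda j: j.priority, default=None)
    if current is None:
        return best
    # A state job keeps running until something with higher priority arrives.
    if best is not None and best.priority > current.priority:
        return best  # preempt: stop clustering, start the render task
    return current

# Example: Fusion Clustering at priority 10 gets bumped by a max render at 50.
clustering = Job("Fusion Clustering", 10, "state")
render = Job("3ds max render", 50, "task")
assert pick_job(clustering, [render]) is render
```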