A Few Wishes: Some New Feature Requests

Hello,

Here are some features I wish could make it into the next release of Krakatoa.

1.> Could the PRT Loader have a per-object motion blur property, so that some PRT Loaders could render with motion blur disabled while others have it enabled?

2.> When using multiple PRT Loaders, I wish I could render them separately. It would help if PRT Loaders had object-based properties like “Visible to Camera”. For example, say I want to render only one PRT Loader but still receive the shadow cast by a second PRT Loader, which should itself be invisible to the camera. If I could uncheck “Visible to Camera” on that second loader, it would disappear from the view but still cast its shadow onto the first loader’s plate. That way it would be possible to render multiple PRT Loaders with correct shadow information.

3.> Again regarding the PRT Loader: I would also love what I’d call a Krakatoa particle sculpting tool. If we have particles saved in a volume box, would it be possible to sculpt them with a push/pull brush? Since the PRT Loader supports other modifiers, including FFD modifiers, I wonder whether a brush-based push/pull approach to sculpting particles would be possible, and whether it could be animated over time. It would be useful for making sand dunes rise, sand sculptures explode, and so on.

I hope these can be made possible.

I look forward to the new Krakatoa release.

Regards,

Jignesh Jariwala

As discussed already elsewhere, it would be possible to get the LOOK of it, but not the speed-up you are expecting.
In other words, we could add an option to ignore the Velocity channel in a PRT Loader, so its particles would render without motion blur even when motion blur is on. But there is no way to skip rendering some particles in some passes while rendering others in order to make it faster. As a result, some particles would blur while others wouldn’t, but the render would take exactly as long as in 1.1.
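To make the idea concrete, here is a minimal sketch in plain Python (not Krakatoa code; the function name is made up for illustration) of what "ignore the Velocity channel" amounts to: zeroing one loader's velocities makes its particles render sharp while motion blur stays on, but every particle is still processed in every pass, so nothing gets faster.

```python
# Sketch of the "ignore velocity" idea (plain Python, not Krakatoa code):
# zeroing the Velocity channel of one loader's particles makes them render
# sharp even when motion blur is on, without skipping them in any pass,
# so the render time stays the same.
def zero_velocity(velocities):
    """Return a copy with all velocities zeroed; the blur length becomes 0."""
    return [(0.0, 0.0, 0.0) for _ in velocities]

loader_a_vel = [(0.5, 0.0, 0.0), (0.0, 0.5, 0.0)]  # will motion-blur
loader_b_vel = zero_velocity([(0.5, 0.0, 0.0)])    # renders sharp
print(loader_b_vel)  # → [(0.0, 0.0, 0.0)]
```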

We were planning some core changes in the next version of Krakatoa to allow for handling particles per object, but at this point, due to the huge changes in other areas, we cannot tell whether those changes will make it in. In short, in the current 1.1 design, all particles are loaded into memory and handled as one big particle cloud. There is no channel marking a particle as a member of a specific source (mesh, PRT Loader, PFlow event etc.), so we cannot handle particles differently during the lighting and shading based on their origin. We were planning to add such a channel to allow things like changing the material of specific particles in place without having to reload them into memory, supporting light inclusions/exclusions, the camera visibility you just mentioned, etc. We have it on our list, but we will have to wait and see whether it will make it into the next version.
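The idea of such a source-marking channel can be sketched as follows (plain Python, with a hypothetical `SourceID` channel name; Krakatoa's real channel layout is not shown in this thread): once every particle carries its origin, per-source operations become cheap in-place filters over one big cloud instead of reloads.

```python
# Illustrative sketch (not Krakatoa code): one big particle cloud stored as
# flat channel arrays, with a hypothetical SourceID channel marking which
# object each particle came from. Per-source operations (material swaps,
# camera visibility) then become in-place filters instead of reloads.
particles = {
    "Position": [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)],
    "Color":    [(1, 1, 1)] * 4,
    "SourceID": [0, 0, 1, 1],  # 0 = first PRT Loader, 1 = second
}

def set_color_for_source(cloud, source_id, color):
    """Change the color of all particles from one source, in place."""
    for i, sid in enumerate(cloud["SourceID"]):
        if sid == source_id:
            cloud["Color"][i] = color

def visible_to_camera(cloud, hidden_sources):
    """Indices of particles drawn in the camera pass; hidden sources are
    skipped here but would still participate in the lighting pass."""
    return [i for i, sid in enumerate(cloud["SourceID"])
            if sid not in hidden_sources]

set_color_for_source(particles, 1, (1, 0, 0))
print(visible_to_camera(particles, hidden_sources={1}))  # → [0, 1]
```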

If you sculpted a mesh defining the particle deformation cage, then yes, the next version will allow you to deform particles by arbitrary deforming meshes, so you might be able to do that. Right now, you could use a Displacement modifier with animated textures to get something like rising dunes. In the next version, you will have a universal Krakatoa Skin Wrap WSM which will deform particles (or the vertices of other meshes) using any animated mesh. We are currently using it in production to generate animated particle clouds and deform them to follow a character animation, so we can have walking particle clouds.

Hi bobo,

Thanks for your reply and explanation. I ended up rendering all the PRT Loaders together.

I look forward to the new displacement and Skin Wrap features.

Also, I would love to see density resolution based on camera distance, so that foreground particles have more density and it decreases as particles recede into the background (when the camera is very near the foreground particles).

I don’t know if this is already possible in the current version of Krakatoa.

The problem is less the density itself and more the way the density is represented in the final rendering.
We have had looong threads about this in the past. There are two possible ways to deal with it: drawing particles as splats larger than pixel/subpixel-sized dots, or rendering a voxel representation of the particles. There might also be some way to combine the best of the two in a hybrid mode.

The first approach would cause particles to become bigger as they come closer to the camera. This can be somewhat achieved with the current DOF code, but it tends to slow things down. A real implementation would draw a disk or another shape, just like regular polygon renderers do with facing particles, but much faster, since we wouldn’t have to shade polygons. The Size or Scale channel of the particle system could be used to control the world size of each particle. When flying through a cloud, you would still see particles, but they would become much larger.
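A rough sketch of the splat idea, assuming a simple pinhole projection (this is illustrative math, not the renderer's actual code): a particle with a world-space size projects to a pixel radius proportional to size over depth, so particles grow on screen as they approach the camera.

```python
# Sketch of the splat approach (assumed pinhole projection, hypothetical
# function name): a particle with a world-space Size channel projects to a
# pixel radius proportional to size / depth, so nearby particles draw as
# large disks while distant ones stay subpixel-sized dots.
def splat_radius_px(world_size, depth, focal_length_px):
    """Projected radius in pixels for a particle at the given camera depth."""
    return world_size * focal_length_px / depth

near = splat_radius_px(world_size=0.1, depth=2.0, focal_length_px=1000.0)
far = splat_radius_px(world_size=0.1, depth=50.0, focal_length_px=1000.0)
print(near, far)  # → 50.0 2.0 — near particles splat much larger
```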

The second approach would produce a very smooth representation of the cloud and let you practically fly through it without ever seeing an individual particle. Depending on the voxel size, it might lose detail at the edges of the cloud, although small enough voxels could look just like particles while still giving a consistent density at close distances. This method would also produce smoother shadows with fewer moiré artifacts. It could even be used for the lighting pass only, in combination with regular point rendering in the final pass.
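The voxel approach can be sketched like this (a toy axis-aligned grid in plain Python, nothing like the renderer's real data structures): particles are binned into voxels and their densities accumulated, so the renderer samples a smooth field instead of individual points; the voxel size trades edge detail against smoothness.

```python
# Toy sketch of the voxel approach: accumulate per-particle density into an
# axis-aligned grid keyed by voxel coordinates. Smaller voxels keep more
# edge detail; larger ones smooth the cloud (and its shadows) at the cost
# of detail.
from collections import defaultdict

def voxelize(positions, densities, voxel_size):
    """Accumulate per-particle density into a sparse voxel grid."""
    grid = defaultdict(float)
    for (x, y, z), d in zip(positions, densities):
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        grid[key] += d
    return grid

pts = [(0.1, 0.1, 0.1), (0.2, 0.15, 0.05), (1.4, 0.1, 0.1)]
grid = voxelize(pts, densities=[1.0, 1.0, 1.0], voxel_size=0.5)
print(grid[(0, 0, 0)], grid[(2, 0, 0)])  # → 2.0 1.0
```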

I am happy to inform you that version 1.5 will feature at least one of these two methods :smiley:

If you could save the lighting out, then it would be possible to do something like that: render both particle systems together to get the lighting, then render each one separately. This way you could mix motion-blurred particles with non-blurred ones, specular with non-specular, and so on.

  • Chad

@chad : Thanks for your reply. Yes, that sounds like quite a workable solution. Anyway, I had already rendered it, but I will do some experiments with this in my free time.
Btw, sorry for being away for a year; I lost my password and only just recovered it with bobo’s help. I saw your post in the archived image contest thread. Thanks for the appreciation.

As you probably know already, lighting saving and loading is on our list for 1.5.
I am pretty sure the saving is more or less implemented; we just have to make it load again.

Right, which is why I pointed it out. It’s at least one method that would solve the problem that’s already in the mix.

Just to clarify, I think we need to not just save the lighting (which would be great) but also the SHADING. So the Lighting channel says something like “I’m lit by [12.5, 12.6, 10.2]” but the shading says “I AM [.5, .51, .4]”.

Like in the pre-1.0 days, when I did specular highlights using Box#3: I just set the color of the particle based on the normal vector, the light vector, the camera vector, and whether the point was occluded by other points. The unlit particles just had an explicit color that Krakatoa evaluated as the exact colors to render.

If we save lighting, then we can store the diffuse/isotropic lighting, and if we store shading, we can store the view-dependent lighting. Both would be useful; it just depends on whether you are trying to save the light that hits the particle or the light that reflects to the camera.
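The lighting-versus-shading distinction can be sketched with textbook shading math (hypothetical channel names, plain Python; the vectors are assumed normalized): the Lighting channel stores view-independent incident light, while a Shading channel would bake the final view-dependent color seen by this particular camera.

```python
# Sketch of the distinction (hypothetical channel names, standard Lambert
# and Phong terms): lighting_channel is "I'm lit by ..." (no camera
# involved); shading_channel is "I AM ..." (depends on the view vector).
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lighting_channel(normal, light_dir, light_color):
    """Incident diffuse light: view-independent, reusable for any camera."""
    n_dot_l = max(0.0, dot(normal, light_dir))
    return tuple(c * n_dot_l for c in light_color)

def shading_channel(normal, light_dir, view_dir, light_color, shininess=32):
    """Final color toward this camera: diffuse plus a Phong specular term."""
    diffuse = lighting_channel(normal, light_dir, light_color)
    # Reflect the light direction about the normal for the specular term.
    r = tuple(2 * dot(normal, light_dir) * n - l
              for n, l in zip(normal, light_dir))
    spec = max(0.0, dot(r, view_dir)) ** shininess
    return tuple(d + c * spec for d, c in zip(diffuse, light_color))
```

With the normal, light, and view all aligned, the lighting channel holds the full incident light while the shading channel additionally bakes in the highlight, which is exactly why it cannot be reused from another camera angle.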

I want to be able to make a partition of particles that are Phong shaded and a partition that is Schlick shaded and have those render together with correct density based occlusion.

How do I intend to have the Phong particles cast shadows on the Schlick particles, and vice versa? Currently, I would need to embed an extra data channel into the PRT, like MaterialIndex, then test against it for culling (which currently requires PFlow, though a Krakatoa “Select By Channel” modifier would be cool too). Or I would need better control over the saving of partitions, so I could say “PRT Loader 1 Phong” gets saved to “PRT_LD1_.prt” and “PRT Loader 2 Schlick” gets saved to “PRT_LD2_.prt”. Then I would just render twice (once with Phong, once with Schlick) and delete the PRTs I don’t need (PRT_LD1_Schlick.prt and PRT_LD2_Phong.prt), or there could be checkbox controls over which ones get written to disk.
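The MaterialIndex culling idea can be sketched like this (hypothetical channel name, plain Python standing in for PFlow or a "Select By Channel" modifier): render two passes, each shading only the particles whose index matches, while the full cloud is still available in both passes for occlusion and shadows.

```python
# Sketch of channel-based culling (hypothetical MaterialIndex channel):
# each pass shades only the matching particles; the rest of the cloud
# would still occlude and cast shadows in that pass.
PHONG, SCHLICK = 0, 1

cloud = [
    {"pos": (0, 0, 0), "MaterialIndex": PHONG},
    {"pos": (1, 0, 0), "MaterialIndex": SCHLICK},
    {"pos": (2, 0, 0), "MaterialIndex": PHONG},
]

def pass_particles(particles, material_index):
    """Particles shaded in this pass; the others only occlude/shadow."""
    return [p for p in particles if p["MaterialIndex"] == material_index]

print(len(pass_particles(cloud, PHONG)),
      len(pass_particles(cloud, SCHLICK)))  # → 2 1
```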

  • Chad