Wishlist: Virtual Normal Particle

Wishlist:



For each particle, create a “virtual particle” a user-specified distance along the normal of the real particle.



Pass both particles to the modifier stack.



At render, orient the normal vector of the real particle to align with the virtual particle.



This will allow normals to be modified by any deformer. By allowing a user-specified distance from the real particle, the normal can be tweaked according to the shape of the deformer.



A maximum transform threshold could also be defined. If a virtual particle is beyond this distance from the real particle (caused by some discontinuity in the deformer) the real particle will search for the nearest virtual particle and orient the normal to that.
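
Roughly, in Python/NumPy terms (just a sketch to make the wish concrete: deform() stands in for whatever the modifier stack does to a point cloud, and the function and parameter names are made up):

import numpy as np

def deformed_normals(positions, normals, deform, offset=1.0, max_dist=None):
    # positions : (N, 3) real particle positions
    # normals   : (N, 3) unit normals before deformation
    # deform    : stand-in for the modifier stack, maps an (N, 3) point cloud
    # offset    : user-specified distance of the virtual particle
    # max_dist  : optional maximum transform threshold
    virtual = positions + offset * normals      # one virtual particle per real particle
    p_def = deform(positions)                   # real particles through the stack
    v_def = deform(virtual)                     # virtual particles through the stack

    delta = v_def - p_def
    dist = np.linalg.norm(delta, axis=1)

    if max_dist is not None:
        # If a pair got spread apart by a discontinuity in the deformer,
        # re-pair the real particle with the nearest virtual particle.
        for i in np.where(dist > max_dist)[0]:
            d = np.linalg.norm(v_def - p_def[i], axis=1)
            delta[i] = v_def[np.argmin(d)] - p_def[i]

    # At render, the normal points from the real particle toward the virtual one.
    length = np.linalg.norm(delta, axis=1, keepdims=True)
    return delta / np.maximum(length, 1e-20)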


  • Chad


>Wishlist:
>This will allow for normals to be modified by any deformer.

Fixed in v1.1.0

 

It wasn’t really broken in 1.0. Is this a new feature then?


  • Chad

>It wasn't really broken in 1.0. Is this a new feature then?
>
>- Chad

Deformers did not affect normals in 1.0, only velocities.

Now they do. If you bend a cloud of particles, its "surface" normals will also bend.

So I should have said "Added in 1.1.0", not "Fixed..." :o)

We also added an option to see the normals in the viewport so you can get an idea how they are deforming.

Since deformers don’t pass rotation, how are you determining the new normal? If you are sampling a second point, where is that located in relation to the first point?

>Since deformers don't pass rotation, how are you determining
>the new normal? If you are sampling a second point, where
>is that located in relation to the first point?

Instead of passing two point clouds through the deformer (position and velocity vector), we now pass three. The normal is represented just as a new point in space offset from the original particle's position. I *assume* it is offset at a distance of one unit (because it is a normal, after all), but I will let Darcy answer the technical details.
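
A rough sketch of how I picture the three point clouds going through the stack and being decoded afterwards (Python/NumPy; the names, the dt step, and the one-unit offset are my assumptions, not the actual code):

import numpy as np

def push_through_stack(pos, vel, nrm, deform, dt=1.0):
    # pos    : (N, 3) particle positions
    # vel    : (N, 3) particle velocities
    # nrm    : (N, 3) unit normals (assumed one-unit offset, as above)
    # deform : stand-in for the modifier stack applied to a point cloud
    # Encode velocity and normal as two extra point clouds offset from the position.
    vel_pts = pos + vel * dt
    nrm_pts = pos + nrm                 # one-unit offset, since it is a unit normal

    pos_d = deform(pos)
    vel_d = deform(vel_pts)
    nrm_d = deform(nrm_pts)

    # Decode: velocity is the displacement of its point over dt, and the
    # normal is the offset direction renormalized back to unit length.
    new_vel = (vel_d - pos_d) / dt
    offset = nrm_d - pos_d
    new_nrm = offset / np.maximum(np.linalg.norm(offset, axis=1, keepdims=True), 1e-20)
    return pos_d, new_vel, new_nrm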

The reason I ask is that there could easily be situations (like with a volume select followed by an xform) where the “real” point and the “normal” point do not get transformed together, but get spread FAR apart (which would be bad).



You would either have to have the “normal” point be VERY VERY close to the original, which won’t fix everything and could have precision issues, or have a threshold where, if the distance between the points exceeds it, the “real” point would have to interpolate its normal from nearby “real” points whose “normal” points are below the threshold. This would be cool, but slow, I suspect.
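
That interpolation fallback could look something like this (Python/NumPy sketch; inverse-distance weighting over the k nearest “good” particles is just one possible choice, and the names are mine):

import numpy as np

def repair_normals(pos, nrm, below_threshold, k=8):
    # pos             : (N, 3) deformed "real" particle positions
    # nrm             : (N, 3) normals recovered from the "normal" points
    # below_threshold : (N,) bool, True where the point pair stayed close enough
    good = np.where(below_threshold)[0]
    fixed = nrm.copy()
    for i in np.where(~below_threshold)[0]:
        # Blend the normals of the k nearest particles whose "normal" point
        # stayed below the threshold, weighted by inverse distance.
        d = np.linalg.norm(pos[good] - pos[i], axis=1)
        order = np.argsort(d)[:k]
        nearest = good[order]
        w = 1.0 / np.maximum(d[order], 1e-6)
        blend = (nrm[nearest] * w[:, None]).sum(axis=0)
        fixed[i] = blend / np.linalg.norm(blend)
    return fixed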



I just didn’t know what was going on currently.


  • Chad

>The reason I ask is that there could easily be situations (like
>with a volume select followed by an xform) where the "real" point
>and the "normal" point do not get transformed together, but get
>spread FAR apart (which would be bad).



This is a good point, I wonder if we should explicitly deal with this particular case? One way to do that would be to add a little processing pass between each modifier we run, copying the selection (soft or hard) from the primary vertices to the offset normal vertices.
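
Something like this, structurally (Python sketch with made-up names; select() and deform() stand in for a modifier's selection and deformation steps):

def run_stack(primary, normal_pts, modifiers):
    # primary    : (N, 3) primary vertices (the real particles)
    # normal_pts : (N, 3) paired offset normal vertices
    # modifiers  : list of (select, deform) pairs; select returns a per-vertex
    #              soft/hard selection weight, deform moves a point cloud
    #              according to those weights
    for select, deform in modifiers:
        sel = select(primary)            # selection evaluated on primary verts only
        # Processing pass between modifiers: copy the selection (soft or hard)
        # from each primary vertex to its offset normal vertex so the pair is
        # transformed together by the modifier.
        primary = deform(primary, sel)
        normal_pts = deform(normal_pts, sel)
    return primary, normal_pts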



Are there any other examples which have similar potential problems?



-Mark

Would be nice if that “Copy Selection Weight to Normal Point” operation was a modifier. Then we could insert it if needed, say after the volume select. So you could use it for lots of cases.



There isn’t a Skin modifier that works yet, but if there were, you would have the same issue with bone weights, either through envelopes or weight painting. Guess we would need to know how to access these “normal” points in the PRT Loader in order to implement a modifier like that.



FFDs that are set to “inside volume” could run into this issue where the “real” and “normal” points don’t both fall inside the volume. FFDs are so useful, but so antique inside Max…



Off topic, but with the new “get normals from nearest face” operation, it would be cool to try that with your PRT Mesher. Dependency loop, no doubt, but it would be a fast way to assign normals.



Further off topic, any chance we could parent culling objects to PRT Loaders? It’s a dependency loop now, but since the culling object does not affect the transforms, it doesn’t really need to be.


  • Chad

Affect Region would also be a case where this wouldn’t work when “Ignore Backfacing” is checked, but having just checked, it doesn’t work anyway. It seems all the points have a certain “direction”, and the backfacing test makes Affect Region only work through a hemisphere.



Something I didn’t know until I tried it just now.


  • Chad

>Would be nice if that "Copy Selection Weight to Normal Point"
>operation was a modifier. Then we could insert it if needed,
>say after the volume select. So you could use it for lots of cases.



I was imagining doing this after every modifier, so that this would never be a problem. Would there ever be a need to turn it off?



-Mark

>Further off topic, any chance we could parent culling objects
>to PRT Loaders? It's a dependency loop now, but since the culling
>object does not affect the transforms, it doesn't really need to be.



I can’t really think of a way to do this, and low-level mucking with the dependency graph is generally unpleasant. I think the standard workaround of parenting both the PRT Loader and the culling objects to a dummy, to which the animation is applied, is the best approach.



-Mark

I can’t think of any reason why you would want to have different selection weights, no. The idea is to have two samples of position, not of selection.



The dummy thing is the workaround we’ve been using, yes.