For each particle, create a “virtual particle” a user-specified distance along the normal of the real particle.
Pass both particles to the modifier stack.
At render, orient the normal vector of the real particle to align with the virtual particle.
This will allow normals to be modified by any deformer. By allowing a user-specified distance from the real particle, the normal can be tweaked according to the shape of the deformer.
A maximum transform threshold could also be defined. If a virtual particle ends up beyond this distance from the real particle (caused by some discontinuity in the deformer), the real particle will search for the nearest virtual particle and orient its normal to that instead.
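Roughly, the idea as a Python/NumPy sketch (the deformer callable, offset distance, and threshold are illustrative names, not the real implementation):

```python
import numpy as np

def deform_with_normals(positions, normals, deform, offset=1.0, max_dist=None):
    """Carry normals through an arbitrary deformer by deforming a 'virtual'
    point placed `offset` units along each real particle's normal."""
    virtual = positions + normals * offset        # step 1: virtual particles
    new_pos = deform(positions)                   # step 2: deform real points...
    new_virtual = deform(virtual)                 # ...and the virtual points
    delta = new_virtual - new_pos                 # step 3: re-orient the normal
    dist = np.linalg.norm(delta, axis=1, keepdims=True)
    new_normals = delta / np.maximum(dist, 1e-12)
    # Optional maximum transform threshold: flag pairs torn apart by a
    # discontinuity in the deformer, to be repaired in a later pass.
    bad = dist[:, 0] > max_dist if max_dist is not None else np.zeros(len(positions), bool)
    return new_pos, new_normals, bad
```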
Since deformers don’t pass rotation, how are you determining the new normal? If you are sampling a second point, where is that located in relation to the first point?
> Since deformers don't pass rotation, how are you determining the new normal? If you are sampling a second point, where is that located in relation to the first point?
Instead of passing two point clouds through the deformer (position and velocity vector), we now pass three. The normal is represented just as a new point in space offset from the original particle's position. I *assume* it is offset at a distance of one unit (because it is a normal, after all), but I will let Darcy answer the technical details.
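To make that concrete, the per-particle data might be laid out something like this (the channel names and the unit offset are my assumptions, not the actual PRT layout):

```python
import numpy as np

count = 100_000  # example particle count
channels = {
    "Position":    np.zeros((count, 3), dtype=np.float32),  # real particle positions
    "Velocity":    np.zeros((count, 3), dtype=np.float32),  # velocity vectors
    # The normal is just another point in space, presumably one unit along
    # the normal from Position; after the modifier stack runs, the normal
    # is recovered as normalize(NormalPoint - Position).
    "NormalPoint": np.zeros((count, 3), dtype=np.float32),
}
```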
The reason I ask is that there could easily be situations (like with a volume select followed by an xform) where the “real” point and the “normal” point do not get transformed together, but get spread FAR apart (which would be bad).
You would either have to have the “normal” point be VERY VERY close to the original, which won’t fix everything and could have precision issues, or have a threshold where if the distance between the points exceeds the threshold, the “real” point would have to interpolate its normal from nearby “real” points whose “normal” points are below the threshold. This would be cool, but slow, I suspect.
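Something like this brute-force version of that fallback (illustrative only; a spatial index would be needed to make it fast):

```python
import numpy as np

def repair_normals(positions, normals, bad, k=3):
    """For particles whose 'normal' point drifted past the threshold (`bad`),
    interpolate a new normal from the k nearest particles that stayed valid."""
    good = np.flatnonzero(~bad)
    if good.size == 0:
        return normals
    repaired = normals.copy()
    for i in np.flatnonzero(bad):
        d = np.linalg.norm(positions[good] - positions[i], axis=1)
        nearest = good[np.argsort(d)[:k]]
        blended = normals[nearest].sum(axis=0)
        repaired[i] = blended / max(np.linalg.norm(blended), 1e-12)
    return repaired
```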
This is a good point; I wonder if we should explicitly deal with this particular case. One way to do that would be to add a little processing pass between each modifier we run, copying the selection (soft or hard) from the primary vertices to the offset normal vertices.
Are there any other examples which have similar potential problems?
Would be nice if that “Copy Selection Weight to Normal Point” operation were a modifier. Then we could insert it where needed, say after the volume select, so it could be used in lots of cases.
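As a sketch of what that pass would do, assuming the real points and their normal points live in one array with the normal points stored after the real ones (layout and names are hypothetical):

```python
import numpy as np

def copy_selection_to_normal_points(weights, num_real):
    """Copy the soft-selection weight of each real particle onto its paired
    normal point (assumed stored at index num_real + i), so a selection-driven
    pair of modifiers (e.g. volume select + xform) moves both points together.
    Run between modifiers, or wrapped up as its own modifier."""
    weights[num_real:2 * num_real] = weights[:num_real]
    return weights
```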
There isn’t a Skin modifier that works yet, but if there were, you would have the same issue with bone weights, either through envelopes or weight painting. I guess we would need to know how to access these “normal” points in the PRT Loader in order to implement a modifier like that.
FFDs that are set to “inside volume” could run into this issue, where the “real” and “normal” points don’t both fall inside the volume. FFDs are so useful, but so antique inside Max…
Off topic, but with the new “get normals from nearest face” operation, it would be cool to try that with your PRT Mesher. Dependency loop, no doubt, but it would be a fast way to assign normals.
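For illustration, a “normals from nearest face”-style assignment could look something like this, approximating the nearest face by its centroid (not the actual operation):

```python
import numpy as np

def normals_from_nearest_face(points, face_centroids, face_normals):
    """Give each particle the normal of the mesh face whose centroid is
    closest. Brute force O(particles x faces); a real implementation would
    use a spatial acceleration structure."""
    result = np.empty_like(points)
    for i, p in enumerate(points):
        j = np.argmin(np.linalg.norm(face_centroids - p, axis=1))
        result[i] = face_normals[j]
    return result
```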
Further off topic, any chance we could parent culling objects to PRT Loaders? It’s a dependency loop now, but since the culling object does not affect the transforms, it doesn’t really need to be one.
Affect Region would also be a case where this wouldn’t work when “Ignore Backfacing” is checked, but having just checked, it doesn’t work anyway. It seems all the points have a fixed “direction”, and the backfacing test makes Affect Region only work through a hemisphere.
Something I didn’t know until I tried it just now.
I can’t really think of a way to do this, and low-level mucking with the dependency graph is generally unpleasant. I think the standard workaround of parenting both the PRT Loader and the culling objects to a dummy, to which the animation is applied, is the best approach.