Using Genome, would it be possible for each vertex to shoot a ray along its normal (really the opposite direction of the normal) and return a distance when it hits another face? It would then color the vertex based on how far it had to travel to find another face… then average with its neighbors to soften up the effect.
I’m trying to cook up a simple, fake SSS modifier that pipes values into vertex color based on how ‘thick’ or ‘spread out’ faces/verts are. Using a teapot as an example, the spout should turn ‘white’ since the volume of space between faces is small…while the ‘pot’ would be more grey/black colored since it has a large volume of space between faces.
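For reference, here is a minimal sketch of that exact idea outside of Genome, in Python with numpy and trimesh (the file name is a placeholder, and trimesh's ray casting is just a stand-in for whatever Genome exposes):

```python
import numpy as np
import trimesh

# Any closed triangle mesh; the path is a placeholder.
mesh = trimesh.load("teapot.obj")

# One ray per vertex along the inverted vertex normal. Nudge the
# origin slightly inward so the ray does not immediately hit the
# vertex's own faces at distance zero.
eps = 1e-4
origins = mesh.vertices - mesh.vertex_normals * eps
directions = -mesh.vertex_normals

locations, ray_ids, _ = mesh.ray.intersects_location(
    ray_origins=origins, ray_directions=directions, multiple_hits=False)

# Distance traveled before hitting the opposite side = "thickness".
# Rays that never hit anything simply stay NaN (a miss).
thickness = np.full(len(mesh.vertices), np.nan)
thickness[ray_ids] = np.linalg.norm(locations - origins[ray_ids], axis=1)
```

Averaging with neighbors to soften the result is sketched further down in the thread.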
Here is a possible implementation.
Note that it has some problems - for example, the top of the head has a white spot because the head is not a separate closed volume, and some rays from the top of the head travel all the way down to the pedestal, producing a larger thickness than the rays from the face that hit the back of the head…
I also added a red color to flag ray misses (when IsValid returns False). In the Stanford Bunny mesh, there are a few vertices with bad normals where the normal might be pointing the wrong way, producing the red warning color. The Buddha OTOH is nicely closed and it works everywhere, more or less…
The Exponent (Power operator) could be replaced with a Curve operator for finer falloff control. The Thickness value defines the range of the gradient - any thickness beyond that value will produce white.
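In other words, the remap described above boils down to something like this (a sketch; the Thickness and Exponent values are arbitrary):

```python
import numpy as np

# Hypothetical remap matching the described flow: normalize the ray
# distance by Thickness, clamp, then apply the Power falloff.
def thickness_to_value(distance, thickness=10.0, exponent=2.0):
    t = np.clip(distance / thickness, 0.0, 1.0)  # past Thickness -> 1.0 (white)
    return t ** exponent                         # the Power operator step

# thickness_to_value(1.0)  -> 0.01  (thin region stays dark)
# thickness_to_value(12.0) -> 1.0   (anything beyond Thickness is white)
```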
GNM_MeshThickness_MagmaFlow_v001.png
Here is the result without viewport shading, just the vertex colors:
A couple of questions:
I took a little video grab of the flow in action, and at the end you can see that when I move the object away from position 0,0,0, it goes all wonky/invalid. Why is this, and can it be adjusted so that the object doesn’t need to be at the origin?
I get some little red/invalid spots on this torus mesh… it’s watertight, and it seems that IntersectRay should be returning a valid distance for every check - any ideas why those little gremlins are in there?
Also - I hope to use this on some pretty ugly, open models. Could multiple rays (diverging from the opposite-normal ray’s origin) be used? In the case of the Buddha’s head, rays shot at various angles might help produce better results.
Could we throw out the invalid rays and average the valid ones with the neighbors within a variable radius, essentially softening/blurring the resulting values?
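For what it's worth, here is a rough Python sketch of that "cone of extra rays" idea (the angle, ray count, and function names are all made up; the distances passed to the averaging helper are assumed to come from ray tests like the one sketched earlier):

```python
import numpy as np

def cone_directions(normal, angle_deg=15.0, count=4):
    """Return the inverted normal plus `count` directions tilted off it."""
    n = -normal / np.linalg.norm(normal)
    # Build an orthonormal basis around the inverted normal.
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n, helper)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    a = np.radians(angle_deg)
    dirs = [n]
    for i in range(count):
        phi = 2.0 * np.pi * i / count
        dirs.append(np.cos(a) * n + np.sin(a) * (np.cos(phi) * u + np.sin(phi) * v))
    return np.array(dirs)

def averaged_thickness(distances):
    """Average only the valid hits; return NaN if every ray missed."""
    valid = distances[~np.isnan(distances)]
    return valid.mean() if len(valid) else np.nan
```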
Once again, you amaze me! I was showing this to the designers around our shop, and quite a few eyebrows went up - they are curious about what effects you could produce by piping this information into various map channels, or even converting it to a soft selection to allow other modifiers in the stack to work on the ‘thick’ areas - like relaxing only the thick areas.
Okay - I think I solved the issue where it was going all invalid when I moved it away from the 0,0,0 point - I just needed to switch the input type in the ToWorld to ‘Point’ - it was on ‘Normal’ by default.
I would still love to be able to shoot an opposite-normal ray, plus a few other rays per vert (maybe 4?). Ideally you would be able to hook up a float input to drive the angle of divergence.
Then the ability to average the values with the x closest neighboring verts to smooth it out. Think this is possible?
Another question: currently we are using ‘current mesh’ as the geometry we are testing against in ‘IntersectRay’ - say I wanted to check against the current mesh plus a handful of other meshes in the scene. Can you give me an idea of how that might be hooked up?
When I have the Position channel converted to World Space, I don’t see any problems when moving my mesh around.
If you intend to rotate the object, you will also have to convert the Normal ToWorld.
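The distinction matters because points and normals transform differently. Here is a small sketch with a plain 4x4 object-to-world matrix M (the function names are mine, not Genome's):

```python
import numpy as np

def to_world_point(M, p):
    """Points get the full transform, including the translation."""
    return (M @ np.append(p, 1.0))[:3]

def to_world_normal(M, n):
    """Directions ignore translation; under non-uniform scale, use the
    inverse-transpose of the upper-left 3x3 block to stay perpendicular."""
    d = np.linalg.inv(M[:3, :3]).T @ n
    return d / np.linalg.norm(d)
```

Feeding a Position through the Normal conversion drops the object's translation, which is exactly the "goes wonky away from 0,0,0" symptom described above.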
As for the misses, I have no idea what is going on - it looks like the kdTree is leaking some of the rays. I will log it as a bug and pass it on to the developer.
It only happens when using the Normal channel. Using the FaceNormal channel works correctly, but produces somewhat faceted shading.
It also works if the Smoothing of the Torus base object is set to None. So there is something about the Smoothed Normals (they might be hitting exactly at edges or vertices on the other side and missing the surface in the process).
Shooting multiple rays makes little sense IMHO. But you are welcome to try.
If you just drag from the input socket of the IntersectRay, the conversion node for Position should be set automatically to [P] mode.
There is a problem running a loop in Face Corners mode, but if you set the Genome to Vertices iteration mode and set the Selection channel to the calculated value, you could add another Genome to perform some averaging of neighbor vertices using VertexLoopByEdge. I suspect that would be enough to blur it. Shooting multiple rays will be slower and probably won’t produce very meaningful data.
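Conceptually, that second averaging pass amounts to something like this (a Python sketch, not Genome code; trimesh's vertex_neighbors adjacency list is standing in for VertexLoopByEdge):

```python
import numpy as np

def smooth_values(values, vertex_neighbors, iterations=1):
    """Blur per-vertex values by averaging each vertex with its
    edge-connected neighbors, skipping invalid (NaN) entries."""
    out = values.copy()
    for _ in range(iterations):
        prev = out.copy()
        for i, nbrs in enumerate(vertex_neighbors):
            ring = np.append(prev[nbrs], prev[i])
            ring = ring[~np.isnan(ring)]  # ignore the red/invalid verts
            if len(ring):
                out[i] = ring.mean()
    return out

# e.g. smoothed = smooth_values(thickness, mesh.vertex_neighbors, iterations=2)
```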
InputGeometry supports multiple meshes, but does NOT include the currentMesh, so you would need separate tests for the currentMesh and the rest of the scene. I will have to log this and see if we could have a checkbox “Include CurrentMesh” in the InputGeometry node…
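Outside of Genome, the equivalent workaround could be sketched like this (trimesh again as a stand-in, file names are placeholders; in Genome you would keep the two IntersectRay tests and take the smaller of the two distances):

```python
import trimesh

current = trimesh.load("current_mesh.obj")
others = [trimesh.load("prop_a.obj"), trimesh.load("prop_b.obj")]

# Ray-testing the union of all meshes is equivalent to taking the
# nearest hit across separate per-mesh tests.
combined = trimesh.util.concatenate([current] + others)
hits = combined.ray.intersects_location(
    ray_origins=origins, ray_directions=directions, multiple_hits=False)
```

(`origins` and `directions` here are the same per-vertex arrays from the first sketch.)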