First Look: Magma Loop Node

youtube.com/watch?v=FrHVjvx4SMU
:smiley:

Yay! This could make for some very interesting setups that were SUPER tedious before.

It would be nice if we could get counts and such out of collections. For example, when we use InputGeometry, InputObjects, or InputParticles, we could have several objects connected (like InputGeometry allows), output the count (which InputGeometry already does), but choose among them by index (which would be an input). Then we could iterate over each object/mesh/light.

Very exciting.

Now all that is missing is a bunch of functions like getAdjacentVerts, getVertsFromFace, GetFacesFromVerts, getAdjacentFaces etc.

THAT would unlock some really cool stuff.

I can’t wait to get some iterative effects going with the loops already.

We still won’t be able to loop over the inputs, right? Can’t do diffusion or relaxation or whatnot, because the particles themselves aren’t looping, the instructions are?
So while Bobo’s MagmaRay setup would be a lot simpler, with multiple hits/rays easier to set up, you wouldn’t be able to do adaptive sampling.

Will it support loop exiting?

Can the number of iterations vary based on some data and not be the same number of loops for every particle?

Already does.
We have a Max. Number of Iterations setting in the UI of the Loop just as a precaution, but the actual exit condition is the top output socket of the Loop body - it loops as long as the condition is true. If the max. number of iterations is reached, it will exit anyway (this is to avoid locking up if someone puts a constant True into the control socket).
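For readers mapping this to code, the exit semantics can be sketched in Python. The names `run_loop`, `body`, and `max_iterations` are illustrative, not actual Magma API:

```python
def run_loop(body, state, max_iterations):
    """Evaluate a loop body until its Condition output goes False, or the
    max-iteration safeguard trips (a sketch, not actual Magma internals)."""
    for _ in range(max_iterations):
        condition, state = body(state)  # body returns (Condition socket, data)
        if not condition:
            break  # the top output socket went False: normal exit
    # reaching max_iterations also exits, guarding against a constant True
    return state

def halve_until_small(x):
    """Example body: halve the value, keep looping while it is still large."""
    x *= 0.5
    return abs(x) > 0.01, x

result = run_loop(halve_until_small, 10.0, max_iterations=100)
```

With a constant-True condition the loop would still terminate after `max_iterations` passes, which is the safeguard described above.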

I already rewired the MagmaRay scene and it is now adaptive - if a ray misses, it exits the loop early. And I can now control the max. depth of the raytracing with a spinner and a single operator that does both primary and all secondary rays.

Lookie lookie here: viewtopic.php?f=136&t=8092

I meant adaptive in that it would evaluate the neighboring “pixels” and add more rays if the difference in color (or whatever other criteria you want) is above a certain threshold. You’d need another PRT object to do this, right?

Ah, that. Well, I cannot add more rays than there are particles in the “plane” PRT Volume. So if I have more particles than pixels, I can get good sampling, but no way to make it adaptive - I would have to shoot a ray through every particle regardless of what its neighbors are. But in Genome, I could check the results of the neighbor vertices and potentially shoot more rays to collect more data. In fact, Darcy bet yesterday that this will be the first thing people (meaning you) will ask for the moment we post that video :wink: So we have it on the list of things to explore…

You end up with a lock so you have to process all of the points/vertices/voxels/whatever before you can do the next iteration, whereas now you can stream the data through. But that’s what you do for the modifier stack anyway, so you could read in the result of one modifier with another on top, but there’s no way to have the modifier stack itself loop. :slight_smile:

So the guideline would be that anything that needed sampling from the current object would not work, but loops that don’t sample from the object itself would be fine. Still opens up a lot of new uses.

Very cool. I’m still not sure if this covers everything I want or not. Looping is missing from nearly every node-based package I can think of.

I like the idea of a second flow for the loop body. I could imagine this being a way to define functions as well. Sort of like local BLOPs, I suppose.

I think what is missing is a way to generate some “in place data” and define what should be done to that data. In this example, addition is the only operator, so you don’t need to hold on to that array of results. But it would be super powerful to pass an array as something to be processed - like for each neighbor, or for each light.

I agree that it will be important to add different kinds of loops that are specialized for certain collections, but the first implementation needs to be the most straightforward one so we can build out from a baseline. I foresee exactly the kind of loops you describe using the same infrastructure, packaged in different ways.

With the addition of the “Iterations” parameter at the Genome modifier level, we now have control that looks like (courtesy of Bobo):

FOR O = 1 to Iterations DO
  FOR M = 1 to NumberOf(Verts/Faces/MapVerts) DO
    FOR L1 = 1 to MaxIterationsLimit WHILE Condition == TRUE DO ...

The O loop is controlled by a spinner (future development will make that optionally a per-object magma expression). Changes to the mesh will be “seen” across these iterations. For example, if you changed the FaceSelection in one iteration the second iteration will report the new value when evaluating the InputChannel(FaceSelection) node. Previously you could achieve a limited form of this effect by making multiple instances of the same modifier on the stack.

The M loop is the same ol’ standard Genome loop over verts, faces, or face corners we already have. Since we are adding access to neighbor verts/faces/etc., we now make a copy of any data that is both being read and written in the expression, so that reads are never affected by the writes within the same M loop.
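A minimal Python sketch of that copy-on-write behavior, using a hypothetical smoothing pass (the names and the neighbor table are illustrative, not Genome internals):

```python
def smooth_iteration(positions, neighbors):
    """One M-loop pass: reads come from an untouched snapshot, so writes
    within the same pass never affect other elements' reads (a sketch of
    the double-buffering described above)."""
    snapshot = list(positions)  # copy of the channel being read AND written
    out = []
    for i in range(len(positions)):
        nbrs = neighbors[i]
        # average this vert with its neighbors, reading only the snapshot
        avg = sum(snapshot[j] for j in nbrs + [i]) / (len(nbrs) + 1)
        out.append(avg)
    return out

# Two O-loop iterations: the second pass "sees" the first pass's writes,
# like the Iterations spinner at the modifier level
pos = [0.0, 10.0, 20.0]
nbr = [[1], [0, 2], [1]]
for _ in range(2):
    pos = smooth_iteration(pos, nbr)
```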

With the addition of a Loop node, you get the L1 (and L2, and L3, … L# nested) loop expressions that are terminated by the user writing a Bool value to a pre-defined Condition output. Any reduction expression (i.e. one that only writes to a non-array variable) can be created with these. Individual evaluations of the loop can execute a varying number of times depending on the input values and the contained expression. I envision common usage situations to include:

- Root finding (ex. using a texmap as a levelset and moving the evaluation point along a vector until you reach 0)
- Summing/combining data over all the faces that connect to a vertex
- Allowing us to work with polygons instead of triangles only (i.e. we can loop over the N vertices per polygon instead of assuming 3-vertex triangles)
- Tracing trajectories with multiple bounces (ex. raycasts for lights, particle trajectories after bounces, etc.)
- ???
- Profit
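The root-finding bullet maps naturally onto a condition-terminated loop. A hedged Python sketch, where the `field` callable stands in for sampling a texmap as a levelset (all names and the fixed-step strategy are illustrative):

```python
def march_to_zero(field, point, direction, step=0.1, max_iterations=1000):
    """Move an evaluation point along a vector until the sampled field
    changes sign (a sketch of the levelset root-finding use case above)."""
    prev = field(point)
    for _ in range(max_iterations):  # the UI safeguard against runaway loops
        nxt = tuple(p + step * d for p, d in zip(point, direction))
        value = field(nxt)
        if prev * value <= 0.0:  # sign change: a root lies within this step
            return nxt
        point, prev = nxt, value
    return point  # hit the iteration cap without finding a root

# Example: a planar levelset f(x, y, z) = x - 1, marched along +X from origin
root = march_to_zero(lambda p: p[0] - 1.0, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

The returned point lands within one step of the true root; a real setup would likely shrink the step or bisect for more precision.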

I see the major change and killer feature being the ability to generate “array” results (for example, neighbor lists). Then you map a function against that array to get a result back for the vert.

ps.
If this can be kept in the functional style, I think it will be easier to teach and adopt than the procedural style. Implicit loops would be easier to grok.
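The array-result idea above might look like this functional-style sketch in Python. The `normals`/`neighbors` tables and every name here are illustrative placeholders, not a real Genome/Magma API:

```python
# Per-vert data for a tiny hypothetical mesh
normals = {0: (0.0, 0.0, 1.0), 1: (1.0, 0.0, 0.0), 2: (0.0, 1.0, 0.0)}
neighbors = {0: [1, 2]}

def map_over_neighbors(vert, fn, reduce_fn):
    """Build the neighbor array for a vert, map fn over it, then reduce
    the mapped values to one per-vert result (the implicit loop)."""
    mapped = [fn(n) for n in neighbors[vert]]
    return reduce_fn(mapped)

# e.g. sum the X components of the neighbors' normals for vert 0
x_sum = map_over_neighbors(0, lambda n: normals[n][0], sum)
```

The user writes only `fn` and `reduce_fn`; the iteration itself stays implicit, which is the teachability point made above.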

Can you diagram up the differences, as you see it?

How is progress towards a published build of this coming along? I desperately need to get the average of a bunch of normals and I know it’ll be a cinch with the loop node!

R
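Since averaging normals came up: the loop-node version of that task reduces to an accumulate-and-renormalize pattern, sketched here in plain Python with the vector math written out by hand (names are illustrative):

```python
import math

def average_normal(normals):
    """Sum a list of normals and renormalize the result, the usual way
    to average direction vectors (an illustrative sketch)."""
    sx = sum(n[0] for n in normals)
    sy = sum(n[1] for n in normals)
    sz = sum(n[2] for n in normals)
    length = math.sqrt(sx * sx + sy * sy + sz * sz)
    if length == 0.0:  # degenerate case: the normals cancelled out
        return (0.0, 0.0, 0.0)
    return (sx / length, sy / length, sz / length)

avg = average_normal([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
```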

I’m working on getting a mock-up that would explain what I mean. I haven’t determined or proven whether it would give additional functionality, since I don’t yet fully understand what your system allows. Stay tuned…

Got distracted by some Krakatoa issues. The Genome Loop stuff is mostly ready to go once I finish some Krakatoa maintenance.

ahem…

Oh hey there!

Sorry, we have some emergencies to take care of before the end of this month, on top of that I am on vacation in Europe and working remotely.
We have done some additional and pretty cool development in the area of Loop nodes (new Loop nodes dedicated to geometry operations, including simplified access to mesh data from inside the loop body). The wait will be worth it. :slight_smile:

Sorry to be a pain here, I have a shot that I was hoping to use genome loops on. If you guys think it might be a few more weeks before you get a chance to get a build out, I’ll RnD another solution. Basically, is there any chance a build might turn up in the next week or so? It’s totally fine if it’s not going to happen - I understand you have many priorities besides Genome. I’m just planning!