A few workflow questions

I am approaching the final challenge and it looks like I need some help regarding the best possible workflow.

I have a static 200M point cloud that mostly needs to be processed via PFlow.
At the moment it’s broken into 10 partitions.

The Particle Flow looks more or less as follows:

  1. Initial state
  2. Dynamic trigger of the event in PFlow from Ember + Ember Force
  3. Gravitational falling + bounces in PFlow
  4. Spawn
  5. Split into two branches:
    5.1 -> portion to remain static
    5.2 -> the rest to FumeFX Follow

Partial save to file from PFlow to PRT sequence
Obviously I need to save the result to a PRT sequence for further processing.
However, I think it may be smart to split my PFlow and potentially do the FFX Follow (5.2) as a separate step:

  1. All events 1-5.1 get saved to a sequence of PRTs.
  2. Instead of doing FFX Follow in PFlow, I save only the very last frame of the simulation, which will be my source for FFX Follow.
    The latter requires 2 steps I can’t figure out:
    Step 1 -> Save an additional value from PFlow into the PRT, recording at which frame the particle entered event (5.1) of the PFlow (to allow testing against it and triggering a Stoke equivalent of FFX Follow) - see the sketch after this list.
    Step 2 -> Somehow save only the content of 1 particular event from the PFlow instead of all of them -> can’t figure out how. I can disable the display of unneeded particles in the PFlow, but I have not figured out how to simulate all of them while saving with Krakatoa only the result coming from the last event (5.1).
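
For Step 1, something like the following Script operator in event (5.1) is what I have in mind (a rough, untested sketch). As far as I know, Krakatoa can save PFlow’s scripted Float channel as the MXSFloat channel - worth verifying:

  -- Script operator for event (5.1): stamps the frame at which each
  -- particle entered the event into the scripted Float channel,
  -- which Krakatoa should be able to save as MXSFloat.
  on ChannelsUsed pCont do
  (
      pCont.useNew = true    -- to detect particles that just entered the event
      pCont.useFloat = true  -- scripted Float channel (MXSFloat in Krakatoa)
  )
  on Init pCont do ( )
  on Proceed pCont do
  (
      local entryFrame = pCont.getTimeEnd() as float / TicksPerFrame
      for i = 1 to pCont.NumParticles() do
      (
          pCont.particleIndex = i
          -- stamp only once, the moment the particle arrives in this event
          if pCont.particleNew do pCont.particleFloat = entryFrame
      )
  )
  on Release pCont do ( )

A Magma or Stoke setup could then compare the current frame against that saved value to decide when each particle starts following.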

Help and suggestions appreciated.

Best processing and saving workflow
Currently I have 2 PFlow flows:

  1. PRT Loader + PRT Birth, where I control how much data to load.
  2. The main PFlow, which starts by calculating an Initial State from the one above and proceeds with the events as described above.

What would be the best option to get the work done most efficiently, also taking advantage of the multicore architecture (driving multiple instances, one per core) and Deadline?

Huge thanks in advance for your help.

The “saving individual events” workflow has been possible since the very beginning. We even have a FAQ about it:
thinkboxsoftware.com/krakato … -surf.html

You will notice that Krakatoa has several checkbuttons for PFlow sources - PF Geometry, PF Phantom and PF BBox. These are the render modes in the Render operator in PFlow. The Render operator has a very special behavior in PFlow: if you have a global one and some local ones in individual events, the local Render operators will ALWAYS prevail, even if Global Last evaluation is selected (which is the default). So you can have a global Render operator set to “Geometry”, which will make all particles in the PFlow render as Geometry, but if you add a Render operator to some event and set it to “Phantom”, that event’s particles will override Geometry with Phantom mode and will only render in Krakatoa or save to PRTs if the “PF Phantom” option is checked in the Krakatoa Main Controls rollout.
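
As a toy illustration of that precedence rule (plain MAXScript, not actual PFlow code):

  -- Toy illustration of the precedence described above: a local Render
  -- operator in an event always wins over the global Render operator.
  fn effectiveRenderMode globalMode localMode =
  (
      if localMode != undefined then localMode else globalMode
  )
  effectiveRenderMode #geometry undefined  -- > #geometry (no local override)
  effectiveRenderMode #geometry #phantom   -- > #phantom  (local op prevails)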

So if you want to exclude some event from saving, simply add a Render operator set to Phantom to that event and uncheck “PF Phantom” in the Krakatoa UI. If you want to save ONLY the Phantom event’s particles, uncheck the “PF Geometry” option and check “PF Phantom” in Krakatoa, and you will save exactly the opposite particle set.

Since you have 3 different options, you can have 3 different “selection sets” for events - Geo, Phantom or BBox - and mark/toggle the particles accordingly.

The Render operator has a 4th mode called “None”. Krakatoa handles this mode a bit differently - it cannot be included or excluded at all; it is ALWAYS excluded from rendering and saving. But particles set to “None” will still receive an evaluation call from Krakatoa. So if you have two PFlows where the first one is to be rendered or saved, and the second one should not be rendered or saved but is used to affect the first one (for example via PFlow Box #3 operators, or as target points via Script operators), you set the second one to “None” mode. Krakatoa will then ask both to evaluate but save only the first one, so the first one will see correct data from the second one… But this is just FYI and not applicable to your case.

Regarding multi-threading: Deadline lets you run multiple concurrent instances of 3ds Max on the same machine. We usually recommend this when partitioning (performing the same simulation multiple times with different random seeds). But in your case you have one set of 200 million to process, so there might be no automatic way to take advantage of this approach, even though you have 10 partitions of your source data.
An alternative approach would be to run independent jobs on two or more instances of Deadline on the same machine. This will make your machine appear as two or more render nodes in the Deadline Monitor, but they will coexist on the same hardware. Two or more instances of the Deadline Slave application will be launched on the single machine, and each one can process a completely different job. This requires one license per Slave instance, so you might need more Deadline licenses to run 10 or 15 Slaves on 5 physical machines…
You can read more here: thinkboxsoftware.com/deadlin … pleslaves/

If I save my ten partitions as 10 separate 3ds Max files, each with a different file on input and a different PRT sequence on output, can’t I run them on one machine with one Slave license as concurrent jobs?
Or do I need to run multiple instances of the Deadline Slave for that?

Concurrent Tasks work only within a single Job (hence the word “Task”). A single job has one Max file, and in the case of Krakatoa, we set each partition to appear as a separate task, while the frames are processed in an internal MAXScript loop. Normally, tasks in Deadline represent Frames or groups of Frames, but we are using them differently in the case of Partitioning.
So you need a SINGLE partitioning job with multiple partitions managed by Krakatoa itself.
If you want 10 MAX files, this means 10 Deadline jobs, and you need multiple licensed Slaves on each machine to run in parallel.

Clear.
And just to double-check: is there no option to create partitions in such a way that I take my 200M as the source, but with some magic fool the partitioner into taking random (or even better, ordered) subsets of my main dataset?

Actually, there might be.
But I am not sure if it will work in your case, because you are dealing with PFlow in the mix.

Basically one can do this:

*Add a Custom Attribute Holder
*Add an Integer value named “Seed”
*Add a Magma modifier which has InputChannel Index -> Modulo 10 -> NotEqual Integer 0 -> ToFloat -> Output Selection (see the sketch below)
*Add a Krakatoa Delete modifier on top
*Create a Wire controller for the Seed attribute on the stack and connect it bi-directionally with the InputValue’s controller that provides the Integer into the NotEqual operator (the ID of the node is part of the track’s name)
(see attached screenshot: KMX2_PartitioningMagmaSetup_CAwire_v001.png)
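
In plain MAXScript terms, the Magma flow computes something like this (a conceptual sketch of the logic only, not the actual Magma nodes; the function name is made up):

  -- Selection = 1.0 marks a particle for the Krakatoa Delete modifier on
  -- top, so only particles whose Index falls into the current Seed's
  -- residue class survive.
  fn partitionSelection index seed numPartitions:10 =
  (
      if (mod index numPartitions) != seed then 1.0 else 0.0
  )
  -- e.g. with Seed 0, particles 0, 10, 20... are kept;
  -- with Seed 1, particles 1, 11, 21... are kept, and so on.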

As a result, changing the Seed attribute in the Custom Attribute Holder will modify the Magma to keep a different tenth of the particles.
Since Krakatoa has an option to change the “Seed” property of any modifier found on a PRT Loader, this means that if you use Partitioning, your full PRT Loader will be reduced to 1/10th of its count by deleting the other 9/10ths of the content. In the Partitioning rollout, simply check the >PRT Loaders checkbutton.
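
Conceptually, the partitioning option then does something like the following (an assumption about the behavior, not Krakatoa’s actual code):

  -- Assumed behavior of the ">PRT Loaders" option: before saving partition
  -- k, any modifier on the PRT Loader exposing a "Seed" parameter is set to
  -- k, which through the wire retargets the Magma to a different residue class.
  fn setPartitionSeed prtLoader k =
  (
      for m in prtLoader.modifiers do
          if isProperty m #Seed do m.Seed = k
  )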

I have tested this only locally though, and I don’t know what will happen if you are feeding all this into a PFlow. Also, if you are generating an ID channel from Index, make sure you do this BEFORE the Magma that deletes the particles, so the IDs will be correct in each partition.

I just tested it with PFlow and everything seems to work as expected.
-> If Seed incrementing is disabled, each partition gets an identical set of data.
-> If Seed incrementing is enabled for PRT Loaders, I get different data sets.

When partitioning directly from PRT Loaders, IDs are preserved.
When partitioning from PFlow, unfortunately the IDs get spoiled -> each partition gets new, continuous IDs starting from 1.

But I don’t think it is a serious problem, at least in my case.
Huge thanks for suggesting how to fool your software! You made my day!

I am having some issues with higher particle counts though.
For some reason the system is producing only half of the particles it is supposed to produce.

I have 200M in 10 files, all loaded and partitioned into 10 partitions via the seeding workaround above. That should give 20M per file.
100% render particles in my PRT Loader.
100% set in the global Render event of the PFlow.
System limit set to 200 000 000 (although I read you can only set 100 000 000).

On output I am getting only 10M particles per file.

I thought it might be connected to the particle limit of PFlow, so I increased the number of partitions to 20, giving 10M per partition.
But now I get only 5M on output.

I don’t get it.

It is not dependent on particle count.
Further investigation also shows that PFlow is losing random particles.
What I did was engrave the original ID from the file (since it’s being overridden by PFlow) into the Color channel.
It’s one file of 10, with the partitioning modifier added, dividing into 5 partitions. The IDs in the Color channel should step by 50.
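
Roughly, the debug setup does this (a sketch of the logic; the function name is made up):

  -- Bake the original file ID into Color below the PFlow, so it survives
  -- PFlow renumbering the ID channel. In Magma terms: InputChannel ID ->
  -- ToFloat -> Output Color (or similar).
  fn idToDebugColor id = ( local f = id as float; [f, f, f] )
  -- 1 of 10 files, split into 5 partitions -> surviving IDs should step by 50:
  for id = 0 to 200 by 50 do format "ID % -> Color %\n" id (idToDebugColor id)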

As we can see, it’s losing random particles, but the result is always half of the original dataset, no matter the size.
It does not happen when saving PRT Loader > file, and it does not happen when saving PRT Loader > PFlow > file without the partitioning workaround.

It’s happening only when I add the partitioning workaround AND go through PFlow (because partitioning itself in Krakatoa works fine).

I really don’t know what’s going on there.