AWS Thinkbox Discussion Forums

Slow Motion Particles

When I use the PRT Loader with the graph to slow down and speed up “time”, how well does it interpolate?

I have just tried it, and I noticed that when the timing became too slow (less than 2x), it started having a skipping effect…
So I'm just wondering: how far can I bend time with this timestretching technology? Does it even interpolate, or just repeat frames?

Someone told me it worked like that, and I was very happy to hear that.
I'm trying to do something similar to Bobo's maxtrix script, only I need to save the particles, and not the frame renders :slight_smile:

thanks for any help

It works like this: It finds the full frame closest to the current time, takes the position on that frame and interpolates the position of the particle based on the velocity on that frame.
The drawback is that the velocity vector is linear. If the next full frame's position lies on a curved path relative to the previous frame, the velocity from the previous frame will not point straight at the next frame's position, so there will be a jump in direction in the middle of the interpolation, when the current time becomes closer to the next frame than to the previous one. You won't see your particles doing a smooth curved interpolation between the two full frames, because that would require reading two files in order to know both positions and velocities, which would be much slower.

In short, for particles that are moving mostly linearly without significant change in velocity direction between two full frames stored to PRT, you won’t notice a big jump, but if the velocities are very different, you will see a linear interpolation for half of the subframes after the one previous full frame and then another linear segment after the half.
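The nearest-frame behavior described above can be sketched in a few lines of Python (a minimal illustration of the described logic, not Krakatoa's actual code; the function name and sample data are hypothetical):

```python
# Sketch of the PRT Loader's sub-frame evaluation as described above:
# find the closest stored full frame, then extrapolate linearly along
# that frame's velocity.
from math import floor

def nearest_frame_sample(frames, t):
    """Return the extrapolated position at time t (in frames).
    frames maps an integer frame number to (position, velocity),
    with velocity expressed in units per frame."""
    f = floor(t + 0.5)                             # closest full frame
    f = min(max(f, min(frames)), max(frames))      # clamp to stored range
    pos, vel = frames[f]
    dt = t - f                                     # signed sub-frame offset
    return tuple(p + v * dt for p, v in zip(pos, vel))

# Hypothetical particle on a curved path: by frame 1 the wind has bent
# its trajectory, so frame 0's velocity no longer points at frame 1.
frames = {
    0: ((0.0,  0.0, 0.0), (1.0,  0.0, 0.0)),
    1: ((1.0, -0.4, 0.0), (1.0, -0.1, 0.0)),
}

before = nearest_frame_sample(frames, 0.49)  # still driven by frame 0
after  = nearest_frame_sample(frames, 0.51)  # frame 1 takes over
# Y jumps from 0.0 to roughly -0.35 across the half-frame boundary --
# exactly the mid-interval "skip" described above.
```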

The following sequence shows sub-frame sampling of a PFlow particle (yellow) blown by a constant wind vs. PRT Loader interpolation (green) on the same sub-frames.

The images were taken on ticks 0, 79, 80, 140, 160, 239, 240, 300 and 320 at 30 FPS. At 30 FPS, one frame is 160 ticks (4800 ticks per second / 30), so tick 79 is the last tick on which frame 0 is still the closest full frame.
Thus, on frames 0 to 79, the PRT Loader interpolates based on frame 0 and moves linearly along X without any Y influence, while the PFlow is already blown by the wind and moves down.
On tick 80, the PRT Loader starts using frame 1 and places the particle half a frame backwards along the velocity stored in frame 1, so it jumps down.
On tick 140, it is coming close to the correct position and orientation which it then reaches on tick 160 where the PFlow and PRT Loader are coincident.
The same repeats for the next set of images - on tick 239, the PRT Loader interpolates half a frame forward based on the position and velocity from tick 160.
On tick 240, the PRT Loader switches to reading from tick 320’s position and velocity and interpolates backwards half a frame. It then comes closer to the correct position and velocity on tick 300 and coincides with the PFlow on tick 320.

Hi Bobo, thanks for the detailed explanation.

So, is there a way the upcoming 1.6 will allow the user to decide whether he wants normal interpolation or best interpolation?
How much longer is longer?
I have scenes where I can save out PFlow particles and they are written at one frame per second, so if interpolating them correctly would take 10 seconds per frame, I would be fine with that! (even 30 seconds :slight_smile:

I am often in the situation where I can only correctly bend time AFTER the simulation, and not directly beforehand.
1) Is there any way we could get improved interpolation in future versions?

2) If that's not possible: if I save out 2 or 4 “frames” per frame (I guess we can call them ticks), then my PRT would hold more particle information per frame and therefore allow me to bend time without the stuttering. But how can I save more samples per frame, and how can the PRT Loader work with these correctly?

Any help on a nice workflow would be very helpful! Trying to get better control of time for my bachelor project.

Thanks again for the code, Bobo, I will share it with the rest.

[code](
   local start = 0.0 --this is the first frame
   local end = 100.0 --this is the last frame
   local step = 4 --this is the number of samples per frame
   local cnt = start as integer --this is the counter for the current frame number
   local theParticleFileName = (FranticParticles.GetProperty "ParticleFiles") --get the output PRT file name
   FranticParticles.SetProperty "Presets:SaveRenderHistory" "false" --disable history saving

   for t = start to end by (1.0/step) do --loop from start frame to end frame with the given step
   (
      local theFileName = FranticParticles.ReplaceSequenceNumber theParticleFileName ((floor t) as integer) --figure out what the saved file will be called
      deleteFile theFileName --make sure it does not exist yet
      render frame:t vfb:off --call the renderer at a sub-frame to save a PRT with full frame
      if doesFileExist theFileName do --if it was found,
      (
         local theBaseFile = (getFileNameFile theFileName) --grab the base file name, then build the new file name with suffix SubFrame_
         local targetFilename = getFileNamePath theFileName + substring theBaseFile 1 (theBaseFile.count-4) + "SubFrame_" + getFileNameType theFileName
         targetFilename = FranticParticles.ReplaceSequenceNumber targetFilename cnt --set the target frame number based on the integer counter stored in cnt
         deleteFile targetFilename --make sure it does not exist before saving over it
         renameFile theFileName targetFilename --rename the saved file to the new file name
      )
      cnt += 1 --increment the integer frame counter by one
   )--end t loop
)--end script[/code]

Just set your particle saving path and the frame range, and it will save four “frames” per frame.
This is great for using the graph editor to slow down and time-bend your particles without them stuttering.

I still have the question: how hard would it be to implement interpolation of each particle position in future versions (not with linear velocity)? Would this be a lot of work?

I think the major issue is that it requires consistent IDs between frames in order to determine a correspondence between the particles. My understanding is that most people don’t use IDs so this interpolation mode simply won’t work.

I’ll add the request to our list and see if I can get it into an upcoming beta.

And you’d need to load 3 extra frames worth of at least positions, velocity and ID data. So for every particle, you’d be loading what, 66 bytes more? That’s a lot of I/O and memory to add on. And that’s assuming your other values, like density, color, normal, orientation, etc are constant, which isn’t terribly likely. Add those in, and you’re up to 132 bytes extra. Not impossible, but worth considering.

Oh ok,
yeah, my particles ALWAYS have IDs :slight_smile:

Thank you for adding it to the list - it would make the time graph function much more usable. We could always create a Magma flow to calculate velocity based on two positions.

The alternative would be to include additional position and velocity samples inside the PRT file (e.g. one sample in the beginning of the interval half a frame back, one sample in the end half a frame forward, plus the regular position and velocity in the center).
Then interpolate between the three samples to get a more curved result. If this is used just for PRT retiming purposes, the data wouldn’t have to be loaded into memory for motion blur calculations, and the major hit (besides the additional data to read) would be at saving time having to sub-sample the particle system. Of course, if we would add additional memory channels to keep that data during motion blur, it would increase the memory footprint, but the ability to produce curvy-looking motion blur streaks might be worth it.
Since the data would be per-particle and per-frame, it would not depend on matching IDs between frames. Also, those additional channels would be saved only by the few people that actually need that info, and everybody else would just save single position and velocity samples like before and no curve interpolation would occur.
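The three-sample idea above could be sketched as a quadratic fit (assumed math in Python, hypothetical function name, not shipping Krakatoa code): with per-particle positions stored at offsets -0.5, 0 and +0.5 frames, a Lagrange quadratic through them gives a curved result at any sub-frame offset.

```python
# Quadratic (Lagrange) interpolation through three per-particle
# position samples stored at -0.5, 0.0 and +0.5 frames around the
# full frame, evaluated at sub-frame offset t.

def quadratic_sample(p_back, p_center, p_fwd, t):
    """p_back/p_center/p_fwd are scalar positions at offsets
    -0.5, 0 and +0.5 (in frames); -0.5 <= t <= 0.5."""
    w_back   = 2.0 * t * (t - 0.5)    # weight is 1 at t = -0.5
    w_center = -4.0 * (t * t - 0.25)  # weight is 1 at t = 0
    w_fwd    = 2.0 * t * (t + 0.5)    # weight is 1 at t = +0.5
    return p_back * w_back + p_center * w_center + p_fwd * w_fwd

# Samples taken off a parabolic path y = t*t are reproduced exactly:
y = quadratic_sample(0.25, 0.0, 0.25, 0.25)  # -> 0.0625
```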

Yes, sure, 4x the amount of particles is 4x the amount of data, but for slow-motion shots it would be very useful.

Let's say you have a 1000-frame shot and you start slowing down time at frame 200; at frame 500 you are 20x slower, and back to 1x speed by frame 800. So we need a slow-motion range from 200 to 800, and we would need to save out those 600 frames at 20x, even though frames 200 to 400 are only a gradient from 1x to maybe 10x slower.

So if we could just use the time graph to bend time right from the beginning, it would automatically save the right amount of particles and not create an extra amount of unnecessary ticks.

I'm totally fine with this script now, really glad it's there! I just think in most cases we don't have straight-flying particles, so interpolating the position according to velocity doesn't seem appropriate to me. Sure, for speeding particles up it's fine, but the stutter effect kind of cancels out its purpose when trying to slow things down.

All I can say is, I'm really happy to be able to save out ticks now! :smiley:

You can’t if the points don’t contain the additional positions as channels. No sampling.

But that might be interesting though… Store the data of the previous 3 frames in the current frame, then just load with a 2-frame offset and use the KCM to blend it all together?

Oh wait, KCMs can't reference time, either…

  • Chad

But at that point, why stop at motion blur? Just provide an array of positions and you can render splines for hair. Or provide an array of ID’s for connectivity and render “triangles” or “tetrahedra”.

  • Chad

Don't know if it makes any sense posting this, but this is the operator Oleg programmed for us to give each particle velocity (in the case where only position is animated):

speedvector.jpg

The problem is the KCM’s don’t have the “T” input.

Why three extra frames? You only need the Position and Velocity of the two bracketing frames for interpolation…

Ah ok,

so position and velocity would both have to be calculated through interpolation.
If it's possible, waiting an extra 30 minutes or storing 10 extra GB would be fine with me :slight_smile: (not like a slow-motion shot is done every day anyway)


Bobo,
I am trying to save out RealFlow .bin with your script.

The filename I have in the Krakatoa GUI is a_intro00000.bin,
but it saves out a_introSubFrame_0100.bin.

There's a digit missing. What does this line do?

(theBaseFile.count-4)

I changed it to 5, but that doesn't change the numbering either.


Ok, I see that takes away the numbers from the original frames, but I still can't get it to output 5 digits:

[code]local targetFilename = getFileNamePath theFileName + substring theBaseFile 1 (theBaseFile.count-4) + "SubFrame_00000" + getFileNameType theFileName
targetFilename = FranticParticles.ReplaceSequenceNumber targetFilename cnt --set the target frame number based on the integer counter stored in cnt[/code]

When you provide a “template” frame count after SubFrame, the function ReplaceSequenceNumber() will respect it and produce 5 trailing digits. You don't have to set the original output to 5 digits, though - just use 4. Otherwise you have to change (theBaseFile.count-4) to (theBaseFile.count-5), but it does not matter for the final name, because the numbers are added elsewhere.
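FranticParticles.ReplaceSequenceNumber is Krakatoa's own function; as a rough illustration of the padding behavior described above (a hypothetical Python re-implementation, not the actual code), the trailing digit run before the extension is replaced with the frame number zero-padded to the same width:

```python
# Mimics the described padding: the width of the existing trailing
# digit run (the "template") determines the zero-padding of the new
# frame number.
import re

def replace_sequence_number(filename, frame):
    """Replace the trailing digit run before the extension with
    `frame`, zero-padded to the existing digit count."""
    return re.sub(r"(\d+)(\.[^.]+)$",
                  lambda m: str(frame).zfill(len(m.group(1))) + m.group(2),
                  filename)

name = replace_sequence_number("a_introSubFrame_00000.bin", 100)
# -> "a_introSubFrame_00100.bin" (five digits, as in the template)
```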

Yeah, you’re right, I was thinking you’d need 4 total, but that’s only if you have position. If you have velocity, then you can figure out where the last sample is.

But would 4 positions be easier than 3 positions + 3 velocities? Is there a case where having the positions NOT integrated with the velocities (meaning the velocity vector of sample 2 doesn’t point to the position of sample 3) is useful? 4 positions is 48 bytes, while 3 positions + 3 velocities is 54 bytes, right?

  • Chad

I meant that to get a particle position at frame T, where floor(T) < T < floor(T) + 1, you can do cubic interpolation with the Position and Velocity of the particle from frames floor(T) and floor(T) + 1. Only two frames of particles need to be in memory at any given time, unless we cache the bracketing frame data for quicker interpolation. If it's cached, then it's the samples at T, floor(T) and floor(T) + 1 in memory at the same time, making the 3x memory increase you mentioned.
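That cubic interpolation from two bracketing positions and velocities is the standard Hermite form; a sketch under those assumptions (not Krakatoa code, scalar positions for brevity):

```python
# Cubic Hermite interpolation between two bracketing full frames:
# (p0, v0) at frame floor(T) and (p1, v1) at frame floor(T)+1,
# with velocities in units per frame and 0 <= t <= 1.

def hermite(p0, v0, p1, v1, t):
    t2, t3 = t * t, t * t * t
    h00 = 2*t3 - 3*t2 + 1   # weight of start position
    h10 = t3 - 2*t2 + t     # weight of start velocity
    h01 = -2*t3 + 3*t2      # weight of end position
    h11 = t3 - t2           # weight of end velocity
    return h00*p0 + h10*v0 + h01*p1 + h11*v1

# The curve hits both samples and matches both velocities, so there
# is no mid-interval direction jump. Constant velocity stays linear:
mid = hermite(0.0, 1.0, 1.0, 1.0, 0.5)  # -> 0.5
```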

But I’m suggesting that you don’t need to store any velocity if you’re willing to just use the positions. Just do cubic interpolation from the 4 position samples. Storing velocity is nice because it lets you do instant speed on newly created (or soon to be dying) particles, and it’s cheaper than storing 2 positions (assuming the second position would need float32, it probably does not).

Guess what I’m saying is, couldn’t you get away with no velocity channel and still have (curvy) motion blur if you are A) willing to read 4 positions from 4 prts or B) willing to store 4 positions (3 of them at half float) in a single prt?
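Interpolating from four positions with no velocity channel is exactly what a Catmull-Rom cubic does; a sketch of that variant (assumed math, not shipping code, scalar positions for brevity):

```python
# Catmull-Rom cubic through four consecutive position samples:
# interpolates between p1 and p2 for 0 <= t <= 1, with p0 and p3
# shaping the tangents (so no stored velocity is needed).

def catmull_rom(p0, p1, p2, p3, t):
    return 0.5 * (2*p1
                  + (-p0 + p2) * t
                  + (2*p0 - 5*p1 + 4*p2 - p3) * t * t
                  + (-p0 + 3*p1 - 3*p2 + p3) * t * t * t)

# Collinear, evenly spaced samples reduce to linear motion:
x = catmull_rom(0.0, 1.0, 2.0, 3.0, 0.5)  # -> 1.5
```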

Writing option B would be pretty easy with Box#3, just store the position as mapping channel N, then on the next frame copy that to mapping channel N+1, then on the next frame copy N+1 to N+2, etc. Then when you go to load the PRT, you offset by 2 frames, since the position channel written will only have trailing position samples.

  • Chad

You could certainly do that as well. It seems less user friendly to me but equally valid.

Of course, with two positions and two velocities you can describe many more interpolating paths than with four positions. In practice it probably doesn’t matter though.
