Hi,
I’m using the Python API to add the Krakatoa SR renderer to Eyeon Fusion. I’m unable to find a function in the API that specifies the framerate of a PRT file. Is this implied somehow, and am I missing it?
Please let me know.
thanks arun
PRT files do not themselves have a framerate. For all our applications, all the time-related channels in a PRT are specified in seconds, not frames. For example, “Velocity” is exported as units/second.
Generally this works well, but it can be tricky if you export a PRT sequence at one frame rate and import it later into a scene with a different frame rate. In that case you’d have to scale the “Velocity” channel to get correct motion blur.
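To make that concrete, here is a tiny sketch (plain Python, no Krakatoa API; the numbers are made up for illustration) of how a per-second velocity turns into a per-frame displacement at different frame rates:

velocity = 12.0  # hypothetical "Velocity" channel magnitude, in units/second

# the same per-second velocity implies a different per-frame displacement
# depending on the playback frame rate
for fps in (24.0, 30.0):
    print("at %g fps: %.3f units per frame" % (fps, velocity / fps))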
We have talked about extending the PRT format to carry metadata such as frame rate. However, we haven’t finalized a PRT v2 specification.
Is there a specific problem you are running into?
Hi Conrad,
Thanks for getting back. I understand that the time-varying fields are specified in units/second. My question is about how Krakatoa estimates the time gap between two frames.
In a case where a PRT volume is exported across a time span, how is the time gap between two ‘snapshots’ of particles estimated? Say one export was done at 24 fps and another at 30 fps; how do you specify this later to the SR? This may be important for motion blur, for example, since some of the inputs are specified in seconds (e.g. the Shutter(start, end) function).
thanks
You’ve brought up a topic that can be a little tricky. This can be an issue in both Krakatoa SR and Krakatoa MX.
Krakatoa SR has no concept of “Frames” whatsoever. It also doesn’t have a concept of PRT “sequences”. So it does not care how many seconds are between frames. This can lead to a tricky situation. I believe I understand the issue from your example. Please let me know if I understand correctly.
In your example, you are rendering a sequence of Krakatoa images. Each frame of your sequence has a corresponding PRT file. The framerate of the image sequence is 24 fps.
- In this scenario motion blur works correctly if the PRT file sequence was generated at 24 fps.
- In this scenario motion blur will NOT be correct if the PRT file sequence was generated at 30 fps.
In this second “wrong” case:
- The issue here is not that the velocities are wrong, but that the PRT sequence generated at 30 fps would have to be completely re-timed and rewritten to produce correct motion when applied to an image sequence rendered at 24 fps.
- However! I understand that the easier solution is to simply scale the “Velocity” channel based on the fps difference and ignore the fact that the particles will be moving faster/slower than intended.
So, as you suggested, we could embed a “framerate” in each PRT file. That way we would have the information needed to either re-time or scale the velocities. It is something we will consider. For the time being there is no way to tell; you’ll just have to be aware of this situation. The only solutions are to either a) accept slightly off motion blur, or b) account for the fps difference and compute a scale for the “Velocity” channel using the provided channel operators.
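To sketch option (b) with back-of-envelope arithmetic: assuming the blur length is simply velocity multiplied by the shutter interval in seconds, the compensating factor is the ratio of the two frame rates. This is my rough reasoning about the apparent motion, independent of the PointsFileSequence parameters, and the numbers are illustrative:

# back-of-envelope arithmetic for option (b), assuming blur length is
# computed as velocity * shutter interval (both in seconds)
prt_fps = 30.0      # fps the PRT sequence was exported at
render_fps = 24.0   # fps of the rendered image sequence

# between consecutive PRT frames a particle moves velocity/prt_fps units, but
# on screen that displacement now spans 1/render_fps seconds, so the apparent
# speed (and hence the blur length for a given shutter) changes by
# render_fps/prt_fps
velocity_scale = render_fps / prt_fps
print("scale the Velocity channel by %.3f" % velocity_scale)  # 0.800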
I hope that helps.
Thanks Conrad. That clarifies things for me. I also realised that I missed the PointsFileSequence API, which has the option to specify the input framerate. I should be able to use this option to address this problem.
I’ll update this post if I run into any trouble with this.
arun
The “PointsFileSequence” is specifically for retiming a sequence, so it won’t solve the fps issue directly. But you can use it to retime a sequence generated at 24 fps for rendering at 30 fps. This is very different from the “scaling velocities” approach, though.
To retime a sequence from one fps to another:
current_frame = 14   # whatever the current frame is
prt_fps = 24.0       # the fps that the prt was saved out at
desired_fps = 30.0   # the fps of the rendered image sequence
# load the (possibly fractional) prt frame that corresponds to current_frame
# on the retimed timeline
ri.PointsFileSequence( "mysequence_####.prt", prt_fps, prt_fps/desired_fps*current_frame, prt_fps/desired_fps, True )
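To see what those arguments work out to, the retime mapping can be pulled out into a standalone snippet (the helper name here is just for illustration):

# the retime mapping used in the call above:
# render frame -> (possibly fractional) frame index in the PRT sequence
def retimed_prt_frame(render_frame, prt_fps, render_fps):
    return prt_fps / render_fps * render_frame

# with the values above, render frame 14 at 30 fps pulls PRT frame 11.2 from
# the 24 fps sequence; both sit at the same real-world time, ~0.467 s
print(retimed_prt_frame(14, 24.0, 30.0))  # 11.2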
To not retime, but simply scale the velocities for proper-looking motion blur (this is technically not correct, since the particles will appear to move faster/slower than in the original exported sequence):
current_frame = 14   # whatever the current frame is
prt_fps = 24.0       # the fps that the prt was saved out at
desired_fps = 30.0   # the fps of the rendered image sequence
# no retiming: always load current_frame as-is; this variant only
# compensates the motion blur
ri.PointsFileSequence( "mysequence_####.prt", desired_fps, current_frame, prt_fps/desired_fps, True )
The documentation explains what the parameters do a little better:
viewtopic.php?f=118&t=6352
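Putting the two variants side by side, here is a small wrapper (the function name is just for illustration; the argument expressions are copied verbatim from the two examples above, and the linked documentation describes what each parameter means):

def load_prt_frame(ri, pattern, current_frame, prt_fps, desired_fps, retime):
    if retime:
        # retime: map the render frame onto the prt sequence's own timeline
        ri.PointsFileSequence(pattern, prt_fps,
                              prt_fps / desired_fps * current_frame,
                              prt_fps / desired_fps, True)
    else:
        # no retime: always load current_frame; only compensate the blur
        ri.PointsFileSequence(pattern, desired_fps, current_frame,
                              prt_fps / desired_fps, True)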
Thanks Conrad.
In your second example: shouldn’t the third argument, current_frame, be a fraction specifying the exact time in seconds, i.e. current_frame/desired_fps?
Also, the “PointsFileSequence” seems to be the only API that allows the user to specify what “time” to render. If my inputs (including the prt source) don’t change at all, is this still the only way to alter time? It may be expensive to specify the source for each time step.
I know the C++ API has features that allow this. Just wanted to confirm whether this is a restriction in the Python API.
thanks
The second example illustrates how to scale the velocities based on changing fps, not how to retime a sequence. In this case, we always load the frame “current_frame”.
> Also, the “PointsFileSequence” seems to be the only API that allows the user to specify what “time” to render. If my inputs (including the prt source) don’t change at all, is this still the only way to alter time? It may be expensive to specify the source for each time step.
I think we are using different terminology here. Krakatoa SR does not have a concept of “time” or a “timeline”; those are concepts of your modeling/animating application. I think what you’re asking for is a way to cache particles in memory between renders: you have the same particles in multiple renders and you don’t want to always reload them from disk. Unfortunately, we haven’t exposed a way of doing that yet. I am planning on adding this eventually.
> I know the C++ API has features that allow this. Just wanted to confirm whether this is a restriction in the Python API.
I’m not exactly sure what you mean. The C++ API is basically the same as the Python module.
OK, my confusion about the “PointsFileSequence” API is cleared up now. I thought the third argument was a ratio, current_frame/fps, representing the real-world time. I now see that it just specifies the frame number directly.
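In plain numbers, for anyone who hits the same confusion:

current_frame = 14
fps = 24.0
frame_argument = current_frame         # the API takes the frame index: 14
time_in_seconds = current_frame / fps  # not the real-world time: ~0.583 s
print(frame_argument, time_in_seconds)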
Also, for some reason I thought the C++ API allowed internal caching of particles. I see now that the Python API supports the same features as the C++ one.
Looking forward to a version that allows caching particles in memory.
thanks