Krakatoa SR C++ API

Indeed, reducing the density creates blue particles again; that’s what I described above. Colors and lights do not have a value above 1.0. If I reduce the lighting density, the particles become yellowish. So it looks like a clipping problem.

You definitely should get colors over 1.0 in your EXR in the case with a default light illuminating particles with color [100,100,100]. Can you give me more details of what is happening in your scene?

Well, nothing special is happening. Simple setup: I read particle data via partio, then I multiply the color by an arbitrary value (e.g. 100) and render it. The file is saved with the Krakatoa EXR saver. I use a default spot light with a brightness of 1.0.

I’ll try to compile a simple example to see what happens in a really simple environment.

Okay, I tried it with one of the example files, and indeed I get values above 1.0 in my exr. So I suppose it’s more scene related.

We have some more complex scenes here with a few mask objects. It is possible to exclude the geometry from casting shadows, which is really great. But would it be possible to make this feature light-specific?

e.g. we have a key light which casts shadows, and we have a bounce light which should cast shadows for only one object.

With this shadow linking, we could render everything in one pass instead of rendering several passes for different lights.

Ah, this is a good idea, yes. It’s something I wouldn’t have thought of. Currently our rendering engine doesn’t support this, but it could with a little effort (and it could be supported in our MX and MY versions also). I will put it on the list of feature requests. Thank you!

Can I set the number of threads that will be used by krakatoa? Would be great for renderings from the UI while we continue working.

I added a function called krakatoasr::krakatoa_renderer::set_number_of_threads( int ). It was just added recently, and I haven’t posted it yet. It will be there in the next build. If you’d like, I can post it for you before the next build.

I think it can wait until the next release.

Hi Conrad,
I’m having an issue with motion blur. When I widen the shutter start and end time, say from 90 deg to 180 deg, I see the motion blur produce a more blurred output, as expected.
The input to the set_motion_blur function is the time equivalent of (-45 deg to 45 deg) or (-90 deg to 90 deg).

But adding an offset doesn’t have any effect. So if instead of passing in (-45, 45) I pass (0, 90), I get the same output. Widening the shutter time still has the desired effect, but for some reason the blurring is always centered around 0.
Can you verify if this is an expected issue; if not, I will try to get an example I can send you.
Thanks
arun

I just tested it here with particles that have velocity, and it works the way I would expect. A shutter of (0, 90) should produce particles that are offset in time (“forward” motion blur instead of “centered”).

Where is the motion coming from?
an animated transform matrix on the camera?
an animated transform matrix on the particles?
or from the velocity channel of the particles?

The particles have a velocity channel set. There’s an additional detail that I failed to mention in my previous post. We’re trying to embed Krakatoa SR inside Eyeon Fusion.
The velocity channel generated by Fusion is in terms of frames rather than time. So if a particle moves 1 unit between two frames, its velocity should be 1/(1/fps) = fps units per second (30 in this case),
but instead it is passed into the renderer as 1.

To compensate for this scaled-down velocity, I set the motion blur params in terms of frames as well. So (-90, 90) degrees is sent in as (-0.25, 0.25) frames instead of (-0.25/30, 0.25/30) seconds.
Same thing applies when there’s forward/reverse motion blur ((0,180) is sent in as (0, 0.5) and so on… ).

As I mentioned earlier, increasing/decreasing the shutter angle has a marked effect on the output, but any bias in the forward/reverse direction produces the same output.
So (0, 0.5) and (-0.25, 0.25) produce the same output. I’ll test this more today. Let me know if you need a reproducible version.
thanks
arun

I’m still trying to figure out the motion blur issue, but I have a question unrelated to this topic. I set up the renderer object once with all the particle streams and rendered once
(which triggered get_next_particle on the streams). If I render again using the same renderer object, because say the camera position changed, it calls get_next_particle on the
streams again. I’m not removing the streams at the end of the render call, so I expected the renderer to keep the particles around in its local memory and use them for the next
render call. But it didn’t seem to. Is this the expected behaviour?

It is quite possible that I’m doing something wrong which triggers this. But I just wanted to confirm what the expected behaviour of SR is.
thanks
arun

For the motion blur issue: I found and fixed the bug in the code that was causing it. Thanks for reporting the problem!

For the streams, what you are reporting is expected behavior.

This is due to the nature of streams. The renderer exhausts the streams every time krakatoa_renderer::render() is called. All the particles are consumed, and the resulting krakatoa_renderer object will contain NO particles. So, if you want to use the same krakatoa_renderer to render again, you need to re-create and re-add your particle streams to the renderer.

HOWEVER, we are considering adding an option in the renderer to “cache” particles. In this scenario, subsequent calls to render would not consume any streams, and would instead use the pre-saved particles. It can become very error prone, because the required channels can change when users change settings, which would make the cache invalid if the new channels didn’t exist. We also want users to be able to free up all the memory, so I’d be worried that they may not clear the cache properly. The one scenario where it would be particularly useful is multiple renders where only the lights or density change.

It would look like this:

void krakatoa_renderer::cache_particles_after_render( bool );
bool krakatoa_renderer::are_cached_particles_valid_on_render();
void krakatoa_renderer::use_cached_particles_on_render( bool );
void krakatoa_renderer::clear_cached_particles();

Regarding the caching Conrad talked about, on the Krakatoa MX side we provide two levels of caching - PCache (all channels except Lighting) and LCache (only Lighting channel).

Here is a quick overview of how the KMX caches operate from the user’s POV:

* By default, after a frame is rendered, its particles are left in the caches.
* Optionally, the caches can be cleared after each frame to conserve memory (the current SR behavior).
* In the former mode, the user can engage the PCache and LCache options AFTER the frame was rendered (because they liked the results and want to tweak them without reloading). Supported changes with only PCache on are Lighting and Final Pass Density tweaks, render-time options like Filtering, and Camera and Lights changes. When LCache is on, Lights and Lighting Pass changes cannot be performed.
* When PCache is on, the LCache can be turned on and off at will. When LCache is off, you can tweak the lighting settings without reloading all particle channels. Once the LCache is on, the particles will redraw without reloading, resorting or relighting.
* If a feature is enabled that requires a channel currently not in the cache (e.g. switching to Phong shading requires Normals, switching motion blur on requires Velocity, etc.), the caches will be automatically cleared and rebuilt to provide the missing data. If a channel is already cached but the feature that required it was turned off, the cache will NOT be rebuilt, and the feature can be turned on again at any time without rebuilding the cache. So testing with and without motion blur can happen with PCache on as long as motion blur was on when the cache was built; otherwise the cache will be rebuilt once.
* Some features do NOT invalidate the cache, so changes to them would not be reflected while PCache is on. We track these in the KMX UI and report them in the Memory Channels rollout. When implementing KMY using the SR API, we would probably do the same.

As a very special option, we allow network renders to use the cache system. (By default, we disable the cache option on the network even if it was specified; it takes a special switch to allow it.) If a job is sent to the network to process on a single machine, this allows a whole camera animation within a static particle cloud to be cached on the first frame and rendered without ever reloading and reprocessing the particles.

We are having discussions about other levels of caching to allow particle data to be cached before it runs through our Magma channel editing system. But we have no concrete plans for this yet.

Hi guys,
Is the problem reported here (http://forums.thinkboxsoftware.com/viewtopic.php?f=115&t=7564&start=90#p35146) fixed? I think I ran into it again. I couldn’t see any release notes about this. Thanks.

I believe that bug was fixed, but I will go back and try to confirm that to see if it’s still working.

If I can’t reproduce it with your original “Test.cpp”, then it may be a different issue.

Hi Conrad,
It is not crashing anymore, but I think it is still not working correctly. I’m attaching a piece of code that replicates this. I have two fields, Position and Color.
Position is always float32, while Color can change type. SR breaks when Color is not float32 (uint8 in the example) and it gets added as the first channel (via the append_channel API).
It works if:

  1. I add Color at the end, or
  2. I change the Color type to float32.

Let me know if the example is not clear. Thanks.
Test.cpp (5.91 KB)

Thanks for the example code, I will take a look soon!