InputField bounds

So my apple is small relative to the volume where I want applesauce. If I make “SIM Ember Tight” sample the MRI data, do the density, color, normal lookups, etc., and pass that to the “SIM Ember Spacious” using InputField, it will populate the channels correctly, except that I want to “pad” the data somewhat. Otherwise, when the PRT Ember samples the SIM Ember Spacious, it will grab black for the color. Likewise, when the color is advected in the simulation, the red apple skin turns black when it leaves the SIM Ember Tight field. I haven’t tested what happens when zero-length normals get advected… will do that though. Anyway, what I’m wondering is whether we should have something to either figure out the bounds of the InputField object, or output an “Is Inside” boolean or whatnot, so that when the Position input is not in the field we can do something other than just go black. Right now I’m using InputObject and PropertyQuery, but if I’ve moved the field I’ve got to transform the position too; I guess it’s not terrible, and probably bloppable. I’m just wondering what we should expect when SIM Embers don’t have the same bounds when they read from each other.
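Roughly what I’m after, as a Python-ish sketch (the helper names and bounds parameters here are hypothetical placeholders, not actual Ember/Magma operators):

```python
def sample_with_inside_test(pos, field_min, field_max, sample_field, fallback):
    """Hypothetical 'Is Inside' test for an InputField: if the lookup Position
    falls outside the field's bounds, return a fallback value instead of the
    black/zero that an out-of-bounds lookup would produce."""
    inside = all(lo <= p <= hi for p, lo, hi in zip(pos, field_min, field_max))
    return sample_field(pos) if inside else fallback
```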

Oh waitaminit…

Shouldn’t color (and everything else) be density weighted? So if you are advecting and blending 2 voxels together, you shouldn’t be blending the raw colors, but the colors weighted by the density, right?

I’m finding it difficult to visualize the problem. Can you post a simple animation showing where the undefined values are coming into play?

That would make the most sense in a situation like this I think, but once again I don’t really get the setup. What exactly is being blended?

Obviously an apple isn’t going to make the best fluid, but this shows the idea… When the color is defined in the smaller SIM Ember, and that’s the only place the larger SIM Ember gets its color from, then you’ll be blending in black. But the real problem is that all the cells outside the smaller SIM Ember have a density of 0 anyway, so I was thinking the color should be blended in with density weighting. So yes, the red would blend with black, but the black would be multiplied by 0, so it would have no effect.
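Something like this is what I have in mind (just a Python sketch of the math, nothing Ember-specific):

```python
def blend_voxels(color_a, density_a, color_b, density_b):
    """Density-weighted blend of two voxels' colors: an empty voxel
    (density 0) contributes nothing, no matter what color it holds."""
    total = density_a + density_b
    if total == 0.0:
        return (0.0, 0.0, 0.0)  # nothing defined here anyway
    return tuple((ca * density_a + cb * density_b) / total
                 for ca, cb in zip(color_a, color_b))

# A red apple voxel blended with an empty black voxel stays red:
print(blend_voxels((1.0, 0.0, 0.0), 1.0, (0.0, 0.0, 0.0), 0.0))  # -> (1.0, 0.0, 0.0)
```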

colorblending.mp4 (137 KB)

Of course, I’m muddying up the concept of density, right? I mean, with Krakatoa, we treat density as alpha, but with Ember, density could define mass or alpha, so it might be an issue of mismatching concepts?

I think I understand the issue now. As the color is advected, the linear interpolation that is reconstructing the discretized grid is bringing in black colors around the edges. I’m not sure if the Density can be used to control that (it might, I think I get the intuition behind it). I think I understand that you would want the lerp reconstruction to avoid using data that is undefined (i.e. Density == 0) and re-weight the other defined values appropriately.
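If I’m following, the re-weighting would look something like this (a sketch of the 1D case only; the real trilinear reconstruction just has eight weights instead of two):

```python
def density_weighted_lerp(value0, density0, value1, density1, t):
    """Lerp between two grid samples, but multiply each lerp weight by that
    sample's Density and renormalize, so undefined samples (Density == 0)
    don't pull the result toward black."""
    w0 = (1.0 - t) * density0
    w1 = t * density1
    total = w0 + w1
    if total == 0.0:
        return 0.0  # neither sample is defined here
    return (value0 * w0 + value1 * w1) / total

# Halfway between a defined sample (density 1) and an empty sample (density 0):
print(density_weighted_lerp(1.0, 1.0, 0.0, 0.0, 0.5))  # -> 1.0, not 0.5
```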

When you render the particles in Krakatoa, are you seeing black edges? I would expect them to not be that big of a deal since the same lerping errors would make the Density quite low in these same regions and therefore practically invisible.

Another option you could try in this case is to advect the original positions instead of the quantities. For example, you could advect a channel initialized like InputChannel:Position -> Output:MyChannelName, and then in your PRT Ember you could use Output:MyChannelName as the position that is fed into the InputField that is sampling your high-res apple. This would prevent the lerp from blending with undefined regions, because every part of the field would have a well-defined value. This technique fails when you have some sort of animated addition of Density, like a smoke-emitting object.
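Conceptually it amounts to something like this (a rough sketch; the sampler callables are hypothetical stand-ins, not the actual Ember API):

```python
def advect_lookup_positions(cell_centers, sample_lookup, sample_velocity, dt):
    """One semi-Lagrangian step on a position-valued channel (the
    Output:MyChannelName idea). 'sample_lookup' and 'sample_velocity' are
    assumed helpers that interpolate a channel at a world position."""
    new_lookup = []
    for x in cell_centers:
        back = [xi - dt * vi for xi, vi in zip(x, sample_velocity(x))]  # trace backwards along the flow
        new_lookup.append(sample_lookup(back))
    return new_lookup

def shade_particle(particle_pos, sample_lookup, sample_apple_color):
    """At PRT Ember time, feed the advected original position into the
    high-res apple sampler, so the lerp never mixes in undefined regions."""
    return sample_apple_color(sample_lookup(particle_pos))
```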

I just don’t know if we need Density to be doing anything else in the simulation. Does it affect how much a force is converted to velocity?

They render as darker red. I suppose the density is being lerped the same way, so the blackest particles are the faintest, but the ones that are 80% original red and 20% black still end up darker while only being 20% faded off.

Hmmm… I like that. I’ll give it a go.

Density doesn’t mean anything at all to the Sim Ember object, though your idea about forces has some merit in certain situations. Only the PRT Ember object relies on Density to determine where to seed particles.

EDIT:

I guess this means we need to look into ways of preventing that blurring around the edges. In this case, you could always divide the Color by the Density value (where it’s not 0, anyway), since we know the density should be 1 and the only reason it isn’t is due to reconstruction errors.
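i.e. something along these lines (sketch only; the epsilon cutoff is arbitrary):

```python
def unsmear_color(color, density, eps=1e-6):
    """Divide Color by Density (where it isn't ~0) to undo reconstruction blur.
    Assumes the 'true' Density inside the object is 1, so anything less is
    purely lerp falloff; 'eps' just guards against dividing by near-zero."""
    if density <= eps:
        return color  # leave undefined regions alone
    return tuple(c / density for c in color)
```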

The lookup point advection idea worked pretty well. It’s a lot slower since it’s running per-point, not per-cell, and for some things like normals, it won’t work, but for color it is fine.

EDIT: Normals blend toward zero too. To an extent you can renormalize and get it fixed, but as the normals get shorter, you run into precision problems.

EDIT: Renormalizing in the simulation step helps.
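For reference, the renormalization I’m doing amounts to something like this (a sketch; the minimum-length cutoff is just a guess):

```python
import math

def renormalize_normal(n, min_len=1e-4):
    """Renormalize an advected normal each simulation step. Below 'min_len'
    the direction is mostly noise, so returning zero (or some fallback) beats
    amplifying the precision errors."""
    length = math.sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2])
    if length < min_len:
        return (0.0, 0.0, 0.0)
    return (n[0] / length, n[1] / length, n[2] / length)
```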

Well, it’s not just at the edges. The interior blurs too. I just know that we don’t have a lot of options there, whereas the stuff on the outside looks like it could be managed.

So what determines in a SIM Ember what is compressible/incompressible? Like if you wanted to make a SIM Ember with a mix of water and air, how do you tell which is which? Is there a concept of “empty”?

EDIT: Actually, which channels affect the simulation as opposed to merely getting advected? Like, do Pressure and Temperature mean anything to the Ember solvers, or are they just dumb channels that get advected?

The current solver (FFT Solver) only modifies the Velocity field to be divergence free. We expect to provide more sophisticated solvers that utilize other field values when creating a Velocity field, but those are not expected in the immediate future. Candidates for other solvers include:

- Occlusion geometry
- Density variation
- Temperature

Please suggest more that come to mind.
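For reference, the divergence-free projection the FFT solver performs is conceptually along these lines (a numpy sketch on a periodic grid, not the actual implementation):

```python
import numpy as np

def project_divergence_free(vx, vy, vz):
    """Remove the divergent part of a periodic velocity field in Fourier space:
    v_hat -= k * (k . v_hat) / |k|^2, leaving only the solenoidal component."""
    shape = vx.shape
    kx, ky, kz = np.meshgrid(
        np.fft.fftfreq(shape[0]),
        np.fft.fftfreq(shape[1]),
        np.fft.fftfreq(shape[2]),
        indexing="ij",
    )
    vx_h, vy_h, vz_h = np.fft.fftn(vx), np.fft.fftn(vy), np.fft.fftn(vz)
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                       # avoid dividing by zero at the DC term
    div = (kx * vx_h + ky * vy_h + kz * vz_h) / k2
    div[0, 0, 0] = 0.0                      # the mean flow has no divergence to remove
    vx_h -= kx * div
    vy_h -= ky * div
    vz_h -= kz * div
    return (np.fft.ifftn(vx_h).real,
            np.fft.ifftn(vy_h).real,
            np.fft.ifftn(vz_h).real)
```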

Hmmm… I wasn’t even thinking of a HD solver, I was thinking we just needed something HI that would just do the filtering. But I guess it’s all the grid filtering stuff that isn’t there yet that I’m thinking of.