The PRT spec allows for uint8 and other data types not allowed in the Krakatoa GUI. Any chance we could save color, density, etc. as those types? Considering how Krakatoa nicely blends millions (or billions) of points together, having 16 or 32 bit precision isn’t necessary for color or density (and possibly not even for normal, depending on your setup).
Also, the UI doesn’t seem to indicate if the data is int or float.
> The PRT spec allows for uint8 and other data types not allowed in the Krakatoa GUI. Any chance we could save color, density, etc. as those types? Considering how Krakatoa nicely blends millions (or billions) of points together, having 16 or 32 bit precision isn't necessary for color or density (and possibly not even for normal, depending on your setup).
>
> Also, the UI doesn't seem to indicate if the data is int or float.
>
> - Chad
The internal memory channels are always float, and always either 16 or 32 bit. They have to be.
We could enable saving 8-bit int colors to reduce disk space in PRTs, but the memory channels will always be float16 or float32.

So if you want more types for PRT saving, we can look into adding those. Right now, every channel has a fixed type and arity; only the depth can be changed. So MXSInteger is always Int, but could be 16, 32, or 64 bit. I guess it would have to stay that way, and I would expect that setting a channel to 8 bits would imply int8. I don't want to overcomplicate the UI with more buttons for the types...
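To make the on-disk vs. in-memory split concrete, here's a minimal sketch (Python with numpy, hypothetical helper name, not the actual Krakatoa loader) of a uint8 color channel being expanded into a float32 memory channel at load time:

```python
import numpy as np

def load_uint8_color_channel(raw_bytes, particle_count):
    """Hypothetical loader step: a uint8[3] color channel stored on disk
    is expanded into the float32[3] channel kept in memory."""
    # On disk: 3 bytes per particle, integer values 0..255.
    quantized = np.frombuffer(raw_bytes, dtype=np.uint8).reshape(particle_count, 3)
    # In memory: always float (here float32), values 0.0..1.0.
    return quantized.astype(np.float32) / 255.0

# Two particles: pure red and mid grey.
raw = bytes([255, 0, 0, 128, 128, 128])
print(load_uint8_color_channel(raw, 2))
```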
In our current workflow, disk space is at less of a premium than RAM, so if the uint8 data got bumped to float16 by the renderer, then it wouldn’t really help us much. We’re trying to cram more points into a given memory footprint so that we get better color, density, and spatial averaging.
I/O bottlenecks are killing us, but I'm not sure there would be any benefit to loading a smaller PRT file if you have to convert it as you load it anyway. Something we can try is comparing a uint8 PRT with a float16 PRT to see what the hit is.
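For a rough sense of scale, here's a back-of-the-envelope comparison of per-particle sizes for an assumed Position/Color/Density/Normal layout (the actual numbers depend on which channels you write and on the file's compression):

```python
# Rough per-particle byte counts for a hypothetical channel layout
# (Position, Color, Density, Normal); actual savings will differ.
layouts = {
    "all float32":                          3*4 + 3*4 + 1*4 + 3*4,
    "float16 except Position":              3*4 + 3*2 + 1*2 + 3*2,
    "uint8 Color/Density, float16 Normal":  3*4 + 3*1 + 1*1 + 3*2,
}
for name, per_particle in layouts.items():
    gib = per_particle * 100_000_000 / 2**30  # footprint for 100 million points
    print(f"{name}: {per_particle} bytes/particle, {gib:.1f} GiB per 100M points")
```

Even in the best case the raw data only shrinks by roughly half, so whether that turns into a faster load depends on whether the read or the decode is the actual bottleneck.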
The tricky thing about this is that the PRT channel creation and manipulation code is written in a very generic manner, to keep the functionality as general and flexible as possible. In particular, from the implementation's point of view a color, a position, and a vector all have the same underlying primitive type, float32[3]. A float16[3] can be automatically converted to/from float32[3] behind the scenes, so this works transparently.
In the case of using a uint8[3] channel for a color, there's a subjective change in interpretation, because the integer values in the range [0,255] now map to float values in the range [0.0,1.0], which may involve clamping. For a density, clamping to no more than 1.0 wouldn't be acceptable, so we would need a different kind of mapping; a logarithmic curve would likely be the ideal choice. Something like normals might even be compressible to 2 bytes using something like this description.
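Here's a rough sketch of the three mappings just described. The log curve parameters and the 2-byte normal packing (simple spherical angles here) are stand-ins picked for illustration, not necessarily what the linked description or Krakatoa would actually use:

```python
import numpy as np

def color_to_uint8(c):
    """Linear [0,1] -> [0,255]; anything outside [0,1] is clamped away."""
    return np.clip(np.round(np.asarray(c) * 255.0), 0, 255).astype(np.uint8)

def density_to_uint8(d, d_max=1000.0):
    """Logarithmic mapping so unclamped densities survive quantization.
    d_max (the largest representable density) is an arbitrary assumption."""
    return np.clip(np.round(255.0 * np.log1p(d) / np.log1p(d_max)), 0, 255).astype(np.uint8)

def uint8_to_density(q, d_max=1000.0):
    """Inverse of the log mapping above."""
    return np.expm1(np.asarray(q, dtype=np.float64) / 255.0 * np.log1p(d_max))

def normal_to_2bytes(n):
    """Pack a unit normal into two bytes via spherical angles; this is one
    possible 2-byte scheme, not necessarily the one linked above."""
    x, y, z = np.asarray(n, dtype=np.float64) / np.linalg.norm(n)
    theta = np.arccos(np.clip(z, -1.0, 1.0)) / np.pi    # 0..1
    phi = (np.arctan2(y, x) + np.pi) / (2.0 * np.pi)    # 0..1
    return np.array([round(theta * 255), round(phi * 255)], dtype=np.uint8)

q = density_to_uint8(5.0)
print(q, uint8_to_density(q))              # ~5.0 back, with quantization error
print(normal_to_2bytes([0.0, 0.0, 1.0]))
```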
So these kinds of quantizations will require different kinds of treatment, depending on the type of data stored in a channel, and we need to do a bit of design work to come up with an effective way of structuring it. It’s definitely a worthwhile idea, basically sacrificing some CPU time and quality in exchange for the ability to do more particles.
> In the case of using a uint8[3] channel for a color, there's a subjective change in interpretation, because the integer values in the range [0,255] now map to float values in the range [0.0,1.0], which may involve clamping.
This isn't as bad as it may sound. Most of the time, when we use float color >1 (or <0), it is for storing lighting information (as in a render or a lighting map) or for utilitarian purposes (like UVW mapping or displacement mapping). Very rarely do we describe the color of a surface (or a puff of smoke) in terms equivalent to "more green than you can see, but with a negative amount of red." Generally we keep things in the CIE XYZ space for that sort of thing. Just assume a color space for the conversion to float. In the 10% of cases where we actually DO want to describe something's color in unclamped terms, the float method is already there.
> For a density, clamping to no more than 1.0 wouldn't be acceptable, so we would need a different kind of mapping; a logarithmic curve would likely be the ideal choice.
The non-linear density function would take care of that. 0-255 could map to whatever you wanted it to.
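For example, the 0-255 codes wouldn't even have to follow an analytic curve; a per-file lookup table (purely hypothetical, not an existing PRT feature) would let each step mean whatever you want:

```python
import numpy as np

# Hypothetical per-file decode table: 256 densities chosen however you like,
# e.g. fine steps near zero and coarse steps in the thick regions.
decode_table = np.geomspace(1e-4, 500.0, num=256).astype(np.float32)

def decode_density(codes):
    """Look stored uint8 density codes up in the table."""
    return decode_table[np.asarray(codes, dtype=np.uint8)]

print(decode_density([0, 128, 255]))   # 1e-4, ~0.23, 500.0
```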
> Something like normals might even be compressible to 2 bytes using something like this description.
That's really cool. If you implemented that method, we would certainly use it.
> So these kinds of quantizations will require different kinds of treatment, depending on the type of data stored in a channel, and we need to do a bit of design work to come up with an effective way of structuring it. It's definitely a worthwhile idea, basically sacrificing some CPU time and quality in exchange for the ability to do more particles.
That's the key question. Will sacrificing a little accuracy on the quantization of an individual point reduce overall quality if it allows for a larger number of points to occupy the same memory footprint? I know that in our renders, nothing improves quality as much as simply having more points.