For the Bilateral filter, I’m assuming that size is the spatial search radius and blend is the intensity weighting, and that filter selects the weighting function? Is that right? If so, what scale is size on? 1.0 = image width?
Would it be possible to get per-pixel control over either of those, so that we could map the distance weighting and the intensity weighting?
Also/or, could we define the intensity used for the weighting with a map, as opposed to deriving it from the image the filter is applied to? Meaning, could we override the intensity with a new map so that we could paint/mask out areas where we want to define things as “artifacts” vs “objects”? Similar to the CARs, except instead of defining the energy map, we would just feed it a new intensity map with the “artifacts” pre-blurred and the “objects” sharpened or contrast-stretched?
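Basically a joint/cross bilateral filter, where the range weight is computed from a separate guide image instead of the image being filtered. A rough Python sketch of what I mean (the names here are placeholders, not your actual controls):

```python
import numpy as np

def cross_bilateral_weight(guide_center, guide_sample, spatial_w, blend):
    """Range weight comes from the guide image, not the filtered image.

    Areas painted flat in the guide (artifacts pre-blurred) get averaged
    away, while high-contrast areas in the guide (objects) are preserved.
    """
    diff = guide_sample - guide_center
    range_w = np.exp(-(diff * diff) / (2.0 * blend * blend))
    return spatial_w * range_w
```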
The blur size is actually based on the scale used in the defocus blur. The artists preferred that the new tools work the same as the Fusion tools. I’ve been looking to change this, but it’s proving difficult to modify the interface once the tool is in production. The tools that have a kernel interface will get a version 2 release that addresses the size issue. I’ve been trying to keep the values as a percentage of image width, so they remain consistent in proxy mode or on a different-sized image.
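For example (just the idea, not the shipping code):

```python
def blur_radius_pixels(size_fraction, image_width):
    # Size is a fraction of image width, so the same value gives a
    # visually equivalent blur at proxy and at full resolution.
    return size_fraction * image_width

blur_radius_pixels(0.01, 1920)   # full res  -> 19.2 px
blur_radius_pixels(0.01, 960)    # 1/2 proxy ->  9.6 px, same relative blur
```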
The filter describes the weighting: the disc is evenly weighted across a circle, while the Gaussian falls off as the sample gets farther from the center.
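Roughly like this (a Python sketch; the sigma mapping is an assumption, not the exact tool code):

```python
import numpy as np

def spatial_weight(dx, dy, radius, mode="gaussian"):
    """Weight for a sample at offset (dx, dy) from the center pixel."""
    d = np.hypot(dx, dy)
    if mode == "disc":
        # Evenly weighted inside the circle, zero outside.
        return 1.0 if d <= radius else 0.0
    # Gaussian falloff: weight drops as the sample gets farther from center.
    sigma = radius / 3.0   # assumption: the radius covers roughly 3 sigma
    return float(np.exp(-(d * d) / (2.0 * sigma * sigma)))
```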
I’m not sure I understand how the intensity weighting would work. Do you want to control the color blend and blur size from an externally created mask?
The kernel size / float slider thing is a problem if the tool re-renders. A small size will make a kernel of 1x1, which is silly, and in many cases changing the size from .5 to .6 will not cause the kernel to change size, but the tool re-renders anyway, which leads the artist to think something is happening when it isn’t. If they have auto-proxy on, it might even LOOK like something is happening. Could you have the tool re-render only when the actual kernel size changes, not the slider value?
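Something like this, where the tool only invalidates when the derived integer size changes (a sketch, assuming the slider maps to a fraction of image width):

```python
last_kernel = None

def kernel_size(slider_value, image_width):
    """Integer kernel size derived from the float slider."""
    return max(1, int(round(slider_value * image_width)))

def needs_rerender(slider_value, image_width):
    """Re-render only when the actual kernel changes, not on every slider nudge."""
    global last_kernel
    k = kernel_size(slider_value, image_width)
    changed = k != last_kernel
    last_kernel = k
    return changed
```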
Keeping it relative to the image size means that proxies and such work, which is how the rest of Fusion works, but kernel-based tools in Fusion, like Rank and Custom Filter, don’t. And if you wanted to work on a cropped area of the image (because the tool is slow, and you’re often zoomed in anyway), a fixed kernel-size slider would be better because it relates back to the uncropped image. Could you have a “relative/absolute” checkbox? I would almost always use the absolute “kernel size in pixels”, but you would give everyone what they needed for proxy or cropped workflows, and if “relative” was the default, it would not break existing comps.
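Something like this for the mapping (names made up, obviously):

```python
def pixel_kernel_size(size, image_width, absolute=False):
    """Map the size control to pixels.

    relative: size is a fraction of image width, so proxies and resizes
    stay consistent.  absolute: size is already in pixels, which relates
    better to a cropped region of the full-res frame.
    """
    if absolute:
        return max(1, int(round(size)))
    return max(1, int(round(size * image_width)))
```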
Controlling the blur size with a mask would be awesome, but I’m thinking that might be hard. What do I know? Deriving the intensity mapping from an image different from the one the filtering is applied to sounds easier, but both would be really cool.
There’s a bug in the “scale with pixel aspect” thing, which I’ll make a nice sample for and post in a couple of minutes.
I take back the bit about the 1x1 being silly. While it does nothing and shouldn’t render at all, it does allow for 1x3 or other single-axis filtering. Which is cool.
Chad
edit: A 0x3 filter doesn’t render. I assume this is for speed’s sake, and because it’s nonsense. But having to set the X size to > 0 to get a blur that is only in Y is confusing.
I’ve added the absolute size option to the tool, added a check for blur sizes less than or equal to 1.0 instead of less than or equal to 0, and fixed the 0x3 blur so it will process as 1x3.
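In effect it behaves roughly like this (a simplified sketch, not the actual source):

```python
def effective_kernel(x_size, y_size):
    """Clamp each axis to at least 1 pixel, so a 0x3 request runs as 1x3."""
    return max(1, int(round(x_size))), max(1, int(round(y_size)))

def should_process(x_size, y_size):
    """Skip the pass when both axes are <= 1.0, since a 1x1 kernel is a no-op."""
    return x_size > 1.0 or y_size > 1.0
```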