What do you guys think about returning 0.0 or [0,0,0] instead of infinity when a divide by zero happens?
In the context of computer graphics this would seem to make more sense.
At the moment you have to logic-check your inputs before a divide or else you get infinities, and that adds a bunch of clutter to your node tree that seems unnecessary to me.
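Something like this is what I mean, as a rough C sketch (the function name safe_div is made up for illustration, not from any actual API):

```c
/* A minimal sketch of the proposed behavior: a divide that returns 0
 * instead of infinity when the divisor is zero. */
#include <stdio.h>

float safe_div(float a, float b)
{
    /* Return 0 when the divisor is exactly zero; otherwise divide normally. */
    return (b == 0.0f) ? 0.0f : a / b;
}

int main(void)
{
    printf("%f\n", safe_div(1.0f, 0.0f)); /* 0.000000 instead of inf */
    printf("%f\n", safe_div(1.0f, 2.0f)); /* 0.500000 */
    return 0;
}
```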
Seems like 0 is a rather big difference from infinity. Why 0?
Also, 0 happens to be really common, so it could be hard to filter for it after the divide.
Clamping the input divisor to a min could (and should) be an easy BLOP, though. Heck, even just using a Clamp operator as the only tool in the BLOP would work.
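For the record, something like this is all the clamp approach would need. This is a hedged sketch assuming a user-chosen epsilon; copysignf keeps the divisor's sign, so a negative divisor clamps toward -eps instead of jumping to +eps:

```c
/* Sketch of the clamp-the-divisor approach: push the divisor's
 * magnitude up to at least eps before dividing. */
#include <math.h>
#include <stdio.h>

float clamped_div(float a, float b, float eps)
{
    if (fabsf(b) < eps)
        b = copysignf(eps, b); /* preserve the divisor's sign */
    return a / b;
}

int main(void)
{
    printf("%f\n", clamped_div(1.0f, 0.0f, 1e-6f));  /* 1000000.0 */
    printf("%f\n", clamped_div(1.0f, -0.0f, 1e-6f)); /* -1000000.0 */
    return 0;
}
```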
I’d rather it return the largest number it could within the datatype.
So let's say we have 1/0.000000000000000000000001. The quotient is going to be quite large, even though the divisor is merely close to zero. So if the divisor WAS zero, I would want the quotient to become huge. So as close to infinite as possible without being infinite.
I still prefer the current system, of course, just saying that if I had to change it, it would be something huge, not tiny.
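In code terms, what I'm describing is a saturating divide. Here's a rough C sketch (sat_div is a hypothetical name; note it also maps 0/0 to FLT_MAX, where IEEE would give NaN):

```c
/* Sketch of a saturating divide: on a zero divisor, return the largest
 * finite float, with the sign the quotient would have had. */
#include <float.h>
#include <math.h>
#include <stdio.h>

float sat_div(float a, float b)
{
    if (b == 0.0f) {
        /* Result is negative only if exactly one operand is negative. */
        int neg = (signbit(a) != 0) != (signbit(b) != 0);
        return neg ? -FLT_MAX : FLT_MAX;
    }
    return a / b;
}

int main(void)
{
    printf("%g\n", sat_div(1.0f, 0.0f));  /* 3.40282e+38 (FLT_MAX) */
    printf("%g\n", sat_div(-1.0f, 0.0f)); /* -3.40282e+38 */
    return 0;
}
```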