Danilo, please try this code in VO, or any other language:
FUNCTION Start() AS INT
    LOCAL f1 AS REAL8
    LOCAL f2 AS REAL8
    SetFloatDelta(0)    // compare floats exactly, with zero tolerance
    f1 := 0.015
    f1 := f1 + f1 + f1  // should be 0.045
    f1 *= 10.0          // should be 0.45
    f2 := 0.45
    ? f1 == f2 , f1 < f2 , f1 > f2
    WAIT
RETURN 0
And the same in C#:
public class Program
{
    static void Main()
    {
        double f1;
        double f2;
        f1 = 0.015;
        f1 = f1 + f1 + f1;  // should be 0.045
        f1 *= 10.0;         // should be 0.45
        f2 = 0.45;
        System.Console.WriteLine(f1 == f2); // False
        System.Console.WriteLine(f1 < f2);  // True
        System.Console.WriteLine(f1 > f2);  // False
    }
}
In both samples you would expect f1 == f2 == 0.45, but both samples print FALSE for == and TRUE for <. This is because none of those values can be represented exactly in binary; there is some precision loss, and the more arithmetic you do with decimal numbers, the more error accumulates.
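When you only need two floats to be "close enough", the usual fix is to compare within a small tolerance instead of with == (this is the same idea behind VO's SetFloatDelta() when you pass it a nonzero delta). A minimal C# sketch; the 1e-9 tolerance is an arbitrary value chosen for illustration, pick one that suits your domain:

```csharp
using System;

public class EpsilonDemo
{
    static void Main()
    {
        double f1 = 0.015;
        f1 = f1 + f1 + f1;
        f1 *= 10.0;
        double f2 = 0.45;

        // Exact comparison fails because of the accumulated rounding error:
        Console.WriteLine(f1 == f2);                  // False

        // Comparing within a small tolerance succeeds:
        double delta = 1e-9; // arbitrary tolerance for this example
        Console.WriteLine(Math.Abs(f1 - f2) < delta); // True
    }
}
```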
When you store 0.015 in the f1 var, the value actually stored is something like 0.0149999999791 or similar, because there is no way to represent 0.015 exactly in the binary notation that the REAL8, REAL4, FLOAT etc. types use. Yes, when you inspect the value in the debugger or print it with a function you will usually see 0.015, but that is because the runtime function that formats the value is smart: it knows about the precision loss and rounds the stored value back to 0.015 before displaying it. After a number of calculations with float numbers, though, the small error accumulated in each step adds up, so the end result is slightly off, and that is why you sometimes see these strange values in the debugger and sometimes not.
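In C# you can peek behind this smart formatting: the "G17" format specifier forces enough digits to round-trip a double, so it exposes the value actually stored rather than the rounded display value. A small sketch of that:

```csharp
using System;
using System.Globalization;

public class PrecisionDemo
{
    static void Main()
    {
        double d = 0.015;

        // Default formatting rounds to a short, human-friendly string:
        Console.WriteLine(d.ToString(CultureInfo.InvariantCulture));   // 0.015

        // "G17" shows the digits of the binary value actually stored,
        // which is slightly below 0.015:
        Console.WriteLine(d.ToString("G17", CultureInfo.InvariantCulture));
    }
}
```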
For this reason .NET has the Decimal type, which does not store values in binary internally but uses a decimal (base-10) representation. This makes it much slower, but there is no precision loss for values like these: 0.015 is always exactly 0.015 with this type.
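For example, redoing the calculation above with decimal literals (the m suffix) gives an exact result, since every intermediate value here is representable in base 10:

```csharp
using System;

public class DecimalDemo
{
    static void Main()
    {
        decimal f1 = 0.015m;   // stored exactly in base 10
        f1 = f1 + f1 + f1;     // exactly 0.045
        f1 *= 10.0m;           // exactly 0.45
        decimal f2 = 0.45m;

        Console.WriteLine(f1 == f2); // True
    }
}
```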