Hi everyone!

I have a script that detects when thenum reaches 0, but here’s a short form of it:

```
set thenum to 1
repeat
	set thenum to thenum - 0.05
	display dialog thenum
end repeat
```

It goes from 0.95 to 0.8999999999999999.

Any workarounds?

Thanks

Hi,

That’s a scripter error, not a math error.

From the dictionary: display dialog unicode text

So coerce the number to text yourself:

```
set thenum to 1
repeat
	set thenum to thenum - 0.05
	display dialog thenum as text
end repeat
```

Thanks for your very quick reply. I thought I had outsmarted AppleScript, but why does it go from 0,05 to -3,191891195797E-16?

I know it’s AppleScript Studio, but this works (although it gives an error at (window of theObject), but that has nothing to do with the zero issue):

```
on should close theObject
	set theAlpha to 1
	repeat
		set theAlpha to theAlpha - 0.05
		set alpha value of window (window of theObject) to theAlpha
		if theAlpha as text is equal to "-3,191891195797E-16" as text then
			close panel (window of theObject)
			set alpha value of window (window of theObject) to 1
			exit repeat
		end if
	end repeat
end should close
```

A workaround:

```
set thenum to 1 as real
repeat
	set thenum to (round ((thenum - 0.05) * 100)) / 100
	display dialog thenum as text
end repeat
```
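The snap-to-a-hundredths-grid idea can be checked outside AppleScript. Python floats are the same IEEE 754 64-bit doubles, so (as a rough analogue of the workaround above, not AppleScript):

```python
# Rough Python analogue of the rounding workaround. Snapping the value to
# the nearest hundredth after every step keeps it on an exact grid, so the
# countdown lands exactly on 0.0 instead of drifting past it.
x = 1.0
steps = 0
while x != 0.0:
    x = round((x - 0.05) * 100) / 100  # snap to the nearest hundredth
    steps += 1
print(steps, x)  # 20 steps, ending exactly at 0.0
```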

Edit: Why not this:

```
on should close theObject
	set theAlpha to 1 as real
	repeat
		set theAlpha to theAlpha - 0.05
		set alpha value of window (window of theObject) to theAlpha
		if theAlpha is 0.05 then
			close panel (window of theObject)
			set alpha value of window (window of theObject) to 1
			exit repeat
		end if
	end repeat
end should close
```

Thanks, it works!

Like many other programming languages, floating point in AppleScript is not actually decimal based. Practically all modern machines use the binary variants of IEEE 754, and programming languages either specify IEEE 754 floating point behavior or just use whatever the host machine’s hardware does. However AppleScript arrives there (by specification or by default), it appears to use some variation of IEEE 754 64-bit binary (“double precision”) floating point.

Because of its binary nature, decimal values like 0.05 cannot be exactly represented in an AppleScript floating point value, because 10 (the base of our normal number system) is not a power of 2 (the base of the binary floating point system used in most computers).

The unexpected values you saw were due to the limited precision of the floating point system and rounding errors introduced at each subtraction.

The workaround of converting the value to text and comparing it to a particular string may work, but it is brittle: it depends on the floating point implementation, the floating point rounding mode, and the international decimal separator. A simpler solution is a comparison like theAlpha ≤ 0. This triggers whether the value exactly hits zero or goes slightly negative due to rounding errors. Comparisons like this are often a good idea even if you are only dealing with integers: if a bug makes you accidentally overshoot your target value, an equality comparison can result in an infinite loop, while an inequality will terminate even if the value does not match exactly.
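You can see both the drift and the safe inequality test outside AppleScript too. Python floats are the same IEEE 754 doubles, so a quick sketch:

```python
# The loop condition is an inequality, so it terminates even though the
# running value never becomes exactly 0.0.
x = 1.0
steps = 0
while x > 0:
    x -= 0.05
    steps += 1
print(steps)   # 20 subtractions before the value dips below zero
print(x)       # about -3.19e-16: slightly negative, not zero
print(x == 0)  # False
```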

```
to getTempFile()
	POSIX file (POSIX path of (path to temporary items folder as alias) & "double.dat")
end getTempFile

to writeRealsToFile(rl, f)
	local r, fd, m, n
	set rl to rl as list
	repeat with r in rl
		set contents of r to r as real
	end repeat
	set fd to open for access f with write permission
	try
		repeat with r in rl
			write r to fd as real
		end repeat
		close access fd
	on error m number n
		try
			close access fd
		end try
		error m number n
	end try
end writeRealsToFile

to getRealsInHex(rl)
	set f to getTempFile()
	writeRealsToFile(rl, f)
	paragraphs of (do shell script "hexdump -v -e '8/1 \"%02X\" \"\\n\"' < " & POSIX path of f) -- hex dump 8 bytes per line (8 bytes = 64 bits = size of a double precision float)
end getRealsInHex

set values to {1}
repeat until item -1 of values ≤ 0
	set end of values to (item -1 of values) - 0.05
end repeat
set end of values to 0.05
getRealsInHex(values)
(*
{"3FF0000000000000",
"3FEE666666666666",
"3FECCCCCCCCCCCCC",
"3FEB333333333332",
"3FE9999999999998",
"3FE7FFFFFFFFFFFE",
"3FE6666666666664",
"3FE4CCCCCCCCCCCA",
"3FE3333333333330",
"3FE1999999999996",
"3FDFFFFFFFFFFFF9",
"3FDCCCCCCCCCCCC6",
"3FD9999999999993",
"3FD6666666666660",
"3FD333333333332D",
"3FCFFFFFFFFFFFF4",
"3FC999999999998E",
"3FC3333333333328",
"3FB9999999999983",
"3FA999999999996C", -- the last value greater than zero, not quite the same as "direct 0.05"
"BCB7000000000000", -- the first value <= zero
"3FA999999999999A" -- "direct 0.05"
}
*)
```

```
IEEE 754 64-bit "double float" for 0.05: 3FA999999999999A

raw  = hex 3FA999999999999A
bits = 0011111110101001100110011001100110011001100110011001100110011010

sign = 0
bits = 0

raw exponent = 1018
bits = 01111111010

significand = 2702159776422298
bits = 1001100110011001100110011001100110011001100110011010

Note the rounding error in the significand. It should have a pattern of
repeating 0011, but after the last full 0011 repetition, it ends with 010.
When it got to the end, there were only three bits left for the last
repetition, so it was rounded off.

exponent = raw_exponent - bias
         = 1018 - (2^(11-1) - 1)
         = 1018 - 1023
         = -5

value = (-1)^sign * (1 + significand * 2^-52) * 2^exponent
      = 1 * (1 + significand / 2^52) * 2^exponent
      = 2^exponent + significand * 2^exponent * 2^-52
      = 2^exponent + significand * 2^(-52 + exponent)
      = 2^-5 + significand * 2^-57
      = 1/32 + 2702159776422298 * 1/144115188075855872
      = 0.03125 + 0.01875000000000000277555756156289135105907917022705078125
      = 0.05000000000000000277555756156289135105907917022705078125

The last value > 0 found by starting with 1 and repeatedly subtracting 0.05:
3FA999999999996C

value = 1 * (1/32 + 2702159776422252 * 1/144115188075855872)
      = 0.0499999999999996835864379818303859792649745941162109375

Because of the repeated subtraction, the rounding error has progressed to
affect 7 bits instead of the 2 bits present in the original value for 0.05.
Both of these values display as "0.05", but they are distinct values, which
is why you get the following value when you subtract 0.05 once more.

The first value <= 0 found by starting with 1 and repeatedly subtracting 0.05:
BCB7000000000000

value = -1 * (1/4503599627370496 + 1970324836974592 * 1/20282409603651670423947251286016)
      = -0.00000000000000031918911957973250537179410457611083984375
```
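The hex values above can be cross-checked without AppleScript. Python uses the same IEEE 754 doubles, and `struct.pack('>d', …)` exposes the raw bytes while `Decimal(float)` shows the exact stored value:

```python
import struct
from decimal import Decimal

def hex64(x):
    # Big-endian hex of the IEEE 754 double, matching the dump above
    return struct.pack('>d', x).hex().upper()

print(hex64(0.05))    # 3FA999999999999A  ("direct 0.05")
print(Decimal(0.05))  # exact stored value:
# 0.05000000000000000277555756156289135105907917022705078125

x = 1.0
for _ in range(19):
    x -= 0.05
print(hex64(x))       # 3FA999999999996C  (last value > 0)
x -= 0.05
print(hex64(x))       # BCB7000000000000  (first value <= 0)
```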

```
set r to "9007199254740992" as real
r is equal to r + 1 --> true !!!
```

Edit History: Added script to demo r = r + 1 (different problem, but shows that floating point values are not normal numbers).
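The same limit shows up in Python (same 64-bit doubles): 9007199254740992 is 2^53, the point past which not every integer is representable.

```python
r = 9007199254740992.0  # 2**53
print(r == r + 1)       # True: r + 1 rounds back to r
print(r - 1 == r)       # False: integers below 2**53 are still exact
```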

Thanks for the info

```
set r to "9007199254740992" as real
r is equal to r + 1 --> true !!!
```

is quite impressive.

I’d think that Apple would use the same floating point in all applications. If you do the same in Calculator (repeatedly subtracting 0.05), the result is correct. Of course, Calculator is “smarter” than AppleScript, but I didn’t expect the difference.

So if I use theAlpha - 0.125, it’s correct because 0.125 is a power of 2, right?

About your workaround, I think I’ll go with this:

```
on should close theObject
	set theAlpha to 1 as real
	repeat
		set theAlpha to theAlpha - 0.05
		set alpha value of window (name of theObject) to theAlpha
		if theAlpha ≤ 0.05 then
			set visible of window (name of theObject) to false
			set alpha value of window (name of theObject) to 1.0
			exit repeat
		end if
	end repeat
end should close
```

theAlpha can’t be negative; it has to be a value between 0 and 1.

Do you think it’s “safe” on both Tiger and Leopard, and with both “.” and “,” between the “0” and the “05”?

Thanks

I think some calculator-type apps use custom software-based decimal floating point libraries instead of the binary floating point hardware. This matches how users expect the operations and numbers to behave, and it is fine if it is not super fast, since the use is all interactive (no one would use Calculator to plot fractals or do physics simulations). Although I do have a faint memory of a Software Update that fixed some kind of numeric bug in Calculator.

If by correct you mean that you will get to exactly 0 after 8 subtractions, then yes (the same holds for all 2^-n where 0 ≤ n ≤ 53; the smallest such value would take 9007199254740992 subtractions to get to zero, so I have not tested it). But still, it is probably best to avoid making tests that rely on such exactness.
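The 0.125 case is easy to confirm in Python (same doubles): since 0.125 = 2^-3 and every multiple of it is exactly representable, even an exact-equality test is safe here.

```python
# Counting down by 0.125: every intermediate value is an exact multiple of
# 2**-3, so the loop reaches exactly 0.0 and the equality test is safe.
x = 1.0
n = 0
while x != 0.0:
    x -= 0.125
    n += 1
print(n)  # 8: reaches exactly 0.0 after eight subtractions
```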

I do not have any experience using any fraction separator other than “.”. My impression is that AppleScript program code always uses “.”. The locale/international settings are only used when converting to or from text (reading input from the user, putting values into a string for display to the user, etc.). So as long as you do not convert to/from text (realValue as text/string/unicode text or stringValue as real) I think it will be safe from locale/international issues (the bits are the same, it is just how we write the numbers that varies).

OK, thanks. I asked because here in Belgium we write “0.05” as “0,05”, and maybe there would be a difference. But I’ll test it later on. I don’t have Dutch (what I normally speak) installed on my Mac, so I’ll have to search for my disk, install it, and then test it. But I guess it is the same.