AppleScript, like most programming languages, does not do math with exact real numbers. It does its math in binary 'double precision' floating point. This representation has a limited precision of 53 binary digits. Your calculations are running into that limit and thus producing unexpected values.

Although a number like 1e23 has a compact representation in decimal notation, when expressed in binary, the raw form is 10101001011010000001011000111111000010100101011110110100000000000000000000000. If we put this in a kind of binary exponential notation, we can write it as 1.01010010110100000010110001111110000101001010111101101 × 10^1001100 (all numbers are binary, so the base is decimal 2, not decimal 10). This is closer to the number that the computer actually works with. But the problem is that the significand of this number requires 54 bits, and the computer only has room for 53 bits. So, you lose precision. For 1e23, we only lose a single bit. 1e24, 1e25, 1e26 lose 3, 6, and 8 bits, respectively. Eventually this loss of precision causes an error from what you expect.

Here is a simplified version of part of your calculation (the repeated multiplication by 10), with some diagnostic code added:

```
set a to 1
set deltas to {}
repeat with i from 1 to 26
set a to a * 10
tell a - 10 ^ i to if it is not 0 then set end of deltas to {i, it}
end repeat
set x to a / 1.0E+25
{x, round x rounding down, deltas} --> {10.0, 9, {{25, -2.147483648E+9}, {26, -1.7179869184E+10}}}
```

We can see that by the time we get to 1e25, there is a significant delta between the value computed via repeated multiplication and the one from direct exponentiation (as if even that could be trusted, since we already know that we are operating outside the bounds of AppleScript's floating-point precision). This difference is what causes what looks like 10.0 to be rounded down: the value is actually 10 - 1.77635683940025E-15, not 10.0.

Here is a variant in C (where printf allows for finer control over the output of the numbers):

```
#include <stdio.h>
#include <math.h>

int main(int ac, char *av[]) {
    double a = 1;
    double d = 1E25;
    for (int i = 1; i <= 26; i++) {
        a *= 10;
        double delta = a - pow(10, i);
        printf("%2d: % 35.30e\n", i, a);
        if (delta != 0) { printf("    % 35.30e delta\n", delta); }
    }
    printf("%35.30e\n", a / d);
}
```

The output looks like this:

```
1: 1.000000000000000000000000000000e+01
2: 1.000000000000000000000000000000e+02
3: 1.000000000000000000000000000000e+03
4: 1.000000000000000000000000000000e+04
5: 1.000000000000000000000000000000e+05
6: 1.000000000000000000000000000000e+06
7: 1.000000000000000000000000000000e+07
8: 1.000000000000000000000000000000e+08
9: 1.000000000000000000000000000000e+09
10: 1.000000000000000000000000000000e+10
11: 1.000000000000000000000000000000e+11
12: 1.000000000000000000000000000000e+12
13: 1.000000000000000000000000000000e+13
14: 1.000000000000000000000000000000e+14
15: 1.000000000000000000000000000000e+15
16: 1.000000000000000000000000000000e+16
17: 1.000000000000000000000000000000e+17
18: 1.000000000000000000000000000000e+18
19: 1.000000000000000000000000000000e+19
20: 1.000000000000000000000000000000e+20
21: 1.000000000000000000000000000000e+21
22: 1.000000000000000000000000000000e+22
23: 9.999999999999999161139200000000e+22
24: 9.999999999999999832227840000000e+23
25: 9.999999999999998758486016000000e+24
-2.147483648000000000000000000000e+09 delta
26: 9.999999999999998758486016000000e+25
-1.717986918400000000000000000000e+10 delta
9.999999999999998223643160599750e+00
```

Here we can see the error present even in the value for 1e23. We also see the same error deltas that AppleScript was showing.

In the end, computers can do (some) exact calculations, but such operations are usually much slower than the floating point built into the hardware. When you want to operate on large integers, the keyword you want is 'bignum'; when you need fractions, you want 'bigrational' ('bigrat'); and if you need the implementation to use decimal digits internally (so that, e.g., dollars and cents can be represented exactly), try 'bigdecimal'. I am not aware of any implementations of these for AppleScript. Maybe someone knows of one, though.