Math Error

Hi everyone. I had given AS a break for a while because I didn’t have the head for it, but I’m back now… anyway:

I’m trying to make a calculator that will show you as many digits as you like, and I have a problem with a piece of long-division script. If you give large numbers to y, it messes up.
[code]set x to 1
set y to 1.0E+25 -- result will equal x/y
set r to "" -- result
set na to 0 -- remainder number to carry
set l to 1000 -- character limit
set p to 0 -- repeat number
repeat
	set p to p + 1
	try
		set n to item p of (characters of (x as string))
	on error
		set n to 0
	end try
	if n as string is not "." then
		set n to (n as number) + na * 10
		set r to r & (round n / y rounding down) as string
		set na to n - (round n / y rounding down) * y
		if na = 0 or l = p then exit repeat
	end if
	log p
	log r
	log n
	log na
end repeat
r[/code]

Try giving x 1 and y 1.0E+24 (you get 0000000000000000000000001), then try giving y 1.0E+25.
Yes, there is no decimal point at the moment, but nothing interprets r as a number afterwards, so that is irrelevant.
Any ideas why this happens?

Try it with others like x=1, y=7 or even 777…

You might want to change l to about 50

AppleScript, like most programming languages, does not do math in pure numbers. It does its math in binary ‘double precision’ floating point. This representation has a limited precision of 53 binary digits. Your calculations are running into that limit and thus producing unexpected values.

Although a number like 1e23 has a compact representation in decimal notation, when expressed in binary its raw form is 10101001011010000001011000111111000010100101011110110100000000000000000000000. If we put this into a kind of binary exponential notation, we can write it as 1.01010010110100000010110001111110000101001010111101101 × 10^1001100 (all the numbers here are binary, so the base is decimal 2, not decimal 10). This is closer to the number the computer actually works with. The problem is that the significand of this number requires 54 bits, while the computer only has room for 53. So you lose precision. For 1e23 we lose only a single bit; 1e24, 1e25, and 1e26 lose 3, 6, and 8 bits, respectively. Eventually this loss of precision produces an error from what you expect.
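You can see the same limit from Python (not AppleScript, but its floats are the same IEEE 754 doubles, and its integers are exact, which makes the comparison easy):

```python
# Python ints are exact; floats are 64-bit doubles with a 53-bit significand.
for e in range(22, 27):
    exact = 10 ** e
    rounded = int(float(exact))  # the value after squeezing into a double
    # 10**22 survives intact; 10**23 and up get rounded.
    print(e, rounded - exact)
```

The first nonzero difference appears at 1e23, exactly where the post says a bit is lost.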

Here is a simplified version of part of your calculation (the repeated multiplication by 10), with some diagnostic code added:

set a to 1
set deltas to {}
repeat with i from 1 to 26
    set a to a * 10
    tell a - 10 ^ i to if it is not 0 then set end of deltas to {i, it}
end repeat

set x to a / 1.0E+25
{x, round x rounding down, deltas} --> {10.0, 9, {{25, -2.147483648E+9}, {26, -1.7179869184E+10}}}

We can see that by the time we get to 1e25, there is a significant delta between the value computed via repeated multiplication and the one from direct exponentiation (as if even that could be trusted, since we already know we are operating outside the bounds of AppleScript’s floating-point precision). This difference is what causes a value that looks like 10.0 to be rounded down to 9: it is actually 10 - 1.77635683940025E-15, not 10.0.
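The same effect is easy to reproduce in Python, whose default float display prints enough digits to expose the discrepancy that AppleScript’s display hides:

```python
import math

a = 1.0
for _ in range(26):
    a *= 10  # repeated multiplication accumulates rounding error

x = a / 1e25
print(x)              # 9.999999999999998, not 10.0
print(math.floor(x))  # 9
```

So the "10.0" being rounded down to 9 is not a rounding bug; the value genuinely is just under 10.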

Here is a variant in C (where printf allows finer control over how the numbers are printed):[code]#include <stdio.h>
#include <math.h>

int main(int ac, char *av[]) {
	double a = 1;
	double d = 1E25;
	for (int i = 1; i <= 26; i++) {
		a *= 10;
		double delta = a - pow(10, i);
		printf("%2d: % 35.30e\n", i, a);
		if (delta != 0) { printf("    % 35.30e delta\n", delta); }
	}
	printf("%35.30e\n", a / d);
}[/code]
The output looks like this:

[code] 1: 1.000000000000000000000000000000e+01
 2: 1.000000000000000000000000000000e+02
 3: 1.000000000000000000000000000000e+03
 4: 1.000000000000000000000000000000e+04
 5: 1.000000000000000000000000000000e+05
 6: 1.000000000000000000000000000000e+06
 7: 1.000000000000000000000000000000e+07
 8: 1.000000000000000000000000000000e+08
 9: 1.000000000000000000000000000000e+09
10: 1.000000000000000000000000000000e+10
11: 1.000000000000000000000000000000e+11
12: 1.000000000000000000000000000000e+12
13: 1.000000000000000000000000000000e+13
14: 1.000000000000000000000000000000e+14
15: 1.000000000000000000000000000000e+15
16: 1.000000000000000000000000000000e+16
17: 1.000000000000000000000000000000e+17
18: 1.000000000000000000000000000000e+18
19: 1.000000000000000000000000000000e+19
20: 1.000000000000000000000000000000e+20
21: 1.000000000000000000000000000000e+21
22: 1.000000000000000000000000000000e+22
23: 9.999999999999999161139200000000e+22
24: 9.999999999999999832227840000000e+23
25: 9.999999999999998758486016000000e+24
    -2.147483648000000000000000000000e+09 delta
26: 9.999999999999998758486016000000e+25
    -1.717986918400000000000000000000e+10 delta
9.999999999999998223643160599750e+00[/code]
Here we can see even the error present in the value for 1e23. We also see the same error deltas that AppleScript was showing.

In the end, computers can do (some) exact calculations, but such operations are usually much slower than the floating point built into the hardware. When you want to operate on large integers, the keyword you want is ‘bignum’; when you need fractions, you want ‘bigrational’ (‘bigrat’); and if you need the implementation to use decimal digits internally (so that, e.g., dollars and cents can be represented accurately), try ‘bigdecimal’. I am not aware of any implementations of these for AppleScript. Maybe someone knows of one, though.
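Python is one place to see these in action (again, not AppleScript): its built-in integers are bignums, and the standard-library `fractions` module provides exact ‘bigrational’ arithmetic:

```python
from fractions import Fraction

# Bignum: integer arithmetic is exact at any size.
print(10 ** 25 + 1 - 10 ** 25)  # 1 (a double would lose the +1 entirely)

# Bigrational: 1/7 is stored as an exact ratio, not a rounded decimal.
print(Fraction(1, 7) + Fraction(6, 7))  # 1
```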

To handle very large integers with complete accuracy, you could write “alphabetic arithmetic” routines:
basically, encoding grammar-school arithmetic rules in the scripting language. Like this:

[code]set oneNumber to "1217"
set twoNumber to "385"
my Plus(oneNumber, twoNumber) -- "1602"

on Plus(aNum, bNum)
	set carry to 0
	set {aNum, bNum} to my SameLength(aNum, bNum)
	set myPlus to ""
	repeat with digit from length of aNum to 1 by -1
		set {newSum, carry} to my singleDigitSum(text digit of aNum, text digit of bNum, carry)
		set myPlus to newSum & myPlus
	end repeat
	if carry = 1 then set myPlus to "1" & myPlus
	return myPlus
end Plus

on singleDigitSum(aDigit, bDigit, carry)
	set SDS to (aDigit as number) + (bDigit as number) + carry
	return {SDS mod 10 as text, (SDS > 9) as integer}
end singleDigitSum

on SameLength(a, b)
	repeat until (length of a) ≤ (length of b)
		set b to "0" & b
	end repeat
	repeat until (length of a) = (length of b)
		set a to "0" & a
	end repeat
	return {a, b}
end SameLength[/code]

The singleDigitSum routine uses AppleScript’s arithmetic functions.
This example uses strings and standard base 10 notation. If I had to do this, I’d use a different base.

Since AppleScript thinks that (10^15) = ((10^15) + 1) is false, but (10^16) = ((10^16) + 1) is true, this gives a limit on AppleScript’s arithmetic accuracy. So:

I’d use lists of numbers in base 10^7 (less than sqrt(10^15), to give “head-room” for the multiplication routine).

where the list {23, 896, 456} represents the number 23 * (10^7)^2 + 896 * (10^7)^1 + 456

(Recall that these grammar-school methods work independently of the base used.)
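As a sketch of the limb idea in Python (the `BASE` constant and `limb_add` helper are my own names for illustration, not from the post):

```python
BASE = 10 ** 7  # each list entry holds 7 decimal digits, leaving headroom

def limb_add(a, b):
    """Add two numbers stored as big-endian lists of base-10^7 limbs."""
    a, b = a[::-1], b[::-1]  # work little-endian, least significant limb first
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s % BASE)   # the grammar-school "write down" digit
        carry = s // BASE      # and the "carry" digit
    if carry:
        out.append(carry)
    return out[::-1]

# {23, 896, 456} represents 23 * (10^7)^2 + 896 * (10^7)^1 + 456
print(limb_add([23, 896, 456], [9999999]))  # [23, 897, 455]
```

Each limb stays well under 10^15, so AppleScript-style doubles (or any language’s native integers) can add and multiply them without losing digits.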

I did this in a different language. It’s incredibly tedious.

I wrote the operations in this order:
PLUS
MINUS (do you want to support negative numbers?)
EXPONENTIATION (given a and b, return a * 10^b)
MULTIPLICATION
DIVISION (I found that alphabetic division is fastest to do with repeated alphabetic subtraction rather than a binary search using alphabetic multiplication.)

I hope this helps.

A quicker option would be to use a language or framework that already provides a Decimal class (e.g. Python or Cocoa).
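In Python, for instance, the standard-library `decimal` module gives you exactly that: arbitrary-precision decimal arithmetic with a precision you choose:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # work with 50 significant digits

print(Decimal(1) / Decimal(10) ** 25)  # 1E-25, exact
print(Decimal(1) / Decimal(7))         # 0.142857142857... to 50 digits
```

The original 1/1.0E+25 case, which trips up binary doubles, is trivially exact in decimal.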

“^^^^ Yes! Yes! ^^^^^”, replied the once-burnt, now-shy optimist.

The right tool for the right job!