Time calibration shows accuracy


I took the old Python timer out and compared runs with and without it against the ‘delay’ command. The difference between the results seemed to reflect the time taken by ‘do shell script’. Here’s the calibration, with the subtraction commented out:

run script (do shell script "python -c 'import time; print time.time()'") --dummy
set t1 to run script (do shell script "python -c 'import time; print time.time()'")
set t2 to run script (do shell script "python -c 'import time; print time.time()'")
set time_calib to t2 - t1
set t1 to run script (do shell script "python -c 'import time; print time.time()'")
delay 5
set t2 to run script (do shell script "python -c 'import time; print time.time()'")
set time_diff to t2 - t1 -- - time_calib

Am I wrong, or does it seem like ‘do shell script’ takes about 0.05 seconds to call Unix?

Edited: added the other zero.

Edited: now the difference is about 0.03.


I’m wondering if you can make a library script that’s faster than a do shell script.


Your code doesn’t show much, though, because delay is never going to be particularly accurate.

And loading Python is probably what’s taking the time in your calls. I just ran some quick timings in Script Geek, and your lines are taking about 0.022s on my Mac. But if I run something more basic, like:

(do shell script "tr a b")

the times are around 0.005s.

But if I use a lib, let’s say something simple:

tell script "CheckModifier Lib" to checkModifier()

I get about 0.154s the first time, as Foundation is loaded; from then on the times register as 0.000s, meaning less than 0.0005s.

If you want a timing lib, you might use this:

use framework "Foundation"

on testIt()
	return current application's NSDate's timeIntervalSinceReferenceDate()
end testIt

Store the values and subtract them. Just remember that the very first run will be slow.
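The same store-and-subtract pattern, sketched in Python for illustration (time.perf_counter is a monotonic, high-resolution clock, playing the same role as NSDate's timeIntervalSinceReferenceDate here; the sleep just stands in for the code being timed):

```python
import time

# Store a timestamp before and after the work, then subtract the two.
start = time.perf_counter()
time.sleep(0.1)  # stand-in for the code being timed
elapsed = time.perf_counter() - start

print(f"elapsed: {elapsed:.4f}s")
```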

So you’re saying that I can’t rely on the timer I used. But what I was saying is that when you eliminate the timer, the times for the delay average about 0.03 secs above what you get when you uncomment the timer. That shows that a call to ‘do shell script’ takes about 0.05 secs. I haven’t timed that yet though, I think :slight_smile:

First I would like to say that timing code on a machine in seconds is like driving two different routes with two different cars, one route with a fast car and the other with a slower car, and concluding that the route the fast car took is the more efficient one. My point is that timing should only be used during the development phase (code optimization) or for benchmarking your machine. But benchmarking your machine isn’t entirely fair either, because, for example, the upcoming Intel processor has more built-in video encoders/decoders, which makes video encoding a lot more efficient. A comparison based on video encoding/decoding then turns into hardware vs. software rather than an actual comparison of the performance of two machines. During development, timing is mostly used for longer processes, or when many routines must run each second. And to make the timing precise, you put the whole process in a loop (thousands to millions of iterations) and take the average time.
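That loop-and-average approach can be sketched in Python for illustration (the timeit module runs a snippet many times and reports the total, so dividing by the run count gives the per-call average; the snippet and run count here are arbitrary examples):

```python
import timeit

# Run a trivial snippet a large number of times and average,
# so timer resolution and per-call noise wash out.
runs = 1_000_000
total = timeit.timeit("x * x", setup="x = 3.0", number=runs)
per_call = total / runs

print(f"average per call: {per_call:.9f}s")
```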

The timer in C is the most efficient, but even it has a finite resolution. The timing itself, the conversions, where the executed code is stored, the priority of execution, etc., all define how precise my timing is, or how big my overhead is. Putting a thread to sleep (delay) and waking it at a certain time has its own precision as well. So my point is: even with the most precise timers, when I delay my software for 5 seconds it’s never going to be exactly 5 seconds.

This is going to be a long one about the process in AppleScript, so be prepared. It’s not only the do shell script itself that takes time: firing the Apple event, event handling by the Event Manager, sending the event back into the process, the event listener inside the process calling the code of the loaded scripting addition, the scripting addition opening a shell (including all the Bash initialization), the shell interpreting your command, Bash calling the program loader (in the kernel) to load the Python executable with the given arguments, Python evaluating the code, Python loading the time module, and Python executing the time command all take time as well. Luckily there is a lot of caching involved in that process, so we can run the command once first, before the actual timing starts, to get a more precise timer. With that method our precision depends on the time difference between executions, which is more precise than the time the code actually takes to execute. On my machine the precision increased from 0.05s to 0.000001s, with occasional exceptions of 0.01s (but that could be the delay command).

Here’s my code:

--We're using a scripting addition command so we need to load the scripting addition before continuing
-- Also this caches the load time of the python process for the next run
do shell script "python -c 'import time; print time.time()'"

-- The command has run before, so the scripting addition is loaded and everything has been cached properly.
set t1 to run script (do shell script "python -c 'import time; print time.time()'")
set t2 to run script (do shell script "python -c 'import time; print time.time()'")
delay 1
set t3 to run script (do shell script "python -c 'import time; print time.time()'")

set time_diff to t3 - t2 - (t2 - t1)
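The same calibration idea can be mirrored in Python, for illustration: two back-to-back timestamps estimate the per-call overhead of one timing call, and that overhead is then subtracted from the measured delay. The variable names and the 1-second sleep just mirror the AppleScript above; this is a sketch, not a replacement for the original:

```python
import time

t1 = time.time()
t2 = time.time()   # t2 - t1 estimates the overhead of one timing call
time.sleep(1)      # stands in for AppleScript's `delay 1`
t3 = time.time()

# Subtract the measured overhead from the measured delay.
time_diff = (t3 - t2) - (t2 - t1)
print(f"time_diff: {time_diff:.4f}s")
```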

Hi DJ,

That does seem to work well and it looks nicer also. :slight_smile: