calibrated timer

Hi,

I was looking at an old script and wondered whether, if I just calibrated the timing overhead, the script would be simpler.


set t1 to (do shell script "python -c 'import time; print time.time()'") as real
set t2 to (do shell script "python -c 'import time; print time.time()'") as real
set time_calib to t2 - t1
set t1 to (do shell script "python -c 'import time; print time.time()'") as real
--
delay 5
--
set t2 to (do shell script "python -c 'import time; print time.time()'") as real
set time_diff to t2 - t1 - time_calib

Once I even got exactly 5.0 seconds as a result. Can this be made more consistent somehow?

Thanks,

Model: MacBook Pro
AppleScript: 2.2.3
Browser: Safari 536.26.17
Operating System: Mac OS X (10.8)

What do you mean? The time it takes to execute a script command, to invoke an interpreter that eventually invokes another converter? Don’t get me wrong, but a difference of 0.001 seconds is small enough for me. Also, the time difference comes out to less than 5 seconds because the second and third commands use some cached data and therefore execute faster. A more accurate version would start with one dummy call:

run script (do shell script "python -c 'import time; print time.time()'") --dummy
set t1 to run script (do shell script "python -c 'import time; print time.time()'")
set t2 to run script (do shell script "python -c 'import time; print time.time()'")
set time_calib to t2 - t1
set t1 to run script (do shell script "python -c 'import time; print time.time()'")
--
delay 5
--
set t2 to run script (do shell script "python -c 'import time; print time.time()'")
set time_diff to t2 - t1 - time_calib

That bit about the dummy was in the back of my mind. Kind of like warming it up, but I couldn’t decide how. Now to go back and read your post again.

Edited:

I was wondering why the first run was higher than the others, at times.

Thanks,

I think the extra time is coming from the imprecision of the delay command.


run script (do shell script "python -c 'import time; print time.time()'") --dummy
set t1 to run script (do shell script "python -c 'import time; print time.time()'")
set t2 to run script (do shell script "python -c 'import time; print time.time()'")
set time_calib to t2 - t1
set t1 to run script (do shell script "python -c 'import time; print time.time()'")
--
delay 10
--
set t2 to run script (do shell script "python -c 'import time; print time.time()'")
set time_diff to t2 - t1 - time_calib

I was thinking of making another calibration after the delay, somehow.
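Something like this, maybe (just a sketch of the idea, averaging the overhead measured before and after the delay; the variable names are only illustrative):

run script (do shell script "python -c 'import time; print time.time()'") --dummy
set t1 to run script (do shell script "python -c 'import time; print time.time()'")
set t2 to run script (do shell script "python -c 'import time; print time.time()'")
set calibBefore to t2 - t1
set t1 to run script (do shell script "python -c 'import time; print time.time()'")
--
delay 10
--
set t2 to run script (do shell script "python -c 'import time; print time.time()'")
set rawDiff to t2 - t1
-- second calibration, taken after the delay
set t1 to run script (do shell script "python -c 'import time; print time.time()'")
set t2 to run script (do shell script "python -c 'import time; print time.time()'")
set calibAfter to t2 - t1
set time_diff to rawDiff - ((calibBefore + calibAfter) / 2)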

I hope you’re not assuming any sort of precision for the delay command, because it specifically makes no guarantees.

Hi Shane,

I was hoping for precision from the delay command, although in the back of my mind I knew it was iffy. I don’t know what Apple used to create the delay, whether they used the Unix timer or the machine’s.

I can’t believe that computers were this fast.


run script (do shell script "python -c 'import time; print time.time()'") --dummy
set t1 to run script (do shell script "python -c 'import time; print time.time()'")
set t2 to run script (do shell script "python -c 'import time; print time.time()'")
set time_calib to t2 - t1
set t1 to run script (do shell script "python -c 'import time; print time.time()'")
--
set x to 0
repeat 1000000 times
	set x to x + 1
end repeat
--
set t2 to run script (do shell script "python -c 'import time; print time.time()'")
set time_diff to t2 - t1 - time_calib

It repeated a million times in less than half a second while doing a calculation.

(straying off topic a bit)

They used to use the Carbon delay, which was tick-based, and may well still do. But the docs say of the AS command:

delay does not make any guarantees about the actual length of the delay, and it cannot be more precise than 1/60th of a second.

But even if it were extremely precise, there are lots of other factors involved that would make the end result less so.

A hertz is the unit for the number of cycles per second, i.e. the frequency. So when you have, say, a processor that runs at a speed of 3 GHz, it executes 3 billion instructions per second.

It isn’t this simple when it comes to predictability, though, since there are many processes getting a share of the processing time, and if that weren’t enough, pages of memory might have to be fetched, as well as I/O that at least has to be checked to make sure it still works, so there is a lot of overhead.

As a general rule, for the finest UI interaction a tolerance of ±0.05 seconds is enough, since human beings can’t discern time differences below 0.1 seconds. (Normal human beings, that is.)

This kind of precision can be obtained with the sleep command, which does work with decimals, once things are calibrated. Some slack has to be allowed, since we are not talking about embedded systems here. :slight_smile:
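For example (a quick sketch; whether this is precise enough depends on your machine and load):

-- the shell's sleep accepts fractional seconds, unlike delay's 1/60 s granularity
do shell script "sleep 0.25"
-- note: spawning the shell adds its own overhead, which is what the calibration above is for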

For a computer, the calculation is actually easier than the repeat loop itself. As you may have noticed, AppleScript seems to have more difficulty incrementing a value than looping, which is weird (explained in the next paragraph).

I don’t think you can reason in processor instructions with AppleScript. Also, not every instruction takes one clock cycle, especially on CISC processors; it’s normal to assume about 2.5 cycles per instruction on x86 processors. That would mean an average of roughly 1.2 billion instructions per second at 3 GHz (which is still fast). And when you compare against processor power, you would also have to consider that the LOOP instruction is a macro of multiple instructions (JMP, INC or DEC, CMP, etc.) while ‘set x to x + 1’ maps to a single instruction. Therefore a loop is slower than an incrementation of an integer.

Back to the timer: Shane and McUsr mentioned the biggest problems, but even the most accurate sleeper isn’t accurate. I’ve tried nanosleep in C and it’s still not accurate enough. The reason sleep or delay functions aren’t precise is simply that they check at an interval, and that interval itself is the precision of the delay. In C the nanosleep function has a small interval and is therefore really fast. In AppleScript, accuracy is the last thing you can count on, especially because delay is a command and goes through events. Even when we move all the code into Python, it is still inaccurate, which shows that Python’s sleep function is only as accurate as its interval.

run script (do shell script "python -c '
import time 
t1 =  time.time()
time.sleep(1.5)
t2 = time.time()
print t2 - t1'")

This is about as good as you can get, I think.

Hello.

Really, the only interesting thing for me to time with AppleScript is how well something performs in comparison to something else, whether that is with a different size of data or another algorithm.

Nigel Garvey once made a script, which I believe he named lotsa, that iterates over the same instructions, like 30-40 times, to get accurate timings.

You can do this too with a simple approach: run your code, say, 100 times, take the current date before and after, subtract the first from the second, and divide by 100. There is also Chronos, which comes with Smile, and an OSAX for timing, if you worry that the timing code itself might introduce inaccuracies.
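A rough sketch of that divide-by-100 idea (the repeat body is just a placeholder, and the variable names are made up; current date only has whole-second resolution, which is why the repetitions are needed):

set startDate to current date
repeat 100 times
	-- the code being timed goes here
end repeat
set elapsedSeconds to (current date) - startDate -- whole seconds
set averagePerRun to elapsedSeconds / 100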

@DJBW: I didn’t take macro instructions like loops into account; I was thinking more of the primitive add, subtract and so on. The raw number of instructions per second is still relevant, but there is a lot more going on, like blocking I/O, context switching and page faults (ha ha, I forgot Apple events!), so it’s all guesswork anyway. I think we are on the same page there.

We’re on the same page and we’re both right. The Intel chip is superscalar and pipelined. The pipeline works like an assembly line: GM can say that a car is made every 5 minutes, because a car comes off the assembly line that fast. It’s a good selling point and it’s true, no argument there. But from the car’s perspective it takes 2 days to go from nothing to a complete car. It depends on how you count: you would probably say GM builds a car in 5 minutes, while I would say the car is built in 2 days, and we’re both right :smiley: