Hello.
You are right: the overhead should only be subtracted in all calls but the first.
But on my machine that added overhead is only 31 milliseconds. I have a 3 GHz CPU, which means that an empty repeat loop, executed 500 times in pure assembly, should take about one third of a millionth of a second. So if there is very little to be timed, the overhead subtraction, meant to add accuracy, will lead to grossly inaccurate results, since the thing being timed takes less time than the getMillisec handler itself.
My basic idea was that the time it takes to take the time should be regarded as zero milliseconds.
The individual timing results may then be fictive by a constant amount, but the times between two calls, relative to each other, should be fairly accurate.
The snippet is rewritten below and should work fairly accurately. One should just remember that whatever produces a negative result probably isn't worth timing.
on getMillisec()
global _overhead, _getmillicalls
set res to (do shell script "/usr/local/opt/timetools -ums") as number
-- change the path to wherever you have installed timetools; the coercion keeps res numeric on every call
if _getmillicalls > 0 then
set res to res - _overhead
else
set _getmillicalls to 1 as integer
end if
return res
end getMillisec
script timeTools
on reinitate() -- makes getMillisec not subtract the overhead on the first call after this
set my _getmillicalls to 0
end reinitate
on calibrate()
global _overhead, _getmillicalls
set _overhead to 0 as integer
set _getmillicalls to 0 as integer
getMillisec() -- loads things into memory for later; this first call takes longer
set _getmillicalls to 0
set a to getMillisec()
set _overhead to (getMillisec() - a)
set _getmillicalls to 0 -- so getMillisec won't subtract the overhead on the first call
end calibrate
end script
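To make the "time it takes to take the time counts as zero" idea concrete, here is a small check; a rough sketch, assuming the handler and the script object above are compiled together in one script:
tell timeTools to calibrate()
set t1 to getMillisec()
set t2 to getMillisec() -- the overhead is subtracted on this second call
return t2 - t1 -- should come out close to zero after calibration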
My version of your script from above would now be:
-- set timer to (load script file "path:to:above:script.scpt") -- alternative, if the script object is saved in its own file
set timer to timeTools
tell timer
calibrate()
set t to getMillisec()
end tell
delay 1 -- Code to be timed! {1.003}
tell timer to return (getMillisec() - t) / 1000 -- milliseconds to seconds
script timeTools
on getMillisec()
global _overhead, _getmillicalls
set res to (do shell script "/usr/local/opt/timetools -ums") as number
-- change the path to wherever you have installed timetools; the coercion keeps res numeric on every call
if _getmillicalls > 0 then
set res to res - (_overhead)
else
set _getmillicalls to 1 as integer
end if
return res
end getMillisec
on reinitate() -- makes getMillisec not subtract the overhead on the first call after this
set my _getmillicalls to 0
end reinitate
on calibrate()
global _overhead, _getmillicalls
set _overhead to 0 as integer
set _getmillicalls to 0 as integer
getMillisec() -- loads things into memory for later; this first call takes longer
set _getmillicalls to 0
set a to getMillisec()
set _overhead to (getMillisec() - a)
set _getmillicalls to 0 -- so getMillisec won't subtract the overhead on the first call
end calibrate
end script
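And to show what happens when there is very little to be timed, a sketch along the same lines (assuming the script object above), timing the empty 500-iteration repeat loop from the estimate at the top:
set timer to timeTools
tell timer
calibrate() -- reinitate() is enough if _overhead was already calibrated earlier in the run
set t to getMillisec()
end tell
repeat 500 times -- the empty loop; far below the resolution of this timer
end repeat
tell timer to set loopTime to getMillisec() - t
return loopTime -- very small and quite possibly negative, so not worth timing this way; divide by 1000 for seconds as above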
My preferred version gives 1.005 as the result when timing before and after the delay 1 command above, and 1.003 when referenced from a script object the way you like it.
I find those numbers good; it would have been interesting to see your original results for the delay 1.
Best Regards
McUsr