Toby Ewig wrote:
> Hi all,
>
> Frank suggested using the CPU timer, and I agree that it's a much better benchmark than the clock timer (I keep discovering gems in gpc.pas!). But I'm
My experience was that real time is more accurate than CPU time. The resolution of CPU time is limited by the frequency of the timer interrupt (10 ms on i386 Linux), and there are random (and sometimes systematic) errors that may change the result by a few ticks. So to get good results one has to use fairly long run times. On many modern machines (notably Pentium or better) the real-time clock has much higher accuracy (on the order of microseconds). I have found that on an unloaded machine I get repeatable results even with sub-millisecond run times. In general, real time seems to be much more repeatable than CPU time.
> getting identical results from the two timers:
> <snip>
> When the computer is idle except for this program, I get:
>     Time for 100000000 randNorms: 171.03 seconds, 171 cpu seconds.
> When I'm checking email, opening files, etc., I get:
>     Time for 100000000 randNorms: 202.29 seconds, 202 cpu seconds.
> What am I doing wrong here - or is GetCPUTime not working?
That looks very strange. Other processes may increase your CPU time (if you have heavy cache thrashing), but to thrash the cache some CPU time has to be spent outside your process. I have seen systematic errors, but nothing of that magnitude...