From: "(see below)"
Newsgroups: comp.lang.ada
Subject: Re: Ada.Execution_Time
Date: Tue, 28 Dec 2010 16:27:18 +0000

At least part of this discussion is motivated by the abysmal facilities for
the measurement of elapsed time on some (all?) modern architectures.

My current Ada project is the emulation of the KDF9, a computer introduced
50 years ago.  It had a hardware clock register that was incremented by 1
every 32 logic clock cycles and could be read by a single instruction taking
4 logic clock cycles (the CPU ran on a 1 MHz logic clock).

Using this feature, the OS could keep track of the CPU time used by a process
to within 32 logic clock cycles per time slice (typically better than 1 part
in 1_000).  Summing many such slices gives a total with much better relative
error than that of the individual slices, of course.

Dmitri reports a modern computer with a timer having a resolution that is
thousands or millions of times worse than the CPU's logic clock.  Why has
this aspect of computer architecture degenerated so much, I wonder?  And why
have software people not made more of a push for improvements?

--
Bill Findlay
with blueyonder.co.uk;
use  surname & forename;
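For anyone curious what their own implementation actually delivers, here is a
minimal sketch (where the Real-Time annex package Ada.Execution_Time is
supported) that spins until the execution-time clock is seen to advance and
then prints the observed step alongside the declared CPU_Tick.  The procedure
name and the busy-wait approach are purely illustrative, not anything from
the discussion above; the figures it reports depend entirely on the
underlying OS timer, which is the point at issue.

   with Ada.Execution_Time;  use Ada.Execution_Time;
   with Ada.Real_Time;
   with Ada.Text_IO;         use Ada.Text_IO;

   procedure Probe_Execution_Time_Resolution is
      Start : constant CPU_Time := Clock;
      Next  : CPU_Time := Clock;
   begin
      --  Burn CPU until the execution-time clock is seen to advance.
      while Next = Start loop
         Next := Clock;
      end loop;
      Put_Line ("Observed step:"
                & Duration'Image (Ada.Real_Time.To_Duration (Next - Start))
                & " s");
      Put_Line ("Declared tick:"
                & Duration'Image (Ada.Real_Time.To_Duration (CPU_Tick))
                & " s");
   end Probe_Execution_Time_Resolution;

On a KDF9-style clock the two figures would agree at 32 microseconds; on the
sort of system being complained about, the observed step is typically many
milliseconds.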