comp.lang.ada
* Timing Ada programs using the DEC UNIX microtimer kernel option
@ 1998-04-25  0:00 David Coote
  1998-04-25  0:00 ` Joe Gwinn
  1998-04-27  0:00 ` Juan Zamorano Flores
  0 siblings, 2 replies; 8+ messages in thread
From: David Coote @ 1998-04-25  0:00 UTC (permalink / raw)



I got all excited when I found out about the microtimer kernel option. Timing 
measurements we did with this option installed did seem to indicate improved 
resolution. Interestingly enough, the measured overhead of various POSIX 
timing calls seemed to improve. (Judging from how long the call took using 
other timing probes.)

But I'm not sure how this kernel option works. DEC USA support and what 
documentation exists tell me that you have a tick of 1024 Hz on an AS500 and 
1200 Hz on an AS4100. With the microtimer kernel option installed (quoting 
from a DEC support email):

"As for micro-timer resolution - the clock resolution remains the same - 
1/1024; however, any call to clock_gettime(3) is guaranteed to be 
monotonically increasing with a granularity of 1 microsecond. (i.e. the clock 
is still ticking at 1/1024, but clock_gettime(3) calls return values that are 
unique and APPEAR [DEC's emphasis] to have 1 microsecond resolution. This is 
useful for critical timestamping.)"

Well, how do they do this monotonic increase? If the appropriate parts of the 
kernel are getting tickled every 1024Hz, how is the kernel returning finer 
resolution between ticks?

Anyone know anything about this?

David





^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Timing Ada programs using the DEC UNIX microtimer kernel option
  1998-04-25  0:00 Timing Ada programs using the DEC UNIX microtimer kernel option David Coote
@ 1998-04-25  0:00 ` Joe Gwinn
  1998-04-26  0:00   ` Jonathan Stone
  1998-04-27  0:00 ` Juan Zamorano Flores
  1 sibling, 1 reply; 8+ messages in thread
From: Joe Gwinn @ 1998-04-25  0:00 UTC (permalink / raw)



In article <6hsab5$rh1$1@eplet.mira.net.au>, dccoote@werple.mira.net.au
(David Coote) wrote:

> I got all excited when I found out about the microtimer kernel option. Timing 
> measurements we did with this option installed did seem to indicate improved 
> resolution. Interestingly enough, the measured overhead of various POSIX 
> timing calls seemed to improve. (Judging from how long the call took using 
> other timing probes.)
> 
> But I'm not sure how this kernel option works. DEC USA support and what 
> documentation exists tells me that you have a tick of 1024Hz on an AS500 and 
> 1200Hz on an AS4100. With the microtimer kernel option installed (quoting 
> from a DEC support email)
> 
> "As for micro-timer resolution - the clock resolution remains the same - 
> 1/1024; however, any call to clock_gettime(3) is guaranteed to be 
> monotonically increasing with a granularity of 1 microsecond. (i.e. the clock 
> is still ticking at 1/1024, but clock_gettime(3) calls return values that are 
> unique and APPEAR [DEC's emphasis] to have 1 microsecond resolution. This is 
> useful for critical timestamping.)"
> 
> Well, how do they do this monotonic increase? If the appropriate parts of the 
> kernel are getting tickled every 1024Hz, how is the kernel returning finer 
> resolution between ticks?

This is a common way to ensure a strictly monotonic clock, needed so that
one can use timestamps to uniquely determine order of events, even if the
available clock hardware isn't all that great.  

A POSIX.1b time structure contains two 32-bit unsigned integer fields, the
first  field being seconds since 00:00:00 UTC 1 January 1970 AD (UNIX
timescale origin), and the second field being decimal nanoseconds into the
current second.  Many UNIX systems have much the same setup, except that
the nanoseconds longword instead contains microseconds into the current
second.
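For concreteness, the structure looks roughly like this (real POSIX headers use time_t and long rather than two unsigned 32-bit fields, so take the types as illustrative):

```c
#include <stdint.h>

/* Sketch of the POSIX.1b time structure described above; field names
   follow POSIX, but real headers use time_t / long, not uint32_t. */
struct my_timespec {
    uint32_t tv_sec;   /* seconds since 00:00:00 UTC, 1 January 1970 */
    uint32_t tv_nsec;  /* nanoseconds into the current second */
};

/* Carry any overflow from the nanosecond field into the seconds field. */
static void ts_normalize(struct my_timespec *t)
{
    while (t->tv_nsec >= 1000000000u) {
        t->tv_nsec -= 1000000000u;
        t->tv_sec += 1;
    }
}
```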

The monotonic-clock implementation used by DEC and others is that every
time the 1024-Hz tick arrives, the microsecond count is incremented by
1,024 microseconds, or the nanosecond count is incremented by 1,024,000
nanoseconds.  In addition, whenever the clock is read, the lsb of the
microsecond (or nanosecond) longword is incremented using an
atomic-increment machine instruction.  So, timestamps are a combination of
a time and a serial number.  The note from DEC didn't say if the serial
number portion is ever zeroed, but I would guess that it is zeroed on
every clock tick.
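A user-space sketch of that scheme (the names are mine, not DEC's; note the per-tick increment uses the exact 1/1024-second period, not the rounded 1,024,000 ns):

```c
#include <stdint.h>

static uint32_t clock_sec;       /* seconds field */
static uint32_t clock_nsec;      /* nanoseconds field, read-incremented */
static uint32_t nsec_at_tick;    /* nanoseconds field at the last tick */

/* 1024-Hz tick: advance the latched tick-boundary time, then reset the
   visible nanosecond field to it, zeroing the "serial number" part. */
static void on_tick(void)
{
    nsec_at_tick += 976563;      /* 1/1024 s ~= 976,563 ns */
    if (nsec_at_tick >= 1000000000u) {
        nsec_at_tick -= 1000000000u;
        clock_sec += 1;
    }
    clock_nsec = nsec_at_tick;
}

/* Every read bumps the lsb, so no two reads return the same value
   (a real kernel would use an atomic-increment instruction here). */
static void read_clock(uint32_t *sec, uint32_t *nsec)
{
    clock_nsec += 1;
    *sec  = clock_sec;
    *nsec = clock_nsec;
}
```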

Another approach (not used by DEC for the systems mentioned) is to have
actual microsecond clock hardware.  A typical approach is to have a time
chip generating a tick every 10 milliseconds; the ticks are used to update
the POSIX.1b timestruct by 10 ms each time.  Whenever somebody asks for
the time, the timer hardware is read and the result added to the POSIX.1b
time of the last tick, so the resulting POSIX.1b time has true microsecond
resolution.
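In outline (read_timer_us() is a stand-in for reading the timer chip; here it just returns a variable so the sketch is self-contained):

```c
#include <stdint.h>

static uint32_t base_sec, base_usec;   /* POSIX.1b time at the last 10-ms tick */

static uint32_t fake_timer_us;         /* stand-in for the hardware register */
static uint32_t read_timer_us(void) { return fake_timer_us; }

/* 10-ms tick: advance the base time; the hardware timer restarts from 0. */
static void on_tick_10ms(void)
{
    base_usec += 10000;
    if (base_usec >= 1000000u) { base_usec -= 1000000u; base_sec++; }
    fake_timer_us = 0;
}

/* gettime: base time plus microseconds elapsed since the last tick. */
static void get_time(uint32_t *sec, uint32_t *usec)
{
    uint32_t u = base_usec + read_timer_us();
    *sec  = base_sec + u / 1000000u;
    *usec = u % 1000000u;
}
```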

Joe Gwinn





* Re: Timing Ada programs using the DEC UNIX microtimer kernel option
  1998-04-25  0:00 ` Joe Gwinn
@ 1998-04-26  0:00   ` Jonathan Stone
  1998-04-26  0:00     ` David Coote
  1998-04-28  0:00     ` Jeffrey Mogul
  0 siblings, 2 replies; 8+ messages in thread
From: Jonathan Stone @ 1998-04-26  0:00 UTC (permalink / raw)



In article <gwinn-2504981807010001@d45.dial-4.cmb.ma.ultra.net>, gwinn@ma.ultranet.com (Joe Gwinn) writes:

[snip]

|> A POSIX.1b time structure contains two 32-bit unsigned integer fields, the
|> first  field being seconds since 00:00:00 UTC 1 January 1970 AD (UNIX
|> timescale origin), and the second field being decimal nanoseconds into the
|> current second.  Many UNIX systems have much the same setup, except that
|> the nanoseconds longword instead contains microseconds into the current
|> second.
|> 
|> The monotonic-clock implementation used by DEC and others is that every
|> time the 1024-Hz tick arrives, the microsecond count is incremented by
|> 1,024 microseconds, or the nanosecond count is incremented by 1,024,000
|> nanoseconds. 

Just a nitpick, but it's actually 976 usecs, or about 976563
nanoseconds.  When using microsecond resolution (as here), handling
the remaining 576 usecs per second gracefully requires some care.  In
increasing sophistication:

 * Have a `fat tick' once per second which accumulates all 576 usecs.
   That's what the historic BSD code did.  Here, that fat tick is more
   than 50% bigger than a normal tick, which is really nasty for
   accurate time-keeping.

 * Compute the gcd and spread the remainder over more ticks--here,
   compute  G=gcd(1024, 576), and  bump the clock by 576/G every
   1024/G ticks.

 * Extended-precision arithmetic.  Keep a counter of fractional
   ticks (here, 576/hz), and bump the usec ( or nsec) count by one 
   when you've accumulated a whole usec's worth of fractional ticks.
   The NTP kernel clock model uses about six  bits of binary
   fractions of a microsecond.

(Line-plotting algorithms aren't used here because they often round up,
which breaks monotonicity and is disastrous for timekeeping.)
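The third approach is only a few lines; a sketch, with the fractional accumulator kept in 1/1024ths of a microsecond:

```c
#include <stdint.h>

static uint32_t usec_count;   /* microseconds into the current second */
static uint32_t frac;         /* fractional usecs, in 1/1024ths */

/* Each 1024-Hz tick is exactly 976 + 576/1024 usecs; accumulate the
   fraction and carry a whole usec whenever it overflows.  Over 1024
   ticks this adds exactly 1,000,000 usecs with no fat tick. */
static void on_tick(void)
{
    usec_count += 976;
    frac += 576;
    if (frac >= 1024) {
        frac -= 1024;
        usec_count += 1;
    }
}
```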


|> In addition, whenever the clock is read, the lsb of the
|> microsecond (or nanosecond) longword is incremented using an
|> atomic-increment machine instruction.  So, timestamps are a combination of
|> a time and a serial number.  The note from DEC didn't say if the serial
|> number portion is ever zeroed, but I would guess that it is zeroed on
|> every clock tick.

The typical implementation bumps tv_usec on each read, maintains
`usecs at last tick' independently, and resets tv_usec to that value at
each clock interrupt.

|> Another approach (not used by DEC for the systems mentioned) is to have
|> actual microsecond clock hardware.

But all Alphas have a high-resolution CPU cycle-counter, or an I/O bus
cycle-counter to get usec or better resolution.  As well as that,
there's usually either a PC-compatible timer chip or a cycle-counter
on the I/O bus.  If you find the CPU clock speed (e.g., via busy-loop
counting cycles between clockticks early in boot) it's easy to
interpolate from any of these to usec or 10s of nanosec resolution.
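The interpolation itself is trivial once you latch the counter at each tick; a sketch, with a fake counter standing in for rpcc and a hypothetical boot-time calibration result:

```c
#include <stdint.h>

static uint64_t fake_pcc;                  /* stand-in for the cycle counter */
static uint64_t read_pcc(void) { return fake_pcc; }

static uint64_t pcc_at_tick;               /* counter latched at last tick */
static uint64_t cycles_per_usec;           /* found by busy-loop calibration */

/* Microseconds elapsed since the last realtime clock interrupt:
   cycles since the latched value, scaled by the calibrated rate. */
static uint64_t usecs_since_tick(void)
{
    return (read_pcc() - pcc_at_tick) / cycles_per_usec;
}
```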

For uniprocessors (like the machines here), you can use CPU cycles
directly.  Multiprocessors require more care to keep the per-CPU
cycle-counter value at each realtime clock tick, or the process' clock
values jump around when context-switching from one CPU to another.  Or
you can just use a lower-resolution, but shared, IO bus cycle counter.

I seem to recall Dave Mills saying that DEC gave him some hardware
precisely to get NTP and precision timekeeping correct.
It would be rather odd if DU still doesn't get this right.





* Re: Timing Ada programs using the DEC UNIX microtimer kernel option
  1998-04-26  0:00   ` Jonathan Stone
@ 1998-04-26  0:00     ` David Coote
  1998-04-28  0:00     ` Jeffrey Mogul
  1 sibling, 0 replies; 8+ messages in thread
From: David Coote @ 1998-04-26  0:00 UTC (permalink / raw)



[snipped lots of useful stuff.]

>But all Alphas have a high-resolution CPU cycle-counter, or an I/O bus
>cycle-counter to get usec or better resolution.  As well as that,
>there's usually either a PC-compatible timer chip or a cycle-counter
>on the I/O bus.  If you find the CPu clock speed (e.g., via busy-loop
>counting cycles between clockticks early in boot) it's easy to
>interpolate from any of these to usec or 10s of nanosec resolution.
>
>For uniprocessors (like the machines here), you can use CPU cycles
>directly.  Multiprocessors require more care to keep the per-CPU
>cycle-counter value at each realtime clock tick, or the process' clock
>values jump around when context-switching from one CPU to another.  Or
>you can just use a lower-resolution, but shared, IO bus cycle counter.

You can use the rpcc assembler instruction to access the on-chip cycle 
counter.  But, as you say above, there is a problem with processes swapping 
CPUs on a multi-CPU machine and hence accessing different PCC registers.  You 
can use the runon command to ensure a process stays on one CPU, but then you're 
interfering with the kernel scheduling algorithm.  For this reason our 
customer put a requirement in the SRS that specifically forbids using the 
runon command in the production system.
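For what it's worth, the 64-bit value rpcc returns packs the 32-bit free-running counter in the low half and a per-process offset in the high half; summing the two halves mod 2^32 gives the process-virtual cycle count. A sketch of that bookkeeping (the arithmetic only; untested on a real Alpha):

```c
#include <stdint.h>

/* Combine the two halves of an rpcc result into the process-virtual
   cycle count (modulo 2^32): low 32 bits = hardware counter,
   high 32 bits = per-process offset maintained by the OS. */
static uint32_t pcc_virtual(uint64_t rpcc_value)
{
    return (uint32_t)(rpcc_value >> 32) + (uint32_t)rpcc_value;
}
```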
>
>I seem to recall Dave Mills saying that DEC gave him some hardware
>precisely to get NTP and precision timekeeping correct.
>It would be rather odd if DU still doesnt get this right.

I asked DEC USA support if they could give us overhead and resolution figures 
for timing system calls. They recommended that we benchmark these ourselves, as 
they don't have anything recent. I also asked if DEC could add a system call 
returning, in one call, the CPU utilisation figure (defining CPU utilisation 
as CPU time as a percentage of elapsed time) with low overhead and 
accuracy/resolution below 1 ms, perhaps using the PCC register. (The total 
number of cycles used by a process is preserved on multi-CPU machines when a 
process swaps CPUs.)

They indicated that this could be done but I'm still waiting :(
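In the meantime you can approximate it with two calls instead of one, e.g. getrusage() plus gettimeofday(); a sketch (the resolution is whatever those calls deliver, so it won't meet the sub-1 ms requirement):

```c
#include <sys/time.h>
#include <sys/resource.h>

/* CPU utilisation between two sample points, as a percentage:
   (user + system CPU time) / elapsed wall time * 100. */
static double cpu_utilisation(const struct rusage *ru0, const struct timeval *t0,
                              const struct rusage *ru1, const struct timeval *t1)
{
    double cpu = (ru1->ru_utime.tv_sec - ru0->ru_utime.tv_sec)
               + (ru1->ru_utime.tv_usec - ru0->ru_utime.tv_usec) / 1e6
               + (ru1->ru_stime.tv_sec - ru0->ru_stime.tv_sec)
               + (ru1->ru_stime.tv_usec - ru0->ru_stime.tv_usec) / 1e6;
    double wall = (t1->tv_sec - t0->tv_sec)
                + (t1->tv_usec - t0->tv_usec) / 1e6;
    return wall > 0 ? 100.0 * cpu / wall : 0.0;
}
```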






* Re: Timing Ada programs using the DEC UNIX microtimer kernel option
  1998-04-25  0:00 Timing Ada programs using the DEC UNIX microtimer kernel option David Coote
  1998-04-25  0:00 ` Joe Gwinn
@ 1998-04-27  0:00 ` Juan Zamorano Flores
  1998-05-01  0:00   ` Short DOSish note (was Re: Timing Ada programs using the DEC UNIX microtimer kernel option) John M. Mills
  1 sibling, 1 reply; 8+ messages in thread
From: Juan Zamorano Flores @ 1998-04-27  0:00 UTC (permalink / raw)



In article <6hsab5$rh1$1@eplet.mira.net.au>, dccoote@werple.mira.net.au (David Coote) writes:
|> 
|> "As for micro-timer resolution - the clock resolution remains the same - 
|> 1/1024; however, any call to clock_gettime(3) is guaranteed to be 
|> monotonically increasing with a granularity of 1 microsecond. (i.e. the clock 
|> is still ticking at 1/1024, but clock_gettime(3) calls return values that are 
|> unique and APPEAR [DEC's emphasis] to have 1 microsecond resolution. This is 
|> useful for critical timestamping.)"
|> 
|> Well, how do they do this monotonic increase? If the appropriate parts of the 
|> kernel are getting tickled every 1024Hz, how is the kernel returning finer 
|> resolution between ticks?
|> 
|> Anyone know anything about this?
|> 
|> David
|> 

     The hardware timer interrupts at 1024 Hz, so the kernel has a time
granularity of 1/1024 s.  The clock_gettime call could use the hardware timer
downcount register to improve the resolution.

     I don't know DEC UNIX, but in DOS the hardware timer interrupts at 18.2 Hz,
and you can still get microsecond resolution if you read the hardware timer
downcount register.

      Juan





* Re: Timing Ada programs using the DEC UNIX microtimer kernel option
  1998-04-26  0:00   ` Jonathan Stone
  1998-04-26  0:00     ` David Coote
@ 1998-04-28  0:00     ` Jeffrey Mogul
  1 sibling, 0 replies; 8+ messages in thread
From: Jeffrey Mogul @ 1998-04-28  0:00 UTC (permalink / raw)




In article <6hu3hr$8v4$1@nntp.Stanford.EDU>, jonathan@Kowhai.Stanford.EDU (Jonathan Stone) writes:
|> 
|> But all Alphas have a high-resolution CPU cycle-counter, or an I/O bus
|> cycle-counter to get usec or better resolution.  As well as that,
|> there's usually either a PC-compatible timer chip or a cycle-counter
|> on the I/O bus.  If you find the CPu clock speed (e.g., via busy-loop
|> counting cycles between clockticks early in boot) it's easy to
|> interpolate from any of these to usec or 10s of nanosec resolution.
|> 
|> For uniprocessors (like the machines here), you can use CPU cycles
|> directly.  Multiprocessors require more care to keep the per-CPU
|> cycle-counter value at each realtime clock tick, or the process' clock
|> values jump around when context-switching from one CPU to another.  Or
|> you can just use a lower-resolution, but shared, IO bus cycle counter.
|> 
|> I seem to recall Dave Mills saying that DEC gave him some hardware
|> precisely to get NTP and precision timekeeping correct.
|> It would be rather odd if DU still doesnt get this right.

In fact, the MICRO_TIME option (in Digital UNIX V4.0 and later)
is exactly this code.  Thanks to Dave Mills, if this option is
enabled, the kernel uses the cycle counter to interpolate between
clock interrupts, and the result is provided by gettimeofday().

Dave also managed to get this to work on multiprocessors, in the
sense that if a process migrates from one CPU to another, it should
still see consistent and accurate time from gettimeofday().  There
may be some bugs in earlier versions of this code (I've heard rumors
of 12-CPU systems where it didn't quite work), so if you do have
a large SMP system and want to enable this option, you might want to
get the latest release of the operating system and proceed with some
prudent care.
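An easy way to check what resolution gettimeofday() actually delivers on a given system is to call it back-to-back and record the smallest nonzero step:

```c
#include <sys/time.h>

/* Smallest nonzero difference (in usecs) observed between successive
   gettimeofday() calls; with MICRO_TIME enabled this should be ~1. */
static long min_nonzero_step_usec(int iterations)
{
    struct timeval a, b;
    long best = 1000000;    /* start at one second */
    gettimeofday(&a, 0);
    for (int i = 0; i < iterations; i++) {
        gettimeofday(&b, 0);
        long d = (b.tv_sec - a.tv_sec) * 1000000L + (b.tv_usec - a.tv_usec);
        if (d > 0 && d < best)
            best = d;
        a = b;
    }
    return best;
}
```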

See the Digital UNIX FAQ:
  ftp://rtfm.mit.edu/pub/usenet/news.answers/dec-faq/Digital-UNIX
under "P9. How can I get microsecond resolution from gettimeofday(2)?"
for more details.

-Jeff





* Short DOSish note (was Re: Timing Ada programs using the DEC UNIX microtimer kernel option)
  1998-04-27  0:00 ` Juan Zamorano Flores
@ 1998-05-01  0:00   ` John M. Mills
  1998-05-01  0:00     ` Jerry van Dijk
  0 siblings, 1 reply; 8+ messages in thread
From: John M. Mills @ 1998-05-01  0:00 UTC (permalink / raw)



Sorry - no Ada content, but perhaps a useful pointer for fixed-sample-interval
DOS (or Win3.1, presumably) code, such as instruments or controllers:

jzamora@avellano.datsi.fi.upm.es (Juan Zamorano Flores) writes:
>     I don't know DEC UNIX but in DOS the hardware timer interrupts at
>18.2 Hz.  But you can have microseconds resolution if you read the hardware
>timer downcount register.

DOS needs the 18.2Hz to continue for important services, but you can easily
'hook' the interrupt, reprogram the counter, and call a user routine at
multiples of 18.2 Hz, so long as you also call the DOS service at 18.2 Hz.
I've used 90 Hz for control-loop closure.  (As always, remember that DOS itself
is _not_ re-entrant: make a local stack and be prepared for surprises if your
ISR tries to use DOS services!)
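The chaining bookkeeping is just a divider; a sketch for 5x the DOS rate (~91 Hz, close to the 90 Hz mentioned above; the actual vector hooking and PIT reprogramming are omitted):

```c
/* Run the user ISR at SPEEDUP * 18.2 Hz (after reprogramming the PIT),
   but chain to the saved DOS handler only every SPEEDUP-th interrupt,
   so DOS still sees its 18.2-Hz tick. */
#define SPEEDUP 5

static int chain_count;

/* Called from the hooked timer ISR; returns nonzero when the old
   18.2-Hz DOS handler must be invoked (it also issues the EOI). */
static int timer_isr_should_chain(void)
{
    if (++chain_count >= SPEEDUP) {
        chain_count = 0;
        return 1;    /* chain to DOS */
    }
    return 0;        /* acknowledge the interrupt ourselves */
}
```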

Two 'C' examples for DOS can be found on:
   ftp://jmills.gtri.gatech.edu/pub
as:
   tichandl.gz  ('C' source file example which reprograms the timer, runs a
                 while, restores the timer, then exits), and
   ballgame.tgz (whistles "Take me out to the Ballgame" on your PC speaker -
                 will drive your colleagues #@$!! nuts.)

Both are gzipped; 'tichandl.gz' should be unpacked into 'tichandl.c'.

I received a copy of the 'ballgame' example, then did the simpler 'tichandl'
as my own intro exercise.  Translation into Ada and other PC OS's remains an
exercise for the reader.

-- 
 John M. Mills, Senior Research Engineer   --   john.mills@gtri.gatech.edu
   Georgia Tech Research Institute, Georgia Tech, Atlanta, GA 30332-0834
        Phone contacts: 404.894.0151 (voice), 404.894.6258 (FAX)
           "Lies, Damned Lies, Statistics, and Simulations."





* Re: Short DOSish note (was Re: Timing Ada programs using the DEC UNIX microtimer kernel option)
  1998-05-01  0:00   ` Short DOSish note (was Re: Timing Ada programs using the DEC UNIX microtimer kernel option) John M. Mills
@ 1998-05-01  0:00     ` Jerry van Dijk
  0 siblings, 0 replies; 8+ messages in thread
From: Jerry van Dijk @ 1998-05-01  0:00 UTC (permalink / raw)



John M. Mills (jm59@prism.gatech.edu) wrote:

: DOS needs the 18.2Hz to continue for important services, but you can easily
: 'hook' the interrupt, reprogram the counter, and call a user routine at
: multiples of 18.2 Hz, so long as you also call the DOS service at 18.2 Hz.

I didn't get the thread here, but on the subject of DOS, it is also
possible to get microsecond timing if necessary. I did this using GNAT,
but it should also be possible using OA.

Jerry.

-- 
-- Jerry van Dijk  | email: jdijk@acm.org
-- Leiden, Holland | member Team-Ada





end of thread, other threads:[~1998-05-01  0:00 UTC | newest]

Thread overview: 8+ messages
-- links below jump to the message on this page --
1998-04-25  0:00 Timing Ada programs using the DEC UNIX microtimer kernel option David Coote
1998-04-25  0:00 ` Joe Gwinn
1998-04-26  0:00   ` Jonathan Stone
1998-04-26  0:00     ` David Coote
1998-04-28  0:00     ` Jeffrey Mogul
1998-04-27  0:00 ` Juan Zamorano Flores
1998-05-01  0:00   ` Short DOSish note (was Re: Timing Ada programs using the DEC UNIX microtimer kernel option) John M. Mills
1998-05-01  0:00     ` Jerry van Dijk
