comp.lang.ada
 help / color / mirror / Atom feed
* delay until   and GNAT
@ 1999-05-05  0:00 isaac buchwald
  1999-05-05  0:00 ` dennison
                   ` (2 more replies)
  0 siblings, 3 replies; 37+ messages in thread
From: isaac buchwald @ 1999-05-05  0:00 UTC (permalink / raw)



  Does someone know the upper bound on the lateness of "delay until" and
"delay relative" for the GNAT implementations on Win95, WinNT, or Linux?

   Thanks.






^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT
  1999-05-05  0:00 delay until and GNAT isaac buchwald
@ 1999-05-05  0:00 ` dennison
  1999-05-06  0:00   ` Buz Cory
  1999-05-05  0:00 ` delay until and GNAT David C. Hoos, Sr.
  1999-05-06  0:00 ` Roger Racine
  2 siblings, 1 reply; 37+ messages in thread
From: dennison @ 1999-05-05  0:00 UTC (permalink / raw)


In article <m3_X2.48$6o.1372369@news.siol.net>,
  "isaac buchwald" <isaac.buchwald@velenje.cx> wrote:
>
>   Does someone know the upper bound on the lateness of "delay until" and
> "delay relative" for the GNAT implementations on Win95, WinNT, or Linux?

I'm not sure there *is* an upper bound. What if a higher priority task is
running in a busy loop?
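A minimal sketch of that scenario (the task name is invented for the example; it assumes FIFO_Within_Priorities dispatching on a single processor):

```ada
with Ada.Text_IO;
with System;
--  Assumes pragma Task_Dispatching_Policy (FIFO_Within_Priorities) is in
--  force (a configuration pragma) and a single processor.
procedure Starvation_Sketch is

   task Spinner is
      pragma Priority (System.Default_Priority + 1);  -- higher priority
   end Spinner;

   task body Spinner is
   begin
      loop
         null;  -- busy loop: never blocks, never yields the processor
      end loop;
   end Spinner;

begin
   delay 0.1;  -- the lower-priority environment task: with Spinner
               -- spinning above it, there is no bound on this lateness
   Ada.Text_IO.Put_Line ("This line may never be reached.");
end Starvation_Sketch;
```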

--
T.E.D.

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    




^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until   and GNAT
  1999-05-05  0:00 delay until and GNAT isaac buchwald
  1999-05-05  0:00 ` dennison
@ 1999-05-05  0:00 ` David C. Hoos, Sr.
  1999-05-06  0:00 ` Roger Racine
  2 siblings, 0 replies; 37+ messages in thread
From: David C. Hoos, Sr. @ 1999-05-05  0:00 UTC (permalink / raw)



isaac buchwald wrote in message ...
>
>  Does someone know the upper bound on the lateness of "delay until" and
>"delay relative" for the GNAT implementations on Win95, WinNT, or Linux?
>
>   Thanks
RM 95 9.6(29) says:
An implementation may raise Time_Error if the value of a delay_expression in
a delay_until_statement of a select_statement represents a time more than
90 days past the current time. The actual limit, if any, is
implementation-defined.

And, in accordance with the requirement that implementations document
whatever implementation permissions are taken, the GNAT Reference Manual
says in the obscurely named section entitled
"Implementation Defined Characteristics":

26. Any limit on delay_until_statements of select_statements. See 9.6(29).
There are no such limits.

As far as GNAT's time representation is concerned, the GNAT RM says:

9.6(30-31): Duration'Small
Whenever possible in an implementation, the value of Duration'Small
should be no greater than 100 microseconds.
Followed. (Duration'Small = 10**(-9)).

In GNAT, time and duration are represented in 64 bits, so "delay until" could
wait until 23:59:59.999999999 on December 31, 2099 -- i.e., until the end of
time as defined by the Ada.Calendar package.
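A small illustration of those figures (a sketch; the printed value assumes GNAT's Duration'Small of 10**(-9), as quoted above):

```ada
with Ada.Calendar; use Ada.Calendar;
with Ada.Text_IO;  use Ada.Text_IO;
procedure Calendar_Limits is
   package Dur_IO is new Ada.Text_IO.Fixed_IO (Duration);
   --  The last representable Ada.Calendar date: Year_Number'Last is 2099.
   End_Of_Time : constant Time :=
     Time_Of (Year  => Year_Number'Last, Month => 12, Day => 31,
              Seconds => 86_399.0);
begin
   Put ("Duration'Small = ");
   Dur_IO.Put (Duration'Small, Fore => 1, Aft => 9);  -- 0.000000001 under GNAT
   New_Line;
   --  delay until End_Of_Time;  -- would wait until the end of 2099
end Calendar_Limits;
```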










^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT
  1999-05-05  0:00 ` dennison
@ 1999-05-06  0:00   ` Buz Cory
  1999-05-06  0:00     ` Robert Dewar
  0 siblings, 1 reply; 37+ messages in thread
From: Buz Cory @ 1999-05-06  0:00 UTC (permalink / raw)


In article <7gpukr$s82$1@nnrp1.dejanews.com>,
  dennison@telepath.com wrote:
> In article <m3_X2.48$6o.1372369@news.siol.net>,
>   "isaac buchwald" <isaac.buchwald@velenje.cx> wrote:
> >
> >   Does someone know the upper bound on the lateness of "delay until" and
> > "delay relative" for the GNAT implementations on Win95, WinNT, or Linux?
>
> I'm not sure there *is* an upper bound. What if a higher priority task is
> running in a busy loop?

That should be correct. There is no guarantee of *maximum* delay between *any*
two statements in Ada.

In particular, for "delay" and "delay until", the only guarantee is that the
delay will have elapsed when the next statement executes. How long ago it
might have elapsed is *not* guaranteed. This is pretty much true of any
prioritized multi-tasking system. For a time-slicing system, you might
reasonably expect that the "upper bound  on the lateness of  delay" will be
some relatively small number times the maximum size of a slice (for 10 ms
slices, it should be of the order of 1 sec or so).
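That guarantee ("has elapsed, but by an unspecified amount") is easy to observe directly. A small sketch that records the worst lateness seen over a run (Ada.Calendar's clock resolution limits what it can detect):

```ada
with Ada.Calendar; use Ada.Calendar;
with Ada.Text_IO;
procedure Measure_Lateness is
   package Dur_IO is new Ada.Text_IO.Fixed_IO (Duration);
   Period : constant Duration := 0.01;  -- 10 ms, roughly one clock tick
   Next   : Time := Clock + Period;
   Late   : Duration;
   Worst  : Duration := 0.0;
begin
   for I in 1 .. 100 loop
      delay until Next;
      Late := Clock - Next;  -- >= 0.0 by the language guarantee
      if Late > Worst then
         Worst := Late;
      end if;
      Next := Next + Period;
   end loop;
   Ada.Text_IO.Put ("Worst observed lateness (seconds): ");
   Dur_IO.Put (Worst, Fore => 1, Aft => 6);
   Ada.Text_IO.New_Line;
end Measure_Lateness;
```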





^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT
  1999-05-06  0:00   ` Buz Cory
@ 1999-05-06  0:00     ` Robert Dewar
  1999-05-06  0:00       ` delay until and GNAT - expand isaac buchwald
  0 siblings, 1 reply; 37+ messages in thread
From: Robert Dewar @ 1999-05-06  0:00 UTC (permalink / raw)


In article <7grkbb$cee$1@nnrp1.deja.com>,
  Buz Cory <hacker@buzco.ddns.org> wrote:

> In particular, for "delay" and "delay until", the only guarantee is that the
> delay will have elapsed when the next statement executes. How long ago it
> might have elapsed is *not* guaranteed. This is pretty much true of any
> prioritized multi-tasking system. For a time-slicing system, you might
> reasonably expect that the "upper bound  on the lateness of  delay" will be
> some relatively small number times the maximum size of a slice (for 10 ms
> slices, it should be of the order of 1 sec or so).


This is wrong, and all the followups so far have completely missed
the perfectly legitimate question raised by the original post. See
RM D.9.

Unfortunately, the answer in typical implementations is that the
requirement of D.9(10,11), which is what the question was about, cannot
be met, because it depends on information that is unavailable from the
underlying operating system, and it is therefore not practical
to meet this requirement in this context (see RM 1).





^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until   and GNAT
  1999-05-05  0:00 delay until and GNAT isaac buchwald
  1999-05-05  0:00 ` dennison
  1999-05-05  0:00 ` delay until and GNAT David C. Hoos, Sr.
@ 1999-05-06  0:00 ` Roger Racine
  1999-05-10  0:00   ` Nick Roberts
  2 siblings, 1 reply; 37+ messages in thread
From: Roger Racine @ 1999-05-06  0:00 UTC (permalink / raw)


>  Does someone know the upper bound on the lateness of "delay until" and
>"delay relative" for the GNAT implementations on Win95, WinNT, or Linux?

>   Thanks.

There have been some replies to this, but let me rephrase the question, and 
then answer it, as best I can.  My guess is that he really wants to know the 
following:

If he is the only user on the system and is running his program in the 
foreground with nothing else running (except OS processes), and his highest 
priority task executes a "delay" or "delay until" statement, what is the upper 
bound on the actual delay, for some processor speed?

The three platforms you mentioned are not real-time operating systems, and 
therefore have some possible problems with this metric.  I have read technical 
reports that state that Windows NT and Linux each have system calls that, if 
called by a low-priority task or an OS process, have "unbounded" delays 
associated with them.  I have seen nothing on Win95.  Therefore, even though, 
normally the delay on the Windows platforms will be very close to the "clock 
tick" time (0.01 seconds?), every once in a great while, even if the machine 
is not connected to a network, it might take much longer.

On Linux, it depends on the runtime system you are using.  If you use the FSU 
runtime, there are even more possibilities of long blocking times, since every 
system call will block all tasks.  Using the "native" version, priorities do 
not work correctly unless you run as root (Linux threads are implemented as 
each thread being a separate process, and normal users are not allowed to run 
with multiple priorities).  And even running as root, ACT cannot guarantee 
correct adherence to the Ada standard.

Please note that all of the problems are associated with the operating 
system(s), not GNAT.  The bottom line is: if you want to use multitasking, in 
any language, and want an upper bound on timing, use a real-time operating 
system.

That is one (more) interpretation of what was asked.

Roger Racine




^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
  1999-05-06  0:00     ` Robert Dewar
@ 1999-05-06  0:00       ` isaac buchwald
  1999-05-07  0:00         ` Roger Racine
  0 siblings, 1 reply; 37+ messages in thread
From: isaac buchwald @ 1999-05-06  0:00 UTC (permalink / raw)


  Thanks to all.  It was the requirement of D.9(10,11).

   So, to expand the question: can you name any implementation of GNAT (or
another Ada 95 implementation) on some real-time system with a documented
upper limit?

    Thanks.






^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
  1999-05-06  0:00       ` delay until and GNAT - expand isaac buchwald
@ 1999-05-07  0:00         ` Roger Racine
  1999-05-08  0:00           ` dewar
  0 siblings, 1 reply; 37+ messages in thread
From: Roger Racine @ 1999-05-07  0:00 UTC (permalink / raw)


In article <k_kY2.71$6o.2036023@news.siol.net> "isaac buchwald" <isaac.buchwald@velenje.cx> writes:
>  Thanks to all.  It was the requirement of D.9(10,11).

>   So, to expand the question: can you name any implementation of GNAT (or
>another Ada 95 implementation) on some real-time system with a documented
>upper limit?

>    Thanks.

The implementation of "delay", given today's processor speeds, is pretty good. 
 I have recently single stepped my way (in assembly) through it, and while I 
did not count the instructions, it had to be on the order of 100 instructions.

So, given a good real-time operating system and a reasonably fast processor, a 
reasonable upper limit would probably be close to 1 microsecond (conservative 
estimate).  

I am surprised GNAT documentation does not have this, except that the number 
will be different for every underlying OS.

Roger Racine




^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
  1999-05-07  0:00         ` Roger Racine
@ 1999-05-08  0:00           ` dewar
  1999-05-10  0:00             ` Roger Racine
                               ` (6 more replies)
  0 siblings, 7 replies; 37+ messages in thread
From: dewar @ 1999-05-08  0:00 UTC (permalink / raw)


In article <rracine.12.00086B05@draper.com>,
  rracine@draper.com (Roger Racine) wrote:

> The implementation of "delay", given today's processor speeds, is pretty good.
>  I have recently single stepped my way (in assembly) through it, and while I
> did not count the instructions, it had to be on the order of 100 instructions.
>
> So, given a good real-time operating system and a reasonably fast processor, a
> reasonable upper limit would probably be close to 1 microsecond (conservative
> estimate).
>
> I am surprised GNAT documentation does not have this, except that the number
> will be different for every underlying OS.
>
> Roger Racine

This is a highly misleading figure. The completion of a delay, in the sense
that we are talking about, requires a preemptive context switch. To expect
this to happen in 1 microsecond when running over an operating system like
Unix, or even over a light real time executive is wildly optimistic.

I am not sure what you are measuring, but it is quite wrong. We don't document
upper limits for such things in the GNAT manual, because it is impractical to
do so in almost all operating systems contexts, since we depend on the
underlying OS, and this information is not available for the OS.

Robert Dewar
Ada Core Technologies





^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
  1999-05-10  0:00             ` Roger Racine
@ 1999-05-10  0:00               ` Joel Sherrill
  1999-05-11  0:00               ` Robert Dewar
  1999-05-11  0:00               ` isaac buchwald
  2 siblings, 0 replies; 37+ messages in thread
From: Joel Sherrill @ 1999-05-10  0:00 UTC (permalink / raw)


In article <rracine.13.0007AF82@draper.com>,
  rracine@draper.com (Roger Racine) wrote:
> >In article <rracine.12.00086B05@draper.com>,
> >  rracine@draper.com (Roger Racine) wrote:
>
> >> The implementation of "delay", given today's processor speeds, is pretty good.
> >>  I have recently single stepped my way (in assembly) through it, and while I
> >> did not count the instructions, it had to be on the order of 100 instructions.
> >>
> >> So, given a good real-time operating system and a reasonably fast processor, a
> >> reasonable upper limit would probably be close to 1 microsecond (conservative
> >> estimate).
> >>
> >> I am surprised GNAT documentation does not have this, except that the number
> >> will be different for every underlying OS.
> >>
> >> Roger Racine
>
> >This is a highly misleading figure. The completion of a delay, in the sense
> >that we are talking about, requires a preemptive context switch. To expect
> >this to happen in 1 microsecond when running over an operating system like
> >Unix, or even over a light real time executive is wildly optimistic.
>
> >I am not sure what you are measuring, but it is quite wrong. We don't document
> >upper limits for such things in the GNAT manual, because it is impractical to
> >do so in almost all operating systems contexts, since we depend on the
> >underlying OS, and this information is not available for the OS.
>
> It is not misleading at all.  It is a consequence of the speed of today's
> processors.  Back a few years (1983), it was reasonable to expect a simple
> delay 0.0, which is what the issue is about, to take somewhere near 100-200
> microseconds on a real-time OS.  This included the context switch, but not the
> activities of other tasks.  Is this the meaning of the metric in D.8(10,11)?
>
> I was somewhat surprised to see numbers, for context switches, in the range of
> 1 microsecond, but not when I thought of the current speed of processors.  And
> the metric is for processor clock cycles.  I am assuming waiting for memory
> does not count, but even if that is true, the result will not be very much
> longer.
>
> Note that I said, "given a good real-time operating system".  That does
> not include any form of Unix (with the possible exception of LynxOS and any
> other real-time versions; I have not looked at their documentation recently).
>
> I understand why compiler vendors can not document upper limits for host-based
> systems.  I would think that VxWorks and RTEMS could have the bounds
> documented.  And something like "GNAT takes xxx clock cycles + the underlying
> operating system context switch" would meet the spirit of the RM.  I know Wind
> River publishes their bounds.  It is quite reasonable to have users of the
> metrics look at other documentation, as long as it is referenced.
>
> Roger Racine

This information is documented for some RTEMS targets.  But better
than this is that the test suite used to generate this information
is included with RTEMS and you can generate these numbers yourself
on your target hardware.  The context switch performance test in
particular is "tm26".  The mvme167 BSP reports a basic context
switch at 3 microseconds on a 25 MHz board.

An important thing to remember is that often these older CPUs look
quite good at context switches compared to newer high performance
RISC CPUs.  Context switches tend to be memory bound and RISC CPUs
tend to have more state to save.  But then again new RISC CPUs have
incredibly high clock rates which covers up a lot. :)

--joel
joel@OARcorp.com                 On-Line Applications Research
Ask me about RTEMS: a free RTOS  Huntsville AL 35805
   Support Available             (205) 722-9985



--== Sent via Deja.com http://www.deja.com/ ==--
---Share what you know. Learn what you don't.---




^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
  1999-05-08  0:00           ` dewar
@ 1999-05-10  0:00             ` Roger Racine
  1999-05-10  0:00               ` Joel Sherrill
                                 ` (2 more replies)
  1999-05-10  0:00             ` Context switching (was: delay until and GNAT) Nick Roberts
                               ` (5 subsequent siblings)
  6 siblings, 3 replies; 37+ messages in thread
From: Roger Racine @ 1999-05-10  0:00 UTC (permalink / raw)


>In article <rracine.12.00086B05@draper.com>,
>  rracine@draper.com (Roger Racine) wrote:

>> The implementation of "delay", given today's processor speeds, is pretty good.
>>  I have recently single stepped my way (in assembly) through it, and while I
>> did not count the instructions, it had to be on the order of 100 instructions.
>>
>> So, given a good real-time operating system and a reasonably fast processor, a
>> reasonable upper limit would probably be close to 1 microsecond (conservative
>> estimate).
>>
>> I am surprised GNAT documentation does not have this, except that the number
>> will be different for every underlying OS.
>>
>> Roger Racine

>This is a highly misleading figure. The completion of a delay, in the sense
>that we are talking about, requires a preemptive context switch. To expect
>this to happen in 1 microsecond when running over an operating system like
>Unix, or even over a light real time executive is wildly optimistic.

>I am not sure what you are measuring, but it is quite wrong. We don't document
>upper limits for such things in the GNAT manual, because it is impractical to
>do so in almost all operating systems contexts, since we depend on the
>underlying OS, and this information is not available for the OS.

It is not misleading at all.  It is a consequence of the speed of today's 
processors.  Back a few years (1983), it was reasonable to expect a simple 
delay 0.0, which is what the issue is about, to take somewhere near 100-200
microseconds on a real-time OS.  This included the context switch, but not the 
activities of other tasks.  Is this the meaning of the metric in D.9(10,11)?

I was somewhat surprised to see numbers, for context switches, in the range of 
1 microsecond, but not when I thought of the current speed of processors.  And 
the metric is for processor clock cycles.  I am assuming waiting for memory 
does not count, but even if that is true, the result will not be very much 
longer.

Note that I said, "given a good real-time operating system".  That does 
not include any form of Unix (with the possible exception of LynxOS and any 
other real-time versions; I have not looked at their documentation recently).

I understand why compiler vendors can not document upper limits for host-based 
systems.  I would think that VxWorks and RTEMS could have the bounds 
documented.  And something like "GNAT takes xxx clock cycles + the underlying 
operating system context switch" would meet the spirit of the RM.  I know Wind 
River publishes their bounds.  It is quite reasonable to have users of the 
metrics look at other documentation, as long as it is referenced.

Roger Racine




^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
  1999-05-08  0:00           ` dewar
  1999-05-10  0:00             ` Roger Racine
  1999-05-10  0:00             ` Context switching (was: delay until and GNAT) Nick Roberts
@ 1999-05-10  0:00             ` Roger Racine
  1999-05-11  0:00               ` Robert Dewar
  1999-05-11  0:00             ` delay until and GNAT - expand Roger Racine
                               ` (3 subsequent siblings)
  6 siblings, 1 reply; 37+ messages in thread
From: Roger Racine @ 1999-05-10  0:00 UTC (permalink / raw)


>>  rracine@draper.com (Roger Racine) wrote:

>>> The implementation of "delay", given today's processor speeds, is pretty good.
>>>  I have recently single stepped my way (in assembly) through it, and while I
>>> did not count the instructions, it had to be on the order of 100 instructions.
>>>
>>> So, given a good real-time operating system and a reasonably fast processor, a
>>> reasonable upper limit would probably be close to 1 microsecond (conservative
>>> estimate).
>>>
>>> I am surprised GNAT documentation does not have this, except that the number
>>> will be different for every underlying OS.
>>>
>>> Roger Racine

>>This is a highly misleading figure. The completion of a delay, in the sense
>>that we are talking about, requires a preemptive context switch. To expect
>>this to happen in 1 microsecond when running over an operating system like
>>Unix, or even over a light real time executive is wildly optimistic.

>>I am not sure what you are measuring, but it is quite wrong. We don't document
>>upper limits for such things in the GNAT manual, because it is impractical to
>>do so in almost all operating systems contexts, since we depend on the
>>underlying OS, and this information is not available for the OS.

>It is not misleading at all.  It is a consequence of the speed of today's 
>processors.  Back a few years (1983), it was reasonable to expect a simple 
>delay 0.0, which is what the issue is about, to take somewhere near 100 -200
>microseconds on a real-time OS.  This included the context switch, but not the 
>activities of other tasks.  Is this the meaning of the metric in D.8(10,11)?

>I was somewhat surprised to see numbers, for context switches, in the range of 
>1 microsecond, but not when I thought of the current speed of processors.  And 
>the metric is for processor clock cycles.  I am assuming waiting for memory 
>does not count, but even if that is true, the result will not be very much 
>longer.

>Note that I said, "given a good real-time operating system".  That does 
>not include any form of Unix (with the possible exception of LynxOS and any 
>other real-time versions; I have not looked at their documentation recently).

>I understand why compiler vendors can not document upper limits for host-based 
>systems.  I would think that VxWorks and RTEMS could have the bounds 
>documented.  And something like "GNAT takes xxx clock cycles + the underlying 
>operating system context switch" would meet the spirit of the RM.  I know Wind 
>River publishes their bounds.  It is quite reasonable to have users of the 
>metrics look at other documentation, as long as it is referenced.

>Roger Racine

I just did a little checking on Wind River's web site, and I found, for the 
"MV167C" Motorola 68K board (they do not specify which chip or speed on the 
web page), "context switching requires only 3.8 microseconds".  Not exactly 1 
microsecond, but I got the order of magnitude correct.  

Checking the Motorola site, it turns out the MVME167 board (which I assume is 
what Wind River was referencing) has a 68040 running at a maximum of 33 MHz.  I 
cannot find the data supporting my claim of 1 microsecond, but try a PowerPC 
running at 350 MHz.

Roger Racine




^ permalink raw reply	[flat|nested] 37+ messages in thread

* Context switching (was: delay until and GNAT)
  1999-05-08  0:00           ` dewar
  1999-05-10  0:00             ` Roger Racine
@ 1999-05-10  0:00             ` Nick Roberts
  1999-05-11  0:00               ` Robert Dewar
  1999-05-11  0:00               ` Robert Dewar
  1999-05-10  0:00             ` delay until and GNAT - expand Roger Racine
                               ` (4 subsequent siblings)
  6 siblings, 2 replies; 37+ messages in thread
From: Nick Roberts @ 1999-05-10  0:00 UTC (permalink / raw)


I'd just like to add a little note on context switching.

[:1:]  On some processors, a 'fast' context switch can be achieved in just a
few CPU clock cycles (by simply switching register banks), which, on a
modern 'RISC' processor, could equate to a mere few nanoseconds.  In
practice, obviously, this is a facility likely to be used only in HRT
embedded systems, or for fundamental system software.

[:2:]  Many processor architectures today provide built-in support for
(normal) context switching, so that the operating system will usually have
very little to do with the speed of these context switches.  Switches can
generally be achieved within a few dozen memory clock cycles (typically
out-of-cache), which will be, for most modern microcomputers, in the
ballpark of 1 microsecond (give or take an order of magnitude).

[:3:]  Some operating systems (naming no names ;-) are amazingly slow at
context switches, taking many thousands of instructions to achieve just one
switch.

If you can get access to a high-resolution system clock from within a
compiled program, you can easily test the performance of a particular
system.  Try running under different loading conditions: 1 task; 10 tasks;
100 tasks; 1000 tasks.  (You may find the 1000 trial goes kerplunk.)  Come
to think of it, I might write a little Ada program to do this and post it on
comp.lang.ada for fun!

-------------------------------------
Nick Roberts
-------------------------------------








^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until   and GNAT
  1999-05-06  0:00 ` Roger Racine
@ 1999-05-10  0:00   ` Nick Roberts
  1999-05-11  0:00     ` Context Switching Nick Roberts
  0 siblings, 1 reply; 37+ messages in thread
From: Nick Roberts @ 1999-05-10  0:00 UTC (permalink / raw)


Okay here it is:

with Ada.Calendar, Ada.Text_IO, Ada.Integer_Text_IO, Ada.Float_Text_IO,
Ada.Numerics.Elementary_Functions;
use  Ada.Calendar, Ada.Text_IO, Ada.Integer_Text_IO, Ada.Float_Text_IO,
Ada.Numerics.Elementary_Functions;

procedure Test_Task_Switching is

   Tasks:      constant := 10;
   Iterations: constant := 2000;

--   package Duration_IO is new Fixed_IO(Duration);

   task type Test_Task is
      entry Results (Task_Start,
                     Task_Stop:  out Time;
                     Switches:   out Natural;
                     Diff_Sum:   out Float;
                     Sum_Square: out Float);
   end;

   task body Test_Task is

      Task_Start,
      Task_Stop:  Time;
      Switches:   Natural := 0;
      Diff_Sum:   Float   := 0.0;
      Sum_Square: Float   := 0.0;

      Start: Time;
      Stop:  Time;
      Diff:  Float;

   begin
      Put('<');
      Task_Start := Clock;
      for i in Natural range 1..Iterations loop
         Start := Clock;
         delay 0.0; -- task switch?
         Stop := Clock;
         if Stop /= Start then
            Switches   := Switches + 1;
            Diff       := Float(Stop-Start);
            Diff_Sum   := Diff_Sum + Diff;
            Sum_Square := Sum_Square + Diff**2;
         end if;
      end loop;
      Task_Stop := Clock;
      Put('>');
      accept Results (Task_Start,
                      Task_Stop:  out Time;
                      Switches:   out Natural;
                      Diff_Sum:   out Float;
                      Sum_Square: out Float) do
         Task_Start := Test_Task.Task_Start;
         Task_Stop  := Test_Task.Task_Stop;
         Switches   := Test_Task.Switches;
         Diff_Sum   := Test_Task.Diff_Sum;
         Sum_Square := Test_Task.Sum_Square;
      end;
   end Test_Task;

   Testers: array (1..Tasks) of Test_Task;

   Task_Start,
   Task_Stop:  Time;
   Switches:   Natural;
   Diff_Sum:   Float;
   Sum_Square: Float;

   Grand_Switches:   Natural := 0;
   Grand_Diff_Sum:   Float   := 0.0;
   Grand_Sum_Square: Float   := 0.0;

   Average, Std_Dev: Float;

begin

   for i in Testers'Range loop
      Testers(i).Results(Task_Start,Task_Stop,Switches,Diff_Sum,Sum_Square);
      Grand_Switches   := Grand_Switches + Switches;
      Grand_Diff_Sum   := Grand_Diff_Sum + Diff_Sum;
      Grand_Sum_Square := Grand_Sum_Square + Sum_Square;
   end loop;

   New_Line;
   Put("Tasks:      ");   Put(Tasks,Width=>9);                  New_Line;
   Put("Iterations: ");   Put(Iterations,Width=>9);             New_Line;
   Put("Switches:   ");   Put(Grand_Switches,Width=>9);         New_Line;

   if Grand_Switches /= 0 then

      Average := Grand_Diff_Sum/Float(Grand_Switches);
      Std_Dev := Sqrt(Grand_Sum_Square/Float(Grand_Switches)-Average**2);

      Put("Average:    ");   Put(Average,Fore=>2,Aft=>6,Exp=>0);   New_Line;
      Put("Std Dev:    ");   Put(Std_Dev,Fore=>2,Aft=>6,Exp=>0);   New_Line;

   end if;

   Put_Line("Program completed.");

end;

At least two caveats however: (a) the clock used (be it Ada.Calendar or
whatever) must be HIGH RESOLUTION (i.e. higher than the lengths of the
context switches it's measuring) or the results will be bogus; (b) the
heuristic for deciding whether a task switch occurred (assume not if no
apparent time elapsed) is potentially dodgy both ways (false negative, false
positive) and you should replace it with something better if you can.
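For caveat (a), one option (a sketch, assuming the GNAT port in question supports Annex D) is to take the timestamps from Ada.Real_Time rather than Ada.Calendar, since its clock is typically much finer grained; the package even tells you its resolution:

```ada
with Ada.Real_Time; use Ada.Real_Time;
with Ada.Text_IO;
procedure Check_Resolution is
   package Dur_IO is new Ada.Text_IO.Fixed_IO (Duration);
begin
   --  Tick is the real-time clock's resolution.  It should be well below
   --  the context-switch times being measured, or the results are bogus.
   Ada.Text_IO.Put ("Real_Time clock tick (seconds): ");
   Dur_IO.Put (To_Duration (Tick), Fore => 1, Aft => 9);
   Ada.Text_IO.New_Line;
end Check_Resolution;
```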

There are lots of obvious extensions and improvements to this program (e.g.
minima and maxima).

Have fun.

-------------------------------------
Nick Roberts
-------------------------------------

Roger Racine wrote in message ...
[...]
|Please note that all of the problems are associated with the operating
|system(s), not GNAT.  The bottom line is: if you want to use multitasking, in
|any language, and want an upper bound on timing, use a real-time operating
|system.
[...]

You don't necessarily need a real-time operating system, just a half decent
one :-)











^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
  1999-05-08  0:00           ` dewar
                               ` (2 preceding siblings ...)
  1999-05-10  0:00             ` delay until and GNAT - expand Roger Racine
@ 1999-05-11  0:00             ` Roger Racine
       [not found]             ` <rracine.14.00 <rracine.15.000968A0@draper.com>
                               ` (2 subsequent siblings)
  6 siblings, 0 replies; 37+ messages in thread
From: Roger Racine @ 1999-05-11  0:00 UTC (permalink / raw)


In article <7h830a$e4$1@nnrp1.deja.com> Robert Dewar <robert_dewar@my-dejanews.com> writes:
>In article <rracine.14.0008C889@draper.com>,
>  rracine@draper.com (Roger Racine) wrote:

>> I just did a little checking on Wind Rivers' web site, and I
>> found, for the  "MV167C" Motorola 68K board (they do not
>> specify which chip or speed in the  web page), "context
>> switching requires only 3.8 microseconds".  Not exactly 1
>> microsecond, but I got the order of magnitude correct.

>I would say this shows you likely got the order of magnitude
>one off. Implementation of a delay involves more than a simple
>context switch. As I say, we will measure the WR VXW speed
>on a fast machine and see what we get. I am willing to bet
>you are way off in the one microsecond estimate. Yes, it would
>be nice if it were only one microsecond, but I am afraid we will
>not see it. Anyway, let's wait till we can get some data here.

Since I did not have a good real-time OS handy, I decided to check Windows NT, 
decidedly -not- a good real-time OS, but it was right on my desk.  As 
mentioned before, I had in the recent past been using the debugger in this 
area, and remembered single stepping (in assembler) through the delay 
statement.  So I did it again, and counted the "stepi"s.

From the initial "pushl" to set up the call to the delay routine, to the 
return from the call, it took 654 "stepi"s, which I assume is the same thing 
as 654 instructions.  By the way, the vast majority of those instructions were 
in NT system calls, and would not be anywhere near that many in a real-time OS 
(it should not take close to 200 instructions to read the current time, for 
example).

Unfortunately, this is somewhat meaningless, since the Pentium is not exactly 
a RISC processor, and a single instruction can take many clock cycles.  But it 
does indicate that, even in a desktop OS, we are not talking about thousands 
of instructions.

As Robert Dewar has pointed out (in a separate email conversation), there is 
also a great dependency on how the processor is used.  Is the MMU used?  For 
every task switch?  How much cache is there?  So it is truly difficult 
to provide the Reference Manual metrics in this area without giving a large
list of assumptions.

So I will still maintain that I could (and will, when I can get my hands on a 
processor board and real-time OS) get the number down to 1 microsecond.  

But I will also agree that Robert could use a different setup, and could get 
the number up to 10 microseconds using the same OS and processor board.  And 
that, of course, does not include any foolish programming (like having a task 
disable preemption for a millisecond or 2).

So why were these issues not debated when the Reference Manual was being 
reviewed?  What type of metric were the writers thinking about?  Or did they 
expect an answer like "between 350 and 3500 clock cycles, depending on your 
use of the processor"?

Roger Racine




^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: Context switching (was: delay until and GNAT)
  1999-05-11  0:00               ` Robert Dewar
@ 1999-05-11  0:00                 ` Tarjei Tjøstheim Jensen
  1999-05-11  0:00                   ` David Brown
  1999-05-11  0:00                   ` Robert Dewar
  0 siblings, 2 replies; 37+ messages in thread
From: Tarjei Tjøstheim Jensen @ 1999-05-11  0:00 UTC (permalink / raw)



Robert Dewar wrote :
>  "Nick Roberts" wrote:
>> [:2:]  Many processor architectures today provide built-in
>> support for (normal) context switching, so that the operating
>> system will usually have very little to do with the speed of
>> these context switches.  Switches can generally be achieved
>> within a few dozen memory clock cycles (typically
>> out-of-cache), which will be, for most modern microcomputers,
>> in the ballpark of 1 microsecond (+/-1oom).
>
>Can you say what processor architectures you have in mind here?
>Certainly none of the ones that GNAT is commonly used on ...
>The context switch on the x86 in particular is horribly slow,
>and one would like to avoid it in a high efficiency x86 exec

This is from memory, so it might not be accurate:

The transputer and the ARM.

The ARM has a complete register set available for interrupt handling, so you
don't have to save and restore the user mode registers.

I believe the transputer had special hardware support for context switching.
Exactly what it did is something I don't remember.


Greetings,







^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
  1999-05-11  0:00               ` Robert Dewar
@ 1999-05-11  0:00                 ` dennison
  0 siblings, 0 replies; 37+ messages in thread
From: dennison @ 1999-05-11  0:00 UTC (permalink / raw)


In article <7h82qm$bq$1@nnrp1.deja.com>,
  Robert Dewar <robert_dewar@my-dejanews.com> wrote:
> In article <rracine.13.0007AF82@draper.com>,
>   rracine@draper.com (Roger Racine) wrote:
>
> > Note that I said, "given a good real-time operating system".
> > That does  not include any form of Unix (with the possible
> > exception of LynxOS and any  other real-time versions; I have
> > not looked at their documentation recently).
>
> I do not believe that the estimate of one microsecond latency
> for a delay is anywhere near achievable for any of the typical
> operating systems or executives that are used today for typical
> real time systems. Roger, do some measurements if you want to
> challenge this. Most certainly Lynx, which I know well cannot
> begin to achieve this performance. Modern RISC machines have
> a lot of state to save and restore, and of course context
> switches typically involve cache switch overs which waste
> even more time.
>
> Certainly in all measurements we have done on GNAT, the time
> spent in pthreads calls completely overwhelms all the time
> in the GNAT runtime itself.
>
> Probably the best case is to run VxWorks on a really fast
> machine, I will see if I can get some measurements for this.

If your "really fast machine" is a PII PC, beware. There's a bug in
vxWorks that actually causes it to take much *longer* to switch tasks on
a PII than on a Pentium. We have a patch for it here, which supposedly
is in Tornado II.

I'll see if I can't get some rough numbers from Windview...

--
T.E.D.


--== Sent via Deja.com http://www.deja.com/ ==--
---Share what you know. Learn what you don't.---




^ permalink raw reply	[flat|nested] 37+ messages in thread

* Context Switching
  1999-05-10  0:00   ` Nick Roberts
@ 1999-05-11  0:00     ` Nick Roberts
  1999-05-11  0:00       ` Robert Dewar
  0 siblings, 1 reply; 37+ messages in thread
From: Nick Roberts @ 1999-05-11  0:00 UTC (permalink / raw)


In another thread I said some processors could do a context switch in a few
'picoseconds'.  I am grateful to Keith Thompson for pointing out to me that I
meant 'nanoseconds'.  It'll happen one day, of course, but not until they've
solved the problem of those pesky tunnelling electrons.  Sorry!

-------------------------------------
Nick Roberts
-------------------------------------







^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
  1999-05-11  0:00               ` Robert Dewar
@ 1999-05-11  0:00                 ` dennison
  1999-05-11  0:00                   ` Robert Dewar
  1999-05-12  0:00                 ` delay until and GNAT - where to get the info isaac buchwald
  1 sibling, 1 reply; 37+ messages in thread
From: dennison @ 1999-05-11  0:00 UTC (permalink / raw)


In article <7h830a$e4$1@nnrp1.deja.com>,
  Robert Dewar <robert_dewar@my-dejanews.com> wrote:
> In article <rracine.14.0008C889@draper.com>,
>   rracine@draper.com (Roger Racine) wrote:
>
> > I just did a little checking on Wind Rivers' web site, and I
> > found, for the  "MV167C" Motorola 68K board (they do not
> > specify which chip or speed in the  web page), "context
> > switching requires only 3.8 microseconds".  Not exactly 1
> > microsecond, but I got the order of magnitude correct.
>
> I would say this shows you likely got the order of magnitude
> one off. Implementation of a delay involves more than a simple
> context switch. As I say, we will measure the WR VXW speed
> on a fast machine and see what we get. I am willing to bet
> you are way off in the one microsecond estimate. Yes, it would
> be nice if it were only one microsecond, but I am afraid we will
> not see it. Anyway, let's wait till we can get some data here.

I'm not sure *exactly* what you were hoping to measure. However, I
happen to have an old Windview log here of a program that uses "delay
until" for scheduling. It was taken on a PII-400 PC.

What I am seeing is that "interrupt 0", the clock interrupt, takes about
11 microseconds. Then the slightly misnamed "idle" task continues to
execute for about 11 micros. Then my Ada task begins. Where in there my
Ada code starts executing again rather than vxWorks system calls, I
cannot tell with Windview. But all totaled that is about 22 microseconds
from the clock tick at which the delay expired to the time my Ada task's
context was switched to.

Note that this is with the Pentium II fix that is in Tornado II (the
fix prevents a rather bogus and very time-consuming cache
invalidation, as I understand it). Before the fix we were seeing times
of 19 and 50 micros respectively, for a total of about 70.

--
T.E.D.






^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: Context switching (was: delay until and GNAT)
  1999-05-11  0:00                 ` Tarjei Tjøstheim Jensen
@ 1999-05-11  0:00                   ` David Brown
  1999-05-11  0:00                   ` Robert Dewar
  1 sibling, 0 replies; 37+ messages in thread
From: David Brown @ 1999-05-11  0:00 UTC (permalink / raw)



>>>>> On Tue, 11 May 1999 15:41:21 +0200, "Tarjei Tjøstheim Jensen" <tarjei.jensen@kvaerner.no> said:

> Robert Dewar wrote :

>>> [:2:]  Many processor architectures today provide built-in
>>> support for (normal) context switching, so that the operating
>>> system will usually have very little to do with the speed of
>>> these context switches.  Switches can generally be achieved
>>> within a few dozen memory clock cycles (typically
>>> out-of-cache), which will be, for most modern microcomputers,
>>> in the ballpark of 1 microsecond (+/-1oom).
>> 
>> Can you say what processor architectures you have in mind here?
>> Certainly none of the ones that GNAT is commonly used on ...
>> The context switch on the x86 in particular is horribly slow,
>> and one would like to avoid it in a high efficiency x86 exec

> This is from memory, so it might not be accurate:

> The transputer and the ARM.

> The ARM has a complete register set available for interrupt handling so you
> don't have to save and restore the user mode register.

At least on the ARM cores I've used, there isn't a complete register
set available.  There is a special kind of interrupt known as a fast
interrupt.  It has a subset of registers available for it.  Seems to
me to be the kind of thing a small piece of assembly would use,
probably not that useful for high-level code.

Dave Brown




^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
  1999-05-11  0:00                 ` dennison
@ 1999-05-11  0:00                   ` Robert Dewar
  0 siblings, 0 replies; 37+ messages in thread
From: Robert Dewar @ 1999-05-11  0:00 UTC (permalink / raw)


In article <7h9j77$61l$1@nnrp1.deja.com>,
  dennison@telepath.com wrote:
> What I am seeing is that "interrupt 0", the clock interrupt, takes
> about 11 microseconds. Then the slightly misnamed "idle" task
> continues to execute for about 11 micros. Then my Ada task begins.
> Where in there my Ada code starts executing again rather than vxWorks
> system calls, I cannot tell with Windview. But all totaled that is
> about 22 microseconds from the clock tick at which the delay expired
> to the time my Ada task's context was switched to.

20 microseconds sounds much more in the region that I would
expect than Roger's one microsecond.

Robert Dewar






^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: Context switching (was: delay until and GNAT)
  1999-05-11  0:00                 ` Tarjei Tjøstheim Jensen
  1999-05-11  0:00                   ` David Brown
@ 1999-05-11  0:00                   ` Robert Dewar
  1 sibling, 0 replies; 37+ messages in thread
From: Robert Dewar @ 1999-05-11  0:00 UTC (permalink / raw)



In article <7h9cgs$c862@ftp.kvaerner.com>,
  "Tarjei Tjøstheim Jensen" <tarjei.jensen@kvaerner.no> wrote:
> I believe the transputer had special hardware support for
> context switching.
> What it exactly did is something which I don't remember.

The trick on the transputer was that a context switch could
happen only at a jump, and it destroyed all registers, so it
could be extremely fast (faster than a call instruction, which
had to mess with some registers).






^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: Context Switching
  1999-05-11  0:00     ` Context Switching Nick Roberts
@ 1999-05-11  0:00       ` Robert Dewar
  1999-05-11  0:00         ` Robert I. Eachus
  0 siblings, 1 reply; 37+ messages in thread
From: Robert Dewar @ 1999-05-11  0:00 UTC (permalink / raw)


In article <37383ab3@eeyore.callnetuk.com>,
  "Nick Roberts" <nickroberts@callnetuk.com> wrote:
> In another thread I said some processors could do a context switch in
> a few 'picoseconds'.  I am grateful to Keith Thompson for pointing out
> to me that I meant 'nanoseconds'.  It'll happen one day, of course, but
> not until they've solved the problem of those pesky tunnelling
> electrons.  Sorry!


You have to be careful here. The raw hardware speed for a
context switch is not what is interesting, what is interesting
is the time for executing the COMPLETE pthread call that causes
the context switch.

For example, the actual raw hardware speed for changing a task
priority is probably just a single store instruction, but it may
well take hundreds or even thousands of instructions to filter
through the necessary kernel machinery to get to the point of
issuing that store!






^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
       [not found]             ` <rracine.14.00 <rracine.15.000968A0@draper.com>
@ 1999-05-11  0:00               ` Robert Dewar
  0 siblings, 0 replies; 37+ messages in thread
From: Robert Dewar @ 1999-05-11  0:00 UTC (permalink / raw)


In article <rracine.15.000968A0@draper.com>,
  rracine@draper.com (Roger Racine) wrote:
> So why were these issues not debated when the Reference Manual
> was being  reviewed?  What type of metric were the writers
> thinking about?  Or did they  expect an answer like "between
> 350 and 3500 clock cycles, depending on your use of the
> processor"?
>
> Roger Racine
>

It is interesting to post the starting point of this thread:

    Does someone know the upper bound on the lateness of delay until
    and delay relative for GNAT implementations on Win95, WinNT or Linux.

Roger maintains that a good real-time operating system can get this
figure down to one microsecond. This clearly depends on what is meant
by "good" RTOS. If you mean one which you construct for the purposes
of Ada, I think one microsecond is far too long (the RT exec I wrote
for Honeywell for an 8080 would do full task switches in a few
microseconds, and I think machines are a *bit* faster now :-)

However, I suspect that virtually everyone is running GNAT on systems
that are, by Roger's definition, not good. In particular, the only
quantitative input we have is for VxWorks on a 400MHz Pentium II, with
a special patch from WRS to tune up context switching, and this figure
is 22 microseconds (it was 70 microseconds before the tuneup).

I prefer to deal with what I can actually see today, and what people
are actually running, rather than an imagined ideal situation. If
Roger can make this situation true, great.

One thing that has happened in the Ada world is that virtually no one
is working on bare boards any more; everyone wants to run on top of
real Unixes (we know of serious RT development on top of Sparc
Solaris, for example), or real-time executives like VxWorks. A lot of
serious real-time work is also being done on NT.

As for what people had in mind in the RM metrics? First, these were
the product of a relatively small group of people, and I don't think
many people paid much attention to them. Indeed, all documentation
requirements are a bit bogus in a formal standard for a language,
since they have more to do with usability, and cannot, for example, be
tested during validation in any meaningful way.

Historically, this concern came from a mindset of implementing Ada on
a bare board with complete control over the Ada tasking executive, as
was common in the Ada 83 world. There were even those who insisted
that specific timing constraints be written into the requirements
(e.g. RV shall take no more than XXX instructions/cycles etc.). That
would clearly be absurd in the context of the ISO standard, and the
metrics are a kind of compromise position that also does not make a
whole lot of sense.

At least, speaking as one of the reviewers, that is my view of the
metrics requirements in the RM.

P.S. Someone should post a delay latency measuring program; it's not
that hard to write, and then we can run it on multiple systems and
compilers. Of course it does not give the metric, since this is not
measurable, but it is still an interesting figure.

P.P.S. When I wrote earlier in the thread that Roger was "quite wrong"
in suggesting the one microsecond figure, I was really thinking in
terms of the original post quoted above, not his claim that he can
achieve this in new software. I apologize for this confusion. In fact
I think that for a purpose-written Ada exec on a reasonable chip, one
microsecond is indeed conservative.

P.P.P.S. I am not sure that Roger's measurements are meaningful for
NT. I would not expect stepi to be able to step into ring 0 kernel
stuff so easily, and I suspect that these results may be misleading.
Really the only thing that is convincing is to run an Ada test
program.
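
[Editor's note: the delay latency measuring program suggested in the
P.S. might look like the rough Ada sketch below. It assumes Ada 95's
Ada.Real_Time; the 10 ms period and the iteration count are arbitrary
illustrative choices, and it measures observed lateness on one run,
not the RM metric.]

```ada
with Ada.Real_Time; use Ada.Real_Time;
with Ada.Text_IO;   use Ada.Text_IO;

procedure Delay_Latency is
   Period     : constant Time_Span := Milliseconds (10);
   Iterations : constant := 1000;
   Next       : Time      := Clock + Period;
   Lateness   : Time_Span;
   Worst      : Time_Span := Time_Span_Zero;
begin
   for I in 1 .. Iterations loop
      delay until Next;
      --  Lateness = actual wakeup time minus requested wakeup time
      Lateness := Clock - Next;
      if Lateness > Worst then
         Worst := Lateness;
      end if;
      Next := Next + Period;
   end loop;
   Put_Line ("Worst observed lateness:"
             & Duration'Image (To_Duration (Worst)) & " sec");
end Delay_Latency;
```

The worst case over many iterations is the interesting number for
real-time purposes; an average hides the occasional long preemption.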






^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
  1999-05-10  0:00             ` Roger Racine
  1999-05-10  0:00               ` Joel Sherrill
  1999-05-11  0:00               ` Robert Dewar
@ 1999-05-11  0:00               ` isaac buchwald
  1999-05-11  0:00                 ` dennison
  1999-05-12  0:00                 ` Robert Dewar
  2 siblings, 2 replies; 37+ messages in thread
From: isaac buchwald @ 1999-05-11  0:00 UTC (permalink / raw)


  GNAT have the number (a kind of metric), but they don't want to
publish it.
   Does anybody wonder why there are no more Ada aficionados?

                     Have a nice day - especially R. Dewar!



Roger Racine wrote in message ...
>>In article <rracine.12.00086B05@draper.com>,
>>  rracine@draper.com (Roger Racine) wrote:
>  ......
>>>
>>> I am surprised GNAT documentation does not have this, except that the
number
>>> will be different for every underlying OS.
>>>
>>> Roger Racine
>
cycles + the underlying
>operating system context switch" would meet the spirit of the RM.  I know
Wind
>River publishes their bounds.  It is quite reasonable to have users of the
>metrics look at other documentation, as long as it is referenced.
>
>Roger Racine






^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
  1999-05-11  0:00               ` isaac buchwald
@ 1999-05-11  0:00                 ` dennison
  1999-05-12  0:00                 ` Robert Dewar
  1 sibling, 0 replies; 37+ messages in thread
From: dennison @ 1999-05-11  0:00 UTC (permalink / raw)


In article <u1%Z2.22$801.870688@news.siol.net>,
  "isaac buchwald" <isaac.buchwald@velenje.cx> wrote:
>   Gnat  have  the  number   (kind of metric )  but   the  don't  want to
> publish   it.
>    Does  anybody   wander   why   there  are  no more   ada  aficados.

Hmmm. I think someone has been watching too much "X-files"...

--
T.E.D.






^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: Context Switching
  1999-05-11  0:00       ` Robert Dewar
@ 1999-05-11  0:00         ` Robert I. Eachus
  1999-05-12  0:00           ` dennison
  0 siblings, 1 reply; 37+ messages in thread
From: Robert I. Eachus @ 1999-05-11  0:00 UTC (permalink / raw)




Robert Dewar wrote:
 
> You have to be careful here. The raw hardware speed for a
> context switch is not what is interesting, what is interesting
> is the time for executing the COMPLETE pthread call that causes
> the context switch.
> 
> For example, the actual raw hardware speed for changing a task
> priority is probably just a single store instruction, but it may
> well take hundreds or even thousands of instructions to filter
> through the necessary kernel machinery to get to the point of
> issuing that store!
 
     Actually we have been finding on some modern microprocessors that
the (distributed) overhead of cache misses can dominate other costs.  On
others, where the cache is a "physical" cache that does not have to be
invalidated, this overhead is very small.  I don't remember which chips
are which here, but there were some cases where the cache miss overhead
tripled the cost of a thread switch.

-- 

                                        Robert I. Eachus

with Standard_Disclaimer;
use  Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...




^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
  1999-05-10  0:00             ` Roger Racine
  1999-05-10  0:00               ` Joel Sherrill
@ 1999-05-11  0:00               ` Robert Dewar
  1999-05-11  0:00                 ` dennison
  1999-05-11  0:00               ` isaac buchwald
  2 siblings, 1 reply; 37+ messages in thread
From: Robert Dewar @ 1999-05-11  0:00 UTC (permalink / raw)


In article <rracine.13.0007AF82@draper.com>,
  rracine@draper.com (Roger Racine) wrote:

> Note that I said, "given a good real-time operating system".
> That does  not include any form of Unix (with the possible
> exception of LynxOS and any  other real-time versions; I have
> not looked at their documentation recently).

I do not believe that the estimate of one microsecond latency
for a delay is anywhere near achievable for any of the typical
operating systems or executives that are used today for typical
real time systems. Roger, do some measurements if you want to
challenge this. Most certainly Lynx, which I know well cannot
begin to achieve this performance. Modern RISC machines have
a lot of state to save and restore, and of course context
switches typically involve cache switch overs which waste
even more time.

Certainly in all measurements we have done on GNAT, the time
spent in pthreads calls completely overwhelms all the time
in the GNAT runtime itself.

Probably the best case is to run VxWorks on a really fast
machine, I will see if I can get some measurements for this.






^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
  1999-05-10  0:00             ` delay until and GNAT - expand Roger Racine
@ 1999-05-11  0:00               ` Robert Dewar
  1999-05-11  0:00                 ` dennison
  1999-05-12  0:00                 ` delay until and GNAT - where to get the info isaac buchwald
  0 siblings, 2 replies; 37+ messages in thread
From: Robert Dewar @ 1999-05-11  0:00 UTC (permalink / raw)


In article <rracine.14.0008C889@draper.com>,
  rracine@draper.com (Roger Racine) wrote:

> I just did a little checking on Wind Rivers' web site, and I
> found, for the  "MV167C" Motorola 68K board (they do not
> specify which chip or speed in the  web page), "context
> switching requires only 3.8 microseconds".  Not exactly 1
> microsecond, but I got the order of magnitude correct.

I would say this shows you likely got the order of magnitude
one off. Implementation of a delay involves more than a simple
context switch. As I say, we will measure the WR VXW speed
on a fast machine and see what we get. I am willing to bet
you are way off in the one microsecond estimate. Yes, it would
be nice if it were only one microsecond, but I am afraid we will
not see it. Anyway, let's wait till we can get some data here.






^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: Context switching (was: delay until and GNAT)
  1999-05-10  0:00             ` Context switching (was: delay until and GNAT) Nick Roberts
@ 1999-05-11  0:00               ` Robert Dewar
  1999-05-11  0:00               ` Robert Dewar
  1 sibling, 0 replies; 37+ messages in thread
From: Robert Dewar @ 1999-05-11  0:00 UTC (permalink / raw)


In article <3736e102@eeyore.callnetuk.com>,
  "Nick Roberts" <nickroberts@callnetuk.com> wrote:
> Come
> to think of it, I might write a little Ada program to do this
> and post it on comp.lang.ada for fun!


Remember that the issue here is the latency of delay, so what
is interesting is a program that specifically tests that!






^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: Context switching (was: delay until and GNAT)
  1999-05-10  0:00             ` Context switching (was: delay until and GNAT) Nick Roberts
  1999-05-11  0:00               ` Robert Dewar
@ 1999-05-11  0:00               ` Robert Dewar
  1999-05-11  0:00                 ` Tarjei Tjøstheim Jensen
  1 sibling, 1 reply; 37+ messages in thread
From: Robert Dewar @ 1999-05-11  0:00 UTC (permalink / raw)


In article <3736e102@eeyore.callnetuk.com>,
  "Nick Roberts" <nickroberts@callnetuk.com> wrote:
> I'd just to add a little note on context switching.
>
> [:2:]  Many processor architectures today provide built-in
> support for (normal) context switching, so that the operating
> system will usually have very little to do with the speed of
> these context switches.  Switches can generally be achieved
> within a few dozen memory clock cycles (typically
> out-of-cache), which will be, for most modern microcomputers,
> in the ballpark of 1 microsecond (+/-1oom).

Can you say what processor architectures you have in mind here?
Certainly none of the ones that GNAT is commonly used on ...
The context switch on the x86 in particular is horribly slow,
and one would like to avoid it in a high efficiency x86 exec
(I once saw an RFP for an Ada compiler from Intel that required
that tasks use the hardware tasking of the x86. There was a foot
note saying that it was understood that this requirement would
degrade performance :-)







^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
  1999-05-08  0:00           ` dewar
                               ` (5 preceding siblings ...)
       [not found]             ` <rracine.14.00 <rracine.17.0007DA28@draper.com>
@ 1999-05-12  0:00             ` Roger Racine
  6 siblings, 0 replies; 37+ messages in thread
From: Roger Racine @ 1999-05-12  0:00 UTC (permalink / raw)


In article <7h9nal$9dr$1@nnrp1.deja.com> Robert Dewar <robert_dewar@my-dejanews.com> writes:
>In article <7h9j77$61l$1@nnrp1.deja.com>,
>  dennison@telepath.com wrote:
>> What I am seeing is that "interrupt 0", the clock interrupt, takes
>> about 11 microseconds. Then the slightly misnamed "idle" task
>> continues to execute for about 11 micros. Then my Ada task begins.
>> Where in there my Ada code starts executing again rather than vxWorks
>> system calls, I cannot tell with Windview. But all totaled that is
>> about 22 microseconds from the clock tick at which the delay expired
>> to the time my Ada task's context was switched to.

>20 microseconds sounds much more in the region that I would
>expect than Roger's one microsecond.

>Robert Dewar

We are obviously talking about different metrics.  The original message (as 
expanded) asked about the "delay 0.0" case, not a positive value.  So your 
clock interrupt goes away, as does the idle task execution (which is all of 
your measured time).

All that is needed (not necessarily all that is done) is a check for positive, 
and, if not, a call to "sched_yield" (POSIX interface), which puts the task at 
the end of the ready queue and performs a context switch.

Let's make sure we are all measuring the same thing.

Roger Racine
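
[Editor's note: measuring the "delay 0.0" path Roger describes might
look like the hypothetical Ada sketch below: two equal-priority tasks
each yielding in a loop, so every "delay 0.0" really forces a context
switch. The task name and iteration count are illustrative, and on a
desktop OS the result includes far more than the raw queue
manipulation.]

```ada
with Ada.Real_Time; use Ada.Real_Time;
with Ada.Text_IO;   use Ada.Text_IO;

procedure Yield_Cost is
   Count : constant := 100_000;

   --  A second task, also yielding, so the environment task always
   --  has someone to switch to.
   task Partner;
   task body Partner is
   begin
      for I in 1 .. Count loop
         delay 0.0;  --  a yield to the end of the ready queue
      end loop;
   end Partner;

   Start, Stop : Time;
begin
   Start := Clock;
   for I in 1 .. Count loop
      delay 0.0;
   end loop;
   Stop := Clock;
   --  Each task gives up the processor Count times, so divide the
   --  elapsed time by the total number of yields.
   Put_Line ("Approx. cost per yield:"
             & Duration'Image (To_Duration (Stop - Start) / (2 * Count))
             & " sec");
end Yield_Cost;
```

This measures only the yield-and-switch path, which is the metric the
original question (as expanded) was about; the clock-interrupt and
idle-task costs measured with Windview do not appear here.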




^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: Context Switching
  1999-05-11  0:00         ` Robert I. Eachus
@ 1999-05-12  0:00           ` dennison
  0 siblings, 0 replies; 37+ messages in thread
From: dennison @ 1999-05-12  0:00 UTC (permalink / raw)


In article <3738A31D.1013C463@mitre.org>,
  "Robert I. Eachus" <eachus@mitre.org> wrote:
>
>
>      Actually we have been finding on some modern microprocessors that
> the (distributed) overhead of cache misses can dominate other costs.
On
> others, where the cache is a "physical" cache that does not have to be
> invalidated,
> this overhead is very small.  I don't remember which chips are which
> here, but there were some cases where the cache miss overhead tripled
> the cost of a thread switch.

The "patch" I was talking about in another posting for vxWorks on a PII
was basically the removal of a "write back invalidate cache" instruction
(WBINVD). According to Intel, it took about 5 cycles or so on a Pentium
and 486, but takes over 2000 on a PII. Yikes!

We knew there was a problem when the same code ran *slower* on a PII-400
than on a Pentium 166...

--
T.E.D.






^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
       [not found]             ` <rracine.14.00 <rracine.17.0007DA28@draper.com>
@ 1999-05-12  0:00               ` dennison
  0 siblings, 0 replies; 37+ messages in thread
From: dennison @ 1999-05-12  0:00 UTC (permalink / raw)


In article <rracine.17.0007DA28@draper.com>,
  rracine@draper.com (Roger Racine) wrote:
> In article <7h9nal$9dr$1@nnrp1.deja.com> Robert Dewar
> <robert_dewar@my-dejanews.com> writes:
> >In article <7h9j77$61l$1@nnrp1.deja.com>,
> >  dennison@telepath.com wrote:
> >> What I am seeing is that "interrupt 0", the clock interrupt, takes
> >> about 11 microseconds. Then the slightly misnamed "idle" task
> >> continues to execute for about 11 micros. Then my Ada task begins.
> >> Where in there my
> We are obviously talking about different metrics.  The original
> message (as expanded) asked about the "delay 0.0" case, not a
> positive value.  So your clock interrupt goes away, as does the idle
> task execution (which is all of your measured time).
>
> All that is needed (not necessarily all that is done) is a check for
> positive, and, if not, a call to "sched_yield" (POSIX interface),
> which puts the task at the end of the ready queue and performs a
> context switch.


Well, it appears that Windview reports the work of a context switch in
the context of the currently running task (yikes, what a sentence!). So
I think it can be reasonably well assumed that the entire period of time
I reported for the "idle" task is being taken up by an attempt to change
the task context from that of the idle task to that of the newly-readied
60hz task. What we *don't* know is how much, if any, task switch
overhead occurs after Windview reports the context switch. But I think
we can reasonably take 11 micros as a lower bound for the amount of time
it takes to perform a context switch in vxWorks (on a PII-400, with the
cache invalidate instruction disabled, etc, etc.).

--
T.E.D.






^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - where to get the info
  1999-05-11  0:00               ` Robert Dewar
  1999-05-11  0:00                 ` dennison
@ 1999-05-12  0:00                 ` isaac buchwald
  1999-05-12  0:00                   ` Robert Dewar
  1 sibling, 1 reply; 37+ messages in thread
From: isaac buchwald @ 1999-05-12  0:00 UTC (permalink / raw)


  To R. Dewar and TV-viewers: please do read the message I've got
from GNAT:

If you send a request to report@gnat.com with your customer number
(assuming you are supported), we will be happy to answer this question
for you. If you are not a supported customer, I don't think there is
any way to obtain this information unfortunately. Your best bet is
just to do some measurements and hope they are typical.






^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - where to get the info
  1999-05-12  0:00                 ` delay until and GNAT - where to get the info isaac buchwald
@ 1999-05-12  0:00                   ` Robert Dewar
  0 siblings, 0 replies; 37+ messages in thread
From: Robert Dewar @ 1999-05-12  0:00 UTC (permalink / raw)


In article <fWk_2.52$801.1578174@news.siol.net>,
  "isaac buchwald" <isaac.buchwald@velenje.cx> wrote:
>   to  R.  Dewar  and   TV-viewers  please  do  read   the  mess.  i've  got
> from  GNAT:
>
> If you send a request to report@gnat.com with your customer
> number (assuming you are supported), we will be happy to
> answer this question for you. If you are not a supported
> customer, I don't think there is any

What this means is that if there is a customer who needs
performance data of some kind, we will work with them to
understand exactly what they need, and how best to obtain
the information. We do not have a set of numbers that we
secretly guard! But in the few cases where customers have
had performance concerns of this kind, we have been able to
figure out how to meet those concerns.






^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: delay until and GNAT - expand
  1999-05-11  0:00               ` isaac buchwald
  1999-05-11  0:00                 ` dennison
@ 1999-05-12  0:00                 ` Robert Dewar
  1 sibling, 0 replies; 37+ messages in thread
From: Robert Dewar @ 1999-05-12  0:00 UTC (permalink / raw)


In article <u1%Z2.22$801.870688@news.siol.net>,
  "isaac buchwald" <isaac.buchwald@velenje.cx> wrote:
>  Gnat  have  the  number   (kind of metric )  but   the
> don't  want to publish   it. Does  anybody   wander   why
> there  are  no more   ada  aficados.

Well if "GNAT have the number", he (or is it she, or it?) has
not divulged this information to me or anyone else at ACT!







^ permalink raw reply	[flat|nested] 37+ messages in thread

end of thread, other threads:[~1999-05-12  0:00 UTC | newest]

Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
1999-05-05  0:00 delay until and GNAT isaac buchwald
1999-05-05  0:00 ` dennison
1999-05-06  0:00   ` Buz Cory
1999-05-06  0:00     ` Robert Dewar
1999-05-06  0:00       ` delay until and GNAT - expand isaac buchwald
1999-05-07  0:00         ` Roger Racine
1999-05-08  0:00           ` dewar
1999-05-10  0:00             ` Roger Racine
1999-05-10  0:00               ` Joel Sherrill
1999-05-11  0:00               ` Robert Dewar
1999-05-11  0:00                 ` dennison
1999-05-11  0:00               ` isaac buchwald
1999-05-11  0:00                 ` dennison
1999-05-12  0:00                 ` Robert Dewar
1999-05-10  0:00             ` Context switching (was: delay until and GNAT) Nick Roberts
1999-05-11  0:00               ` Robert Dewar
1999-05-11  0:00               ` Robert Dewar
1999-05-11  0:00                 ` Tarjei Tjøstheim Jensen
1999-05-11  0:00                   ` David Brown
1999-05-11  0:00                   ` Robert Dewar
1999-05-10  0:00             ` delay until and GNAT - expand Roger Racine
1999-05-11  0:00               ` Robert Dewar
1999-05-11  0:00                 ` dennison
1999-05-11  0:00                   ` Robert Dewar
1999-05-12  0:00                 ` delay until and GNAT - where to get the info isaac buchwald
1999-05-12  0:00                   ` Robert Dewar
1999-05-11  0:00             ` delay until and GNAT - expand Roger Racine
     [not found]             ` <rracine.14.00 <rracine.15.000968A0@draper.com>
1999-05-11  0:00               ` Robert Dewar
     [not found]             ` <rracine.14.00 <rracine.17.0007DA28@draper.com>
1999-05-12  0:00               ` dennison
1999-05-12  0:00             ` Roger Racine
1999-05-05  0:00 ` delay until and GNAT David C. Hoos, Sr.
1999-05-06  0:00 ` Roger Racine
1999-05-10  0:00   ` Nick Roberts
1999-05-11  0:00     ` Context Switching Nick Roberts
1999-05-11  0:00       ` Robert Dewar
1999-05-11  0:00         ` Robert I. Eachus
1999-05-12  0:00           ` dennison

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox