comp.lang.ada
* Re: Ada 95 Timers: Any Alternatives Available, Storage, Efficiency, Reliability, Accuracy, Et Cetera Issues?
  @ 2019-05-30 16:20  8%   ` Dmitry A. Kazakov
  0 siblings, 0 replies; 170+ results
From: Dmitry A. Kazakov @ 2019-05-30 16:20 UTC (permalink / raw)


On 2019-05-30 17:34, Niklas Holsti wrote:
> On 19-05-30 16:57 , Felix_The_Cat@gmail.com wrote:
>>
>> What are the limitations of the "with Timer;" in Ada 95?
> 
> Not understood. There is no standard "package Timer" in Ada 95 (nor 
> other Ada standards, AFAIK).

It looks like the OP meant Ada.Execution_Time.Timers.

>> I want to simply do something in the Ada code when the timer
>> counts down to 0.
> 
> That sounds like some HW timer. Normally you would not use those 
> directly, but instead use the "delay" or "delay until" statements, to 
> wait until the actual time when something should be done, and let the 
> compiler and run-time system handle the HW timers.

Right, alternatives could also be a timed entry call (9.7.2) or the delay
alternative of a selective accept (9.7.1).
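
For illustration, a minimal sketch of the constructs mentioned (Worker,
Do_It, Handle_Timeout, Next_Activation and the delay values are made-up
names for this sketch, not anything from the thread; the enclosing unit is
assumed to "with Ada.Real_Time"):

   --  "delay until" an absolute deadline (RM 9.6):
   declare
      use Ada.Real_Time;
      Deadline : constant Time := Clock + Milliseconds (100);
   begin
      delay until Deadline;
      --  ... do the scheduled work here
   end;

   --  Timed entry call (RM 9.7.2): give up if Do_It is not accepted
   --  within 50 ms:
   select
      Worker.Do_It;
   or
      delay 0.05;
      Handle_Timeout;
   end select;

   --  Delay alternative of a selective accept (RM 9.7.1), inside the
   --  body of task Worker:
   select
      accept Do_It;
   or
      delay until Next_Activation;
      --  ... do the periodic work here
   end select;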

As for limitations of Ada.Execution_Time.Timers, they are from the SW 
design POV. Scheduled actions usually interact with the program logic, 
the states of program objects, and the context of the task performing these 
actions. Ada.Execution_Time.Timers are too low-level for that.

The intended use of Ada.Execution_Time.Timers is for something quite 
decoupled from the program itself, e.g. I can imagine a system health 
monitoring based on Ada.Execution_Time.Timers.
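
For illustration only, a minimal sketch of such a health monitor (all names
are made up; the partition must support D.14.1, "Execution Time Timers"):

with Ada.Execution_Time.Timers;

package Health_Monitor is

   protected Watchdog is
      --  The ceiling must be at least Timers.Min_Handler_Ceiling:
      pragma Interrupt_Priority;
      procedure Overrun (TM : in out Ada.Execution_Time.Timers.Timer);
   end Watchdog;

   --  The Timer discriminant is an access-to-constant Task_Id, so the
   --  monitored task's Task_Id must be aliased, e.g.:
   --
   --     Worker_Id    : aliased constant Ada.Task_Identification.Task_Id :=
   --                      Worker'Identity;
   --     Budget_Timer : Ada.Execution_Time.Timers.Timer (Worker_Id'Access);
   --
   --  Arm the watchdog for a 10 ms CPU-time budget:
   --
   --     Ada.Execution_Time.Timers.Set_Handler
   --       (Budget_Timer, Ada.Real_Time.Milliseconds (10),
   --        Watchdog.Overrun'Access);

end Health_Monitor;

package body Health_Monitor is
   protected body Watchdog is
      procedure Overrun (TM : in out Ada.Execution_Time.Timers.Timer) is
      begin
         null;  --  e.g. log the overrun, raise an alarm, or re-arm the timer
      end Overrun;
   end Watchdog;
end Health_Monitor;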

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


* GNAT.Sockets Streaming inefficiency
@ 2017-06-08 10:36  7% masterglob
  0 siblings, 0 replies; 170+ results
From: masterglob @ 2017-06-08 10:36 UTC (permalink / raw)


Configuration: X64, Linux & Windows (GNATPRO 7.4.2)
When using GNAT.Sockets.Stream_Access, there is a real performance issue with String'Output(...).

My test sends a 1024-character String 500 times using String'Output (TCP_Stream on 127.0.0.1), and the results are:
- Linux : average 'output duration = 3 us
- Windows: average 'output duration = 250 us

From prior discussion with AdaCore, the "String" type is the only one for which this latency is NOT observed on Linux.

Any idea on:
- is there a way to get similar performance on Windows (maybe using another type or method?)
- is there any configuration that may solve this issue?


=============
Result on W7/64
---------
OS=Windows_NT  cygwin
GNAT Pro 7.4.2 (20160527-49)
[SERVER] start...
[SERVER]:bind... 127.0.0.1:4264
[CLIENT] connected...
[SERVER]:connection from 127.0.0.1:56008
[SERVER] Sending 500 messages ...
[CLIENT] waiting for 500 messages ...
[CLIENT]:execution time:  0.000000000
[Server] execution time ( 500msg):  0.140400900 s
[Server] Output_Exec time (1 msg):  0.280801800 ms
[Server] Output_Duration time (1 msg):  0.263417164 ms

=============
Result on Ubuntu/64
---------
OS= 
GNAT Pro 7.4.2 (20160527-49)
[SERVER] start...
[SERVER]:bind... 127.0.0.1:4264
[SERVER]:connection from 127.0.0.1:52574
[CLIENT] connected...
[SERVER] Sending 500 messages ...
[CLIENT] waiting for 500 messages ...
[Server] execution time ( 500msg):  0.001783393 s
[Server] Output_Exec time (1 msg):  0.003072174 ms
[Server] Output_Duration time (1 msg):  0.003204778 ms
[CLIENT]:execution time:  0.001561405

=============
Makefile:
---------
all:build exec
build:
	@gprbuild -p -Ptest_stream_socket_string.gpr
exec: 
	@echo "OS=$$OS  $$OSTYPE"
	@echo $$(gnat --version|head -1)
	@obj/test_stream_socket_string



=============
test_stream_socket_string.gpr:
---------
project Test_Stream_Socket_String is

   for Object_Dir  use "obj";
   for Exec_Dir    use "obj";
   for Main        use ("test_stream_socket_string.adb");
   for Source_Dirs use (".");
   for Languages use ("Ada");

   package Builder is
      for Default_Switches ("Ada") use ("-g","-s","-j0");
   end Builder;

   package Compiler is
      Ada_Opt   := ("-O0");
      Ada_Comp    := ("-gnat12","-g","-gnatU","-gnato","-gnatVa","-fstack-check","-fstack-usage","-gnateE","-gnateF");
      Ada_Style   := ("-gnaty3aAbBCdefhiklL15M120nOprStux");
      Ada_Warning := ("-gnatwah.h.o.st.w");
      for Default_Switches ("Ada") use Ada_Opt & Ada_Comp & Ada_Warning & Ada_Style;
   end Compiler;

   package Binder is
      for Default_Switches ("Ada") use ("-v","-E","-R","-T0");
   end Binder;

   package Linker is
      for Default_Switches ("Ada") use ("-g","-v") ;
   end Linker;

end Test_Stream_Socket_String;



=============
test_stream_socket_string.adb
---------
with Ada.Execution_Time,
     Ada.Real_Time,
     Ada.Text_IO,
     Ada.Exceptions,

     GNAT.Sockets,
     GNAT.Traceback.Symbolic,
     GNAT.OS_Lib;
use GNAT,
    Ada;

use type GNAT.Sockets.Selector_Status,
         Ada.Real_Time.Time,
         Ada.Execution_Time.CPU_Time;

procedure Test_Stream_Socket_String is

   Port    : constant Sockets.Port_Type := 4264;
   Ip_Addr : constant String := "127.0.0.1";

   task type Client_Thread_T is
      entry Start;
      entry Synch;
      entry Wait;
   end Client_Thread_T;
   Client_Thread : Client_Thread_T;
   task type Server_Thread_T is
      entry Start;
      entry Wait;
   end Server_Thread_T;
   Server_Thread : Server_Thread_T;
   
   task body Client_Thread_T is
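      --  NOTE: the client task body was elided in the original post.  What
      --  follows is a minimal, assumed reconstruction (not the poster's
      --  code): connect to the server, read the announced message count,
      --  read the messages, and report the client's execution time.
      Exec_Start  : Execution_Time.CPU_Time;
      Socket      : Sockets.Socket_Type;
      Server_Addr : Sockets.Sock_Addr_Type := (Family => Sockets.Family_Inet,
                                               Addr   => Sockets.Inet_Addr (Ip_Addr),
                                               Port   => Port);
      Channel     : Sockets.Stream_Access;
      Nb_Msg      : Integer;
   begin
      accept Start;
      Sockets.Create_Socket (Socket => Socket);
      loop
         --  Retry until the server socket is listening.
         begin
            Sockets.Connect_Socket (Socket => Socket, Server => Server_Addr);
            exit;
         exception
            when Sockets.Socket_Error => delay 0.01;
         end;
      end loop;
      Text_IO.Put_Line ("[CLIENT] connected...");
      accept Synch;
      Channel    := Sockets.Stream (Socket => Socket);
      Exec_Start := Execution_Time.Clock;
      Nb_Msg     := Integer'Input (Channel);
      Text_IO.Put_Line ("[CLIENT] waiting for" & Nb_Msg'Img & " messages ...");
      for I in 1 .. Nb_Msg loop
         declare
            S : constant String := String'Input (Channel);
            pragma Unreferenced (S);
         begin
            null;
         end;
      end loop;
      Text_IO.Put_Line ("[CLIENT]:execution time: " &
                          Real_Time.To_Duration (Execution_Time.Clock - Exec_Start)'Img);
      accept Wait;
      Sockets.Close_Socket (Socket => Socket);
   exception
      when E : others =>
         Text_IO.Put_Line ("[Client] Exception: " & Exceptions.Exception_Information (E));
         GNAT.OS_Lib.OS_Abort;
   end Client_Thread_T;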
   
   task body Server_Thread_T is
      Nb_Loop         : constant := 500;
      Cpt             : Integer := Nb_Loop;
      Msg_Size        : constant :=  1024; -- 1 Ko
      Exec_Start_Time : Execution_Time.CPU_Time;
      Exec_Output1    : Execution_Time.CPU_Time;
      Exec_Output2    : Real_Time.Time;
      Output_Exec     : Duration := 0.0;
      Output_Duration : Duration := 0.0;
      Listen          : Sockets.Socket_Type;
      Client          : Sockets.Socket_Type;
      Address         : Sockets.Sock_Addr_Type := (Family => Sockets.Family_Inet,
                                                   Addr   => Sockets.Inet_Addr (Ip_Addr),
                                                   Port   => Port);
      Channel         : Sockets.Stream_Access;
   begin
      accept Start;
      Text_IO.Put_Line ("[SERVER] start...");

      Sockets.Create_Socket (Socket => Listen);
      Text_IO.Put_Line ("[SERVER]:bind... " & Sockets.Image (Address));
      Sockets.Bind_Socket (Socket  => Listen,
                           Address => Address);
      Sockets.Listen_Socket (Listen);

      Sockets.Accept_Socket (Listen, Client, Address);
      Text_IO.Put_Line ("[SERVER]:connection from " & Sockets.Image (Sockets.Get_Peer_Name (Client)));
      Channel := Sockets.Stream (Socket => Client);
      Exec_Start_Time := Execution_Time.Clock;

      Integer'Output (Channel, Cpt);
      Text_IO.Put_Line ("[SERVER] Sending" & Cpt'Img & " messages ...");
      
      while Cpt > 0 loop
         -- Text_IO.Put ('+');
         declare
            S : constant String (1 .. Msg_Size) := (others => '?');
         begin
            Exec_Output1 := Execution_Time.Clock;
            Exec_Output2 := Real_Time.Clock;
            String'Output (Channel, S);
            Output_Exec := Output_Exec + 
              Real_Time.To_Duration (Execution_Time.Clock - Exec_Output1);
            Output_Duration := Output_Duration + 
              Real_Time.To_Duration (Real_Time.Clock - Exec_Output2);
         end;

         Cpt := Cpt - 1;

      end loop;

      Text_IO.Put_Line ("[Server] execution time (" & Nb_Loop'Img & "msg): " &
                          Real_Time.To_Duration (Execution_Time.Clock - Exec_Start_Time)'Img & " s");

      Text_IO.Put_Line ("[Server] Output_Exec time (1 msg): " &
                          Duration'Image (1000.0 * Output_Exec / (Nb_Loop - Cpt)) & " ms");
      Text_IO.Put_Line ("[Server] Output_Duration time (1 msg): " &
                          Duration'Image (1000.0 * Output_Duration / (Nb_Loop - Cpt)) & " ms");

      Sockets.Close_Socket (Socket => Listen);

      accept Wait;
      -- Text_IO.New_Line;
   exception
      when E : others =>
         Text_IO.New_Line;
         Text_IO.Put_Line ("[Server] Exception: " & Exceptions.Exception_Information (E));
         Text_IO.Put_Line (Exceptions.Exception_Message (E));
         Text_IO.Put_Line (Traceback.Symbolic.Symbolic_Traceback (E));
         if Cpt /= Nb_Loop then
            Text_IO.Put_Line ("[Server] Output_Duration time: " &
                                Duration'Image (1000.0 * Output_Duration / (Nb_Loop - Cpt)) & " ms");
         end if;
         GNAT.OS_Lib.OS_Abort;
   end Server_Thread_T;
   
begin
   Server_Thread.Start;
   Client_Thread.Start;
   Client_Thread.Synch;
   Server_Thread.Wait;
   
   Client_Thread.Wait;
   -- Text_IO.New_Line;
exception
   when E : others =>
      Text_IO.Put_Line (Exceptions.Exception_Information (E));
      Text_IO.Put_Line (Exceptions.Exception_Message (E));
      Text_IO.Put_Line (Traceback.Symbolic.Symbolic_Traceback (E));
end Test_Stream_Socket_String;


* Re: Languages don't  matter.  A mathematical refutation
  2015-04-09 18:40  4%                                           ` Niklas Holsti
@ 2015-04-09 19:02  0%                                             ` Dmitry A. Kazakov
  0 siblings, 0 replies; 170+ results
From: Dmitry A. Kazakov @ 2015-04-09 19:02 UTC (permalink / raw)


On Thu, 09 Apr 2015 21:40:06 +0300, Niklas Holsti wrote:

> On 15-04-09 17:35 , Dmitry A. Kazakov wrote:
>> On Thu, 09 Apr 2015 15:14:59 +0200, G.B. wrote:
>>
>>> Now, WRT storage management, assume a bounded container.
>>> The implementation may pre-allocate the maximum number of
>>> objects.(*) Given that, why should it not be possible at all,
>>> in no case, to guarantee a known complexity of storage
>>> management? A bounded vector, say, could specify its
>>> storage management needs, on condition that ... Much like
>>> a hash table package would state that
>>>
>>>     "As long as the number of slots is at least N/M,
>>>      finding an object will take no more than f(M, N, g(Hash))."
>>
>> Which is elementary incomputable, as it contains the future tense and thus
>> cannot be a part of any contract.
> 
> It is quite possible to extend the semantics of a programming language 
> with a predefined global (or task-specific) variable "elapsed execution 
> time", and the above contract can be stated as a constraint on the 
> difference between the pre- and post-values of this variable.

I had the impression that Georg wished to compare the complexity [of free block
search?] after cleaning the garbage with the complexity without cleaning.
No time measurements can help that, obviously. And "will take" is not
"took". Furthermore, it is not a proper contract anyway, as it cannot be
verified *before* a program run. The [proper] contract must apply to all
possible runs (all program states).

> In fact, Ada.Execution_Time provides such variables. The Time_Remaining 
> function could be used to express the above constraint.
> 
>>> Like everything that actually handles program data, GC does
>>> influence the program's behavior, e.g. by taking time.
>>
>> That is not the program behavior. The behavior is only things related to
>> the program logic = functional things.
> 
> This is niggling about words, but for real-time systems the execution 
> time of computations is certainly an important property of the program, 
> whether it is called "behaviour" or not.

It is a constraint.

My point was that it is a lesser problem, IMO, when GC violates constraints
such as time and space constraints in an unpredictable manner. The bigger
problem is that it may violate [proper] behavior because of issues with
finalization and references order (weak, strong, the things Randy said
about designing memory management).

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



* Re: Languages don't  matter.  A mathematical refutation
  @ 2015-04-09 18:40  4%                                           ` Niklas Holsti
  2015-04-09 19:02  0%                                             ` Dmitry A. Kazakov
  0 siblings, 1 reply; 170+ results
From: Niklas Holsti @ 2015-04-09 18:40 UTC (permalink / raw)


On 15-04-09 17:35 , Dmitry A. Kazakov wrote:
> On Thu, 09 Apr 2015 15:14:59 +0200, G.B. wrote:
>
>> Now, WRT storage management, assume a bounded container.
>> The implementation may pre-allocate the maximum number of
>> objects.(*) Given that, why should it not be possible at all,
>> in no case, to guarantee a known complexity of storage
>> management? A bounded vector, say, could specify its
>> storage management needs, on condition that ... Much like
>> a hash table package would state that
>>
>>     "As long as the number of slots is at least N/M,
>>      finding an object will take no more than f(M, N, g(Hash))."
>
> Which is elementary incomputable, as it contains the future tense and thus
> cannot be a part of any contract.

It is quite possible to extend the semantics of a programming language 
with a predefined global (or task-specific) variable "elapsed execution 
time", and the above contract can be stated as a constraint on the 
difference between the pre- and post-values of this variable.

In fact, Ada.Execution_Time provides such variables. The Time_Remaining 
function could be used to express the above constraint.

>> Like everything that actually handles program data, GC does
>> influence the program's behavior, e.g. by taking time.
>
> That is not the program behavior. The behavior is only things related to
> the program logic = functional things.

This is niggling about words, but for real-time systems the execution 
time of computations is certainly an important property of the program, 
whether it is called "behaviour" or not.

That it is a platform-dependent property does not make it less important 
- only harder to manage. Formal analysis tools such as SPARK already 
take into account other platform-dependent properties such as the range 
of the predefined Integer type. The formal analysis tools called 
Worst-Case Execution-Time (WCET) Analyzers try to prove exactly this 
kind of contracts, sometimes using very detailed models of the platforms.

Of course garbage collection (GC) complicates WCET analysis a lot, at 
least if it is done synchronously, interrupting the normal computation 
at unpredictable times. A "background" GC would be more manageable, 
perhaps running on a dedicated core or in a low-priority task. (It is 
then necessary to prove, by some analysis, that the garbage-collection 
rate is sufficiently higher than the garbage-creation rate that a memory 
allocation never fails. That would be similar to analyses already being 
done, such as schedulability analyses and analyses to show that buffers 
never overflow.)

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .


* Re: Access parameters and accessibility
  2014-12-17  2:02  5%     ` Adam Beneschan
@ 2014-12-17 23:18  0%       ` Randy Brukardt
  0 siblings, 0 replies; 170+ results
From: Randy Brukardt @ 2014-12-17 23:18 UTC (permalink / raw)


"Adam Beneschan" <adambeneschan@gmail.com> wrote in message 
news:eea29e4c-921a-467c-8007-80e80eda3507@googlegroups.com...
On Tuesday, December 16, 2014 11:46:59 AM UTC-8, Michael B. wrote:
...
>I'm sure Randy was being tongue-in-cheek when he said "get better 
>libraries".

Ahh, no.

...
>For one thing, you'd have to get better libraries than the libraries Ada 
>provides,
>because Ada defines some subprograms with anonymous access 
>parameters--namely
>Ada.Tags.Generic_Dispatching_Constructor, Ada.Execution_Time.Timers.Timer, 
>and >Read|Write_Exception_Occurrence in Ada.Exceptions. Also, the stream 
>attribute
>subprograms ('Read, 'Write, etc.) all have anonymous access parameters, and 
>you
>have to *write* a subprogram that has an anonymous access parameter in 
>order to
>use it as the stream subprogram for a type.  Quelle horreur!

Yup. With the exception of timers, all of the others stem directly from the 
mistake of using an anonymous access for streams. And *that* stems from the 
mistake of not allowing "in out" parameters for functions (as Input is a 
function).

Ergo, in Ada 2012 you would write those as "in out Root_Stream_Type'Class". 
And one should think of them as exactly that. (It's too late to fix this 
mistake, sadly.)

In the case of Timer, (a) no one ever uses this feature, and (b) I have no 
idea why this just isn't

type Timer (T : Ada.Task_Identification.Task_Id) is tagged limited private;

I have no idea who has an aliased Task_Id lying around anyway. That seems 
amazingly hard to use for no value whatsoever. I suspect it was something 
else originally and it never got changed sensibly.

>Anyway, I think you can avoid defining new subprograms that take anonymous 
>access
>parameters (except where needed for streams, or for 
>Generic_Dispatching_Constructor)
>and not add to the problem, but I don't see any value in avoiding existing 
>libraries.

Well, if you could tell a priori if "access" was used as a stand-in for "in 
out", then it would be OK. In that case, the only use of the "access" is to 
dereference it.

But if it is actually used as an access type (with the access value being 
copied somewhere), then you have trouble (with random Program_Errors and a 
need to avoid passing local objects). It's possible in Ada 2012 to write a 
precondition for this case, but of course that's not required (and surely is 
not found in existing libraries), so the possibility doesn't help much.

Since you can't tell these cases apart (and the first is never necessary in 
Ada 2012 anyway, with the possible exception of foreign conventions), it's 
best to just avoid the feature. Especially as "access" is more expensive 
than "in out" because of the dynamic accessibility checking overhead (which 
exists regardless of whether it is ever used). Libraries should reflect this 
(more so as Ada 2012 gets adopted more).

                                Randy.



* Re: Access parameters and accessibility
  @ 2014-12-17  2:02  5%     ` Adam Beneschan
  2014-12-17 23:18  0%       ` Randy Brukardt
  0 siblings, 1 reply; 170+ results
From: Adam Beneschan @ 2014-12-17  2:02 UTC (permalink / raw)


On Tuesday, December 16, 2014 11:46:59 AM UTC-8, Michael B. wrote:

> > Besides, I agree with the others that it has nothing to do with OOP. Claw
> > only uses anonymous access parameters to get the effect of in out parameters
> > in functions (which isn't a problem with Ada 2012 anyway), and as Dmitry
> > noted, it doesn't work very well. Anonymous access parameters: just say no!!
> 
> How can I avoid them when they are heavily used in so many libraries?
> E.g. GtkAda: I just looked into some arbitrary .ads files from Gnat GPL 
> 2014 (glib-string.ads, glib-object.ads and gdk-event.ads) and found 
> examples of the usage of anonymous access parameters.
> You could argue that this is bad design, but rewriting all this code is 
> not really an option for me.
> And compared to writing GUIs in plain C it seems to be the lesser of two 
> evils.

I don't see how it could be a problem if you *use* a subprogram that requires an anonymous access parameter.  You can pretty much just pass in an object of any matching named access type, or 'Access (or 'Unchecked_Access) of a matching object.  You don't need to create any new anonymous access types in order to do so.

I'm sure Randy was being tongue-in-cheek when he said "get better libraries".  For one thing, you'd have to get better libraries than the libraries Ada provides, because Ada defines some subprograms with anonymous access parameters--namely Ada.Tags.Generic_Dispatching_Constructor, Ada.Execution_Time.Timers.Timer, and Read|Write_Exception_Occurrence in Ada.Exceptions.  Also, the stream attribute subprograms ('Read, 'Write, etc.) all have anonymous access parameters, and you have to *write* a subprogram that has an anonymous access parameter in order to use it as the stream subprogram for a type.  Quelle horreur!  Anyway, I think you can avoid defining new subprograms that take anonymous access parameters (except where needed for streams, or for Generic_Dispatching_Constructor) and not add to the problem, but I don't see any value in avoiding existing libraries.
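
For illustration, a minimal sketch of both points (the type Point and the
names below are made up, not from this thread):

with Ada.Streams;

package Points is

   type Point is record
      X, Y : Integer;
   end record;

   --  To replace a stream attribute you must *write* a subprogram whose
   --  first parameter is an anonymous access parameter, because that is
   --  the profile RM 13.13.2 prescribes:
   procedure Write_Point
     (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
      Item   : in Point);
   for Point'Write use Write_Point;

end Points;

package body Points is

   procedure Write_Point
     (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
      Item   : in Point) is
   begin
      --  *Using* subprograms with anonymous access parameters is painless:
      --  here the access value is simply passed along.
      Integer'Write (Stream, Item.X);
      Integer'Write (Stream, Item.Y);
   end Write_Point;

end Points;

A caller just passes any matching access value, for example a
GNAT.Sockets.Stream_Access named Channel:  Point'Write (Channel, P);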

                                -- Adam



* Re: Benchmark Ada, please
  2014-07-05 17:00  6%     ` Guillaume Foliard
@ 2014-07-05 18:29  0%       ` Niklas Holsti
  0 siblings, 0 replies; 170+ results
From: Niklas Holsti @ 2014-07-05 18:29 UTC (permalink / raw)


On 14-07-05 20:00 , Guillaume Foliard wrote:
> Hello,
> 
> Niklas Holsti wrote:
>> On 14-07-05 14:34 , Guillaume Foliard wrote:
>>> Here are the numbers with GNAT GPL 2014 on a Core2 Q9650 @ 3.00GHz:
>>>
>>> $ gnatmake -O3 rbc_ada.adb
>>> $ time ./rbc_ada
>>> ...
>>> Elapsed time is = 1.966112682
>>>
>>>
>>> As for the C++ version:
>>>
>>> $ g++ -o testc -O3 RBC_CPP.cpp
>>> $ time ./testc
>>> ...
>>> Elapsed time is   = 3.12033
>>
>> Are you sure that Ada.Execution_Time.Clock gives numbers that can be
>> compared with the get_cpu_time() function in the C++ version? For a
>> program running under an OS, it is not evident what should be included
>> in the "processor time" for a program (program loading, process
>> creation, interrupts, I/O, page faults, ...).
> 
> I did these tests on Linux. On that platform, Ada.Execution_Time.Clock is 
> merely a call to the POSIX call clock_gettime() (itself a system call under 
> the hood), with CLOCK_THREAD_CPUTIME_ID as the clock_id parameter.

So at least they use the same POSIX library function, good.

> In the GNAT GPL 2014 environment, have a look at
> lib/gcc/x86_64-pc-linux-gnu/4.7.4/adainclude/a-exetim.adb to see for yourself.

I believe you. I was asking if you had checked that the two clock
functions are equivalent. I did not mean to criticize.

> Let's now check the differences between clock() and clock_gettime(). 
> ...
> clock_gettime(CLOCK_THREAD_CPUTIME_ID, {0, 760910}) = 0
> clock_gettime(CLOCK_PROCESS_CPUTIME_ID, {0, 778713}) = 0
> ...
> 
> We can notice that under Linux clock() is in fact a wrapper above 
> clock_gettime(). So the only difference between Ada.Execution_Time.Clock and 
> clock() is the clock ID given to clock_gettime().

So they are not exactly the same.

> Moreover in RBC_CPP.cpp, if you rewrite the get_cpu_time() function as 
> follows:

  [using CLOCK_THREAD_CPUTIME_ID]

> and run the test again, you will find a value similar to the previous one.

Good. (But I'm not at all sure that I would find similar values on my
host system and my compilers, as demonstrated by other posts in this
thread.)

> 
>> What did the "time" command output in your tests?
> 
> For the command it is given as argument, "time" reports the elapsed time 
> ("real time"), the CPU time spent in user mode ("user time") and the CPU 
> time spent in system mode ("sys time"). I just used it to double check the 
> CPU time values.

I know well what "time" does; I asked you to show the values it produced
so that we could see that they, too, show Ada to be faster. If you don't
want to do that, that is your right, of course.

>> Just to be sure that the claim "Ada is faster" is really motivated.
>> Which would be very nice.
> 
> I suppose we are now surer than ever.

I was not at all sure, for the reasons I gave. I am surer now, thank you.

From other posts it seems that the relative speeds of C++ and Ada for
this benchmark depend on the platform and compiler, which is not very
surprising.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
      .      @       .


* Re: Benchmark Ada, please
  2014-07-05 13:00  5%   ` Niklas Holsti
@ 2014-07-05 17:00  6%     ` Guillaume Foliard
  2014-07-05 18:29  0%       ` Niklas Holsti
  0 siblings, 1 reply; 170+ results
From: Guillaume Foliard @ 2014-07-05 17:00 UTC (permalink / raw)


Hello,

Niklas Holsti wrote:
> On 14-07-05 14:34 , Guillaume Foliard wrote:
>> Here are the numbers with GNAT GPL 2014 on a Core2 Q9650 @ 3.00GHz:
>> 
>> $ gnatmake -O3 rbc_ada.adb
>> $ time ./rbc_ada
>> ...
>> Elapsed time is = 1.966112682
>> 
>> 
>> As for the C++ version:
>> 
>> $ g++ -o testc -O3 RBC_CPP.cpp
>> $ time ./testc
>> ...
>> Elapsed time is   = 3.12033
> 
> Are you sure that Ada.Execution_Time.Clock gives numbers that can be
> compared with the get_cpu_time() function in the C++ version? For a
> program running under an OS, it is not evident what should be included
> in the "processor time" for a program (program loading, process
> creation, interrupts, I/O, page faults, ...).

I did these tests on Linux. On that platform, Ada.Execution_Time.Clock is 
merely a call to the POSIX call clock_gettime() (itself a system call under 
the hood), with CLOCK_THREAD_CPUTIME_ID as the clock_id parameter. In the
GNAT GPL 2014 environment, have a look at
lib/gcc/x86_64-pc-linux-gnu/4.7.4/adainclude/a-exetim.adb to see for yourself.

Let's now check the differences between clock() and clock_gettime(). 
Consider the following C program:

#include <time.h>

int main()
{
  struct timespec ts;
  clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);

  clock_t c;
  c = clock();
}

Compile it and run it with strace:

$ gcc -o test_clock test_clock.c
$ strace ./test_clock
...
clock_gettime(CLOCK_THREAD_CPUTIME_ID, {0, 760910}) = 0
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, {0, 778713}) = 0
...

We can notice that under Linux clock() is in fact a wrapper above 
clock_gettime(). So the only difference between Ada.Execution_Time.Clock and 
clock() is the clock ID given to clock_gettime(). Thus, as our programs are 
not threaded I do not expect any differences between the two.
Moreover in RBC_CPP.cpp, if you rewrite the get_cpu_time() function as 
follows:

#include <time.h>
double get_cpu_time(){
  struct timespec ts;

  clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
  return (double)ts.tv_sec + (double)ts.tv_nsec / 1000000000;
}

and run the test again, you will find a value similar to the previous one.
So the timing results from my previous message are consistent.

> What did the "time" command output in your tests?

For the command it is given as argument, "time" reports the elapsed time 
("real time"), the CPU time spent in user mode ("user time") and the CPU 
time spent in system mode ("sys time"). I just used it to double check the 
CPU time values.

> Just to be sure that the claim "Ada is faster" is really motivated.
> Which would be very nice.

I suppose we are now surer than ever.

-- 
Guillaume Foliard


* Re: Benchmark Ada, please
  2014-07-05 12:34  7% ` Guillaume Foliard
@ 2014-07-05 13:00  5%   ` Niklas Holsti
  2014-07-05 17:00  6%     ` Guillaume Foliard
  0 siblings, 1 reply; 170+ results
From: Niklas Holsti @ 2014-07-05 13:00 UTC (permalink / raw)


On 14-07-05 14:34 , Guillaume Foliard wrote:
> Victor Porton wrote:
> 
>> Somebody, write an Ada benchmark for this comparison of programming
>> languages:
>>
>> https://github.com/jesusfv/Comparison-Programming-Languages-Economics
> 
> Here it is:

 [most of code snipped]

>    Cpu_Time_End := Ada.Execution_Time.Clock;

I have a question about this, see below.

>    Ada.Text_IO.Put_Line
>       ("Elapsed time is ="
>        & Ada.Real_Time.To_Duration (Cpu_Time_End - Cpu_Time_Start)'Img);
> end Rbc_Ada;
> ---------------------------------------------------------------------
> 
> This is mostly a line to line translation from RBC_CPP.cpp. I have
> added a few type declarations though.
> 
>> It seems that C++ was the fastest (faster than Fortran), but Ada may be
>> even faster.
> 
> Here are the numbers with GNAT GPL 2014 on a Core2 Q9650 @ 3.00GHz:
> 
> $ gnatmake -O3 rbc_ada.adb
> $ time ./rbc_ada
> ...
> Elapsed time is = 1.966112682
> 
> 
> As for the C++ version:
> 
> $ g++ -o testc -O3 RBC_CPP.cpp
> $ time ./testc
> ... 
> Elapsed time is   = 3.12033

Are you sure that Ada.Execution_Time.Clock gives numbers that can be
compared with the get_cpu_time() function in the C++ version? For a
program running under an OS, it is not evident what should be included
in the "processor time" for a program (program loading, process
creation, interrupts, I/O, page faults, ...).

What did the "time" command output in your tests?

Just to be sure that the claim "Ada is faster" is really motivated.
Which would be very nice.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
      .      @       .



* Re: Benchmark Ada, please
  @ 2014-07-05 12:34  7% ` Guillaume Foliard
  2014-07-05 13:00  5%   ` Niklas Holsti
  0 siblings, 1 reply; 170+ results
From: Guillaume Foliard @ 2014-07-05 12:34 UTC (permalink / raw)


Victor Porton wrote:

> Somebody, write an Ada benchmark for this comparison of programming
> languages:
> 
> https://github.com/jesusfv/Comparison-Programming-Languages-Economics

Here it is:

---------------------------------------------------------------------
with Ada.Execution_Time;
with Ada.Numerics.Long_Elementary_Functions;
with Ada.Real_Time;
with Ada.Text_IO;

use Ada.Numerics.Long_Elementary_Functions;

use type Ada.Execution_Time.CPU_Time;

procedure Rbc_Ada
is
   
   Grid_Capital_Length      : constant := 17820;
   Grid_Productivity_Length : constant := 5;
   
   type Grid_Capital_Index is 
      new Integer range 0 .. Grid_Capital_Length - 1;
   
   type Grid_Productivity_Index is
      new Integer range 0 .. Grid_Productivity_Length - 1;
   
   type Grid_Array_Type is
      array (Grid_Capital_Index, Grid_Productivity_Index)
      of Long_Float;
   
   Null_Grid : constant Grid_Array_Type := (others => (others => 0.0));
   
   -- 1. Calibration
   
   -- Elasticity of output w.r.t. capital
   
   Alpha : constant Long_Float := 0.33333333333;
   
   -- Discount factor
   
   Beta : constant Long_Float := 0.95;
   
   -- Productivity values
   
   Productivities : constant
      array (Grid_Productivity_Index) 
      of Long_Float :=
      (0.9792, 0.9896, 1.0000, 1.0106, 1.0212);
   
   -- Transition matrix
   
   Transition : constant
      array (Grid_Productivity_Index, Grid_Productivity_Index) 
      of Long_Float :=
      ((0.9727, 0.0273, 0.0000, 0.0000, 0.0000),
       (0.0041, 0.9806, 0.0153, 0.0000, 0.0000),
       (0.0000, 0.0082, 0.9837, 0.0082, 0.0000),
       (0.0000, 0.0000, 0.0153, 0.9806, 0.0041),
       (0.0000, 0.0000, 0.0000, 0.0273, 0.9727));
   
   -- 2. Steady State
   
   Capital_Steady_State : constant Long_Float :=
      (Alpha * Beta) ** (1.0 / (1.0 - Alpha));
   
   Output_Steady_State : constant Long_Float := 
      Capital_Steady_State ** Alpha;
   
   Consumption_Steady_State : constant Long_Float := 
      Output_Steady_State - Capital_Steady_State;

   Grid_Capital_Next_Index : Grid_Capital_Index;
   
   Grid_Capital : array (Grid_Capital_Index) of Long_Float := 
      (others => 0.0);
  
   Output                  : Grid_Array_Type := Null_Grid;
   Value_Function          : Grid_Array_Type := Null_Grid;
   Value_Function_New      : Grid_Array_Type := Null_Grid;
   Policy_Function         : Grid_Array_Type := Null_Grid;
   Expected_Value_Function : Grid_Array_Type := Null_Grid;

   Max_Difference   : Long_Float := 10.0;
   Diff             : Long_Float;
   Diff_High_So_Far : Long_Float;
   
   Tolerance : constant := 0.0000001;
   
   Value_High_So_Far : Long_Float;
   Value_Provisional : Long_Float;
   Consumption       : Long_Float;
   Capital_Choice    : Long_Float;
   Iteration         : Integer := 0;
   Cpu_Time_Start    : Ada.Execution_Time.CPU_Time;
   Cpu_Time_End      : Ada.Execution_Time.CPU_Time;
begin
   Cpu_Time_Start := Ada.Execution_Time.Clock;
   
   Ada.Text_IO.Put_Line
      ("Output =" & Output_Steady_State'Img
       & ", Capital =" & Capital_Steady_State'Img
       & ", Consumption =" & Consumption_Steady_State'Img);
	
   -- We generate the grid of capital

   for Index in Grid_Capital'Range
   loop
      Grid_Capital (Index) :=
         0.5 * Capital_Steady_State + 0.00001 * Long_Float (Index);
   end loop;

   -- We pre-build output for each point in the grid
   
   for Productivity_Index in Grid_Productivity_Index
   loop
      for Capital_Index in Grid_Capital_Index
      loop
         Output (Capital_Index, Productivity_Index) := 
            Productivities (Productivity_Index)
            * Grid_Capital (Capital_Index) ** Alpha;
      end loop;
   end loop;

   -- Main iteration

   while Max_Difference > Tolerance
   loop
      for Productivity_Index in Grid_Productivity_Index   
      loop
         for Capital_Index in Grid_Capital_Index  
         loop
            Expected_Value_Function (Capital_Index, Productivity_Index) :=
               0.0;
            
            for Productivity_Next_Index in Grid_Productivity_Index   
            loop
               Expected_Value_Function (Capital_Index, Productivity_Index) :=
                  Expected_Value_Function (Capital_Index, Productivity_Index)
                  + Transition (Productivity_Index, Productivity_Next_Index)
                  * Value_Function (Capital_Index, Productivity_Next_Index);
            end loop;
         end loop;
      end loop;
      
      for Productivity_Index in Grid_Productivity_Index   
      loop
         -- We start from previous choice (monotonicity of policy function)
         
         Grid_Capital_Next_Index := 0;
         
         for Capital_Index in Grid_Capital_Index
         loop
            Value_High_So_Far := -100000.0;
            Capital_Choice    := Grid_Capital (0);
            
            for Capital_Next_Index in 
               Grid_Capital_Next_Index .. Grid_Capital_Index'Last
            loop
               Consumption := 
                  Output (Capital_Index, Productivity_Index)
                  - Grid_Capital (Capital_Next_Index);
               
               Value_Provisional :=
                  (1.0 - Beta) * Log (Consumption)
                  + Beta * Expected_Value_Function (Capital_Next_Index,
                                                    Productivity_Index);
               
               if Value_Provisional > Value_High_So_Far
               then
                  Value_High_So_Far := Value_Provisional;
                  Capital_Choice := Grid_Capital (Capital_Next_Index);
                  Grid_Capital_Next_Index := Capital_Next_Index;
                  
               else
                  exit;
               end if;
               
               Value_Function_New (Capital_Index, Productivity_Index) := 
                  Value_High_So_Far;
               
               Policy_Function (Capital_Index, Productivity_Index) :=
                  Capital_Choice;
            end loop;
         end loop;
      end loop;
      
      Diff_High_So_Far := -100000.0;
      
      for Productivity_Index in Grid_Productivity_Index   
      loop
         for Capital_Index in Grid_Capital_Index
         loop
            Diff := 
               abs (Value_Function (Capital_Index, Productivity_Index)
                   - Value_Function_New (Capital_Index, 
                                        Productivity_Index));
 
            if Diff > Diff_High_So_Far
            then
               Diff_High_So_Far := Diff;
            end if;
            
            Value_Function (Capital_Index, Productivity_Index)
               := Value_Function_New (Capital_Index, Productivity_Index);
         end loop;
      end loop;
      
      Max_Difference := Diff_High_So_Far;

      Iteration := Iteration + 1;
      
      if Iteration mod 10 = 0 or Iteration = 1
      then
         Ada.Text_IO.Put_Line ("Iteration =" & Iteration'Img
                               & ", Sup Diff =" & Max_Difference'Img);
      end if;
   end loop;
   
   Ada.Text_IO.Put_Line ("Iteration =" & Iteration'Img
                         & ", Sup Diff =" & Max_Difference'Img);
   Ada.Text_IO.New_Line;
   Ada.Text_IO.Put_Line ("My check =" & Policy_Function (999, 2)'Img);
   Ada.Text_IO.New_Line;   

   Cpu_Time_End := Ada.Execution_Time.Clock;
   
   Ada.Text_IO.Put_Line
      ("Elapsed time is ="
       & Ada.Real_Time.To_Duration (Cpu_Time_End - Cpu_Time_Start)'Img);
end Rbc_Ada;
---------------------------------------------------------------------

This is mostly a line-by-line translation from RBC_CPP.cpp. I have
added a few type declarations though.

> It seems that C++ was the fastest (faster than Fortran), but Ada may be
> even faster.

Here are the numbers with GNAT GPL 2014 on a Core2 Q9650 @ 3.00GHz:

$ gnatmake -O3 rbc_ada.adb
$ time ./rbc_ada
...
Elapsed time is = 1.966112682


As for the C++ version:

$ g++ -o testc -O3 RBC_CPP.cpp
$ time ./testc
... 
Elapsed time is   = 3.12033

So the Ada version is significantly faster. I suppose it is mainly because
the Ada compiler has vectorized more loops than the C++ compiler (add
-ftree-vectorizer-verbose=2 to the above compilation commands to check for
yourself).

> If we succeed, we would advertise Ada as the fastest(!) programming
> language (after assembler).

Feel free to advertise, using this Ada code as you wish.

-- 
Guillaume Foliard


* Re: Ada.Execution_Time
  2011-01-06 22:55 12%                                             ` Ada.Execution_Time Niklas Holsti
@ 2011-01-07  6:25  8%                                               ` Randy Brukardt
  0 siblings, 0 replies; 170+ results
From: Randy Brukardt @ 2011-01-07  6:25 UTC (permalink / raw)


"Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
news:8omvicFgm0U1@mid.individual.net...
...
>> That's probably enough for soft real-time systems anyway; and the 
>> facilities are useful for profiling and the like even without any strong 
>> connection to reality.
>
> I don't see how Ada.Execution_Time would be very useful for profiling. 
> While it could show you which tasks are CPU hogs, it does not resolve the 
> execution-time consumption to subprograms or subprogram parts, which I 
> think would be the important information for code redesigns aiming at 
> improving speed. Real profilers usually do give you subprogram-level or 
> even statement-level information.

I was thinking of "profiling" by hand, essentially by adding profiler calls 
to "interesting" points in the code. I have a number of packages which are 
designed for this purpose.

My experience with subprogram-level profiling is that it often changes the 
results too much, as the overhead of profiling can be a lot more than the 
overhead of calling small subprograms (like the classic Getters/Setters). 
Perhaps it is just the way that we do it in Janus/Ada (by adding it to 
Enter_Walkback/Exit_Walkback calls that occur at the entrance and exit of 
every subprogram [unless turned off by compiler switch or pragma], which 
means you don't have to recompile anything other than the main subprogram).
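
For illustration, a minimal sketch of that style of hand profiling, based
on Ada.Execution_Time rather than Calendar.Clock (the package and names are
made up; a real package would keep per-region, per-task accumulators):

with Ada.Execution_Time;
with Ada.Real_Time;
with Ada.Text_IO;

package Hand_Profiler is
   procedure Enter_Region;
   procedure Leave_Region (Name : String);
end Hand_Profiler;

package body Hand_Profiler is

   use type Ada.Execution_Time.CPU_Time;

   --  CPU time of the calling task at the last Enter_Region.  Both calls
   --  must come from the same task, since Clock defaults to Current_Task.
   Region_Start : Ada.Execution_Time.CPU_Time := Ada.Execution_Time.Clock;

   procedure Enter_Region is
   begin
      Region_Start := Ada.Execution_Time.Clock;
   end Enter_Region;

   procedure Leave_Region (Name : String) is
      Used : constant Duration :=
        Ada.Real_Time.To_Duration (Ada.Execution_Time.Clock - Region_Start);
   begin
      Ada.Text_IO.Put_Line
        (Name & " used" & Duration'Image (Used) & " s of CPU time");
   end Leave_Region;

end Hand_Profiler;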

Instruction-level profiling (which is used to provide statement profiling) 
is a statistical method, and that has its own problems (it's possible to get 
into a situation where the program and profiler operate in sync, such that 
the profiler never sees the execution of some of the code). It's also 
impractical unless you have a very fast timer interrupt or a long time to 
run (machines have gotten too fast for the one I used to use -- it only gets 
a few hits before the compilations are finished...).

> Of course, a profiler that collects subprogram-level execution time (per 
> call, for example) but does not know about Ada task preemption will 
> probably produce wildly wrong results.

Right, and that is a problem with both my profiling packages and the 
Janus/Ada subprogram profiler. They both use Calendar.Clock, which is 
obviously oblivious to task switching. Ada.Execution_Time would be a major 
improvement in that respect.

                              Randy.

P.S. I think I owe you a compiler update. Did you read on our mailing list 
about the latest beta of Janus/Ada?






* Re: Ada.Execution_Time
  2011-01-03 21:27  9%                                           ` Ada.Execution_Time Randy Brukardt
@ 2011-01-06 22:55 12%                                             ` Niklas Holsti
  2011-01-07  6:25  8%                                               ` Ada.Execution_Time Randy Brukardt
  0 siblings, 1 reply; 170+ results
From: Niklas Holsti @ 2011-01-06 22:55 UTC (permalink / raw)


Randy Brukardt wrote:
> "Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
> news:8o8ptdF35oU1@mid.individual.net...
>> Randy Brukardt wrote:
>>> I think we're actually in agreement on most points.
>> Good, and I agree. Nevertheless I again make a lot of comments, below, 
>> because I think your view, if true, means that Ada.Execution_Time is 
>> useless for real-time systems.
> 
> Much like "unmeasurable", "useless" is a little bit strong, but this is 
> definitely true in general.
> 
> That is, if you can afford megabucks to have a compiler runtime tailored to 
> your target hardware (and compiler vendors love such customers),

The cost will depend on the number of users/customers and the extent of 
tailoring needed; I admit I haven't asked for quotes.

At least the definition of a predefined, standard programmer interface 
in the form of Ada.Execution_Time and children should help make the 
tailoring more portable over targets, applications, and customers.

> then you probably could find a use for in a hard real-time system.

I think so.

> But the typical off-the-shelf implementation is not going to be useful for 
> anything beyond gross guidance.

Depends on which shelf you buy from :-)  You are probably right for GNAT 
GPL on Linux, but perhaps there is more hope for RAVEN and the like. 
Although I see that Ravenscar excludes Ada.Execution_Time.

> That's probably enough for soft real-time 
> systems anyway; and the facilities are useful for profiling and the like 
> even without any strong connection to reality.

I don't see how Ada.Execution_Time would be very useful for profiling. 
While it could show you which tasks are CPU hogs, it does not resolve 
the execution-time consumption to subprograms or subprogram parts, which 
I think would be the important information for code redesigns aiming at 
improving speed. Real profilers usually do give you subprogram-level or 
even statement-level information.

Of course, a profiler that collects subprogram-level execution time (per 
call, for example) but does not know about Ada task preemption will 
probably produce wildly wrong results.

> ... unless you have a very 
> controlled environment. And if you have that environment, a
> language-defined package doesn't buy you anything over
> roll-your-own.

I don't agree. I think a clean, standard, language-defined interface 
between the Ada RTS/kernel and the application will help an implementer, 
as I said above. Moreover, the scheduling algorithms published by Burns 
and Wellings and others are written to use Ada.Execution_Time. Rolling 
one's own interface would mean changing these algorithms.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .




* Re: Ada.Execution_Time
  2011-01-03 21:33  4%                                           ` Ada.Execution_Time Randy Brukardt
@ 2011-01-05 15:55 10%                                             ` Brad Moore
  0 siblings, 0 replies; 170+ results
From: Brad Moore @ 2011-01-05 15:55 UTC (permalink / raw)


On 03/01/2011 2:33 PM, Randy Brukardt wrote:

 > if you can afford megabucks to have a compiler runtime tailored to
> your target hardware (and compiler vendors love such customers), then you
> probably could find a use for in a hard real-time system.
>
> But the typical off-the-shelf implementation is not going to be useful for
> anything beyond gross guidance. That's probably enough for soft real-time
> systems anyway; and the facilities are useful for profiling and the like
> even without any strong connection to reality.
>
> But it doesn't make sense to assume tight matches unless you have a very
> controlled environment. And if you have that environment, a language-defined
> package doesn't buy you anything over roll-your-own. So I would agree that
> the existence Ada.Execution_Time really doesn't buy anything for a hard
> real-time system.
>
> I suppose there are others that disagree (starting with Alan Burns). I think
> they're wrong.

> Keep in mind that the IRTAW proposals tend to get put into the language
> without much resistance from the majority of the ARG. Most of us don't have
> a enough real-time experience to really be able to have a strong opinion on
> a topic. So we don't generally oppose the basic reason for a proposal.
>
> We simply spend time on polishing the proposals into acceptable RM language
> (which is usually a large job by itself; the proposals tend to be very
> sloppy). Rarely do we object to the proposal itself, in large part because
> we simply don't have enough energy to think in great detail about every
> proposal.
>
> It's best to think of the Annex D stuff as designed and proposed separately
> by a small subgroup. (There's a similar dynamic for some other areas as
> well, numerics coming to mind.) That small subgroup may have a different
> world-view than the rest of us.

To get a perspective on some of the IRTAW thinking with respect to 
execution time, one might want to have a look at the paper;

"Execution-Time Control For Interrupt Handling", a proposal from the 
most recent IRTAW.

http://www.sigada.org/ada_letters/apr2010/paper3.pdf

The paper expresses an interest in accounting for the time spent
handling interrupts and tracking it separately, rather than charging
that execution time to the currently running task, as most current
implementations of Ada.Execution_Time do.

The intent is to improve the accuracy of execution-time measurement
and WCET computation so that task budgets can be tighter, allowing
higher CPU utilization.


--BradM




* Re: Ada.Execution_Time
  2011-01-01 15:54  9%                                         ` Ada.Execution_Time Simon Wright
@ 2011-01-03 21:33  4%                                           ` Randy Brukardt
  2011-01-05 15:55 10%                                             ` Ada.Execution_Time Brad Moore
  0 siblings, 1 reply; 170+ results
From: Randy Brukardt @ 2011-01-03 21:33 UTC (permalink / raw)


"Simon Wright" <simon@pushface.org> wrote in message 
news:m2zkrk62ms.fsf@pushface.org...
> "Randy Brukardt" <randy@rrsoftware.com> writes:
>
>> Your discussion of real-time scheduling using this still seems to me
>> to be more in the realm of academic exercise than something
>> practical. I'm sure that it works in very limited circumstances, but
>> those circumstances are getting less and less likely by the year.
>
> Sounds as if you don't agree with the !problem section of AI-00307?

Pretty much.

Keep in mind that the IRTAW proposals tend to get put into the language 
without much resistance from the majority of the ARG. Most of us don't have 
a enough real-time experience to really be able to have a strong opinion on 
a topic. So we don't generally oppose the basic reason for a proposal.

We simply spend time on polishing the proposals into acceptable RM language 
(which is usually a large job by itself; the proposals tend to be very 
sloppy). Rarely do we object to the proposal itself, in large part because 
we simply don't have enough energy to think in great detail about every 
proposal.

It's best to think of the Annex D stuff as designed and proposed separately 
by a small subgroup. (There's a similar dynamic for some other areas as 
well, numerics coming to mind.) That small subgroup may have a different 
world-view than the rest of us.

                                   Randy.






* Re: Ada.Execution_Time
  2011-01-01 13:52 10%                                         ` Ada.Execution_Time Niklas Holsti
  2011-01-01 14:42 11%                                           ` Ada.Execution_Time Simon Wright
@ 2011-01-03 21:27  9%                                           ` Randy Brukardt
  2011-01-06 22:55 12%                                             ` Ada.Execution_Time Niklas Holsti
  1 sibling, 1 reply; 170+ results
From: Randy Brukardt @ 2011-01-03 21:27 UTC (permalink / raw)


"Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
news:8o8ptdF35oU1@mid.individual.net...
> Randy Brukardt wrote:
>> I think we're actually in agreement on most points.
>
> Good, and I agree. Nevertheless I again make a lot of comments, below, 
> because I think your view, if true, means that Ada.Execution_Time is 
> useless for real-time systems.

Much like "unmeasurable", "useless" is a little bit strong, but this is 
definitely true in general.

That is, if you can afford megabucks to have a compiler runtime tailored to 
your target hardware (and compiler vendors love such customers), then you 
probably could find a use for in a hard real-time system.

But the typical off-the-shelf implementation is not going to be useful for 
anything beyond gross guidance. That's probably enough for soft real-time 
systems anyway; and the facilities are useful for profiling and the like 
even without any strong connection to reality.

But it doesn't make sense to assume tight matches unless you have a very 
controlled environment. And if you have that environment, a language-defined 
package doesn't buy you anything over roll-your-own. So I would agree that 
the existence of Ada.Execution_Time really doesn't buy anything for a hard 
real-time system.

I suppose there are others that disagree (starting with Alan Burns). I think 
they're wrong.

                                   Randy.





>> The main difference is that I contend that the theoretical "execution 
>> time" is, in actual practice, unmeasurable. ("Unmeasurable" is a bit 
>> strong, but I mean that you can only get a gross estimation of it.)
>
> You are right, we disagree on this.
>
>> You have argued that there exist cases where it is possible to do better 
>> (a hardware clock, a single processor, a kernel that doesn't get in the 
>> way of using the hardware clock,
>
> Yes.
>
>> no contention with other devices on the bus, etc.),
>
> I may have been fuzzy on that. Such real but variable or unpredictable 
> delays or speed-ups are, in my view, just normal parts of the execution 
> time of the affected task, and do not harm the ideal concept of "the 
> execution time of a task" nor the usefulness of Ada.Execution_Time and its 
> child packages. They only mean that the measured execution time of one 
> particular execution of a task is less predictive of the execution times 
> of future executions of that task. More on this below.
>
>> and I wouldn't disagree. The problem is that those cases aren't 
>> realistic, particularly if you are talking about a language-defined 
>> package.
>
> I am not proposing new general requirements on timing accuracy for the RM.
>
>> (Even if there is a hardware clock available on a particular target, an 
>> off-the-shelf compiler isn't going to be able to use it. It can only use 
>> the facilities of the kernel or OS.)
>
> For critical, embedded, hard-real-time systems I think it is not uncommon 
> to use dedicated real-time computers with kernels such as VxWorks or RTEMS 
> or bare-board Ada run-time systems. I have worked on such computers in the 
> past, and I see such computers being advertised and used today.
>
> In such systems, the kernel or Ada RTS is usually customised by a "board 
> support package" (BSP) that, among other things, handles the interfaces to 
> clocks and timers on the target computer (the "board"). Such systems can 
> provide high-accuracy timers and mechanisms for execution-time monitoring 
> without having to change the compiler; it should be enough to change the 
> implementation of Ada.Execution_Time. In effect, Ada.Execution_Time would 
> be a part of the BSP, or depend on types and services defined in the BSP.
>
> The question is then if the compiler/system vendors will take the trouble 
> and cost to customise Ada.Execution_Time for a particular board/computer, 
> or if they will just use the general, lowest-denominator but portable 
> service provided by the kernel. Dmitry indicates that GNAT on VxWorks 
> takes the latter, easy way out. That's a matter of cost vs demand; if the 
> users want it, the vendors can provide it.
>
> For example, the "CPU Usage Statistics" section of the "RTEMS C User's 
> Guide" says: "RTEMS versions newer than the 4.7 release series, support 
> the ability to obtain timestamps with nanosecond granularity if the BSP 
> provides support. It is a desirable enhancement to change the way the 
> usage data is gathered to take advantage of this recently added 
> capability. Please consider sponsoring the core RTEMS development team to 
> add this capability." Thus, in March 2010 the RTEMS developers wanted and 
> could implement accurate execution-time measurement, but no customer had 
> yet paid for its implementation.
>
>> This shows up when you talk about the inability to reproduce the results. 
>> In practice, I think you'll find that the values vary a lot from run to 
>> run; the net effect is that the error is a large percentage of the 
>> results for many values.
>
> Again, I may not have been clear on my view of the reproducibility 
> and variability of execution-time measurements. The variability in the 
> measured execution time of a task has three components or sources:
>
> 1. Variable execution paths due to data-dependent conditional control-flow 
> (if-then-else, case, loop). In other words, on different executions of the 
> task, different sequences of instructions are executed, leading to 
> different execution times.
>
> 2. Variable execution times of indivual instructions, for example due to 
> variable cache contents and variable bus contention.
>
> 3. Variable measurement errors, for example truncations or roundings in 
> the count of CPU_Time clock cycles, variable amount of interrupt handling 
> included in the task execution time, etc.
>
> Components 1 and 2 are big problems for worst-case analysis but in my view 
> are not problems for Ada.Execution_Time. In fact, I think that one of the 
> intended uses of Ada.Execution_Time is to help systems make good use of 
> this execution-time variability by letting the system do other useful 
> things when some task happens to execute quicker than its worst-case 
> execution time and leaves some slack in the schedule.
>
> Only component 3 is an "error" in the measured value. This component, and 
> also constant measurement errors if any, are problems for users of 
> Ada.Execution_Time.
>
>> Your discussion of real-time scheduling using this still seems to me to 
>> be more in the realm of academic exercise than something practical. I'm 
>> sure that it works in very limited circumstances,
>
> I think these circumstances have some correlation with the domain for 
> which Ada is or was intended: reliable, embedded, possibly real-time 
> systems. Well, this is a limited niche.
>
>> but those circumstances are getting less and less likely by the year. 
>> Techniques that assume a strong correspondence between these values and 
>> real-time are simply fantasy -- I would hope no one depend on these for 
>> anything of importance.
>
> If this is true, Ada.Execution_Time is useless for real-time systems. 
> Since it was added to the RM for 2005, and is still in the draft for RM 
> 2012, I suppose the ARG majority does not share your view.
>
> Thanks for a frank discussion, and best wishes for 2011!
>
> -- 
> Niklas Holsti
> Tidorum Ltd
> niklas holsti tidorum fi
>       .      @       . 






* Re: An Example for Ada.Execution_Time
  2011-01-01 20:25 10%                 ` Niklas Holsti
@ 2011-01-03  8:50 11%                   ` Dmitry A. Kazakov
  0 siblings, 0 replies; 170+ results
From: Dmitry A. Kazakov @ 2011-01-03  8:50 UTC (permalink / raw)


On Sat, 01 Jan 2011 22:25:30 +0200, Niklas Holsti wrote:

> Perhaps you mean that CPU_Time is unrelated to Ada.Real_Time.Time over 
> longer periods during which the task is sometimes executing, sometimes 
> not.

I mean that

1. The corresponding clocks can be different. Ada cannot influence this.

2. The clocks are not synchronized, i.e. T1 on the Ada.Real_Time.Clock
cannot be translated into T2 on the Ada.Execution_Time.Clock.

> One of the differences between Ada.Real_Time.Time and 
> Ada.Execution_Time.CPU_Time is that the epoch for the former is not 
> specified in the RM. It follows that even if Ada.Real_Time.Time had a 
> visibly numeric value, its meaning would be unknown to the program.
> 
> In contrast, the epoch for CPU_Time is specified in the RM: the creation 
> of the task, a point in real time known to the program.

That does not define it either. An epoch cannot be defined otherwise than
in terms of another clock. The task start time according to
Ada.Real_Time.Time or Ada.Calendar.Time is unknown.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 11%]

* Re: An Example for Ada.Execution_Time
  2011-01-01 13:39  4%               ` Dmitry A. Kazakov
@ 2011-01-01 20:25 10%                 ` Niklas Holsti
  2011-01-03  8:50 11%                   ` Dmitry A. Kazakov
  0 siblings, 1 reply; 170+ results
From: Niklas Holsti @ 2011-01-01 20:25 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Fri, 31 Dec 2010 20:57:54 +0200, Niklas Holsti wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> On Fri, 31 Dec 2010 14:42:41 +0200, Niklas Holsti wrote:
>>>
>>>> Dmitry A. Kazakov wrote:
>>>>> On Thu, 30 Dec 2010 18:51:30 -0500, BrianG wrote:
>>>>>
>>>>>> Since D.16 defines CPU_Time as if it were a numeric value, is it too 
>>>>>> much to ask why a conversion to some form of numeric value wasn't 
>>>>>> provided?
>>>>> But time is not a number and was not defined as if it were.
>>>> You keep saying that, Dmitri, but your only argument seems to be the 
>>>> absence of some operators like addition for CPU_Time. Since CPU_Time is 
>>>> private, we cannot tell if this absence means that the D.14 authors 
>>>> considered the type non-numeric, or just considered the operators 
>>>> unnecessary for the intended uses.
>>> No, the argument is that time is a state of some recurrent process, like
>>> the position of an Earth's meridian relatively to the Sun. This state is
>>> not numeric, it could be numeric though. That depends on the nature of the
>>> process.
>> This is your view of what the English word "time" means.
> 
> The English word "time" it has many meanings.

Yes indeed....

Dmitry, I think we are arguing this point -- whether 
Ada.Execution_Time.CPU_Time has "numeric" values -- from different 
personal definitions of what "numeric" means, so we are not going to 
agree, and should stop.

Meanwhile, I was reminded that the Ada RM actually defines "numeric 
type" as an integer or real type (RM 3.5(1)). Since CPU_Time and 
Time_Span etc. are all private types, and the RM does not say if they 
are implemented as integer or real types, we cannot know if they are 
"numeric types" using RM terms. Let's stop here, OK?

> It is based on the fact that RM always introduces a distinct type, when it
> means duration, time interval, period, e.g.: Duration, Time_Span. When RM
> uses a type named "time," it does not mean duration.

Yes, I agree that this is the RM principle, but I also think it is a 
different question. You may remember that Randy said that the ARG had 
discussions about type names for Ada.Execution_Time, and were not very 
happy with the name "CPU_Time" (as I understood Randy, at least).

Note that RM D.14 does not say that CPU_Time is a "time type" as "time 
types" are defined in RM 9.6. In contrast, Ada.Real_Time.Time is a "time 
type".

> D.14 reuses Time_Span for
> intervals of CPU_Time, which stresses the difference. If this is not clean,
> then it is not because CPU_Time is a duration, but because CPU_Time can be (and
> is) unrelated to the source of Ada.Real_Time.Time.

I definitely don't agree that CPU_Time can be unrelated to the source of 
Ada.Real_Time.Time in an Ada system that is useful for real-time 
programming. Execution time may be measured with a different clock or 
timer than Ada.Real_Time.Time but the rates of the two times must be 
very closely the same while one task is executing. Otherwise 
Ada.Execution_Time would be useless.

Perhaps you mean that CPU_Time is unrelated to Ada.Real_Time.Time over 
longer periods during which the task is sometimes executing, sometimes 
not. Then I agree that the two times increase at different rates, of course.
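
One can see this in practice with a small, untested sketch like the 
following (the procedure name and loop bound are invented): on an otherwise 
idle machine, sampling both clocks around a purely compute-bound stretch of 
code should give two very nearly equal deltas.

   with Ada.Execution_Time;
   with Ada.Real_Time;
   with Ada.Text_IO;

   procedure Compare_Clock_Rates is
      use Ada.Real_Time;
      use type Ada.Execution_Time.CPU_Time;   --  for "-" on CPU_Time

      RT_Before  : constant Time := Clock;    --  real-time clock
      CPU_Before : constant Ada.Execution_Time.CPU_Time :=
        Ada.Execution_Time.Clock;             --  execution-time clock
      X : Long_Float := 1.0;
   begin
      for I in 1 .. 10_000_000 loop           --  compute-bound work
         X := X + 1.0 / Long_Float (I);
      end loop;

      Ada.Text_IO.Put_Line
        ("real time:" & Duration'Image (To_Duration (Clock - RT_Before)));
      Ada.Text_IO.Put_Line
        ("CPU time :" &
         Duration'Image (To_Duration (Ada.Execution_Time.Clock - CPU_Before)));
      Ada.Text_IO.Put_Line ("X =" & Long_Float'Image (X));  --  keep X live
   end Compare_Clock_Rates;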

> Thus reusing Time_Span for its intervals is questionable. It would
> be better to introduce a separate type for this, e.g. CPU_Time_Interval.

I strongly disagree, because that would abandon the commensurability of 
CPU_Time (or its intervals) with real time and would destroy the 
usefulness of Ada.Execution_Time in real-time systems.

> Furthermore, the CPU_Time type should be local to the task,

It could be reasonable to have different CPU_Time types for each task. I 
can't immediately think of any reason for comparing or subtracting 
CPU_Time values across tasks. But I don't think this can be implemented 
in Ada.

> preventing its usage anywhere outside the task,

This would prevent or hamper the implementation of scheduling algorithms 
that monitor the execution times of several tasks. I object to it.

> while CPU_Time_Interval could be the same for all tasks.

Definitely. And it must be the same as Ada.Real_Time.Time_Span or 
Duration for real-time scheduling purposes.

> RM is consistent here, but, as I said, sloppy, because the zero element of
> a time system has a specific name: "epoch."  RM uses this term in D.8, so
> should it do here.

Manfully resisting the strong temptation to continue to argue about the 
"numeric" issue, based on your statements about "zero", I want to 
comment on this "epoch" thing.

One of the differences between Ada.Real_Time.Time and 
Ada.Execution_Time.CPU_Time is that the epoch for the former is not 
specified in the RM. It follows that even if Ada.Real_Time.Time had a 
visibly numeric value, its meaning would be unknown to the program.

In contrast, the epoch for CPU_Time is specified in the RM: the creation 
of the task, a point in real time known to the program. This brings 
CPU_Time closer to the meaning of Duration or Time_Span. If CPU_Time had 
a visibly numeric value, its meaning would be known: the total execution 
time of the task since the program created the task.

I think the main reason why RM D.14 defines a specific type CPU_Time, 
instead of directly using Duration or Time_Span, is the large range 
required of CPU_Time: up to 50 years. Time_Span is required to hold only 
+- 1 hour, and Duration only +- 1 day.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 10%]

* Re: Ada.Execution_Time
  2011-01-01 16:01  5%                                             ` Ada.Execution_Time Simon Wright
@ 2011-01-01 19:18  5%                                               ` Niklas Holsti
  0 siblings, 0 replies; 170+ results
From: Niklas Holsti @ 2011-01-01 19:18 UTC (permalink / raw)


Simon Wright wrote:
> Simon Wright <simon@pushface.org> writes:
> 
>> Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
>>
>>> The question is then if the compiler/system vendors will take the
>>> trouble and cost to customise Ada.Execution_Time for a particular
>>> board/computer, or if they will just use the general,
>>> lowest-denominator but portable service provided by the kernel. Dmitry
>>> indicates that GNAT on VxWorks takes the latter, easy way out.
>> The latest supported GNAT on VxWorks (5.5) actually doesn't implement
>> Ada.Execution_Time at all (well, the source of the package spec is
>> there, adorned with "pragma Unimplemented_Unit", just like the FSF
>> 4.5.0 sources and the GNAT GPL sources -- unless you're running on
>> Cygwin or MaRTE).
> 
> Actually, Dmitry's complaint was that time (either from
> Ada.Calendar.Clock, or Ada.Real_Time.Clock, I forget) wasn't as precise
> as it can easily be on modern hardware, being incremented at each timer
> interrupt; so if your timer ticks every millisecond, that's your
> granularity.

Thanks for correcting me, Simon, I mis-remembered or misunderstood Dmitry.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2011-01-01 14:42 11%                                           ` Ada.Execution_Time Simon Wright
@ 2011-01-01 16:01  5%                                             ` Simon Wright
  2011-01-01 19:18  5%                                               ` Ada.Execution_Time Niklas Holsti
  0 siblings, 1 reply; 170+ results
From: Simon Wright @ 2011-01-01 16:01 UTC (permalink / raw)


Simon Wright <simon@pushface.org> writes:

> Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
>
>> The question is then if the compiler/system vendors will take the
>> trouble and cost to customise Ada.Execution_Time for a particular
>> board/computer, or if they will just use the general,
>> lowest-denominator but portable service provided by the kernel. Dmitry
>> indicates that GNAT on VxWorks takes the latter, easy way out.
>
> The latest supported GNAT on VxWorks (5.5) actually doesn't implement
> Ada.Execution_Time at all (well, the source of the package spec is
> there, adorned with "pragma Unimplemented_Unit", just like the FSF
> 4.5.0 sources and the GNAT GPL sources -- unless you're running on
> Cygwin or MaRTE).

Actually, Dmitry's complaint was that time (either from
Ada.Calendar.Clock, or Ada.Real_Time.Clock, I forget) wasn't as precise
as it can easily be on modern hardware, being incremented at each timer
interrupt; so if your timer ticks every millisecond, that's your
granularity.

This could easily be changed (at any rate for Real_Time), but doesn't, I
believe, necessarily affect what's to be expected from delay/delay until,
or Execution_Time come to that.
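
For concreteness, the usual cyclic pattern that this granularity feeds into
-- a minimal, untested sketch, with an arbitrary 1 ms period and invented
names:

   with Ada.Real_Time; use Ada.Real_Time;

   procedure Periodic_Demo is
      Period : constant Time_Span := Milliseconds (1);
      Next   : Time := Clock + Period;
   begin
      loop
         --  the periodic work goes here
         delay until Next;        --  wakes no earlier than Next; how much
         Next := Next + Period;   --  later depends on the timer granularity
      end loop;
   end Periodic_Demo;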



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-31 23:34  8%                                       ` Ada.Execution_Time Randy Brukardt
  2011-01-01 13:52 10%                                         ` Ada.Execution_Time Niklas Holsti
@ 2011-01-01 15:54  9%                                         ` Simon Wright
  2011-01-03 21:33  4%                                           ` Ada.Execution_Time Randy Brukardt
  1 sibling, 1 reply; 170+ results
From: Simon Wright @ 2011-01-01 15:54 UTC (permalink / raw)


"Randy Brukardt" <randy@rrsoftware.com> writes:

> Your discussion of real-time scheduling using this still seems to me
> to be more in the realm of academic exercise than something
> practical. I'm sure that it works in very limited circumstances, but
> those circumstances are getting less and less likely by the year.

Sounds as if you don't agree with the !problem section of AI-00307?

The first paragraph says that measurement/estimation is important, and
that measurement is difficult [actually, I don't see this; _estimation_
is hard, sure, what with pipelining, caches etc, but _measurement_?]

The second paragraph says that in a hard real time system you ought to
be able to monitor execution times in order to tell whether things have
gone wrong. Hence Ada.Execution_Time.Timers.
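
For reference, the usage pattern for such monitoring is roughly the
following -- an untested sketch with invented names (Overrun_Monitor,
Watchdog, Arm); the interrupt-priority ceiling simply stands in for whatever
Min_Handler_Ceiling requires on a given implementation.

   with Ada.Execution_Time.Timers;
   with Ada.Real_Time;

   package Overrun_Monitor is
      use Ada.Execution_Time.Timers;

      protected Watchdog is
         pragma Interrupt_Priority;              --  >= Min_Handler_Ceiling
         procedure Overrun (TM : in out Timer);  --  profile of Timer_Handler
         function Tripped return Boolean;
      private
         Flag : Boolean := False;
      end Watchdog;

      --  Arm TM so that Watchdog.Overrun is called if the monitored task
      --  consumes more than Budget of further execution time.
      procedure Arm (TM : in out Timer; Budget : Ada.Real_Time.Time_Span);
   end Overrun_Monitor;

   package body Overrun_Monitor is
      protected body Watchdog is
         procedure Overrun (TM : in out Timer) is
         begin
            Flag := True;  --  real code might log, alarm, or demote the task
         end Overrun;
         function Tripped return Boolean is
         begin
            return Flag;
         end Tripped;
      end Watchdog;

      procedure Arm (TM : in out Timer; Budget : Ada.Real_Time.Time_Span) is
      begin
         Set_Handler (TM, Budget, Watchdog.Overrun'Access);
      end Arm;
   end Overrun_Monitor;

   --  The caller declares the timer for the task being watched, e.g.
   --     Id : aliased constant Ada.Task_Identification.Task_Id :=
   --            Worker'Identity;
   --     TM : Ada.Execution_Time.Timers.Timer (Id'Access);
   --     ...
   --     Overrun_Monitor.Arm (TM, Ada.Real_Time.Milliseconds (500));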

The third paragraph talks about fancy scheduling algorithms. This is the
point at which I too start to wonder whether things are getting a bit
academic; but I have no practical experience of the sort of real-time
systems under discussion. We merely have to produce the required output
within 2 ms of a program interrupt; but the world won't end if we miss
occasionally (not that we ever do), because of the rest of the system
design, which has to cope with data loss over noisy radio channels.

The last paragraphs call up the Real-Time extensions to POSIX (IEEE
1003.1d, after a lot of googling) as an indication of the intention.



^ permalink raw reply	[relevance 9%]

* Re: Ada.Execution_Time
  2011-01-01 13:52 10%                                         ` Ada.Execution_Time Niklas Holsti
@ 2011-01-01 14:42 11%                                           ` Simon Wright
  2011-01-01 16:01  5%                                             ` Ada.Execution_Time Simon Wright
  2011-01-03 21:27  9%                                           ` Ada.Execution_Time Randy Brukardt
  1 sibling, 1 reply; 170+ results
From: Simon Wright @ 2011-01-01 14:42 UTC (permalink / raw)


Niklas Holsti <niklas.holsti@tidorum.invalid> writes:

> The question is then if the compiler/system vendors will take the
> trouble and cost to customise Ada.Execution_Time for a particular
> board/computer, or if they will just use the general,
> lowest-denominator but portable service provided by the kernel. Dmitry
> indicates that GNAT on VxWorks takes the latter, easy way out.

The latest supported GNAT on VxWorks (5.5) actually doesn't implement
Ada.Execution_Time at all (well, the source of the package spec is
there, adorned with "pragma Unimplemented_Unit", just like the FSF 4.5.0
sources and the GNAT GPL sources -- unless you're running on Cygwin or
MaRTE).



^ permalink raw reply	[relevance 11%]

* Re: Ada.Execution_Time
  2010-12-31 23:34  8%                                       ` Ada.Execution_Time Randy Brukardt
@ 2011-01-01 13:52 10%                                         ` Niklas Holsti
  2011-01-01 14:42 11%                                           ` Ada.Execution_Time Simon Wright
  2011-01-03 21:27  9%                                           ` Ada.Execution_Time Randy Brukardt
  2011-01-01 15:54  9%                                         ` Ada.Execution_Time Simon Wright
  1 sibling, 2 replies; 170+ results
From: Niklas Holsti @ 2011-01-01 13:52 UTC (permalink / raw)


Randy Brukardt wrote:
> I think we're actually in agreement on most points.

Good, and I agree. Nevertheless I again make a lot of comments, below, 
because I think your view, if true, means that Ada.Execution_Time is 
useless for real-time systems.

> The main difference is 
> that I contend that the theoretical "execution time" is, in actual practice, 
> unmeasurable. ("Unmeasurable" is a bit strong, but I mean that you can only 
> get a gross estimation of it.)

You are right, we disagree on this.

> You have argued that there exist cases where it is possible to do better (a 
> hardware clock, a single processor, a kernel that doesn't get in the way of 
> using the hardware clock,

Yes.

> no contention with other devices on the bus, etc.),

I may have been fuzzy on that. Such real but variable or unpredictable 
delays or speed-ups are, in my view, just normal parts of the execution 
time of the affected task, and do not harm the ideal concept of "the 
execution time of a task" nor the usefulness of Ada.Execution_Time and 
its child packages. They only mean that the measured execution time of 
one particular execution of a task is less predictive of the execution 
times of future executions of that task. More on this below.

> and I wouldn't disagree. The problem is that those cases aren't 
> realistic, particularly if you are talking about a language-defined package. 

I am not proposing new general requirements on timing accuracy for the RM.

> (Even if there is a hardware clock available on a particular target, an 
> off-the-shelf compiler isn't going to be able to use it. It can only use the 
> facilities of the kernel or OS.)

For critical, embedded, hard-real-time systems I think it is not 
uncommon to use dedicated real-time computers with kernels such as 
VxWorks or RTEMS or bare-board Ada run-time systems. I have worked on 
such computers in the past, and I see such computers being advertised 
and used today.

In such systems, the kernel or Ada RTS is usually customised by a "board 
support package" (BSP) that, among other things, handles the interfaces 
to clocks and timers on the target computer (the "board"). Such systems 
can provide high-accuracy timers and mechanisms for execution-time 
monitoring without having to change the compiler; it should be enough to 
change the implementation of Ada.Execution_Time. In effect, 
Ada.Execution_Time would be a part of the BSP, or depend on types and 
services defined in the BSP.

The question is then if the compiler/system vendors will take the 
trouble and cost to customise Ada.Execution_Time for a particular 
board/computer, or if they will just use the general, lowest-denominator 
but portable service provided by the kernel. Dmitry indicates that GNAT 
on VxWorks takes the latter, easy way out. That's a matter of cost vs 
demand; if the users want it, the vendors can provide it.

For example, the "CPU Usage Statistics" section of the "RTEMS C User's 
Guide" says: "RTEMS versions newer than the 4.7 release series, support 
the ability to obtain timestamps with nanosecond granularity if the BSP 
provides support. It is a desirable enhancement to change the way the 
usage data is gathered to take advantage of this recently added 
capability. Please consider sponsoring the core RTEMS development team 
to add this capability." Thus, in March 2010 the RTEMS developers wanted 
and could implement accurate execution-time measurement, but no customer 
had yet paid for its implementation.

> This shows up when you talk about the inability to reproduce the results. In 
> practice, I think you'll find that the values vary a lot from run to run; 
> the net effect is that the error is a large percentage of the results for 
> many values.

Again, I may not have been clear on my view of the reproducibility 
and variability of execution-time measurements. The variability in the 
measured execution time of a task has three components or sources:

1. Variable execution paths due to data-dependent conditional 
control-flow (if-then-else, case, loop). In other words, on different 
executions of the task, different sequences of instructions are 
executed, leading to different execution times.

2. Variable execution times of individual instructions, for example due to 
variable cache contents and variable bus contention.

3. Variable measurement errors, for example truncations or roundings in 
the count of CPU_Time clock cycles, variable amount of interrupt 
handling included in the task execution time, etc.

Components 1 and 2 are big problems for worst-case analysis but in my 
view are not problems for Ada.Execution_Time. In fact, I think that one 
of the intended uses of Ada.Execution_Time is to help systems make good 
use of this execution-time variability by letting the system do other 
useful things when some task happens to execute quicker than its 
worst-case execution time and leaves some slack in the schedule.

Only component 3 is an "error" in the measured value. This component, 
and also constant measurement errors if any, are problems for users of 
Ada.Execution_Time.

> Your discussion of real-time scheduling using this still seems to me to be 
> more in the realm of academic exercise than something practical. I'm sure 
> that it works in very limited circumstances,

I think these circumstances have some correlation with the domain for 
which Ada is or was intended: reliable, embedded, possibly real-time 
systems. Well, this is a limited niche.

> but those circumstances are 
> getting less and less likely by the year. Techniques that assume a strong 
> correspondence between these values and real-time are simply fantasy -- I 
> would hope no one depend on these for anything of importance.

If this is true, Ada.Execution_Time is useless for real-time systems. 
Since it was added to the RM for 2005, and is still in the draft for RM 
2012, I suppose the ARG majority does not share your view.

Thanks for a frank discussion, and best wishes for 2011!

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 10%]

* Re: An Example for Ada.Execution_Time
  2010-12-31 18:57  9%             ` Niklas Holsti
@ 2011-01-01 13:39  4%               ` Dmitry A. Kazakov
  2011-01-01 20:25 10%                 ` Niklas Holsti
  0 siblings, 1 reply; 170+ results
From: Dmitry A. Kazakov @ 2011-01-01 13:39 UTC (permalink / raw)


On Fri, 31 Dec 2010 20:57:54 +0200, Niklas Holsti wrote:

> Dmitry A. Kazakov wrote:
>> On Fri, 31 Dec 2010 14:42:41 +0200, Niklas Holsti wrote:
>> 
>>> Dmitry A. Kazakov wrote:
>>>> On Thu, 30 Dec 2010 18:51:30 -0500, BrianG wrote:
>>>>
>>>>> Since D.16 defines CPU_Time as if it were a numeric value, is it too 
>>>>> much to ask why a conversion to some form of numeric value wasn't 
>>>>> provided?
>>>> But time is not a number and was not defined as if it were.
>>> You keep saying that, Dmitri, but your only argument seems to be the 
>>> absence of some operators like addition for CPU_Time. Since CPU_Time is 
>>> private, we cannot tell if this absence means that the D.14 authors 
>>> considered the type non-numeric, or just considered the operators 
>>> unnecessary for the intended uses.
>> 
>> No, the argument is that time is a state of some recurrent process, like
>> the position of an Earth's meridian relatively to the Sun. This state is
>> not numeric, it could be numeric though. That depends on the nature of the
>> process.
> 
> This is your view of what the English word "time" means.

The English word "time" it has many meanings.

> It is not based on any text in the RM, as far as I can see.

It is based on the fact that RM always introduces a distinct type, when it
means duration, time interval, period, e.g.: Duration, Time_Span. When RM
uses a type named "time," it does not mean duration. This why it does not
declare it numeric. It does not provide addition of times or multiplication
by a scalar, which were appropriate if time were numeric or had the meaning
duration. CPU_Time is handled accordingly. D.14 reuses Time_Span for
intervals of CPU_Time, which stresses the difference. If this is not clean,
then it is not because CPU_Time is a duration, but because CPU_Time can be (and
is) unrelated to the source of Ada.Real_Time.Time. Thus reusing Time_Span
for its intervals is questionable. It would be better to introduce a separate
type for this, e.g. CPU_Time_Interval. Furthermore, the CPU_Time type
should be local to the task, preventing its usage anywhere outside the
task, while CPU_Time_Interval could be the same for all tasks.
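
Very roughly, such an alternative could look like the following hypothetical
sketch (spec only; the package name, operations and private completion are
all invented, this is not the RM package, and real task-locality of CPU_Time
is not expressible this way):

   with Ada.Task_Identification;

   package Execution_Time_Alternative is

      type CPU_Time is private;
      type CPU_Time_Interval is private;  --  distinct from Ada.Real_Time.Time_Span

      function Clock
        (T : Ada.Task_Identification.Task_Id :=
               Ada.Task_Identification.Current_Task) return CPU_Time;

      function "-" (Left, Right : CPU_Time) return CPU_Time_Interval;
      function "+" (Left : CPU_Time; Right : CPU_Time_Interval) return CPU_Time;

      function To_Duration (Interval : CPU_Time_Interval) return Duration;

   private
      --  representation would be implementation-defined;
      --  a 64-bit tick count is one possibility
      type CPU_Time          is new Long_Long_Integer;
      type CPU_Time_Interval is new Long_Long_Integer;
   end Execution_Time_Alternative;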

>>> - by RM D.14 (13/2), "the execution time value is set to zero at the 
>>> creation of the task".
>> 
>> I agree that here RM is sloppy. They should talk about an "epoch"
>> rather than "zero," if they introduced CPU_Time as a time.
> 
> So, here the RM disagrees with your view that CPU_Time is not numeric,

No it does not. "Zero" is not a numeric term, it denotes a specific element
of a group (an additive identity element), zero object (an initial
element). The number named "zero" is a special case, when the group is
numeric.

RM is consistent here, but, as I said, sloppy, because the zero element of
a time system has a specific name: "epoch." RM uses this term in D.8, so
should it do here.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 4%]

* Re: An Example for Ada.Execution_Time
  2010-12-30 23:51  4%     ` BrianG
  2010-12-31  9:11 12%       ` Dmitry A. Kazakov
@ 2011-01-01  0:07  3%       ` Randy Brukardt
  1 sibling, 0 replies; 170+ results
From: Randy Brukardt @ 2011-01-01  0:07 UTC (permalink / raw)


"BrianG" <briang000@gmail.com> wrote in message 
news:ifj5u9$rr5$1@news.eternal-september.org...
> Randy Brukardt wrote:
>> "BrianG" <briang000@gmail.com> wrote in message 
>> news:ifbi5c$rqt$1@news.eternal-september.org...
>> ...
>>>    >Neither Execution_Time or Execution_Time.Timers provides any value
>>>    >that can be used directly.
>>
>> This seems like a totally silly question. Giving this some sort of 
>> religious importance is beyond silly...
>>
>>                                    Randy.
>
>
> Apparently, asking how a package, defined in the standard, was intended to 
> be used is now a silly question, and asking for an answer to the question 
> I originally asked (which was to clarify a previous response, not to 
> provide an example of use) is a religious debate.  I need to revise my 
> definitions.

But you didn't ask how a package defined in the standard was intended to be 
used. You asked why you have to use another package (Ada.Real_Time) in order 
for it to be useful. And you've repeated that over and over and over like it 
was meaningful in some way. But that is pretty much the definition of a 
silly question. It's just the way the package was defined, and it doesn't 
matter beyond having to add one additional "with" clause.

And that's pretty much irrelevant. In real Ada programs, there are many with 
clauses in the average compilation unit. In Janus/Ada, the number of withs 
averages 20 or so, and Claw programs are much higher than that. One could 
reduce those numbers by putting everything into a few massive packages, but 
those would be unwieldy, poorly encapsulated, and close to unmaintainable.

The need to add one extra with to use a real-time package just falls into 
the noise. Probably it would have been better to offer the option of 
retrieving a value in terms of Duration, but it is just not a significant 
difference.

The answer to the "how do you use" question is simple and has been provided 
many times: use "-" to get a Time_Span, operate on that, and why that would 
be a problem to you or anyone else is beyond my comprehension.
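
For the record, that pattern in full -- a minimal sketch, where the loop
merely stands in for whatever code is being measured:

   with Ada.Execution_Time;
   with Ada.Real_Time;
   with Ada.Text_IO;

   procedure Show_CPU_Usage is
      use type Ada.Execution_Time.CPU_Time;   --  makes "-" on CPU_Time visible

      Before : constant Ada.Execution_Time.CPU_Time := Ada.Execution_Time.Clock;
      Used   : Ada.Real_Time.Time_Span;
      X      : Long_Float := 0.0;
   begin
      for I in 1 .. 1_000_000 loop            --  the code being measured
         X := X + Long_Float (I);
      end loop;

      Used := Ada.Execution_Time.Clock - Before;   --  "-" yields a Time_Span
      Ada.Text_IO.Put_Line
        ("CPU seconds used:" &
         Duration'Image (Ada.Real_Time.To_Duration (Used)));
      Ada.Text_IO.Put_Line ("X =" & Long_Float'Image (X));
   end Show_CPU_Usage;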

> I won't claim to be an expert on the RM, but I don't recall any other 
> package (I did look at the ones you mention) that define a private type 
> but don't provide operations that make that type useful (for some 
> definition of 'use').  Ada.Directories doesn't require Ada.IO_Exceptions 
> to use Directory_Entry_Type or Search_Type; Ada.Streams.Stream_IO doesn't 
> require Ada.Streams (or Ada.Text_IO) to Create/Open/Read/etc. a File_Type. 
> The only thing provided from a CPU_Time is a count in seconds, or another 
> private type.

Here I completely disagree. If you plan to do anything *practical* with the 
Ada.Directories types, you'll have to use another package (at least 
Ada.Text_IO) to do something with the results. (Indeed, that is true of 
*all* Ada packages -- you have to do I/O somewhere or the results are 
irrelevant.  And you are wrong about Stream_IO.Read; you have to use a 
Stream_Element_Array in order to do that, and that is in Ada.Streams, not in 
Stream_IO.

In any case, I'm done wasting my time answering this question. It's obvious 
that you have lost your mind vis-a-vis this question, and there is no reason 
to waste any more time if/until you get it back. Do not feed the troll (even 
if the troll is someone that is otherwise reasonable).

                                                    Randy.





^ permalink raw reply	[relevance 3%]

* Re: Ada.Execution_Time
  2010-12-30 23:49  8%                                     ` Ada.Execution_Time Niklas Holsti
@ 2010-12-31 23:34  8%                                       ` Randy Brukardt
  2011-01-01 13:52 10%                                         ` Ada.Execution_Time Niklas Holsti
  2011-01-01 15:54  9%                                         ` Ada.Execution_Time Simon Wright
  0 siblings, 2 replies; 170+ results
From: Randy Brukardt @ 2010-12-31 23:34 UTC (permalink / raw)


I think we're actually in agreement on most points. The main difference is 
that I contend that the theoretical "execution time" is, in actual practice, 
unmeasurable. ("Unmeasurable" is a bit strong, but I mean that you can only 
get a gross estimation of it.)

You have argued that there exist cases where it is possible to do better (a 
hardware clock, a single processor, a kernel that doesn't get in the way of 
using the hardware clock, no contention with other devices on the bus, 
etc.), and I wouldn't disagree. The problem is that those cases aren't 
realistic, particularly if you are talking about a language-defined package. 
(Even if there is a hardware clock available on a particular target, an 
off-the-shelf compiler isn't going to be able to use it. It can only use the 
facilities of the kernel or OS.)

This shows up when you talk about the inability to reproduce the results. In 
practice, I think you'll find that the values vary a lot from run to run; 
the net effect is that the error is a large percentage of the results for 
many values.

Your discussion of real-time scheduling using this still seems to me to be 
more in the realm of academic exercise than something practical. I'm sure 
that it works in very limited circumstances, but those circumstances are 
getting less and less likely by the year. Techniques that assume a strong 
correspondence between these values and real-time are simply fantasy -- I 
would hope no one depends on these for anything of importance. (With the 
possible exception of an all-Ada kernel, but even then you would be 
*writing* Ada.Execution_Time, not using it.) A weak correspondence (for 
choosing between otherwise similar tasks, for one instance) might make 
sense, but that is about it.

                       Randy.


"Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
news:8o4k3tFko2U1@mid.individual.net...
> Before I answer Randy's points, below, I will try to summarise my position 
> in this discussion. It seems that my convoluted dialog with Dmitry has not 
> made it clear. I'm sorry that this makes for a longish post.
>
> I am not proposing or asking for new requirements on Ada.Execution_Time in 
> RM D.14. I accept that the accuracy and other qualities of the 
> implementation are (mostly) not specified in the RM, so Randy is right 
> that the implementors can (mostly) just provide an interface to whatever 
> services exist, and Dmitry is also (mostly) right when he says that 
> Ada.Execution_Time can deliver garbage results and still satisfy the RM 
> requirements.
>
> However, I think we all hope that, as Randy said, implementers are going 
> to provide the best implementation that they can. Speaking of a "best" 
> implementation implies that there is some ideal solution, perhaps not 
> practically realizable, against which the quality of an implementation can 
> be judged. Moreover, since the RM defines the language and is the basis on 
> which implementors work, I would expect that the RM tries to describe this 
> ideal solution, perhaps only through informal definitions, rationale, 
> implementation advice, or annotations. This is what I have called the 
> "intent of Ada.Execution_Time" and it includes such questions as the 
> intended meaning of a CPU_Time value and what is meant by "execution time 
> of a task".
>
> My background for this discussion and for understanding Ada.Execution_Time 
> is real-time programming and analysis, and in particular the various forms 
> of schedulability analysis. In this domain the theory and practice depend 
> crucially on the execution times of tasks, usually on the worst-case 
> execution time (WCET) but sometimes on the whole range or distribution of 
> execution times. Moreover, the theory and practice assume that "the 
> execution time of a task" has a physical meaning and a strong relationship 
> to real time.
>
> For example, it is assumed (usually implicitly) that when a task is 
> executing uninterrupted on a processor, the execution time of the task and 
> the real time increase at the same rate -- this is more or less the 
> definition of "execution time". Another (implicitly) assumed property is 
> that if a processor first runs task A for X seconds of execution time, 
> then switches to task B for Y seconds of execution time, the elapsed real 
> time equals X + Y plus some "overhead" time for the task switch. (As a 
> side comment, I admit that some of these assumptions are becoming dubious 
> for complex processors where tasks can have strong interactions, for 
> example through the cache.)
>
> I have assumed, and still mostly believe, that this concept of "execution 
> time of a task" is the ideal or intent behind Ada.Execution_Time, and that 
> approaching this ideal is an implementer's goal.
>
> My participation in this thread started with my objection to Dmitry's 
> assertion that "CPU_Time has no physical meaning". I may have 
> misunderstood Dmitry's thought, leading to a comedy of misunderstandings. 
> Perhaps Dmitry only meant that in the absence of any accuracy 
> requirements, CPU_Time may not have a useful physical meaning in a poor 
> implementation of Ada.Execution_Time. I accept that, but I think that a 
> good implementation should try to give CPU_Time the physical meaning that 
> "execution time" has in the theory and practice of real-time systems, as 
> closely as is practical and desired by the users of the implementation.
>
> My comments in this thread therefore show the properties that I think a 
> good implementation of Ada.Execution_Time should have, and are not 
> proposed as new requirements. At most they could appear as additions to 
> the rationale, advice, or annotations for RM D.14. I measure the 
> "goodness" of an implementation as "usefulness for implementing real-time 
> programs". Others, perhaps Randy or Dmitry, may have other goodness 
> measures.
>
> This thread started by a question about how Ada.Execution_Time is meant to 
> be used. I think it is useful to discuss the properties and uses that can 
> be expected of a good implementation, even if the RM also allows poor 
> implementations.
>
> Randy Brukardt wrote:
>> "Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
>> news:8o0p0lF94rU1@mid.individual.net...
>>> I'm sure that the proposers of the package Ada.Execution_Time expected 
>>> the implementation to use the facilities of the underlying system. But I 
>>> am also confident that they had in mind some specific uses of the 
>>> package and that these uses require that the values provided by 
>>> Ada.Execution_Time have certain properties that can reasonably be 
>>> expected of "execution time", whether or not these properties are 
>>> expressly written as requirements in the RM.
>>
>> Probably, but quality of implementation is rarely specified in the Ada 
>> Standard. When it is, it generally is in the form of Implementation 
>> Advice (as opposed to hard requirements). The expectation is that 
>> implementers are going to provide the best implementation that they 
>> can -- implementers don't purposely build crippled or useless 
>> implementations. Moreover, that is *more* likely when the Standard is 
>> overspecified, simply because of the need to provide something that meets 
>> the standard.
>
> I agree.
>
>>> You put "time" in quotes, Randy. Don't you agree that there *is* a 
>>> valid, physical concept of "the execution time of a task" that can be 
>>> measured in units of physical time, seconds say? At least for processors 
>>> that only execute one task at a time, and whether or not the system 
>>> provides facilities for measuring this time?
>>
>> I'm honestly not sure. The problem is that while such a concept might 
>> logicially exist, as a practical matter it cannot be measured outside of 
>> the most controlled circumstances.
>
> I don't think that measurement is so difficult or that the circumstances 
> must be so very controlled.
>
> Let's assume a single-processor system and a measurement mechanism like 
> the "Model A" that Dmitry described. That is, we have a HW counter that is 
> driven by a fixed frequency. For simplicity and accuracy, let's assume 
> that the counter is driven by the CPU clock so that the counter changes 
> are synchronized with instruction executions. I think such counters are 
> not uncommon in current computers used in real-time systems, although they 
> may be provided by the board and not the processor. We implement 
> Ada.Execution_Time by making the task-switching routines and the Clock 
> function in Ada.Execution_Time read the value of the counter to keep track 
> of how much the counter increases while the processor is running a given 
> task. The accumulated increase is stored in the TCB of the task when the 
> task is not running.
>
> Randy, why do you think that this mechanism does not measure the execution 
> time of the tasks, or a good approximation of the ideal? There are of 
> course nice questions about exactly when in the task-switching code the 
> counter is read and stored, and what to do with interrupts, but I think 
> these are details that can be dismissed as in RM D.14 (11/2) by 
> considering them implementation defined.
>
> It is true that execution times measured in that way are not exactly 
> repeatable on today's processors, even when the task follows exactly the 
> same execution path in each measurement, because of external perturbations 
> (such as memory access delays due to DRAM refresh) and inter-task 
> interferences (such as cache evictions due to interrupts or preempting 
> tasks). But this non-repeatability is present in the ideal concept, too, 
> and I don't expect Ada.Execution_Time.Clock to give exactly repeatable 
> results. It should show the execution time in the current execution of the 
> task.
>
>> Thus, that might make sense in a bare-board Ada implementation, but not 
>> in any implementation running on top of any OS or kernel.
>
> The task concept is not so Ada-specific. For example, I would call RTEMS a 
> "kernel", and it has a general (not Ada-specific) service for measuring 
> per-task execution times, implemented much as Dmitry's Model A except that 
> the "RTC counter" may be the RTEMS interrupt-driven "clock tick" counter, 
> not a HW counter.
>
> And what is an OS but a kernel with a large set of services, and usually 
> running several unrelated applications / processes, not just several tasks 
> in one application? Task-specific execution times for an OS-based 
> application are probably less repeatable, and less useful, but that does 
> not detract from the principle.
>
>> As such, whether the concept exists is more of an "angels on the head of 
>> a pin" question than anything of practical importance.
>
> It is highly important for all practical uses of real-time scheduling 
> theories and algorithms, since it is their basic assumption. Of course, 
> some real-time programmers are of the opinion that real-time scheduling 
> theories are of no practical importance. I am not one of them :-)
>
>>> If so, then even if Ada.Execution_Time is intended as only a window into 
>>> these facilities, it is still intended to provide measures of the 
>>> physical execution time of tasks, to some practical level of accuracy.
>>
>> The problem is that that "practical level of accuracy" isn't realistic.
>
> I disagree. I think execution-time measurements using Dmitry's Model A are 
> practical and can be sufficiently accurate, especially if they use a 
> hardware counter or RTC.
>
>> Moreover, I've always viewed such facilities as "profiling" ones -- it's 
>> the relative magnitudes of the values that matter, not the absolute 
>> values. In that case, the scale of the values is not particularly 
>> relevant.
>
> When profiling is used merely to identify the CPU hogs or bottlenecks, 
> with the aim of speeding up a program, I agree that the scale is not 
> relevant. Profiling is often used in this way for non-real-time systems. 
> It can be done also for real-time systems, but would not help in the 
> analysis of schedulability if the time-scale is arbitrary.
>
> If profiling is used to measure task execution times for use in 
> schedulability analysis or even crude CPU load computations, the scale 
> must be real time, because the reference values (deadlines, load 
> measurement intervals) are expressed in real time.
>
>>> It has already been said, and not only by me, that Ada.Execution_Time is 
>>> intended (among other things, perhaps) to be used for implementing task 
>>> scheduling algorithms that depend on the accumulated execution time of 
>>> the tasks. This is supported by the Burns and Wellings paper referenced 
>>> above. In such algorithms I believe it is essential that the execution 
>>> times are physical times because they are used in formulae that relate 
>>> (sums of) execution-time spans to spans of real time.
>>
>> That would be a wrong interpretation of the algorithms, I think. (Either 
>> that, or the algorithms themselves are heavily flawed!). The important 
>> property is that all of the execution times have a reasonably 
>> proportional relationship to the actual time spent executing each task 
>> (that hypothetical concept); the absolute values shouldn't matter much
>
> I think you are wrong. The algorithms presented in the Burns and Wellings 
> paper that I referred to implement "execution-time servers" which, as I 
> understand them, are meant to limit the fraction of the CPU power that is 
> given to certain groups of tasks and work as follows in outline. The CPU 
> fraction is defined by an execution-time budget, say B seconds, that the 
> tasks in the group jointly consume. When the budget is exhausted, the 
> tasks in the group are either suspended or demoted to a background 
> priority. The budget is periodically replenished (increased up to B 
> seconds) every R seconds, with B < R for a one-processor system. The goal 
> is thus to let this task group use at most B/R of the CPU time. In other 
> words, that the CPU load fraction from this task group should be no more 
> than B/R.
>
> The B part is measured in execution time (CPU_Time differences as 
> Time_Spans) and the R part in real time. If execution time is not scaled 
> properly, the load fraction B/R is wrong in proprtion, and the CPU time 
> (1-B/R) left for other, perhaps more critical tasks could be too small, 
> causing deadline misses and failures. It is essential that execution time 
> (B) is commensurate with real time (R). The examples in the paper also 
> show this assumption.
>
> As usual for academic papers in real-time scheduling, Burns and Wellings 
> make no explicit assumptions about the meaning of execution time. They do 
> mention measurement inaccuracies, but a systematic difference in scale is 
> not considered.
>
>> (just as the exact priority values are mostly irrelevant to scheduling 
>> decisions).
>
> The case of priority values is entirely different. I don't know of any 
> real-time scheduling methods or analyses in which the quantitative values 
> of priorities are important; only their relative order is important. In 
> contrast, all real-time scheduling methods and analyses that I know of 
> depend on the quantitative, metric, values of task execution times. 
> Perhaps some heuristic scheduling methods like "shortest task first" are 
> exceptions to this rule, but I don't think they are suitable for real-time 
> systems.
>
>> Moreover, when the values are close, one would hope that the algorithms 
>> don't change behavior much.
>
> Yes, I don't think the on-line execution-time dependent scheduling 
> algorithms need very accurate measures of execution time. But if the 
> time-scale is wrong by a factor of 2, say, I'm sure the algorithms will 
> not perform as expected.
>
> Burns and Wellings say that one of the "servers" they describe may have 
> problems with cumulative drift due to measurement errors -- similar to 
> round-off or truncation errors -- and they propose methods to correct 
> that. But they do not discuss scale errors, which should lead to a much 
> larger drift.
>
>> The values would be trouble if they bore no relationship at all to the 
>> "actual time spent executing the task", but it's hard to imagine any 
>> real-world facility in which that was the case.
>
> I agree, but Dmitry claimed the opposite ("CPU_Time has no physical 
> meaning") to which I objected. And indeed it seems that under some 
> conditions the execution times measured with the Windows "quants" 
> mechanism are zero, which would certainly inconvenience scheduling 
> algorithms.
>
>>> The question is how much meaning should be read into ordinary words like 
>>> "time" when used in the RM without a formal definition.
>>>
>>> If the RM were to say that L is the length of a piece of string S, 
>>> measured in meters, and that some parts of S are colored red, some blue, 
>>> and some parts may not be colored at all, surely we could conclude that 
>>> the sum of the lengths in meters of the red, blue, and uncolored parts 
>>> equals L? And that the sum of the lengths of the red and blue parts is 
>>> at most L? And that, since we like colorful things, we hope that the 
>>> length of the uncolored part is small?
>>>
>>> I think the case of summing task execution time spans is analogous.
>>
>> I think you are inventing things.
>
> Possibly so. RM D.14 (11/2) tries to define "the execution time of a task" 
> by using an English phrase "... is the time spent by the system ...". I am 
> trying to understand what this definition might mean, as a description of 
> the ideal or intent in the minds of the authors, who certainly intended 
> the definition to have *some* meaning for the reader.
>
> I strongly suspect that the proposers of D.14 meant "execution time" as 
> understood in the real-time scheduling theory domain, and that they felt 
> it unnecessary to define it or its properties more formally, partly out of 
> habit, as those properties are implicitly assumed in academic papers, 
> partly because any definition would have had to include messy text about 
> measurement errors, and they did not want to specify accuracy 
> requirements.
>
>> There is no such requirement in the standard, and that's good:
>
> I agree and I don't want to add such a requirement.
>
>> I've never seen a real system in which this has been true.
>
> I don't think it will be exactly true in a real system. I continue to 
> think that it will be approximately true in a good implementation of 
> Ada.Execution_Time (modulo the provision on interrupt handlers etc), and 
> that this is necessary if Ada.Execution_Time is to be used as Burns and 
> Wellings propose.
>
>> Even the various profilers I wrote for MS-DOS (the closest system to a 
>> bare machine that will ever be in wide use) never had this property.
>
> I can't comment on that now, but I would be interested to hear what you 
> tried, and what happened.
>
>>> Moreover, on a multi-process systems (an Ada program running under 
>>> Windows or Linux, for example) some of the CPU time is spent on other 
>>> processes, all of which would be "overhead" from the point of view of 
>>> the Ada program. I don't think that the authors of D.14 had such systems 
>>> in mind.
>>
>> I disagree, in the sense that the ARG as a whole certainly considered the 
>> use of this facility in all environments.
>
> Aha. Well, as I said above on the question of Ada vs kernel vs OS, the 
> package should be implementable under an OS like Windows or Linux, too. I 
> don't think D.14 makes any attempt to exclude that, and why should it.
>
>> (I find that it would be very valuable for profiling on Windows, for 
>> instance, even if the results only have a weak relationship to reality).
>
> Yes, if all you want are the relative execution-time consumptions of the 
> tasks in order to find the CPU hogs. And the quant-truncation problem may 
> distort even the relative execution times considerably.
>
> Randy wrote in another post:
> > It's clear that he [Niklas] would never be happy
> > with a Windows implementation of Ada.Execution_Time -- too bad,
> > because it still can be useful.
>
> I probably would not be happy to use Windows at all for a real-time 
> system, so my happiness with Ada.Execution_Time is perhaps moot.
>
> But it is all a question of what accuracy is required and can be provided. 
> If I understand the description of Windows quants correctly, an 
> implementation of Ada.Execution_Time under Windows may have tolerable 
> accuracy if the average span of non-interrupted, non-preempted, and 
> non-suspended execution of task is much larger than a quant, as the 
> truncation of partially used quants then causes a relatively small error.
>
> I don't know if the scheduling methods of Windows would let one make good 
> use the measured execution times. I have no experience of Windows 
> programming on that level, unfortunately.
>
> -- 
> Niklas Holsti
> Tidorum Ltd
> niklas holsti tidorum fi
>       .      @       . 





^ permalink raw reply	[relevance 8%]

* Re: An Example for Ada.Execution_Time
  2010-12-31 13:05  4%         ` Simon Wright
  2010-12-31 14:14  4%           ` Dmitry A. Kazakov
  2010-12-31 14:24  5%           ` Robert A Duff
@ 2010-12-31 22:40  5%           ` Simon Wright
  2 siblings, 0 replies; 170+ results
From: Simon Wright @ 2010-12-31 22:40 UTC (permalink / raw)


Simon Wright <simon@pushface.org> writes:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

>> I think this is the core of misunderstanding. The thing you have in
>> mind is "time interval since the task start according to the
>> execution time clock."  It is not Ada.Execution_Time.Clock, it is:
>>
>>    Ada.Execution_Time.Clock - CPU_Time_First
>
> I don't _think_ that CPU_Time_First is actually defined as the value
> of Clock when the task starts (is initialized, put on the ready queue,
> whatever).

I think (from having had a look at GNAT GPL 2010's sources) that you are
right to suggest Time_Of (0) in place of CPU_Time_First (in GNAT's
implementation, the latter is Duration'First!).

Or Niklas's suggestion of the missing CPU_Time_Zero.



^ permalink raw reply	[relevance 5%]

* Re: An Example for Ada.Execution_Time
  2010-12-31 14:15 11%           ` Dmitry A. Kazakov
@ 2010-12-31 18:57  9%             ` Niklas Holsti
  2011-01-01 13:39  4%               ` Dmitry A. Kazakov
  0 siblings, 1 reply; 170+ results
From: Niklas Holsti @ 2010-12-31 18:57 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Fri, 31 Dec 2010 14:42:41 +0200, Niklas Holsti wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> On Thu, 30 Dec 2010 18:51:30 -0500, BrianG wrote:
>>>
>>>> Since D.16 defines CPU_Time as if it were a numeric value, is it too 
>>>> much to ask why a conversion to some form of numeric value wasn't 
>>>> provided?
>>> But time is not a number and was not defined as if it were.
>> You keep saying that, Dmitri, but your only argument seems to be the 
>> absence of some operators like addition for CPU_Time. Since CPU_Time is 
>> private, we cannot tell if this absence means that the D.14 authors 
>> considered the type non-numeric, or just considered the operators 
>> unnecessary for the intended uses.
> 
> No, the argument is that time is a state of some recurrent process, like
> the position of an Earth's meridian relatively to the Sun. This state is
> not numeric, it could be numeric though. That depends on the nature of the
> process.

This is your view of what the English word "time" means. It is not based 
on any text in the RM, as far as I can see. (And I don't see what 
"recurrent process" has to do with it. Time could also be measured by 
radioactive decay, for example, which is not recurrent. Or by water 
clocks, also not recurrent as long as the water lasts. Using recurrent 
processes like oscillators, rotators, or orbiters is just a good way to 
measure time accurately by counting periods.)

I have also tried to understand D.14 by interpreting its English words, 
like "time", but Randy says that this is "reading stuff that is not 
there". I don't entirely agree with him. For CPU_Time I think that D.14 
shows sufficiently clearly that it has a numeric meaning.

>> - by RM D.14 (13/2), "the execution time value is set to zero at the 
>> creation of the task".
> 
> I agree that here RM is sloppy. They should talk about an "epoch"
> rather than "zero," if they introduced CPU_Time as a time.

So, here the RM disagrees with your view that CPU_Time is not numeric, 
and your conclusion is that the RM is wrong? I am not convinced.

> I suppose Time_Of (0) is the time when the task started because of D.14
> (13/2).

Good point, I did not think of Time_Of (0) as a replacement for the 
(missing) CPU_Time_Zero constant.

So, even if we don't agree on the numeric or non-numeric nature of 
CPU_Time, we do agree on how to use Ada.Execution_Time.Clock to compute 
the execution time. Good!

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 9%]

* Re: An Example for Ada.Execution_Time
  2010-12-31 13:05  4%         ` Simon Wright
  2010-12-31 14:14  4%           ` Dmitry A. Kazakov
@ 2010-12-31 14:24  5%           ` Robert A Duff
  2010-12-31 22:40  5%           ` Simon Wright
  2 siblings, 0 replies; 170+ results
From: Robert A Duff @ 2010-12-31 14:24 UTC (permalink / raw)


Simon Wright <simon@pushface.org> writes:

> Of course, I competely understand that the Standard is what it is, and
> no one's going to change it at this point.

I haven't followed most of this (long!) discussion, but if
somebody reports a bug to ada-comment, and the ARG agrees
that it's sufficiently broken, then the standard will
get changed.  (Eventually.  ARG is just a bunch of
volunteers, who only meet for about 8 days per year.)

- Bob



^ permalink raw reply	[relevance 5%]

* Re: An Example for Ada.Execution_Time
  2010-12-31 12:42  9%         ` Niklas Holsti
@ 2010-12-31 14:15 11%           ` Dmitry A. Kazakov
  2010-12-31 18:57  9%             ` Niklas Holsti
  0 siblings, 1 reply; 170+ results
From: Dmitry A. Kazakov @ 2010-12-31 14:15 UTC (permalink / raw)


On Fri, 31 Dec 2010 14:42:41 +0200, Niklas Holsti wrote:

> Dmitry A. Kazakov wrote:
>> On Thu, 30 Dec 2010 18:51:30 -0500, BrianG wrote:
>> 
>>> Since D.16 defines CPU_Time as if it were a numeric value, is it too 
>>> much to ask why a conversion to some form of numeric value wasn't 
>>> provided?
>> 
>> But time is not a number and was not defined as if it were.
> 
> You keep saying that, Dmitri, but your only argument seems to be the 
> absence of some operators like addition for CPU_Time. Since CPU_Time is 
> private, we cannot tell if this absence means that the D.14 authors 
> considered the type non-numeric, or just considered the operators 
> unnecessary for the intended uses.

No, the argument is that time is a state of some recurrent process, like
the position of an Earth's meridian relatively to the Sun. This state is
not numeric, it could be numeric though. That depends on the nature of the
process. 

>> It is the time
>> differences which are numeric. RM D.14 defines differences of CPU_Time as
>> Time_Span. Time_Span is numeric.
> 
> CPU_Time is logically numeric, since its "values correspond one-to-one 
> with a .. range of mathematical integers" by RM D.14 (12/2). Moreover, 
> RM D.14 (13/2) uses the symbol "I" to stand for a value of CPU_Time, and 
> then uses "I" as a factor that multiplies a floating-point number. So 
> "I" clearly stands for a number (one of the "mathematical integers").

No, this stands only for countability. Time is not a number, it can be
represented in the form Epoch + n * Interval, where n is a number.

> - by RM D.14 (13/2), "the execution time value is set to zero at the 
> creation of the task".

I agree that here RM is sloppy. They should talk about an "epoch"
rather than "zero," if they introduced CPU_Time as a time.
 
>>    Ada.Execution_Time.Clock - CPU_Time_First
> 
> No. There is no rule in D.14 that makes CPU_Time_First equal to the 
> CPU_Time at the start of a task (which by D.14 (13/2) is "zero"). 

OK, I agree with that. Correction:

   Ada.Execution_Time.Clock - Ada.Execution_Time.Time_Of (0)

> As far as I can tell, the only sure way to get the task's execution time 
> as a Time_Span or Duration is to read and store the value of 
> Ada.Execution_Time.Clock at the start of the task, and then use that in 
> Dmitry's subtraction formula instead of CPU_Time_First.

I suppose Time_Of (0) is the time when the task started because of D.14
(13/2).
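
In code, something like this minimal sketch (assuming Time_Of (0) does
denote that epoch in the implementation at hand):

   with Ada.Execution_Time;
   with Ada.Real_Time;

   function Total_CPU_Since_Creation return Duration is
      use Ada.Execution_Time;
      --  Time_Of (0) plays the role of the "zero" of D.14 (13/2), i.e. the
      --  execution-time clock value at the creation of the calling task
      Used : constant Ada.Real_Time.Time_Span := Clock - Time_Of (0);
   begin
      return Ada.Real_Time.To_Duration (Used);
   end Total_CPU_Since_Creation;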

Happy New Year,

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 11%]

* Re: An Example for Ada.Execution_Time
  2010-12-31 13:05  4%         ` Simon Wright
@ 2010-12-31 14:14  4%           ` Dmitry A. Kazakov
  2010-12-31 14:24  5%           ` Robert A Duff
  2010-12-31 22:40  5%           ` Simon Wright
  2 siblings, 0 replies; 170+ results
From: Dmitry A. Kazakov @ 2010-12-31 14:14 UTC (permalink / raw)


On Fri, 31 Dec 2010 13:05:06 +0000, Simon Wright wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> 
>> On Thu, 30 Dec 2010 18:51:30 -0500, BrianG wrote:
>>
>>> Since D.16 defines CPU_Time as if it were a numeric value, is it too 
>>> much to ask why a conversion to some form of numeric value wasn't 
>>> provided?
>>
>> But time is not a number and was not defined as if it were. It is the time
>> differences which are numeric. RM D.14 defines differences of CPU_Time as
>> Time_Span. Time_Span is numeric.
> 
> This is certainly true. However, I remain surprised that .. even delving
> back to v1.2 of the AI .. the proposer found it necessary to have this
> type CPU_Time. The concept being looked for is the number of seconds
> during which *this* task was executing, still sounds like a Duration
> (OK, Time_Span if you must) to me.

Yes, since the epoch is unambiguous: the task start. It does not make much
sense to introduce a separate execution-time type. Maybe a reason was that a
clock returning a time span or a duration would have looked strange.

> D14(13/2) says "For each task, the execution time value is set to zero
> at the creation of the task."; however, zero is clearly not a value of
> CPU_Time, so I don't know what is meant.

Yes, Time_Of (0) seems to be the epoch.

>> There is a direct correspondence between two, yet they are
>> conceptually different things.
> 
> I have no idea what's meant by CPU_Time, then! Seems to me it's a giant
> bug waiting to happen. It's clear that the CPU_Time of one task is
> incommensurate with the CPU_Time of another. Given that, what's it
> _for_?

That does not manifest any bug. Except that each task should have had its
own CPU_Time type! Technically, it would be possible to inject an implicit
declaration of a CPU_Time type in the declaration scope of each task. We
could have a task-local "Standard/System" package encapsulating this and
similar stuff. Though, nobody would care to do this.

BTW, you could use CPU_Time as a simulation time within the task. E.g. you
could have subtasks scheduled by the task. The scheduler would use the
CPU_Time in order to activate/deactivate/re-schedule them, periodically
according to the CPU_Time, rather than the system time.
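
A much simplified sketch of that idea, just to make it concrete: bound a
piece of work by the task's own execution time instead of real time. All
names are invented and the work is a stub:

   with Ada.Execution_Time;  use Ada.Execution_Time;
   with Ada.Real_Time;       use Ada.Real_Time;

   procedure Budgeted_Work is
      Slice    : constant Time_Span := Milliseconds (10);
      Deadline : constant CPU_Time  := Clock + Slice;

      procedure Do_Some_Work is
      begin
         null;  --  placeholder for the real work item
      end Do_Some_Work;
   begin
      --  Loop until this task has *executed* for Slice, however much
      --  real time passes while it is preempted by other tasks.
      while Clock < Deadline loop
         Do_Some_Work;
      end loop;
   end Budgeted_Work;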

Happy New Year,

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 4%]

* Re: An Example for Ada.Execution_Time
  2010-12-31  9:11 12%       ` Dmitry A. Kazakov
  2010-12-31 12:42  9%         ` Niklas Holsti
@ 2010-12-31 13:05  4%         ` Simon Wright
  2010-12-31 14:14  4%           ` Dmitry A. Kazakov
                             ` (2 more replies)
  1 sibling, 3 replies; 170+ results
From: Simon Wright @ 2010-12-31 13:05 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> On Thu, 30 Dec 2010 18:51:30 -0500, BrianG wrote:
>
>> Since D.16 defines CPU_Time as if it were a numeric value, is it too 
>> much to ask why a conversion to some form of numeric value wasn't 
>> provided?
>
> But time is not a number and was not defined as if it were. It is the time
> differences which are numeric. RM D.14 defines differences of CPU_Time as
> Time_Span. Time_Span is numeric.

This is certainly true. However, I remain surprised that .. even delving
back to v1.2 of the AI .. the proposer found it necessary to have this
type CPU_Time. The concept being looked for is the number of seconds
during which *this* task was executing, still sounds like a Duration
(OK, Time_Span if you must) to me.

> I think this is the core of misunderstanding. The thing you have in
> mind is "time interval since the task start according to the execution
> time clock."  It is not Ada.Execution_Time.Clock, it is:
>
>    Ada.Execution_Time.Clock - CPU_Time_First

I don't _think_ that CPU_Time_First is actually defined as the value of
Clock when the task starts (is initialized, put on the ready queue,
whatever).

D14(13/2) says "For each task, the execution time value is set to zero
at the creation of the task."; however, zero is clearly not a value of
CPU_Time, so I don't know what is meant.

> There is a direct correspondence between two, yet they are
> conceptually different things.

I have no idea what's meant by CPU_Time, then! Seems to me it's a giant
bug waiting to happen. It's clear that the CPU_Time of one task is
incommensurate with the CPU_Time of another. Given that, what's it
_for_?


Of course, I completely understand that the Standard is what it is, and
no one's going to change it at this point.



^ permalink raw reply	[relevance 4%]

* Re: An Example for Ada.Execution_Time
  2010-12-31  9:11 12%       ` Dmitry A. Kazakov
@ 2010-12-31 12:42  9%         ` Niklas Holsti
  2010-12-31 14:15 11%           ` Dmitry A. Kazakov
  2010-12-31 13:05  4%         ` Simon Wright
  1 sibling, 1 reply; 170+ results
From: Niklas Holsti @ 2010-12-31 12:42 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Thu, 30 Dec 2010 18:51:30 -0500, BrianG wrote:
> 
>> Since D.14 defines CPU_Time as if it were a numeric value, is it too 
>> much to ask why a conversion to some form of numeric value wasn't 
>> provided?
> 
> But time is not a number and was not defined as if it were.

You keep saying that, Dmitry, but your only argument seems to be the 
absence of some operators like addition for CPU_Time. Since CPU_Time is 
private, we cannot tell if this absence means that the D.14 authors 
considered the type non-numeric, or just considered the operators 
unnecessary for the intended uses.

> It is the time
> differences which are numeric. RM D.14 defines differences of CPU_Time as
> Time_Span. Time_Span is numeric.

CPU_Time is logically numeric, since its "values correspond one-to-one 
with a .. range of mathematical integers" by RM D.14 (12/2). Moreover, 
RM D.14 (13/2) uses the symbol "I" to stand for a value of CPU_Time, and 
then uses "I" as a factor that multiplies a floating-point number. So 
"I" clearly stands for a number (one of the "mathematical integers").

> I think this is the core of misunderstanding. The thing you have in mind is
> "time interval since the task start according to the execution time clock."

I agree with that.

> It is not Ada.Execution_Time.Clock,

It is, since:

- by RM D.14 (17/2), Ada.Execution_Time.Clock returns the "current 
execution time" of the task, and

- by RM D.14 (13/2), "the execution time value is set to zero at the 
creation of the task".

Moreover, the range required of CPU_Time is "from the task start-up to 
50 years of execution time later" (RM D.14 (20/2)). This, too, indicates 
that CPU_Time is not an absolute time point like Ada.Calendar.Time, but 
represents accumulated execution time (duration) from the creation of 
the task.

However, the difference is irrelevant for the programmer, since the 
programmer can get visibly numeric values from CPU_Time only by 
computing the difference of two CPU_Time values as a Time_Span and then 
converting the Time_Span to a Duration.

> it is:
> 
>    Ada.Execution_Time.Clock - CPU_Time_First

No. There is no rule in D.14 that makes CPU_Time_First equal to the 
CPU_Time at the start of a task (which by D.14 (13/2) is "zero"). 
CPU_Time_First is just defined as the smallest value of CPU_Time, 
presumably according to the "<" operator. It might represent a negative 
execution time. The "range of mathematical integers" in D.14 (12/2) may 
include negative numbers -- at least nothing is said to exclude this 
possibility.

I don't know why Ada.Execution_Time does not define a constant called 
CPU_Time_Zero; I think it should. Perhaps CPU_Time_First is meant to 
stand for zero, but this is not stated in D.14.

Overall, I think there may be some confusion in D.14 regarding the 
interpretation of CPU_Time as absolute or relative to the task start. 
The fact that no CPU_Time_Zero is defined suggests that an "absolute" 
interpretation was meant, as does the absence of an operator that adds 
two CPU_Times to yield a CPU_Time sum. Other parts of D.14, such as the 
zero initialization at task creation, suggest the "relative to task 
start" meaning.

As far as I can tell, the only sure way to get the task's execution time 
as a Time_Span or Duration is to read and store the value of 
Ada.Execution_Time.Clock at the start of the task, and then use that in 
Dmitry's subtraction formula instead of CPU_Time_First.
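
A minimal sketch of that "store the start value" approach (all names are
mine and the work loop is only a placeholder):

   with Ada.Execution_Time;  use Ada.Execution_Time;
   with Ada.Real_Time;       use Ada.Real_Time;
   with Ada.Text_IO;

   procedure Store_Start_Demo is
      task Worker;
      task body Worker is
         Start : constant CPU_Time := Clock;  --  read at task start-up
         Spent : Duration;
      begin
         for I in 1 .. 10_000_000 loop
            null;  --  placeholder work
         end loop;
         Spent := To_Duration (Clock - Start);
         Ada.Text_IO.Put_Line
           ("Worker executed for" & Duration'Image (Spent) & " s");
      end Worker;
   begin
      null;  --  the main subprogram just waits for Worker to finish
   end Store_Start_Demo;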

> Happy New Year,

Also to all of you...

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 9%]

* Re: An Example for Ada.Execution_Time
  2010-12-30 23:51  4%     ` BrianG
@ 2010-12-31  9:11 12%       ` Dmitry A. Kazakov
  2010-12-31 12:42  9%         ` Niklas Holsti
  2010-12-31 13:05  4%         ` Simon Wright
  2011-01-01  0:07  3%       ` Randy Brukardt
  1 sibling, 2 replies; 170+ results
From: Dmitry A. Kazakov @ 2010-12-31  9:11 UTC (permalink / raw)


On Thu, 30 Dec 2010 18:51:30 -0500, BrianG wrote:

> Since D.14 defines CPU_Time as if it were a numeric value, is it too 
> much to ask why a conversion to some form of numeric value wasn't 
> provided?

But time is not a number and was not defined as if it were. It is the time
differences which are numeric. RM D.14 defines differences of CPU_Time as
Time_Span. Time_Span is numeric.

I think this is the core of misunderstanding. The thing you have in mind is
"time interval since the task start according to the execution time clock."
It is not Ada.Execution_Time.Clock, it is:

   Ada.Execution_Time.Clock - CPU_Time_First

There is a direct correspondence between two, yet they are conceptually
different things.

Happy New Year,

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 12%]

* Re: Ada.Execution_Time
  2010-12-31  0:40  5%                                             ` Ada.Execution_Time BrianG
@ 2010-12-31  9:09  5%                                               ` Dmitry A. Kazakov
  0 siblings, 0 replies; 170+ results
From: Dmitry A. Kazakov @ 2010-12-31  9:09 UTC (permalink / raw)


On Thu, 30 Dec 2010 19:40:08 -0500, BrianG wrote:

> Dmitry A. Kazakov wrote:
>> On Tue, 28 Dec 2010 14:14:57 +0000, Simon Wright wrote:
>> 
>>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
>>>
>>>> And conversely, the catastrophic accuracy of the VxWorks real-time
>>>> clock service does not hinder its usability for real-time application.
>>> Catastrophic?
>> 
>> Yes, this thing. In our case it was Pentium VxWorks 6.x. (The PPC we used
>> prior to it had poor performance) The problem was that Ada.Real_Time.Clock
>> had the accuracy of the clock interrupts, i.e. 1ms, which is by all
>> accounts catastrophic for a 1.7GHz processor. You can switch some tasks
>> forth and back between two clock changes.  
> 
> So, we're talking about GNAT's use of VxWorks' features as
> "catastrophic"?  That's not how I read the original statement.

No, GNAT just uses the standard OS clock. It is an OS design flaw. They
should have used the CPU's real-time clock, or provided a configurable
clock, so that you could choose its source. Why should AdaCore clean up
Wind River's mess?

Happy New Year,

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-28 15:08  5%                                           ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-28 16:18  5%                                             ` Ada.Execution_Time Simon Wright
@ 2010-12-31  0:40  5%                                             ` BrianG
  2010-12-31  9:09  5%                                               ` Ada.Execution_Time Dmitry A. Kazakov
  1 sibling, 1 reply; 170+ results
From: BrianG @ 2010-12-31  0:40 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Tue, 28 Dec 2010 14:14:57 +0000, Simon Wright wrote:
> 
>> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
>>
>>> And conversely, the catastrophic accuracy of the VxWorks real-time
>>> clock service does not hinder its usability for real-time application.
>> Catastrophic?
>>
...
> 
> Yes, this thing. In our case it was Pentium VxWorks 6.x. (The PPC we used
> prior to it had poor performance) The problem was that Ada.Real_Time.Clock
> had the accuracy of the clock interrupts, i.e. 1ms, which is by all
> accounts catastrophic for a 1.7GHz processor. You can switch some tasks
> forth and back between two clock changes.
>  

So, we're talking about GNAT's use of VxWorks' features as
"catastrophic"?  That's not how I read the original statement.

(We use GNAT on VxWorks, but since we don't (yet) really use tasks, and 
have our own "delay" equivalent to match external hardware, I don't see 
its performance - but since I'm used to GNAT on DOS/Windows, I don't 
have very high expectations.  It sounds like I should keep that opinion.)

--BrianG



^ permalink raw reply	[relevance 5%]

* Re: An Example for Ada.Execution_Time
  2010-12-29  3:10  9%   ` Randy Brukardt
@ 2010-12-30 23:51  4%     ` BrianG
  2010-12-31  9:11 12%       ` Dmitry A. Kazakov
  2011-01-01  0:07  3%       ` Randy Brukardt
  0 siblings, 2 replies; 170+ results
From: BrianG @ 2010-12-30 23:51 UTC (permalink / raw)


Randy Brukardt wrote:
> "BrianG" <briang000@gmail.com> wrote in message 
> news:ifbi5c$rqt$1@news.eternal-september.org...
> ...
>>    >Neither Execution_Time or Execution_Time.Timers provides any value
>>    >that can be used directly.
> 
> This seems like a totally silly question. 
> 
> Giving this some sort of religious importance is beyond 
> silly...
> 
>                                    Randy.


Apparently, asking how a package, defined in the standard, was intended 
to be used is now a silly question, and asking for an answer to the 
question I originally asked (which was to clarify a previous response, 
not to provide an example of use) is a religious debate.  I need to 
revise my definitions.

I won't claim to be an expert on the RM, but I don't recall any other 
package (I did look at the ones you mention) that defines a private type 
but doesn't provide operations that make that type useful (for some 
definition of 'use').  Ada.Directories doesn't require Ada.IO_Exceptions 
to use Directory_Entry_Type or Search_Type; Ada.Streams.Stream_IO 
doesn't require Ada.Streams (or Ada.Text_IO) to Create/Open/Read/etc. a 
File_Type.  The only thing provided from a CPU_Time is a count in 
seconds, or another private type.

Since D.14 defines CPU_Time as if it were a numeric value, is it too 
much to ask why a conversion to some form of numeric value wasn't 
provided?  Perhaps either a "-" or To_Duration  (and before anyone 
mentions duplicating the existing function, look at all of the 
Open/Close/Create/etc. for all the *_IO File_Types)?  I wasn't asking 
for anything to be changed, merely "why" - because I originally thought 
there might be some use that I hadn't foreseen.  Apparently not.

(Given the RM definition, making it a child of Real_Time might make it 
seem more logical, I guess, but since CPU_Time is not really a time, and 
is not related to "real time", that doesn't seem to make any sense.  I 
would think that would be all the more reason not to relate it to 
Ada.Real_Time.)

--BrianG

-- don't ask me
-- I'm just improvising
--   my illusion of careless flight
-- can't you see
--   my temperature's rising
-- I radiate more heat than light



^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-30  5:06  2%                                   ` Ada.Execution_Time Randy Brukardt
@ 2010-12-30 23:49  8%                                     ` Niklas Holsti
  2010-12-31 23:34  8%                                       ` Ada.Execution_Time Randy Brukardt
  0 siblings, 1 reply; 170+ results
From: Niklas Holsti @ 2010-12-30 23:49 UTC (permalink / raw)


Before I answer Randy's points, below, I will try to summarise my 
position in this discussion. It seems that my convoluted dialog with 
Dmitry has not made it clear. I'm sorry that this makes for a longish post.

I am not proposing or asking for new requirements on Ada.Execution_Time 
in RM D.14. I accept that the accuracy and other qualities of the 
implementation are (mostly) not specified in the RM, so Randy is right 
that the implementors can (mostly) just provide an interface to whatever 
services exist, and Dmitry is also (mostly) right when he says that 
Ada.Execution_Time can deliver garbage results and still satisfy the RM 
requirements.

However, I think we all hope that, as Randy said, implementers are going 
to provide the best implementation that they can. Speaking of a "best" 
implementation implies that there is some ideal solution, perhaps not 
practically realizable, against which the quality of an implementation 
can be judged. Moreover, since the RM defines the language and is the 
basis on which implementors work, I would expect that the RM tries to 
describe this ideal solution, perhaps only through informal definitions, 
rationale, implementation advice, or annotations. This is what I have 
called the "intent of Ada.Execution_Time" and it includes such questions 
as the intended meaning of a CPU_Time value and what is meant by 
"execution time of a task".

My background for this discussion and for understanding 
Ada.Execution_Time is real-time programming and analysis, and in 
particular the various forms of schedulability analysis. In this domain 
the theory and practice depend crucially on the execution times of 
tasks, usually on the worst-case execution time (WCET) but sometimes on 
the whole range or distribution of execution times. Moreover, the theory 
and practice assume that "the execution time of a task" has a physical 
meaning and a strong relationship to real time.

For example, it is assumed (usually implicitly) that when a task is 
executing uninterrupted on a processor, the execution time of the task 
and the real time increase at the same rate -- this is more or less the 
definition of "execution time". Another (implicitly) assumed property is 
that if a processor first runs task A for X seconds of execution time, 
then switches to task B for Y seconds of execution time, the elapsed 
real time equals X + Y plus some "overhead" time for the task switch. 
(As a side comment, I admit that some of these assumptions are becoming 
dubious for complex processors where tasks can have strong interactions, 
for example through the cache.)

I have assumed, and still mostly believe, that this concept of 
"execution time of a task" is the ideal or intent behind 
Ada.Execution_Time, and that approaching this ideal is an implementer's 
goal.

My participation in this thread started with my objection to Dmitry's 
assertion that "CPU_Time has no physical meaning". I may have 
misunderstood Dmitry's thought, leading to a comedy of 
misunderstandings. Perhaps Dmitry only meant that in the absence of any 
accuracy requirements, CPU_Time may not have a useful physical meaning 
in a poor implementation of Ada.Execution_Time. I accept that, but I 
think that a good implementation should try to give CPU_Time the 
physical meaning that "execution time" has in the theory and practice of 
real-time systems, as closely as is practical and desired by the users 
of the implementation.

My comments in this thread therefore show the properties that I think a 
good implementation of Ada.Execution_Time should have, and are not 
proposed as new requirements. At most they could appear as additions to 
the rationale, advice, or annotations for RM D.14. I measure the 
"goodness" of an implementation as "usefulness for implementing 
real-time programs". Others, perhaps Randy or Dmitry, may have other 
goodness measures.

This thread started by a question about how Ada.Execution_Time is meant 
to be used. I think it is useful to discuss the properties and uses that 
can be expected of a good implementation, even if the RM also allows 
poor implementations.

Randy Brukardt wrote:
> "Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
> news:8o0p0lF94rU1@mid.individual.net...
>> I'm sure that the proposers of the package Ada.Execution_Time expected the 
>> implementation to use the facilities of the underlying system. But I am 
>> also confident that they had in mind some specific uses of the package and 
>> that these uses require that the values provided by Ada.Execution_Time 
>> have certain properties that can reasonably be expected of "execution 
>> time", whether or not these properties are expressly written as 
>> requirements in the RM.
> 
> Probably, but quality of implementation is rarely specified in the Ada 
> Standard. When it is, it generally is in the form of Implementation Advice 
> (as opposed to hard requirements). The expectation is that implementers are 
> going to provide the best implementation that they can -- implementers don't 
> purposely build crippled or useless implementations. Moreover, that is 
> *more* likely when the Standard is overspecified, simply because of the need 
> to provide something that meets the standard.

I agree.

>> You put "time" in quotes, Randy. Don't you agree that there *is* a valid, 
>> physical concept of "the execution time of a task" that can be measured in 
>> units of physical time, seconds say? At least for processors that only 
>> execute one task at a time, and whether or not the system provides 
>> facilities for measuring this time?
> 
> I'm honestly not sure. The problem is that while such a concept might 
> logically exist, as a practical matter it cannot be measured outside of the 
> most controlled circumstances.

I don't think that measurement is so difficult or that the circumstances 
must be so very controlled.

Let's assume a single-processor system and a measurement mechanism like 
the "Model A" that Dmitry described. That is, we have a HW counter that 
is driven by a fixed frequency. For simplicity and accuracy, let's 
assume that the counter is driven by the CPU clock so that the counter 
changes are synchronized with instruction executions. I think such 
counters are not uncommon in current computers used in real-time 
systems, although they may be provided by the board and not the 
processor. We implement Ada.Execution_Time by making the task-switching 
routines and the Clock function in Ada.Execution_Time read the value of 
the counter to keep track of how much the counter increases while the 
processor is running a given task. The accumulated increase is stored in 
the TCB of the task when the task is not running.
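
In outline, the bookkeeping is no more than this (a sketch with invented
names; Counter_Now stands for reading the free-running HW counter, and a
real run-time system would do this inside its context-switch code rather
than in library-level Ada):

   package Exec_Time_Accounting is
      type Ticks is mod 2**64;              --  counter values, wrap-around
      function Counter_Now return Ticks;    --  stub for the HW counter

      type Per_Task is record
         Accumulated : Ticks := 0;  --  ticks charged to the task so far
         Switched_In : Ticks := 0;  --  counter value when it got the CPU
      end record;

      procedure Switch_In  (T : in out Per_Task);
      procedure Switch_Out (T : in out Per_Task);
   end Exec_Time_Accounting;

   package body Exec_Time_Accounting is
      Fake_Counter : Ticks := 0;

      function Counter_Now return Ticks is
      begin
         Fake_Counter := Fake_Counter + 1;  --  stands in for the register
         return Fake_Counter;
      end Counter_Now;

      procedure Switch_In (T : in out Per_Task) is
      begin
         T.Switched_In := Counter_Now;
      end Switch_In;

      procedure Switch_Out (T : in out Per_Task) is
      begin
         --  Modular subtraction copes with counter wrap-around.
         T.Accumulated := T.Accumulated + (Counter_Now - T.Switched_In);
      end Switch_Out;
   end Exec_Time_Accounting;

Clock for a given task then just returns Accumulated (plus the live
difference, if the task is currently running) scaled by the counter period.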

Randy, why do you think that this mechanism does not measure the 
execution time of the tasks, or a good approximation of the ideal? There 
are of course nice questions about exactly when in the task-switching 
code the counter is read and stored, and what to do with interrupts, but 
I think these are details that can be dismissed as in RM D.14 (11/2) by 
considering them implementation defined.

It is true that execution times measured in that way are not exactly 
repeatable on today's processors, even when the task follows exactly the 
same execution path in each measurement, because of external 
perturbations (such as memory access delays due to DRAM refresh) and 
inter-task interferences (such as cache evictions due to interrupts or 
preempting tasks). But this non-repeatability is present in the ideal 
concept, too, and I don't expect Ada.Execution_Time.Clock to give 
exactly repeatable results. It should show the execution time in the 
current execution of the task.

> Thus, that might make sense in a bare-board 
> Ada implementation, but not in any implementation running on top of any OS 
> or kernel.

The task concept is not so Ada-specific. For example, I would call RTEMS 
a "kernel", and it has a general (not Ada-specific) service for 
measuring per-task execution times, implemented much as Dmitry's Model A 
except that the "RTC counter" may be the RTEMS interrupt-driven "clock 
tick" counter, not a HW counter.

And what is an OS but a kernel with a large set of services, and usually 
running several unrelated applications / processes, not just several 
tasks in one application? Task-specific execution times for an OS-based 
application are probably less repeatable, and less useful, but that does 
not detract from the principle.

> As such, whether the concept exists is more of an "angels on the 
> head of a pin" question than anything of practical importance.

It is highly important for all practical uses of real-time scheduling 
theories and algorithms, since it is their basic assumption. Of course, 
some real-time programmers are of the opinion that real-time scheduling 
theories are of no practical importance. I am not one of them :-)

>> If so, then even if Ada.Execution_Time is intended as only a window into 
>> these facilities, it is still intended to provide measures of the physical 
>> execution time of tasks, to some practical level of accuracy.
> 
> The problem is that that "practical level of accuracy" isn't realistic. 

I disagree. I think execution-time measurements using Dmitry's Model A 
are practical and can be sufficiently accurate, especially if they use a 
hardware counter or RTC.

> Moreover, I've always viewed such facilities as "profiling" ones -- it's the 
> relative magnitudes of the values that matter, not the absolute values. In 
> that case, the scale of the values is not particularly relevant.

When profiling is used merely to identify the CPU hogs or bottlenecks, 
with the aim of speeding up a program, I agree that the scale is not 
relevant. Profiling is often used in this way for non-real-time systems. 
It can be done also for real-time systems, but would not help in the 
analysis of schedulability if the time-scale is arbitrary.

If profiling is used to measure task execution times for use in 
schedulability analysis or even crude CPU load computations, the scale 
must be real time, because the reference values (deadlines, load 
measurement intervals) are expressed in real time.

>> It has already been said, and not only by me, that Ada.Execution_Time is 
>> intended (among other things, perhaps) to be used for implementing task 
>> scheduling algorithms that depend on the accumulated execution time of the 
>> tasks. This is supported by the Burns and Wellings paper referenced above. 
>> In such algorithms I believe it is essential that the execution times are 
>> physical times because they are used in formulae that relate (sums of) 
>> execution-time spans to spans of real time.
> 
> That would be a wrong interpretation of the algorithms, I think. (Either that, 
> or the algorithms themselves are heavily flawed!). The important property is 
> that all of the execution times have a reasonably proportional relationship 
> to the actual time spent executing each task (that hypothetical concept); 
> the absolute values shouldn't matter much

I think you are wrong. The algorithms presented in the Burns and 
Wellings paper that I referred to implement "execution-time servers" 
which, as I understand them, are meant to limit the fraction of the CPU 
power that is given to certain groups of tasks and work as follows in 
outline. The CPU fraction is defined by an execution-time budget, say B 
seconds, that the tasks in the group jointly consume. When the budget is 
exhausted, the tasks in the group are either suspended or demoted to a 
background priority. The budget is periodically replenished (increased 
up to B seconds) every R seconds, with B < R for a one-processor system. 
The goal is thus to let this task group use at most B/R of the CPU time. 
In other words, that the CPU load fraction from this task group should 
be no more than B/R.

The B part is measured in execution time (CPU_Time differences as 
Time_Spans) and the R part in real time. If execution time is not scaled 
properly, the load fraction B/R is wrong in proportion, and the CPU time 
(1-B/R) left for other, perhaps more critical tasks could be too small, 
causing deadline misses and failures. It is essential that execution 
time (B) is commensurate with real time (R). The examples in the paper 
also show this assumption.
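
To make the coupling of B and R concrete, here is a deliberately crude
polling sketch. It is not one of the Burns and Wellings servers (those are
built on Ada.Execution_Time.Timers rather than polling); all names and
numbers are invented, and the actual demotion is left as a comment:

   with Ada.Execution_Time;      use Ada.Execution_Time;
   with Ada.Real_Time;           use Ada.Real_Time;
   with Ada.Task_Identification; --  for the Task_Id yielded by 'Identity

   procedure Budget_Monitor_Sketch is
      B : constant Time_Span := Milliseconds (20);   --  execution-time budget
      R : constant Time_Span := Milliseconds (100);  --  real-time period

      task Served;               --  stands for the served group of tasks
      task body Served is
      begin
         loop
            null;                --  placeholder workload
         end loop;
      end Served;

      Next_Replenish  : Ada.Real_Time.Time;
      Start_Of_Period : CPU_Time;
   begin
      Next_Replenish  := Ada.Real_Time.Clock + R;
      Start_Of_Period := Clock (Served'Identity);
      loop
         delay until Next_Replenish;
         --  Execution time consumed by Served during the last period,
         --  compared against the budget B; both are Time_Spans.
         if Clock (Served'Identity) - Start_Of_Period > B then
            null;  --  over budget: demote or suspend the served tasks here
         end if;
         Next_Replenish  := Next_Replenish + R;
         Start_Of_Period := Clock (Served'Identity);
      end loop;
   end Budget_Monitor_Sketch;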

As usual for academic papers in real-time scheduling, Burns and Wellings 
make no explicit assumptions about the meaning of execution time. They 
do mention measurement inaccuracies, but a systematic difference in 
scale is not considered.

> (just as the exact priority values 
> are mostly irrelevant to scheduling decisions).

The case of priority values is entirely different. I don't know of any 
real-time scheduling methods or analyses in which the quantitative 
values of priorities are important; only their relative order is 
important. In contrast, all real-time scheduling methods and analyses 
that I know of depend on the quantitative, metric, values of task 
execution times. Perhaps some heuristic scheduling methods like 
"shortest task first" are exceptions to this rule, but I don't think 
they are suitable for real-time systems.

> Moreover, when the values 
> are close, one would hope that the algorithms don't change behavior much.

Yes, I don't think the on-line execution-time dependent scheduling 
algorithms need very accurate measures of execution time. But if the 
time-scale is wrong by a factor of 2, say, I'm sure the algorithms will 
not perform as expected.

Burns and Wellings say that one of the "servers" they describe may have 
problems with cumulative drift due to measurement errors -- similar to 
round-off or truncation errors -- and they propose methods to correct 
that. But they do not discuss scale errors, which should lead to a much 
larger drift.

> The values would be trouble if they bore no relationship at all to the 
> "actual time spent executing the task", but it's hard to imagine any 
> real-world facility in which that was the case.

I agree, but Dmitry claimed the opposite ("CPU_Time has no physical 
meaning") to which I objected. And indeed it seems that under some 
conditions the execution times measured with the Windows "quants" 
mechanism are zero, which would certainly inconvenience scheduling 
algorithms.

>> The question is how much meaning should be read into ordinary words like 
>> "time" when used in the RM without a formal definition.
>>
>> If the RM were to say that L is the length of a piece of string S, 
>> measured in meters, and that some parts of S are colored red, some blue, 
>> and some parts may not be colored at all, surely we could conclude that 
>> the sum of the lengths in meters of the red, blue, and uncolored parts 
>> equals L? And that the sum of the lengths of the red and blue parts is at 
>> most L? And that, since we like colorful things, we hope that the length 
>> of the uncolored part is small?
>>
>> I think the case of summing task execution time spans is analogous.
> 
> I think you are inventing things.

Possibly so. RM D.14 (11/2) tries to define "the execution time of a 
task" by using an English phrase "... is the time spent by the system 
...". I am trying to understand what this definition might mean, as a 
description of the ideal or intent in the minds of the authors, who 
certainly intended the definition to have *some* meaning for the reader.

I strongly suspect that the proposers of D.14 meant "execution time" as 
understood in the real-time scheduling theory domain, and that they felt 
it unnecessary to define it or its properties more formally, partly out 
of habit, as those properties are implicitly assumed in academic papers, 
partly because any definition would have had to include messy text about 
measurement errors, and they did not want to specify accuracy requirements.

> There is no such requirement in the standard, and that's good:

I agree and I don't want to add such a requirement.

> I've never seen a real system in which this has been true.

I don't think it will be exactly true in a real system. I continue to 
think that it will be approximately true in a good implementation of 
Ada.Execution_Time (modulo the provision on interrupt handlers etc), and 
that this is necessary if Ada.Execution_Time is to be used as Burns and 
Wellings propose.

> Even the various profilers I wrote for MS-DOS (the closest system to a bare 
> machine that will ever be in wide use) never had this property.

I can't comment on that now, but I would be interested to hear what you 
tried, and what happened.

>> Moreover, on a multi-process systems (an Ada program running under Windows 
>> or Linux, for example) some of the CPU time is spent on other processes, 
>> all of which would be "overhead" from the point of view of the Ada 
>> program. I don't think that the authors of D.14 had such systems in mind.
> 
> I disagree, in the sense that the ARG as a whole certainly considered the 
> use of this facility in all environments.

Aha. Well, as I said above on the question of Ada vs kernel vs OS, the 
package should be implementable under an OS like Windows or Linux, too. 
I don't think D.14 makes any attempt to exclude that, and why should it.

> (I find that it would be very 
> valuable for profiling on Windows, for instance, even if the results only 
> have a weak relationship to reality).

Yes, if all you want are the relative execution-time consumptions of the 
tasks in order to find the CPU hogs. And the quant-truncation problem 
may distort even the relative execution times considerably.

Randy wrote in another post:
 > It's clear that he [Niklas] would never be happy
 > with a Windows implementation of Ada.Execution_Time -- too bad,
 > because it still can be useful.

I probably would not be happy to use Windows at all for a real-time 
system, so my happiness with Ada.Execution_Time is perhaps moot.

But it is all a question of what accuracy is required and can be 
provided. If I understand the description of Windows quants correctly, 
an implementation of Ada.Execution_Time under Windows may have tolerable 
accuracy if the average span of non-interrupted, non-preempted, and 
non-suspended execution of a task is much larger than a quant, as the 
truncation of partially used quants then causes a relatively small error.

I don't know if the scheduling methods of Windows would let one make 
good use the measured execution times. I have no experience of Windows 
programming on that level, unfortunately.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 8%]

* Re: Ada.Execution_Time
  2010-12-29 14:30  3%                                   ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-29 16:19  5%                                     ` Ada.Execution_Time (see below)
  2010-12-29 20:32 10%                                     ` Ada.Execution_Time Niklas Holsti
@ 2010-12-30 19:23  5%                                     ` Niklas Holsti
  2 siblings, 0 replies; 170+ results
From: Niklas Holsti @ 2010-12-30 19:23 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Wed, 29 Dec 2010 14:48:20 +0200, Niklas Holsti wrote:
> 
>> Dmitry has agreed with some of my statements on this point, for example:
>>
>> - A task cannot accumulate execution time at a higher rate than real 
>> time. For example, in one real-time second the CPU_Time of a task cannot 
>> increase by more than one second.
> 
> Hold on, that is only true if a certain model of CPU_Time measurement is used.
> There are many potential models. The one we discussed was the model A:
> 
> Model A. Get an RTC reading upon activation. Each time CPU_Time is
> requested by Clock get another RTC reading, build the difference, add the
> accumulator to the result. Upon task deactivation, get the difference and
> update the accumulator.

[ cut ]

> Note that in either model the counter readings are rounded. Windows rounds
> toward zero, which is why you never get more load than 100%. But it is
> conceivable that some systems would round away from zero or to
> the nearest bound.

The RTEMS CPU Usage Statistics function in fact rounds up: "RTEMS keeps 
track of how many clock ticks have occurred which [should be "while"] 
the task being switched out has been executing. If the task has been 
running less than 1 clock tick, then for the purposes of the statistics, 
it is assumed to have executed 1 clock tick. This results in some 
inaccuracy but the alternative is for the task to have appeared to 
execute 0 clock ticks." (Quoted from 
http://www.rtems.org/onlinedocs/releases/rtemsdocs-4.9.4/share/rtems/pdf/c_user.pdf, 
page 285).

I think that rounding up may indeed be better (safer) for real-time 
scheduling and monitoring purposes. But of course this is just changing 
the direction of the inaccuracy, the principle stands.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-29 21:20  5%                                           ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-30  5:13 11%                                             ` Ada.Execution_Time Randy Brukardt
@ 2010-12-30 13:37  5%                                             ` Niklas Holsti
  1 sibling, 0 replies; 170+ results
From: Niklas Holsti @ 2010-12-30 13:37 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Wed, 29 Dec 2010 19:57:19 +0000, (see below) wrote:
> 
>> They are implementation dependent in the best of
>> circumstances, and so need to be specified by the implementer.
> 
> But Niklas seems to want more than merely documentation.

I'll answer that in a soon-to-appear answer to Randy.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-29 21:21  5%                                       ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-30 13:34 12%                                         ` Niklas Holsti
  0 siblings, 0 replies; 170+ results
From: Niklas Holsti @ 2010-12-30 13:34 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Wed, 29 Dec 2010 22:32:30 +0200, Niklas Holsti wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> Model C. Asynchronous task monitoring process
>> That sounds weird. Please clarify.
> 
> For example, in the kernel you have a timer interrupt each n us. Within the
> handler you get the current TCB and increment the CPU usage counter there
> by 1. CPU_Time returned by Clock yields Counter * n us. This is a quite
> lightweight scheme, which can be used for small and real-time systems. The
> overhead is constant.

Yes, that is one possible implementation, and not a bad one, although 
the CPU_Tick will probably be relatively large, and may have some 
jitter. The definition of Ada.Execution_Time.CPU_Tick as the *average* 
constant-Clock duration would come into play.

In principle this implementation is not much different from the simple, 
hardware-driven, directly readable counter of nanoseconds or CPU clock 
cycles. The only difference is that here the "counter" is driven by a 
timer-generated interrupt, not by a hardware clock generator, and it is 
easier to make task-specific counters.
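
Spelled out, the whole mechanism is just a per-task counter bumped on each
tick. A sketch with invented names; in a real kernel On_Timer_Tick would be
the protected interrupt handler (attached with Attach_Handler) and Running
would come from the current TCB:

   package Tick_Accounting is
      Tick      : constant Duration := 0.000_250;  --  n = 250 us, say
      Max_Tasks : constant := 16;
      type Task_Index is range 1 .. Max_Tasks;

      --  Called from the periodic timer interrupt with the index of
      --  the task that was running when the interrupt arrived.
      procedure On_Timer_Tick (Running : Task_Index);

      --  What an Ada.Execution_Time.Clock built on this scheme
      --  would report for task T.
      function CPU_Seconds (T : Task_Index) return Duration;
   end Tick_Accounting;

   package body Tick_Accounting is
      Tick_Counts : array (Task_Index) of Natural := (others => 0);

      procedure On_Timer_Tick (Running : Task_Index) is
      begin
         Tick_Counts (Running) := Tick_Counts (Running) + 1;
      end On_Timer_Tick;

      function CPU_Seconds (T : Task_Index) return Duration is
      begin
         return Tick * Tick_Counts (T);   --  Counter * n us
      end CPU_Seconds;
   end Tick_Accounting;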

>> Again, it holds within the accuracy of the measurement method and the 
>> time source, which is all that one can expect.
> 
> The error is not bounded. Only its deviation is bounded, e.g. x seconds per
> second of measurement.

Yes, as for different unsynchronized clocks in general. I don't think 
this is a problem for the intended uses of Ada.Execution_Time, as I 
understand them.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 12%]

* Task execution time 2
@ 2010-12-30  8:54  7% Dmitry A. Kazakov
  0 siblings, 0 replies; 170+ results
From: Dmitry A. Kazakov @ 2010-12-30  8:54 UTC (permalink / raw)


Here is a better test for the thing. It measures the execution time using
the real-time clock and Windows services. The idea is to give up the
processor after 0.5 ms, before the system time quant expires.

The test must run for a considerably long time, because the task Measured
should lose the processor every 0.5 ms and return only after the other
tasks have taken their share. If the test completes quickly, that possibly
means something is wrong; increase the number of worker tasks.
----------------------------------------------------------------
with Ada.Execution_Time;  use Ada.Execution_Time;
with Ada.Real_Time;       use Ada.Real_Time;
with Ada.Text_IO;         use Ada.Text_IO;

with Ada.Numerics.Elementary_Functions;

procedure Execution_Time_1 is
   task type Measured;
   task body Measured is
      Count     : Seconds_Count;
      Fraction  : Time_Span;
      Estimated : Time_Span := Time_Span_Zero;
      Start     : Time;
   begin
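      --  Busy-wait for 0.5 ms of real time, then "delay 0.0" to offer the
      --  processor back before the system time quant expires, as described
      --  in the text above.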
      for I in 1..1_000 loop
         Start := Clock;
         while To_Duration (Clock - Start) < 0.000_5 loop
            null;
         end loop;
         Estimated := Estimated + Clock - Start;
         delay 0.0;
      end loop;
      Split (Ada.Execution_Time.Clock, Count, Fraction);
      Put_Line
      (  "Measured: seconds" & Seconds_Count'Image (Count) &
         " Fraction " & Duration'Image (To_Duration (Fraction))
      );
      Put_Line
      (  "Estimated:" & Duration'Image (To_Duration (Estimated))
      );
   end Measured;

   task type Worker; -- Used to generate CPU load
   task body Worker is
      use Ada.Numerics.Elementary_Functions;
      X : Float;
   begin
      for I in Positive'Range loop
         X := sin (Float (I));
      end loop;
   end Worker;

begin
   delay 0.1;
   declare
      Workers : array (1..5) of Worker;
      Test    : Measured;
   begin
      null;
   end;
end Execution_Time_1;
-----------------------------------------------------------
Windows XP SP3

Measured: seconds 0 Fraction  0.000000000
Estimated: 0.690618774

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 7%]

* Re: Ada.Execution_Time
  2010-12-29 21:20  5%                                           ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-30  5:13 11%                                             ` Randy Brukardt
  2010-12-30 13:37  5%                                             ` Ada.Execution_Time Niklas Holsti
  1 sibling, 0 replies; 170+ results
From: Randy Brukardt @ 2010-12-30  5:13 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:xdzib16pw12u$.v5dybjo9t1sb.dlg@40tude.net...
> On Wed, 29 Dec 2010 19:57:19 +0000, (see below) wrote:
>...
>>> ARG defined Ada.Execution_Time in the most
>>> *reasonable* way, in particular, allowing it to deliver whatever garbage 
>>> the
>>> underlying OS service spits.
>>
>> I agree. What else could they do?
>> And if the implementation documents that, where is the harm?
>
> To me no harm.

Exactly. The issue is if you think that there is something more than that; 
for whatever reason Niklas seems to think there are requirements intended 
beyond that, and that simply isn't true. It's clear that he'd never be happy 
with a Windows implementation of Ada.Execution_Time -- too bad, because it 
still can be useful.

                              Randy.







^ permalink raw reply	[relevance 11%]

* Re: Ada.Execution_Time
  2010-12-29 12:48  8%                                 ` Ada.Execution_Time Niklas Holsti
  2010-12-29 14:30  3%                                   ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-30  5:06  2%                                   ` Randy Brukardt
  2010-12-30 23:49  8%                                     ` Ada.Execution_Time Niklas Holsti
  1 sibling, 1 reply; 170+ results
From: Randy Brukardt @ 2010-12-30  5:06 UTC (permalink / raw)


"Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
news:8o0p0lF94rU1@mid.individual.net...
> Randy, I'm glad that you are participating in this thread. My duologue 
> with Dmitry is becoming repetitive and our views entrenched.
>
> We have been discussing several things, although the focus is on the 
> intended meaning and properties of Ada.Execution_Time. As I am not an ARG 
> member I have based my understanding on the (A)RM text. If the text does 
> not reflect the intent of the ARG, I will be glad to know it, but perhaps 
> the ARG should then consider resolving the conflict by confirming or 
> changing the text.

Perhaps, but that presumes there is something wrong with the text. A lot is 
purposely left unspecified in the ARM; what causes problems is when people 
start reading stuff that is not there.

...

>> My understanding was that it was intended to provide a window into 
>> whatever facilities the underlying system had for execution "time" 
>> counting.
>
> Of course, as long as those facilities are good enough for the RM 
> requirements and for the users; otherwise, the implementor might improve 
> on the underlying system as required. The same holds for Ada.Real_Time. If 
> the underlying system is a bare-board Ada RTS, the Ada 95 form of the RTS 
> probably had to be extended to support Ada.Execution_Time.
>
> I'm sure that the proposers of the package Ada.Execution_Time expected the 
> implementation to use the facilities of the underlying system. But I am 
> also confident that they had in mind some specific uses of the package and 
> that these uses require that the values provided by Ada.Execution_Time 
> have certain properties that can reasonably be expected of "execution 
> time", whether or not these properties are expressly written as 
> requirements in the RM.

Probably, but quality of implementation is rarely specified in the Ada 
Standard. When it is, it generally is in the form of Implementation Advice 
(as opposed to hard requirements). The expectation is that implementers are 
going to provide the best implementation that they can -- implementers don't 
purposely build crippled or useless implementations. Moreover, that is 
*more* likely when the Standard is overspecified, simply because of the need 
to provide something that meets the standard.

> Examples of these uses are given in the paper by A. Burns and A.J. 
> Wellings, "Programming Execution-Time Servers in Ada 2005," pp.47-56, 27th 
> IEEE International Real-Time Systems Symposium (RTSS'06), 2006. 
> http://doi.ieeecomputersociety.org/10.1109/RTSS.2006.39.
>
> You put "time" in quotes, Randy. Don't you agree that there *is* a valid, 
> physical concept of "the execution time of a task" that can be measured in 
> units of physical time, seconds say? At least for processors that only 
> execute one task at a time, and whether or not the system provides 
> facilities for measuring this time?

I'm honestly not sure. The problem is that while such a concept might 
logically exist, as a practical matter it cannot be measured outside of the 
most controlled circumstances. Thus, that might make sense in a bare-board 
Ada implementation, but not in any implementation running on top of any OS 
or kernel. As such, whether the concept exists is more of an "angels on the 
head of a pin" question than anything of practical importance.

> I think that the concept exists and that it matches the description in RM 
> D.14 (11/2), using the background from D.2.1, whether or not the ARG 
> intended this match.
>
> If you agree that such "execution time" values are conceptually well 
> defined, do you not think that the "execution time counting facilities" of 
> real-time OSes are meant to measure these values, to some practical level 
> of accuracy?
>
> If so, then even if Ada.Execution_Time is intended as only a window into 
> these facilities, it is still intended to provide measures of the physical 
> execution time of tasks, to some practical level of accuracy.

The problem is that that "practical level of accuracy" isn't realistic. 
Moreover, I've always viewed such facilities as "profiling" ones -- it's the 
relative magnitudes of the values that matter, not the absolute values. In 
that case, the scale of the values is not particularly relevant.

Specifically, I mean that what is important is which task is taking a lot of 
CPU. In that case, it simply is the task that has a large "execution time" 
(whatever that means) compared to the others. Typically, that's more than 
100 times the usage of the other tasks, so the units involved are hardly 
relevant.

>> That had no defined relationship with what Ada calls "time".
>> As such, I think the name "execution time" is misleading (and I recall 
>> some discussions about that in the ARG), but no one had a better name 
>> that made any sense at all.
>
> Do you remember if these discussions concerned the name of the package, 
> the name of the type CPU_Time, or the very concept "execution time"? If 
> the question was of the terms "time" versus "duration", I think "duration" 
> would have been more consistent with earlier Ada usage, but "execution 
> time" is more common outside Ada, for example in the acronym WCET for 
> Worst-Case Execution Time.
>
> The fact that Ada.Execution_Time provides a subtraction operator for 
> CPU_Time that yields Time_Span, which can be further converted to 
> Duration, leads the RM reader to assume some relationship, at least that 
> spans of real time and spans of execution time can be measured in the same 
> physical units (seconds).
>
> It has already been said, and not only by me, that Ada.Execution_Time is 
> intended (among other things, perhaps) to be used for implementing task 
> scheduling algorithms that depend on the accumulated execution time of the 
> tasks. This is supported by the Burns and Wellings paper referenced above. 
> In such algorithms I believe it is essential that the execution times are 
> physical times because they are used in formulae that relate (sums of) 
> execution-time spans to spans of real time.

That would be a wrong interpretation of the algorithms, I think. (Either that, 
or the algorithms themselves are heavily flawed!). The important property is 
that all of the execution times have a reasonably proportional relationship 
to the actual time spent executing each task (that hypothetical concept); 
the absolute values shouldn't matter much (just as the exact priority values 
are mostly irrelevant to scheduling decisions). Moreover, when the values 
are close, one would hope that the algorithms don't change behavior much.

The values would be trouble if they bore no relationship at all to the 
"actual time spent executing the task", but it's hard to imagine any 
real-world facility in which that was the case.

...
...
>> In particular, there is no requirement in the RM or anywhere else that 
>> these "times" sum to any particular answer.
>
> I agree that the RM has no such explicit requirement. I made this claim to 
> counter Dmitry's assertion that CPU_Time has no physical meaning, and of 
> course I accept that the sum will usually be less than real elapsed time 
> because the processor spends some time on non-task activities.
>
> The last sentence of RM D.14 (11/2) says "It is implementation defined 
> which task, if any, is charged the execution time that is consumed by 
> interrupt handlers and run-time services on behalf of the system". This 
> sentence strongly suggests to me that the author of this paragraph had in 
> mind that the total available execution time (span) equals the real time 
> (span), that some of this total is charged to the tasks, but that some of 
> the time spent in interrupt handlers etc. need not be charged to tasks.
>
> The question is how much meaning should be read into ordinary words like 
> "time" when used in the RM without a formal definition.
>
> If the RM were to say that L is the length of a piece of string S, 
> measured in meters, and that some parts of S are colored red, some blue, 
> and some parts may not be colored at all, surely we could conclude that 
> the sum of the lengths in meters of the red, blue, and uncolored parts 
> equals L? And that the sum of the lengths of the red and blue parts is at 
> most L? And that, since we like colorful things, we hope that the length 
> of the uncolored part is small?
>
> I think the case of summing task execution time spans is analogous.

I think you are inventing things. There is no such requirement in the 
standard, and that's good: I've never seen a real system in which this has 
been true.

Even the various profilers I wrote for MS-DOS (the closest system to a bare 
machine that will ever be in wide use) never had this property. I used to 
think that it was some sort of bug in my methods, but even using completely 
different ways of measuring time (counting ticks at subprogram heads vs. 
statistical probes -- I tried both) the effects still showed up. I've pretty 
much concluded that is is simply part of the nature of computer time -- much 
like floating point, it is an incomplete abstraction of the "real" time, and 
expecting too much out of it is going to lead immediately to disappointment.

>> I don't quite see how there could be, unless you were going to require a 
>> tailored Ada target system (which is definitely not going to be a 
>> requirement).
>
> I don't want such a requirement. The acceptable overhead (fraction of 
> execution time not charged to tasks) depends on the application.
>
> Moreover, on a multi-process systems (an Ada program running under Windows 
> or Linux, for example) some of the CPU time is spent on other processes, 
> all of which would be "overhead" from the point of view of the Ada 
> program. I don't think that the authors of D.14 had such systems in mind.

I disagree, in the sense that the ARG as a whole certainly considered the 
use of this facility in all environments. (I find that it would be very 
valuable for profiling on Windows, for instance, even if the results only 
have a weak relationship to reality).

It's possible that the people who proposed it originally were thinking as 
you are, but the ARG modified those proposals quite a bit; the result is 
definitely a team effort and not the work of any particular individual.

                               Randy.





^ permalink raw reply	[relevance 2%]

* Re: Ada.Execution_Time
  2010-12-29 20:32 10%                                     ` Ada.Execution_Time Niklas Holsti
@ 2010-12-29 21:21  5%                                       ` Dmitry A. Kazakov
  2010-12-30 13:34 12%                                         ` Ada.Execution_Time Niklas Holsti
  0 siblings, 1 reply; 170+ results
From: Dmitry A. Kazakov @ 2010-12-29 21:21 UTC (permalink / raw)


On Wed, 29 Dec 2010 22:32:30 +0200, Niklas Holsti wrote:

> Dmitry A. Kazakov wrote:
>> 
>> Model C. Asynchronous task monitoring process
> 
> That sounds weird. Please clarify.

For example, in the kernel you have a timer interrupt each n us. Within the
handler you get the current TCB and increment the CPU usage counter there
by 1. CPU_Time returned by Clock yields Counter * n us. This is a quite
lightweight scheme, which can be used for small and real-time systems. The
overhead is constant.

> Again, it holds within the accuracy of the measurement method and the 
> time source, which is all that one can expect.

The error is not bounded. Only its deviation is bounded, e.g. x seconds per
second of measurement.

>> Time as physical concept is not absolute. There is no *the* real time, but
>> many real times and even more unreal ones.
> 
> When RM D.14(11/2) defines "the execution time of a given task" as "the 
> time spent by the system executing that task", the only reasonable 
> reading of the second "time" is as the common-sense physical time, as 
> measured by your wrist-watch or by some more precise clock.

Which physical experiment could prove or disprove that a given
implementation is in agreement with this definition? Execution time is not
observable.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-29 19:57  4%                                         ` Ada.Execution_Time (see below)
@ 2010-12-29 21:20  5%                                           ` Dmitry A. Kazakov
  2010-12-30  5:13 11%                                             ` Ada.Execution_Time Randy Brukardt
  2010-12-30 13:37  5%                                             ` Ada.Execution_Time Niklas Holsti
  0 siblings, 2 replies; 170+ results
From: Dmitry A. Kazakov @ 2010-12-29 21:20 UTC (permalink / raw)


On Wed, 29 Dec 2010 19:57:19 +0000, (see below) wrote:

> They are implementation dependent in the best of
> circumstances, and so need to be specified by the implementer.

But Niklas seems to want more than merely documentation.

> I've no idea what "blocked without losing the CPU" means.

That is when you access something over the system bus from the task and
have to wait for the bus to become free.

There is also kernel time spent on OS bookkeeping and on I/O initiated by
other tasks. It is impossible to tell what is done on a task's behalf and
what is not. D.14 (11/2) leaves everything to the implementation.

>> ARG defined Ada.Execution_Time in the most
>> *reasonable* way, in particular, allowing it to deliver whatever garbage the
>> underlying OS service spits.
> 
> I agree. What else could they do?
> And if the implementation documents that, where is the harm?

To me no harm.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-29 14:30  3%                                   ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-29 16:19  5%                                     ` Ada.Execution_Time (see below)
@ 2010-12-29 20:32 10%                                     ` Niklas Holsti
  2010-12-29 21:21  5%                                       ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-30 19:23  5%                                     ` Ada.Execution_Time Niklas Holsti
  2 siblings, 1 reply; 170+ results
From: Niklas Holsti @ 2010-12-29 20:32 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Wed, 29 Dec 2010 14:48:20 +0200, Niklas Holsti wrote:
> 
>> Dmitry has agreed with some of my statements on this point, for example:
>>
>> - A task cannot accumulate execution time at a higher rate than real 
>> time. For example, in one real-time second the CPU_Time of a task cannot 
>> increase by more than one second.
> 
> Hold on, that is only true if a certain model of CPU_Time measurement is used.

In my view, it is true within the accuracy of the execution-time 
measurement method.

> There are many potential models. The one we discussed was the model A:
> 
> Model A. Get an RTC reading upon activation. Each time CPU_Time is
> requested by Clock get another RTC reading, build the difference, add the
> accumulator to the result. Upon task deactivation, get the difference and
> update the accumulator.

OK. This is, I think, the most natural model, perhaps with some 
processor performance counter or CPU-clock-cycle counter replacing the RTC.
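
A minimal sketch of that bookkeeping, assuming hypothetical scheduler hooks
(On_Activation / On_Deactivation are invented names, and Ada.Real_Time.Clock
stands in for the RTC):

with Ada.Real_Time; use Ada.Real_Time;
with Ada.Text_IO;

procedure Model_A_Sketch is
   Accumulated  : Time_Span := Time_Span_Zero;
   Activated_At : Time      := Clock;

   --  Invented stand-in for the scheduler giving the task the processor.
   procedure On_Activation is
   begin
      Activated_At := Clock;
   end On_Activation;

   --  Invented stand-in for the scheduler taking the processor away.
   procedure On_Deactivation is
   begin
      Accumulated := Accumulated + (Clock - Activated_At);
   end On_Deactivation;

   --  What Ada.Execution_Time.Clock would report while the task is running.
   function CPU_Time_Now return Time_Span is
   begin
      return Accumulated + (Clock - Activated_At);
   end CPU_Time_Now;

begin
   On_Activation;
   delay 0.01;   --  stand-in for one dispatch interval of "work"
   Ada.Text_IO.Put_Line
     ("Running, so far:" & Duration'Image (To_Duration (CPU_Time_Now)));
   On_Deactivation;
   Ada.Text_IO.Put_Line
     ("Charged in total:" & Duration'Image (To_Duration (Accumulated)));
end Model_A_Sketch;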

> This is a very strong model. Weaker models:
> 
> Model A.1. Get RTC upon activation and deactivation. Update the accumulator
> upon deactivation. When the task is active CPU_Time does not change.

That model is not permitted, because the value of 
Ada.Execution_Time.Clock must change at every "CPU tick". The duration 
of a CPU tick is Ada.Execution_Time.CPU_Tick, which is at most one 
millisecond (RM D.14(20/2)).

It is true that CPU_Tick is only the "average length" of the 
constant-Clock intervals, but the implementation is also required to 
document an upper bound, which also forbids your model A.1.

(I think that this "average" definition caters for implementations where 
the execution time counter is incremented by an RTC interrupt handler 
that may suffer some timing jitter.)

> Model B. Use a time source different from RTC. This is what Windows
> actually does.

I don't think that this is a different "model". No time source provides 
ideal, exact real time. If the time source for Ada.Execution_Time.Clock 
differs much from real time, the accuracy of the implementation is poor 
to that extent. It does not surprise me that this happens on Windows.

I admit that the RM does not specify any required accuracy for 
Ada.Execution_Time.Clock. The accuracy required for 
execution-time-dependent scheduling algorithms is generally low, I believe.

Analogously, there are no accuracy requirements on Ada.Real_Time.Clock.

> Model B.1. Like A.1, CPU_Time freezes when the task is active.

Forbidden like A.1 above.

> Model C. Asynchronous task monitoring process

That sounds weird. Please clarify.

> Note that in either model the counter readings are rounded. Windows rounds
> toward zero, which is why you never get more load than 100%. But it is
> conceivable, and to be expected, that some systems would round away from zero
> or to the nearest bound. So the statement holds only if you have A (maybe C)
> plus a corresponding rounding.

So it holds within the accuracy of the measurement method, which often 
involves some sampling or rounding error. In my view.

>> - If only one task is executing on a processor, the execution time of 
>> that task increases (or "could increase") at the same rate as real time.
> 
> This also may be wrong if a B model is used. In particular, task switching
> may be (and I think is) driven by the programmable timer interrupts. The
> real-time clock may be driven by the TSC. Since these two are physically
> different, unsynchronized time sources, the effect can be anything. A
> systematic error accumulating over time is to be expected.

Again, it holds within the accuracy of the measurement method and the 
time source, which is all that one can expect.

The points I made were meant to show how CPU_Time is related, in 
principle, to real time. I entirely accept that in practice the 
relationships will be affected by measurement inaccuracies.

For the task scheduling methods that depend on actual execution times, I 
believe that long-term drifts or accumulations of errors in CPU_Time are 
unimportant. The execution time (span) measurements need to be 
reasonably accurate only over time spans similar to the period of the 
longest-period task. The overhead (execution time not charged to tasks) 
will probably be much larger, both in mean value and in variability, 
than the time-source errors.

As discussed earlier, the time source for Ada.Real_Time.Clock that 
determines when time-driven tasks are activated may need higher fidelity 
to real time.

>> The question is how much meaning should be read into ordinary words like 
>> "time" when used in the RM without a formal definition.
> 
> Time as physical concept is not absolute. There is no *the* real time, but
> many real times and even more unreal ones.

When RM D.14(11/2) defines "the execution time of a given task" as "the 
time spent by the system executing that task", the only reasonable 
reading of the second "time" is as the common-sense physical time, as 
measured by your wrist-watch or by some more precise clock.

Let's not go into relativity and quantum mechanics for this.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 10%]

* Re: Ada.Execution_Time
  2010-12-29 16:51 10%                                       ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-29 19:57  4%                                         ` (see below)
  2010-12-29 21:20  5%                                           ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 170+ results
From: (see below) @ 2010-12-29 19:57 UTC (permalink / raw)


On 29/12/2010 16:51, in article jw9ocxiajasa.142oku2z0e6rx$.dlg@40tude.net,
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote:

> On Wed, 29 Dec 2010 16:19:13 +0000, (see below) wrote:
> 
>> From this I deduce that the intent for CPU_Time is that it be a useful
>> approximation to the sum of the durations (small "d") of the intervals of
>> local inertial-frame physical time in which the task is in the running
>> state.
> 
> There exist more pragmatic considerations than relativity theory. The
> duration you refer to above is according to which clock?

I referred to time dilation to preempt your bringing it up. 8-)

> - Real time CPU counter
> - Programmable timer
> - BIOS clock
> - OS system clock
> - Ada.Real_Time.Clock
> - Ada.Calendar.Clock
> - an NTP server from the given list ...
>   ...

Quite so, but these issues, and rounding, and so on, are all subsumed under
"useful approximation".  They are implementation dependent in the best of
circumstances, and so need to be specified by the implementer.

>> It seems to me that the only grey area is the degree of approximation that
>> is acceptable for the result to be "useful".
> 
> That is the next can of worms to open. Once you decided which clock you
> take, you would have to define the sum of *which* durations according to
> this clock you are going to approximate. This would be way more difficult
> and IMO impossible. What is the "CPU" in the presence of many cores? What does
> it mean for the task to be "active" keeping in mind the cases it could get
> blocked without losing the "CPU" (defined above)?

I think you are creating unnecessary difficulties. Note that I said
nothing about CPUs, only about process states. The elapsed time between
dispatching a task/process and pre-empting or blocking it is a well defined
physical quantity. It has nothing to do with cores, and I've no idea what
"blocked without losing the CPU" means. In my dictionary that is simply
self-contradictory. But if it does mean something in some implementation,
all that is necessary is to inform of the approximation it gives rise to.

> In my humble opinion,

!-)

> ARG defined Ada.Execution_Time in the most
> *reasonable* way, in particular, allowing it to deliver whatever garbage the
> underlying OS service spits out.

I agree. What else could they do?
And if the implementation documents that, where is the harm?

-- 
Bill Findlay
with blueyonder.co.uk;
use  surname & forename;






^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-29 16:19  5%                                     ` Ada.Execution_Time (see below)
@ 2010-12-29 16:51 10%                                       ` Dmitry A. Kazakov
  2010-12-29 19:57  4%                                         ` Ada.Execution_Time (see below)
  0 siblings, 1 reply; 170+ results
From: Dmitry A. Kazakov @ 2010-12-29 16:51 UTC (permalink / raw)


On Wed, 29 Dec 2010 16:19:13 +0000, (see below) wrote:

> From this I deduce that the intent for CPU_Time is that it be a useful
> approximation to the sum of the durations (small "d") of the intervals of
> local inertial-frame physical time in which the task is in the running
> state.

There exist more pragmatic considerations than relativity theory. The
duration you refer to above is according to which clock?

- Real time CPU counter
- Programmable timer
- BIOS clock
- OS system clock
- Ada.Real_Time.Clock
- Ada.Calendar.Clock
- an NTP server from the given list ...
  ...

> It seems to me that the only grey area is the degree of approximation that
> is acceptable for the result to be "useful".

That is the next can of worms to open. Once you decided which clock you
take, you would have to define the sum of *which* durations according to
this clock you are going to approximate. This would be way more difficult
and IMO impossible. What is the "CPU" in the presence of many cores? What does
it mean for the task to be "active" keeping in mind the cases it could get
blocked without losing the "CPU" (defined above)?

In my humble opinion, ARG defined Ada.Execution_Time in the most
*reasonable* way, in particular, allowing it to deliver whatever garbage the
underlying OS service spits out.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 10%]

* Re: Ada.Execution_Time
  2010-12-29 14:30  3%                                   ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-29 16:19  5%                                     ` (see below)
  2010-12-29 16:51 10%                                       ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-29 20:32 10%                                     ` Ada.Execution_Time Niklas Holsti
  2010-12-30 19:23  5%                                     ` Ada.Execution_Time Niklas Holsti
  2 siblings, 1 reply; 170+ results
From: (see below) @ 2010-12-29 16:19 UTC (permalink / raw)


On 29/12/2010 14:30, in article aooml6t0ezs4.4srxtfm9z00r.dlg@40tude.net,
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote:

> On Wed, 29 Dec 2010 14:48:20 +0200, Niklas Holsti wrote:
> 
>> Dmitry has agreed with some of my statements on this point, for example:
>> 
>> - A task cannot accumulate execution time at a higher rate than real
>> time. For example, in one real-time second the CPU_Time of a task cannot
>> increase by more than one second.
> 
> Hold on, that is only true if a certain model of CPU_Time measurement is used.
> There are many potential models. The one we discussed was the model A:
> 
> Model A. Get an RTC reading upon activation. Each time CPU_Time is
> requested by Clock get another RTC reading, build the difference, add the
> accumulator to the result. Upon task deactivation, get the difference and
> update the accumulator.
> 
> This is a very strong model. Weaker models:
> 
> Model A.1. Get RTC upon activation and deactivation. Update the accumulator
> upon deactivation. When the task is active CPU_Time does not change.
> 
> Model B. Use a time source different from RTC. This is what Windows
> actually does.
> 
> Model B.1. Like A.1, CPU_Time freezes when the task is active.
> 
> Model C. Asynchronous task monitoring process
> 
> ...
> Note that in either model the counter readings are rounded. ...
> 
>> - If only one task is executing on a processor, the execution time of
>> that task increases (or "could increase") at the same rate as real time.
> 
> This also may be wrong if a B model is used. ...
> 
>> The question is how much meaning should be read into ordinary words like
>> "time" when used in the RM without a formal definition.
> 
> Time as physical concept is not absolute. There is no *the* real time, but
> many real times and even more unreal ones.  ...

I hope we can agree that Ada is defined "sensibly".

From this I deduce that the intent for CPU_Time is that it be a useful
approximation to the sum of the durations (small "d") of the intervals of
local inertial-frame physical time in which the task is in the running
state.

It seems to me that the only grey area is the degree of approximation that
is acceptable for the result to be "useful".

Dmitri raises some devil's-advocate issues around that. Some of them might be
considered to be dismissed by the assumption of sensible definition. Others
might not be so clear. Perhaps the ARG consider these to be issues of
implementation quality rather than semantics.

-- 
Bill Findlay
with blueyonder.co.uk;
use  surname & forename;





^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-29 12:48  8%                                 ` Ada.Execution_Time Niklas Holsti
@ 2010-12-29 14:30  3%                                   ` Dmitry A. Kazakov
  2010-12-29 16:19  5%                                     ` Ada.Execution_Time (see below)
                                                       ` (2 more replies)
  2010-12-30  5:06  2%                                   ` Ada.Execution_Time Randy Brukardt
  1 sibling, 3 replies; 170+ results
From: Dmitry A. Kazakov @ 2010-12-29 14:30 UTC (permalink / raw)


On Wed, 29 Dec 2010 14:48:20 +0200, Niklas Holsti wrote:

> Dmitry has agreed with some of my statements on this point, for example:
> 
> - A task cannot accumulate execution time at a higher rate than real 
> time. For example, in one real-time second the CPU_Time of a task cannot 
> increase by more than one second.

Hold on, that is only true if a certain model of CPU_Time measurement is used.
There are many potential models. The one we discussed was the model A:

Model A. Get an RTC reading upon activation. Each time CPU_Time is
requested by Clock get another RTC reading, build the difference, add the
accumulator to the result. Upon task deactivation, get the difference and
update the accumulator.

This is a very strong model. Weaker models:

Model A.1. Get RTC upon activation and deactivation. Update the accumulator
upon deactivation. When the task is active CPU_Time does not change.

Model B. Use a time source different from RTC. This is what Windows
actually does.

Model B.1. Like A.1, CPU_Time freezes when the task is active.

Model C. Asynchronous task monitoring process

...

Note that in either model the counter readings are rounded. Windows rounds
toward zero, which is why you never get more load than 100%. But it is
conceivable, and to be expected, that some systems would round away from zero
or to the nearest bound. So the statement holds only if you have A (maybe C)
plus a corresponding rounding.

> - If only one task is executing on a processor, the execution time of 
> that task increases (or "could increase") at the same rate as real time.

This also may be wrong if a B model is used. In particular, task switching
may be (and I think is) driven by the programmable timer interrupts. The
real-time clock may be driven by the TSC. Since these two are physically
different, unsynchronized time sources, the effect can be anything. A
systematic error accumulating over time is to be expected.

> The question is how much meaning should be read into ordinary words like 
> "time" when used in the RM without a formal definition.

Time as physical concept is not absolute. There is no *the* real time, but
many real times and even more unreal ones. I don't think the RM can go into
this. Not only because it would not be Ada's business, but because
otherwise it would have to use some reference time. Ada does not have this;
it intentionally refused to have one when it introduced Ada.Real_Time.Time. The
same arguments which were used then apply now. CPU_Time is a third time, by
default absolutely independent of the other two.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 3%]

* Re: Ada.Execution_Time
  2010-12-27 22:11  4%                               ` Ada.Execution_Time Randy Brukardt
@ 2010-12-29 12:48  8%                                 ` Niklas Holsti
  2010-12-29 14:30  3%                                   ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-30  5:06  2%                                   ` Ada.Execution_Time Randy Brukardt
  0 siblings, 2 replies; 170+ results
From: Niklas Holsti @ 2010-12-29 12:48 UTC (permalink / raw)


Randy, I'm glad that you are participating in this thread. My duologue 
with Dmitry is becoming repetitive and our views entrenched.

We have been discussing several things, although the focus is on the 
intended meaning and properties of Ada.Execution_Time. As I am not an 
ARG member I have based my understanding on the (A)RM text. If the text 
does not reflect the intent of the ARG, I will be glad to know it, but 
perhaps the ARG should then consider resolving the conflict by 
confirming or changing the text.

Randy Brukardt wrote:
> "Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
> news:8nm30fF7r9U1@mid.individual.net...
>> Dmitry A. Kazakov wrote:
>>> On Sat, 18 Dec 2010 23:20:20 +0200, Niklas Holsti wrote:
> ...
>>> On a such platform the implementation would be as perverse as RM D.14
>>> is. But the perversion is only because of the interpretation.
>> Bah. I think that when RM D.14 says "time", it really means time. You
>> think it means something else, perhaps a CPU cycle count. I think the 
>> burden of proof is on you.
>>
>> It seems evident to me that the text in D.14 must be interpreted using
>> the concepts in D.2.1, "The Task Dispatching Model", which clearly
>> specifies real-time points when a processor starts to execute a task and
>> stops executing a task. To me, and I believe to most readers of the RM,
>> the execution time of a task is the sum of these time slices, thus a
>> physical, real time.
> 
> For the record, I agree more with Dmitry than Niklas here. At least the 
> interpretation *I* had when this package was proposed was that it had only a 
> slight relationship to real-time.

Oh. What would that slight relationship be? Or was it left unspecified?

> My understanding was that it was intended 
> to provide a window into whatever facilities the underlying system had for 
> execution "time" counting.

Of course, as long as those facilities are good enough for the RM 
requirements and for the users; otherwise, the implementor might improve 
on the underlying system as required. The same holds for Ada.Real_Time. 
If the underlying system is a bare-board Ada RTS, the Ada 95 form of the 
RTS probably had to be extended to support Ada.Execution_Time.

I'm sure that the proposers of the package Ada.Execution_Time expected 
the implementation to use the facilities of the underlying system. But I 
am also confident that they had in mind some specific uses of the 
package and that these uses require that the values provided by 
Ada.Execution_Time have certain properties that can reasonably be 
expected of "execution time", whether or not these properties are 
expressly written as requirements in the RM.

Examples of these uses are given in the paper by A. Burns and A.J. 
Wellings, "Programming Execution-Time Servers in Ada 2005," pp.47-56, 
27th IEEE International Real-Time Systems Symposium (RTSS'06), 2006. 
http://doi.ieeecomputersociety.org/10.1109/RTSS.2006.39.
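
For concreteness, that style of use looks roughly like the following minimal
sketch, assuming a runtime that actually implements D.14.1 (the budget value,
the names and the fallback policy are invented, and error handling such as
Timer_Resource_Error is omitted):

with Ada.Execution_Time.Timers;

package Budget_Control is

   protected Overrun is
      pragma Interrupt_Priority;  --  ceiling must be >= Timers.Min_Handler_Ceiling
      procedure Handle (TM : in out Ada.Execution_Time.Timers.Timer);
      function Fired return Boolean;
   private
      Expired : Boolean := False;
   end Overrun;

end Budget_Control;

package body Budget_Control is

   protected body Overrun is

      procedure Handle (TM : in out Ada.Execution_Time.Timers.Timer) is
      begin
         Expired := True;  --  the monitored task has used up its CPU budget
      end Handle;

      function Fired return Boolean is
      begin
         return Expired;
      end Fired;

   end Overrun;

end Budget_Control;

with Ada.Execution_Time.Timers;
with Ada.Real_Time;
with Ada.Task_Identification;
with Ada.Text_IO;
with Budget_Control;

procedure Budget_Sketch is
   Me : aliased constant Ada.Task_Identification.Task_Id :=
          Ada.Task_Identification.Current_Task;
   Budget_Timer : Ada.Execution_Time.Timers.Timer (Me'Access);
   X : Long_Float := 0.0;
begin
   --  Fire after 10 ms of *execution* time (not real time) of this task.
   Ada.Execution_Time.Timers.Set_Handler
     (Budget_Timer, Ada.Real_Time.Milliseconds (10),
      Budget_Control.Overrun.Handle'Access);

   --  CPU-bound work; fall back to something cheaper once the budget is gone.
   for I in 1 .. 100_000_000 loop
      exit when Budget_Control.Overrun.Fired;
      X := X + 1.0 / Long_Float (I);
   end loop;

   Ada.Text_IO.Put_Line
     ("Budget exhausted: " & Boolean'Image (Budget_Control.Overrun.Fired) &
      ", X =" & Long_Float'Image (X));
end Budget_Sketch;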

You put "time" in quotes, Randy. Don't you agree that there *is* a 
valid, physical concept of "the execution time of a task" that can be 
measured in units of physical time, seconds say? At least for processors 
that only execute one task at a time, and whether or not the system 
provides facilities for measuring this time?

I think that the concept exists and that it matches the description in 
RM D.14 (11/2), using the background from D.2.1, whether or not the ARG 
intended this match.

If you agree that such "execution time" values are conceptually well 
defined, do you not think that the "execution time counting facilities" 
of real-time OSes are meant to measure these values, to some practical 
level of accuracy?

If so, then even if Ada.Execution_Time is intended as only a window into 
these facilities, it is still intended to provide measures of the 
physical execution time of tasks, to some practical level of accuracy.

> That had no defined relationship with what Ada calls "time".
> As such, I think the name "execution time" is misleading (and 
> I recall some discussions about that in the ARG), but no one had a better 
> name that made any sense at all.

Do you remember if these discussions concerned the name of the package, 
the name of the type CPU_Time, or the very concept "execution time"? If 
the question was of the terms "time" versus "duration", I think 
"duration" would have been more consistent with earlier Ada usage, but 
"execution time" is more common outside Ada, for example in the acronym 
WCET for Worst-Case Execution Time.

The fact that Ada.Execution_Time provides a subtraction operator for 
CPU_Time that yields Time_Span, which can be further converted to 
Duration, leads the RM reader to assume some relationship, at least that 
spans of real time and spans of execution time can be measured in the 
same physical units (seconds).
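
A minimal illustration of that operator chain (the busy loop is just invented
CPU-bound work):

with Ada.Execution_Time;
with Ada.Real_Time;
with Ada.Text_IO;

procedure Show_Subtraction is
   use type Ada.Execution_Time.CPU_Time;

   Before : constant Ada.Execution_Time.CPU_Time := Ada.Execution_Time.Clock;
   X      : Long_Float := 1.0;
begin
   for I in 1 .. 1_000_000 loop        --  some CPU-bound work
      X := X + 1.0 / Long_Float (I);
   end loop;

   declare
      Used : constant Ada.Real_Time.Time_Span :=
               Ada.Execution_Time.Clock - Before;   --  CPU_Time - CPU_Time
   begin
      Ada.Text_IO.Put_Line
        ("Execution time used:" &
         Duration'Image (Ada.Real_Time.To_Duration (Used)) & " s, X =" &
         Long_Float'Image (X));
   end;
end Show_Subtraction;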

It has already been said, and not only by me, that Ada.Execution_Time is 
intended (among other things, perhaps) to be used for implementing task 
scheduling algorithms that depend on the accumulated execution time of 
the tasks. This is supported by the Burns and Wellings paper referenced 
above. In such algorithms I believe it is essential that the execution 
times are physical times because they are used in formulae that relate 
(sums of) execution-time spans to spans of real time.

Dmitry has agreed with some of my statements on this point, for example:

- A task cannot accumulate execution time at a higher rate than real 
time. For example, in one real-time second the CPU_Time of a task cannot 
increase by more than one second.

- If only one task is executing on a processor, the execution time of 
that task increases (or "could increase") at the same rate as real time.

Do you agree that we can expect these statements to be true? (On the 
second point, system overhead should of course be taken into account, on 
which more below.)

> In particular, there is no requirement in the RM or anywhere else that these 
> "times" sum to any particular answer.

I agree that the RM has no such explicit requirement. I made this claim 
to counter Dmitry's assertion that CPU_Time has no physical meaning, and 
of course I accept that the sum will usually be less than real elapsed 
time because the processor spends some time on non-task activities.

The last sentence of RM D.14 (11/2) says "It is implementation defined 
which task, if any, is charged the execution time that is consumed by 
interrupt handlers and run-time services on behalf of the system". This 
sentence strongly suggests to me that the author of this paragraph had 
in mind that the total available execution time (span) equals the real 
time (span), that some of this total is charged to the tasks, but that 
some of the time spent in interrupt handlers etc. need not be charged to 
tasks.

The question is how much meaning should be read into ordinary words like 
"time" when used in the RM without a formal definition.

If the RM were to say that L is the length of a piece of string S, 
measured in meters, and that some parts of S are colored red, some blue, 
and some parts may not be colored at all, surely we could conclude that 
the sum of the lengths in meters of the red, blue, and uncolored parts 
equals L? And that the sum of the lengths of the red and blue parts is 
at most L? And that, since we like colorful things, we hope that the 
length of the uncolored part is small?

I think the case of summing task execution time spans is analogous.

> I don't quite see how there could be, 
> unless you were going to require a tailored Ada target system (which is 
> definitely not going to be a requirement).

I don't want such a requirement. The acceptable overhead (fraction of 
execution time not charged to tasks) depends on the application.

Moreover, on a multi-process system (an Ada program running under 
Windows or Linux, for example) some of the CPU time is spent on other 
processes, all of which would be "overhead" from the point of view of 
the Ada program. I don't think that the authors of D.14 had such systems 
in mind.

> Perhaps the proposers (from the IRTAW meetings) had something else in mind, 
> but if so, they communicated it very poorly.

Do you remember who they were? Are the IRTAW minutes or proposals 
accessible on the web?

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 8%]

* Re: Ada.Execution_Time
  2010-12-28 22:39  5%                                                     ` Ada.Execution_Time Simon Wright
@ 2010-12-29  9:07  5%                                                       ` Dmitry A. Kazakov
  0 siblings, 0 replies; 170+ results
From: Dmitry A. Kazakov @ 2010-12-29  9:07 UTC (permalink / raw)


On Tue, 28 Dec 2010 22:39:31 +0000, Simon Wright wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> 
>> Yes, the TSC can be read any time, and, if I correctly remember, each
>> reading will give a new value. PPC RT clock is also reliable.
>>
>> There could be certain issues for multi-core processors, you should
>> take care that the task synchronizing the counter with UTC would not
>> jump from core to core. Alternatively you should synchronize clocks of
>> individual cores.
> 
> Using the TSC on a MacBook Pro gives very unreliable results (I rather
> think the core you're using goes to sleep and you may or may not wake
> up using the same core!).

Can it be because the processor changes the TSC frequency when it goes into
the sleep mode? I thought Intel has fixed that bug.

I cannot think out a good time service for a multi-core with unsynchronized
TSC's. Intel should simply fix that mess.

> However, the system clock (Ada.Calendar) is precise to a microsecond.

It likely uses a programmable timer. TSC gives fraction of nanosecond.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 5%]

* Re: An Example for Ada.Execution_Time
  2010-12-28  2:31 14% ` BrianG
  2010-12-28 13:43  9%   ` anon
@ 2010-12-29  3:10  9%   ` Randy Brukardt
  2010-12-30 23:51  4%     ` BrianG
  1 sibling, 1 reply; 170+ results
From: Randy Brukardt @ 2010-12-29  3:10 UTC (permalink / raw)


"BrianG" <briang000@gmail.com> wrote in message 
news:ifbi5c$rqt$1@news.eternal-september.org...
...
> I asked for:
>    >> An algorithm comparison program might look like:
>    >>
>    >> with Ada.Execution_Time ;
>    >> with Ada.Execution_Time.Timers ;
>    >Given the below program, please add some of the missing details to
>    >show how this can be useful without also "with Ada.Real_Time".
>    >Neither Execution_Time or Execution_Time.Timers provides any value
>    >that can be used directly.

This seems like a totally silly question. There are a lot of well-designed 
packages in Ada that don't do anything useful without at least one or more 
other packages. Indeed, if you consider "Standard" to be a separate package 
(and it is), there are hardly any packages that *don't* require some other 
package to be useful.

More to the point, you probably need Ada.IO_Exceptions to use 
Ada.Directories effectively (use of any package without error handling is 
toy use); Ada.Streams.Stream_IO requires use of Ada.Streams (a separate 
package, which you will need separate use clauses for even if you get it 
imported automatically); Ada.Strings.Maps aren't useful for anything unless 
you combine them with one of the string handling packages, and so on.

Perhaps you would have been happier if Ada.Execution_Time had been a child 
of Ada.Real_Time (exactly as in the string case), but this wouldn't change 
anything.

The odd thing is that Duration is defined in Standard, rather than some more 
appropriate package. Giving this some sort of religious importance is beyond 
silly...

                                   Randy.





^ permalink raw reply	[relevance 9%]

* Re: Ada.Execution_Time
  2010-12-28 20:03  5%                                                   ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-28 22:39  5%                                                     ` Simon Wright
  2010-12-29  9:07  5%                                                       ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 170+ results
From: Simon Wright @ 2010-12-28 22:39 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> Yes, the TSC can be read any time, and, if I correctly remember, each
> reading will give a new value. PPC RT clock is also reliable.
>
> There could be certain issues for multi-core processors, you should
> take care that the task synchronizing the counter with UTC would not
> jump from core to core. Alternatively you should synchronize clocks of
> individual cores.

Using the TSC on a MacBook Pro gives very unreliable results (I rather
think the core you're using goes to sleep and you may or may not wake
up using the same core!). However, the system clock (Ada.Calendar) is
precise to a microsecond.



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-28 19:41  5%                                                 ` Ada.Execution_Time (see below)
@ 2010-12-28 20:03  5%                                                   ` Dmitry A. Kazakov
  2010-12-28 22:39  5%                                                     ` Ada.Execution_Time Simon Wright
  0 siblings, 1 reply; 170+ results
From: Dmitry A. Kazakov @ 2010-12-28 20:03 UTC (permalink / raw)


On Tue, 28 Dec 2010 19:41:40 +0000, (see below) wrote:

> On 28/12/2010 16:55, in article 1oq6oggi7rtzj.4u4yyq6m8r74$.dlg@40tude.net,
> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote:
> 
>> On Tue, 28 Dec 2010 16:27:18 +0000, (see below) wrote:
>> 
>>> Dmitri reports a modern computer with a timer having a resolution that is
>>> thousands or millions of times worse than the CPU's logic clock.
>> 
>> You get me wrong, the timer resolution is OK, it is the system service
>> which does not use it properly. In the case of VxWorks the system time is
>> incremented from the timer interrupts, e.g. by 1ms. You can set interrupts
>> to every 1 us, spending all processor time handling interrupts. It is an OS
>> architecture problem. System time should have been taken from the real time
>> counter.
> 
> Surely the interrupt rate does not matter. The KDF9 clock interrupted once
> every 2^20 us, but could be read to the nearest 32 us. Can the clock you
> speak of not be interrogated between interrupts?

Yes, the TSC can be read any time, and, if I correctly remember, each
reading will give a new value. PPC RT clock is also reliable.

There could be certain issues for multi-core processors, you should take
care that the task synchronizing the counter with UTC would not jump from
core to core. Alternatively you should synchronize clocks of individual
cores.

>>> Why has this aspect of computer architecture degenerated so much, I wonder?
>>> And why have software people not made more of a push for improvements?
>> 
>> The computer architecture did not degenerate. [...] It is
>> usually the standard OS services to blame for not using these clocks.
> 
> I guess the second part of my question stands. 8-)

The OSes did not become degenerate, they always were. (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-28 16:55  4%                                               ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-28 19:41  5%                                                 ` (see below)
  2010-12-28 20:03  5%                                                   ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 170+ results
From: (see below) @ 2010-12-28 19:41 UTC (permalink / raw)


On 28/12/2010 16:55, in article 1oq6oggi7rtzj.4u4yyq6m8r74$.dlg@40tude.net,
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote:

> On Tue, 28 Dec 2010 16:27:18 +0000, (see below) wrote:
> 
>> Dmitri reports a modern computer with a timer having a resolution that is
>> thousands or millions of times worse than the CPU's logic clock.
> 
> You get me wrong, the timer resolution is OK, it is the system service
> which does not use it properly. In the case of VxWorks the system time is
> incremented from the timer interrupts, e.g. by 1ms. You can set interrupts
> to every 1 us, spending all processor time handling interrupts. It is an OS
> architecture problem. System time should have been taken from the real time
> counter.

Surely the interrupt rate does not matter. The KDF9 clock interrupted once
every 2^20 us, but could be read to the nearest 32 us. Can the clock you
speak of not be interrogated between interrupts?

> 
>> Why has this aspect of computer architecture degenerated so much, I wonder?
>> And why have software people not made more of a push for improvements?
> 
> The computer architecture did not degenerate. [...] It is
> usually the standard OS services to blame for not using these clocks.

I guess the second part of my question stands. 8-)

-- 
Bill Findlay
with blueyonder.co.uk;
use  surname & forename;





^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-28 16:27  4%                                             ` Ada.Execution_Time (see below)
@ 2010-12-28 16:55  4%                                               ` Dmitry A. Kazakov
  2010-12-28 19:41  5%                                                 ` Ada.Execution_Time (see below)
  0 siblings, 1 reply; 170+ results
From: Dmitry A. Kazakov @ 2010-12-28 16:55 UTC (permalink / raw)


On Tue, 28 Dec 2010 16:27:18 +0000, (see below) wrote:

> Dmitri reports a modern computer with a timer having a resolution that is
> thousands or millions of times worse than the CPU's logic clock.

You get me wrong, the timer resolution is OK, it is the system service
which does not use it properly. In the case of VxWorks the system time is
incremented from the timer interrupts, e.g. by 1ms. You can set interrupts
to every 1 us, spending all processor time handling interrupts. It is an OS
architecture problem. System time should have been taken from the real time
counter.

> Why has this aspect of computer architecture degenerated so much, I wonder?
> And why have software people not made more of a push for improvements?

The computer architecture did not degenerate. A modern processor and
motherboard have multiple (3-4) time sources. Some of them have very high
resolution and are reliable, e.g. they keep counting in sleep mode. It is
usually the standard OS services to blame for not using these clocks.
Practically in any OS there is a backdoor to get a decent real-time clock.
Then the journey begins. You need to synchronize readings from that clock
(usually a 64-bit counter) with the system time (of miserable accuracy) in
order to get a decent UTC stamp. This is doable using some statistical
method, depending on your needs (monotonic, or not, etc). Shame that the OS
does not do this.
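
A minimal sketch of that synchronization idea; Read_Counter and
Counter_Frequency are invented stand-ins for the platform-specific counter
(TSC, PPC time base, ...), faked here with Ada.Real_Time so the sketch
compiles anywhere, and the statistical drift correction mentioned above is
left out:

with Ada.Calendar;
with Ada.Real_Time;
with Ada.Text_IO;

procedure Counter_Sync_Sketch is
   use type Ada.Calendar.Time;
   use type Ada.Real_Time.Time;

   Counter_Frequency : constant Long_Float := 1.0E9;   --  assumed ticks/second

   --  Stand-in for reading the hardware counter (RDTSC, mftb, ...).
   Epoch : constant Ada.Real_Time.Time := Ada.Real_Time.Clock;

   function Read_Counter return Long_Long_Integer is
      Elapsed : constant Duration :=
        Ada.Real_Time.To_Duration (Ada.Real_Time.Clock - Epoch);
   begin
      return Long_Long_Integer (Long_Float (Elapsed) * Counter_Frequency);
   end Read_Counter;

   --  Calibration: pair one counter reading with one system-clock reading.
   Ref_Count : constant Long_Long_Integer := Read_Counter;
   Ref_Time  : constant Ada.Calendar.Time := Ada.Calendar.Clock;

   --  A high-resolution stamp: coarse system time plus fine counter delta.
   function Now return Ada.Calendar.Time is
      Ticks : constant Long_Long_Integer := Read_Counter - Ref_Count;
   begin
      return Ref_Time + Duration (Long_Float (Ticks) / Counter_Frequency);
   end Now;

begin
   delay 0.123;
   Ada.Text_IO.Put_Line
     ("Seconds since calibration:" & Duration'Image (Now - Ref_Time));
end Counter_Sync_Sketch;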

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-28 16:18  5%                                             ` Ada.Execution_Time Simon Wright
@ 2010-12-28 16:34  5%                                               ` Dmitry A. Kazakov
  0 siblings, 0 replies; 170+ results
From: Dmitry A. Kazakov @ 2010-12-28 16:34 UTC (permalink / raw)


On Tue, 28 Dec 2010 16:18:11 +0000, Simon Wright wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> 
>> Yes, this thing. In our case it was Pentium VxWorks 6.x. (The PPC we
>> used prior to it had poor performance) The problem was that
>> Ada.Real_Time.Clock had the accuracy of the clock interrupts,
>> i.e. 1ms, which is by all accounts catastrophic for a 1.7GHz
>> processor. You can switch some tasks forth and back between two clock
>> changes.
> 
> Our experience was that where there are timing constraints to be met, or
> cyclic timing behaviours to implement, a millisecond is OK.
> 
> We did consider running the VxWorks tick at 100 us but this was quite
> unnecessary!

We actually have it set at 100 us, I believe.

But we need high accuracy clock not for switching tasks. It is for time
stamping and frequency measurements.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-28 15:42  9%                                           ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-28 16:27  4%                                             ` (see below)
  2010-12-28 16:55  4%                                               ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 170+ results
From: (see below) @ 2010-12-28 16:27 UTC (permalink / raw)


At least part of this discussion is motivated by the abysmal facilities for
the measurement of elapsed time on some (all?) modern architectures.

My current Ada project is the emulation of the KDF9, a computer introduced
50 years ago. It had a hardware clock register that was incremented by 1
every 32 logic clock cycles and could be read by a single instruction taking
4 logic clock cycles (the CPU ran on a 1MHz logic clock).

Using this feature, the OS could keep track of the CPU time used by a
process to within 32 logic clock cycles per time slice (typically better
than 1 part in 1_000). Summing many such slices gives a total with much
better relative error than that of the individual slices, of course.

Dmitri reports a modern computer with a timer having a resolution that is
thousands or millions of times worse than the CPU's logic clock.

Why has this aspect of computer architecture degenerated so much, I wonder?
And why have software people not made more of a push for improvements?

-- 
Bill Findlay
with blueyonder.co.uk;
use  surname & forename;





^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-28 15:08  5%                                           ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-28 16:18  5%                                             ` Simon Wright
  2010-12-28 16:34  5%                                               ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-31  0:40  5%                                             ` Ada.Execution_Time BrianG
  1 sibling, 1 reply; 170+ results
From: Simon Wright @ 2010-12-28 16:18 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> Yes, this thing. In our case it was Pentium VxWorks 6.x. (The PPC we
> used prior to it had poor performance) The problem was that
> Ada.Real_Time.Clock had the accuracy of the clock interrupts,
> i.e. 1ms, which is by all accounts catastrophic for a 1.7GHz
> processor. You can switch some tasks forth and back between two clock
> changes.

Our experience was that where there are timing constraints to be met, or
cyclic timing behaviours to implement, a millisecond is OK.

We did consider running the VxWorks tick at 100 us but this was quite
unnecessary!



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-28 14:46  9%                                         ` Ada.Execution_Time Niklas Holsti
@ 2010-12-28 15:42  9%                                           ` Dmitry A. Kazakov
  2010-12-28 16:27  4%                                             ` Ada.Execution_Time (see below)
  0 siblings, 1 reply; 170+ results
From: Dmitry A. Kazakov @ 2010-12-28 15:42 UTC (permalink / raw)


On Tue, 28 Dec 2010 16:46:20 +0200, Niklas Holsti wrote:

> And you seem to forget that Ada.Execution_Time may be implemented by 
> reading a real-time clock. As has been said before.

Only if you have control over the OS or else can hook on task switches. I
think it is doable under VxWorks. But I doubt that AdaCore would do this.

> Dmitry:
>>>> I don't
>>>> care how much processor time my control loop takes so long it manages to
>>>> write the outputs when the actuators expect them.
> Niklas:
>>> You should care, if the processor must also have time for some other 
>>> tasks of lower priority, which are preempted by the control-loop task.
> Dmitry:
>> Why? It is straightforward: the task of higher priority level owns the
>> processor.
> 
> Do you mean that your system has only one task with real-time deadlines, 
> and no CPU time has to be left for lower-priority tasks, and no CPU time 
> is taken by higher-priority tasks? Then scheduling is trivial for your 
> system and your system is a poor example for a discussion about scheduling.

Right, a real-time system is usually a bunch of tasks activated according
to their priority levels. This is why I doubted that Ada.Execution_Time
might be useful there.

> Niklas:
>>> For example, assume that the computation in a control algorithm consists 
>>> of two consecutive stages where the first stage processes the inputs 
>>> into a state model and the second stage computes the control outputs 
>>> from the state model. Using Ada.Execution_Time or 
>>> Ada.Execution_Time.Timers the program could detect an unexpectedly high 
>>> CPU usage in the first stage, and fall back to a simpler, faster 
>>> algorithm in the second stage, to ensure that some control outputs are 
>>> computed before the deadline.
> Dmitry:
>> No, in our systems we use a different schema.
> 
> So what? I said nothing (and know nothing) about your system, any 
> resemblance is coincidental. And there can be several valid schemas.

No, the point was rather that your schema is not typical for a real-time
system.

>> Consider a system with n-processors. The execution
>> time second will be 1/n of the real time second.
> 
> No. If you have n workers digging a ditch, you must pay each of them the 
> same amount of money each hour as if you had one worker. So the "digging 
> hour" is still one hour, although the total amount of work that can be 
> done in one hour is n digging-hours. You are confusing the total amount 
> of work with the amount of work per worker.

The ditch takes 10 digging hours; this is a virtual time, which can be 1 real
hour if I have 10 workers or 26 hours if I have only one (26 = 24 hours of the
first day + 2 hours the next morning, with an 8-hour working day). With one
worker it can even be 26 + 48 if he starts on a Friday, or even more if she
takes a leave for child rearing (:-)).

The sum of working hours is a measure of work. It is not a measure of time.

   Time = Work / Power

You can use it to estimate the real time required to complete the work ...
or just re-read The Mythical Man-Month... (:-))

> With n processors the system can do n seconds worth of execution in one 
> real-time second. But each processor still executes for one second. And 
> as I understand the Ada task dispatching/scheduling model, one task 
> cannot execute at the same time on more than one processor, so one task 
> cannot accumulate more than one second of execution time in one second 
> of real time.

Yes.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 9%]

* Re: Ada.Execution_Time
  2010-12-28 14:14  9%                                         ` Ada.Execution_Time Simon Wright
@ 2010-12-28 15:08  5%                                           ` Dmitry A. Kazakov
  2010-12-28 16:18  5%                                             ` Ada.Execution_Time Simon Wright
  2010-12-31  0:40  5%                                             ` Ada.Execution_Time BrianG
  0 siblings, 2 replies; 170+ results
From: Dmitry A. Kazakov @ 2010-12-28 15:08 UTC (permalink / raw)


On Tue, 28 Dec 2010 14:14:57 +0000, Simon Wright wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> 
>> And conversely, the catastrophic accuracy of the VxWorks real-time
>> clock service does not hinder its usability for real-time application.
> 
> Catastrophic?
>
> The Radstone PPC7A cards (to take one example) have two facilities: (a)
> the PowerPC decrementer, run off a crystal with some not-too-good quoted
> accuracy (50 ppm, I think), and (b) a "real time clock".
> 
> The RTC would be much better termed a time-of-day clock, since what it
> provides is the date and time to 1 second precision. It also needs
> battery backup, not easy to justify on naval systems (partly because it
> adversely affects the shelf life of the boards, partly because navies
> don't like noxious chemicals in their equipment).
> 
> We never used the RTC.
>
> The decrementer is the facility used by VxWorks, and hence by GNAT under
> VxWorks, to support time; both Ada.Calendar and Ada.Real_Time (we are
> still at Ada 95 so I have no idea about Ada.Execution_Time). We run with
> clock interrupts at 1 ms and (so far as we can tell from using bus
> analysers) the interrupts behave perfectly reliably.

Yes, this thing. In our case it was Pentium VxWorks 6.x. (The PPC we used
prior to it had poor performance) The problem was that Ada.Real_Time.Clock
had the accuracy of the clock interrupts, i.e. 1ms, which is by all
accounts catastrophic for a 1.7GHz processor. You can switch some tasks
forth and back between two clock changes.
 
> For a higher-resolution view of time we've extended Ada.Calendar, using
> the PowerPC's mftb (Move From Time Base) instruction to measure sub-tick
> intervals (down to 40 ns).

So did we in our case. VxWorks has means to access the Pentium high
resolution counter. We took Ada.Real_Time and replaced the Clock function
with one that used the counter divided by its frequency.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-27 21:53  3%                                       ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-28 14:14  9%                                         ` Ada.Execution_Time Simon Wright
@ 2010-12-28 14:46  9%                                         ` Niklas Holsti
  2010-12-28 15:42  9%                                           ` Ada.Execution_Time Dmitry A. Kazakov
  1 sibling, 1 reply; 170+ results
From: Niklas Holsti @ 2010-12-28 14:46 UTC (permalink / raw)


Dmitry, from now on I am going to respond only to those of your comments 
that to me seem to make some kind of sense and have not been discussed 
before. From my omissions you may deduce my opinion of the remainder.

Dmitry A. Kazakov wrote:
>>> 2. It would be extremely ill-advised to use Ada.Execution_Time instead of
>>> direct measures for an implementation of time sharing algorithms.
Niklas Holsti replied:
>> If by "direct measures" you mean the use of some external measuring 
>> device such as an oscilloscope or logic analyzer, such measures are 
>> available only externally, to the developers, not within the Ada program 
>> itself.
Dmitry:
> You forgot one external device called real time clock.

And you seem to forget that Ada.Execution_Time may be implemented by 
reading a real-time clock. As has been said before.

Dmitry:
>>> I don't
>>> care how much processor time my control loop takes so long it manages to
>>> write the outputs when the actuators expect them.
Niklas:
>> You should care, if the processor must also have time for some other 
>> tasks of lower priority, which are preempted by the control-loop task.
Dmitry:
> Why? It is straightforward: the task of higher priority level owns the
> processor.

Do you mean that your system has only one task with real-time deadlines, 
and no CPU time has to be left for lower-priority tasks, and no CPU time 
is taken by higher-priority tasks? Then scheduling is trivial for your 
system and your system is a poor example for a discussion about scheduling.

Niklas:
>> For example, assume that the computation in a control algorithm consists 
>> of two consecutive stages where the first stage processes the inputs 
>> into a state model and the second stage computes the control outputs 
>> from the state model. Using Ada.Execution_Time or 
>> Ada.Execution_Time.Timers the program could detect an unexpectedly high 
>> CPU usage in the first stage, and fall back to a simpler, faster 
>> algorithm in the second stage, to ensure that some control outputs are 
>> computed before the deadline.
Dmitry:
> No, in our systems we use a different schema.

So what? I said nothing (and know nothing) about your system, any 
resemblance is coincidental. And there can be several valid schemas.

> The "fall back" values are
> always evaluated first. They must become ready at the end of each cycle. A
> finer estimation is evaluated in background and used when ready.

So your problem is different (the coarse values are the rule). Different 
problem, different solution.

> In general, real-time systems are
> usually designed for the worst case scenario,

Yes, but this is often criticized as inefficient, and there are 
scheduling methods that make good use of the difference (slack) between 
actual and worst-case execution times. These methods need something like 
Ada.Execution_Time. As has been said before.

> Consider a system with n-processors. The execution
> time second will be 1/n of the real time second.

No. If you have n workers digging a ditch, you must pay each of them the 
same amount of money each hour as if you had one worker. So the "digging 
hour" is still one hour, although the total amount of work that can be 
done in one hour is n digging-hours. You are confusing the total amount 
of work with the amount of work per worker.

With n processors the system can do n seconds worth of execution in one 
real-time second. But each processor still executes for one second. And 
as I understand the Ada task dispatching/scheduling model, one task 
cannot execute at the same time on more than one processor, so one task 
cannot accumulate more than one second of execution time in one second 
of real time.

(At the risk of introducing another side issue, I note that an 
automatically parallelizing compiler might make a task use several 
processors in parallel, at least for some of the time. But I don't think 
that RM D2.1 considers this possibility.)

> With shared memory it will
> be f*1/n, where f is some unknown factor.

I agree that variable memory access times, and other dynamic timing that 
may depend on the number of processors and on how they share resources, 
is a complicating factor in the analysis of CPU loads and 
schedulability. Indeed this is why I fear that the concept of "the 
execution time of task" is becoming fearsomely context-dependent and 
therefore problematic -- something that you disagreed with.

For the definition of Ada.Execution_Time, memory access latency is 
relevant only if the Ada RTS suspends tasks that are waiting for memory 
access, so that they are "not executing" until the memory access is 
completed. Most RTOSes do not suspend tasks in that way, I believe.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 9%]

* Re: Ada.Execution_Time
  2010-12-28 10:01  4%                                         ` Ada.Execution_Time Niklas Holsti
@ 2010-12-28 14:17  5%                                           ` Simon Wright
  0 siblings, 0 replies; 170+ results
From: Simon Wright @ 2010-12-28 14:17 UTC (permalink / raw)


Niklas Holsti <niklas.holsti@tidorum.invalid> writes:

> Randy's last post in this thread, in which he agrees with Dmitry, has
> the same effect on me. I hope that further discussion with Randy will
> converge to something.
>
> Did your earlier understanding resemble Dmitry's, or mine? Or neither?

Yours, I think.

It seems strange to have CPU_Time bearing only a tenuous relationship
to what we might normally call "time", and then to have the difference
between two CPU_Times turn out to be a Time_Span!



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-27 21:53  3%                                       ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-28 14:14  9%                                         ` Simon Wright
  2010-12-28 15:08  5%                                           ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-28 14:46  9%                                         ` Ada.Execution_Time Niklas Holsti
  1 sibling, 1 reply; 170+ results
From: Simon Wright @ 2010-12-28 14:14 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> And conversely, the catastrophic accuracy of the VxWorks real-time
> clock service does not hinder its usability for real-time application.

Catastrophic?

The Radstone PPC7A cards (to take one example) have two facilities: (a)
the PowerPC decrementer, run off a crystal with some not-too-good quoted
accuracy (50 ppm, I think), and (b) a "real time clock".

The RTC would be much better termed a time-of-day clock, since what it
provides is the date and time to 1 second precision. It also needs
battery backup, not easy to justify on naval systems (partly because it
adversely affects the shelf life of the boards, partly because navies
don't like noxious chemicals in their equipment).

We never used the RTC.

The decrementer is the facility used by VxWorks, and hence by GNAT under
VxWorks, to support time; both Ada.Calendar and Ada.Real_Time (we are
still at Ada 95 so I have no idea about Ada.Execution_Time). We run with
clock interrupts at 1 ms and (so far as we can tell from using bus
analysers) the interrupts behave perfectly reliably.

For a higher-resolution view of time we've extended Ada.Calendar, using
the PowerPC's mftb (Move From Time Base) instruction to measure sub-tick
intervals (down to 40 ns).




^ permalink raw reply	[relevance 9%]

* Re: An Example for Ada.Execution_Time
  2010-12-28  2:31 14% ` BrianG
@ 2010-12-28 13:43  9%   ` anon
  2010-12-29  3:10  9%   ` Randy Brukardt
  1 sibling, 0 replies; 170+ results
From: anon @ 2010-12-28 13:43 UTC (permalink / raw)


In <ifbi5c$rqt$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>anon@att.net wrote:
>> You ask for an example.
>I'll assume I am the "you".  (Since you don't specify and didn't relate 
>it to the original thread.)
>
>I asked for:
>    >> An algorithm comparison program might look like:
>    >>
>    >> with Ada.Execution_Time ;
>    >> with Ada.Execution_Time.Timers ;
>    >Given the below program, please add some of the missing details to
>    >show how this can be useful without also "with Ada.Real_Time".
>    >Neither Execution_Time or Execution_Time.Timers provides any value
>    >that can be used directly.
>    >
>    >>
>    >> procedure Compare_Algorithm is
>    >...]
>
>You provided:
>> 
>> Here is an example for packages (Tested using MaRTE):
>....
>> with Ada.Execution_Time ;
>> with Ada.Real_Time ;
>....
>> with Ada.Execution_Time ;
>> with Ada.Execution_Time.Timers ;
>> with Ada.Real_Time ;
>....
>
>As the extracts show, you could not do what I asked for.  You said in 
>the original post you could do it with only the 2 Execution_Time 
>'with's.  My entire point in this thread is that Execution_Time, as 
>defined, is useless by itself.
>
>--BrianG

I changed the title so there was no need for reference lines

Since 2006, a number of people have asked about the Ada.Execution_Time package 
and no one has given a true example that uses that package for what it was 
created for. Until now! I gave you a simple example that can easily be 
modified into a more complex program. An open question: what is the maximum 
number of timers that can be allocated?


and for your info (changes in work.adb)

  Task_0 : Work_Task ;
  Task_1 : Work_Task ;
  Task_2 : Work_Task ;
  Task_3 : Work_Task ;

begin
  Initialize ( Work_Algorithm'Access ) ;
  Task_0.Start ( False ) ;
  Task_1.Start ( False ) ;
  Task_2.Start ( True ) ;  -- just for a change
  Task_3.Start ( True ) ;
  ...

will give four tasks with four different timers:

     2 timers that have the same interval
     2 timers that change the interval

All that is needed is to protect the Counter variable for each task and to
change the Counter print routine. I will let others make that modification.
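
For example, one way to protect a counter is a small protected object (a
minimal sketch, not part of the tested code above; a per-task array of
these works the same way):

  protected Safe_Counter is
     procedure Increment;
     function Value return Natural;
  private
     Count : Natural := 0;
  end Safe_Counter;

  protected body Safe_Counter is
     procedure Increment is
     begin
        Count := Count + 1;
     end Increment;

     function Value return Natural is
     begin
        return Count;
     end Value;
  end Safe_Counter;

The timer handlers would call Safe_Counter.Increment instead of updating
Counter directly, and Work would print Safe_Counter.Value.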





^ permalink raw reply	[relevance 9%]

* Re: Ada.Execution_Time
  2010-12-27 21:34  5%                                       ` Ada.Execution_Time Simon Wright
@ 2010-12-28 10:01  4%                                         ` Niklas Holsti
  2010-12-28 14:17  5%                                           ` Ada.Execution_Time Simon Wright
  0 siblings, 1 reply; 170+ results
From: Niklas Holsti @ 2010-12-28 10:01 UTC (permalink / raw)


Simon Wright wrote:
> Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
> 
>> Nonsense. I spend some part of my time asleep, some time awake. Both
>> "sleeping time" and "awake time" are (pieces of) real time. A task
>> spends some of its time being executed, some of its time not being
>> executed (waiting or ready).
> 
> And, just to be clear, CPU_Time corresponds to the "awake time"?

Perhaps that is the natural choice, but I did not mean the choice to be 
significant. Dmitry was saying that "execution time" is not "time", just 
because it uses the prefix or qualification "execution". By that 
reasoning, "awake time" would not be "time", "chicken soup" would not be 
"soup", etc.

> I thought I understood pretty much what was intended in the execution
> time annex, even if it didn't seem to have much relevance to my work,
> but this discussion has managed to confuse me thoroughly.

Randy's last post in this thread, in which he agrees with Dmitry, has 
the same effect on me. I hope that further discussion with Randy will 
converge to something.

Did your earlier understanding resemble Dmitry's, or mine? Or neither?

> A minor aside -- as a user, I find the use of Time_Span here and in
> Ada.Real_Time very annoying. It's perfectly clear that what's meant is
> Duration.

I think Time_Span and Duration are different representations of the same 
physical thing, a span of time that can be physically measured in 
seconds. The reasons for having two (possibly) different representations 
(two types) have been discussed before: different requirements on range 
and precision. Still, the differences are important only for processors 
that are very small and weak, in today's scale, so perhaps this 
distinction is no longer needed and the types could be merged.
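
(For reference, Ada.Real_Time already provides conversions in both
directions; a minimal illustration:

   with Ada.Real_Time; use Ada.Real_Time;
   procedure Span_Demo is
      D  : constant Duration  := 0.25;
      TS : constant Time_Span := To_Time_Span (D);  -- Duration  -> Time_Span
      D2 : constant Duration  := To_Duration (TS);  -- Time_Span -> Duration
   begin
      null;  -- D2 equals D up to the difference in precision of the types
   end Span_Demo;

so merging the types would mostly remove conversion noise, not capability.)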

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 4%]

* Re: An Example for Ada.Execution_Time
  2010-12-27 18:26 12% An Example for Ada.Execution_Time anon
@ 2010-12-28  2:31 14% ` BrianG
  2010-12-28 13:43  9%   ` anon
  2010-12-29  3:10  9%   ` Randy Brukardt
  0 siblings, 2 replies; 170+ results
From: BrianG @ 2010-12-28  2:31 UTC (permalink / raw)


anon@att.net wrote:
> You ask for an example.
I'll assume I am the "you".  (Since you don't specify and didn't relate 
it to the original thread.)

I asked for:
    >> An algorithm comparison program might look like:
    >>
    >> with Ada.Execution_Time ;
    >> with Ada.Execution_Time.Timers ;
    >Given the below program, please add some of the missing details to
    >show how this can be useful without also "with Ada.Real_Time".
    >Neither Execution_Time or Execution_Time.Timers provides any value
    >that can be used directly.
    >
    >>
    >> procedure Compare_Algorithm is
    >...]

You provided:
> 
> Here is an example for packages (Tested using MaRTE):
...
> with Ada.Execution_Time ;
> with Ada.Real_Time ;
...
> with Ada.Execution_Time ;
> with Ada.Execution_Time.Timers ;
> with Ada.Real_Time ;
...

As the extracts show, you could not do what I asked for.  You said in 
the original post you could do it with only the 2 Execution_Time 
'with's.  My entire point in this thread is that Execution_Time, as 
defined, is useless by itself.
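
A minimal illustration of that point (hypothetical snippet): even just
printing one CPU_Time sample forces the extra with-clause, because Split
delivers Ada.Real_Time types:

   with Ada.Text_IO;        use Ada.Text_IO;
   with Ada.Execution_Time; use Ada.Execution_Time;
   with Ada.Real_Time;      -- cannot be avoided: Split uses its types
   procedure Show_CPU_Time is
      SC : Ada.Real_Time.Seconds_Count;
      TS : Ada.Real_Time.Time_Span;
   begin
      Split (Clock, SC, TS);
      Put_Line ("CPU time:"
                & Ada.Real_Time.Seconds_Count'Image (SC) & " s +"
                & Duration'Image (Ada.Real_Time.To_Duration (TS)) & " s");
   end Show_CPU_Time;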

--BrianG



^ permalink raw reply	[relevance 14%]

* Re: Ada.Execution_Time
  2010-12-27 22:02  5%                                   ` Ada.Execution_Time Randy Brukardt
@ 2010-12-27 22:43  4%                                     ` Robert A Duff
  0 siblings, 0 replies; 170+ results
From: Robert A Duff @ 2010-12-27 22:43 UTC (permalink / raw)


"Randy Brukardt" <randy@rrsoftware.com> writes:

> They're supposed to provide a useless implementation that raises Use_Error 
> for most of the operations. There are a pair of Notes to that effect 
> (A.16(129-130)).

OK, good enough.

>... I don't think features should ever be designed so that 
> implementations have to appeal to 1.1.3(6) - my preference would be that 
> that paragraph not exist with the language itself sufficiently flexible 
> where it matters.

But something like 1.1.3(6) has to exist in every language definition,
at least implicitly.  Computers are finite machines, so there will
always be things that are "impossible or impractical", and it is
impossible for any language designer to predict what that means
in all cases.

>...(Otherwise, implementations could leave out anything that 
> they want and appeal to 1.1.3(6).

Well, not really.  That para says "given the execution environment", not
"given the compiler-writer's whim, or laziness, or lack of interest".
One could claim that "if X = 0..." is impractical to implement, but of
course people would laugh at that claim.

>... I think that both interfaces and 
> coextensions are "impractical" to implement for the benefit gained, so does 
> that mean I can ignore them and still have a complete Ada compiler??)

I definitely disagree about interfaces.  I might agree about
coextensions, depending on my mood on any particular day of the week.
But yeah, you can implement what you like, and claim it's an Ada
compiler.  Whether people buy it is a question that lies outside any ISO
standard.

Standards are optional!  The Ada standard doesn't require anybody to do
anything.  Still, there's a community that can decide, informally, that
so-and-so compiler is an implementation of Ada 2005 (despite a few minor
bugs), and such-and-such compiler is not.

- Bob



^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-25 11:31  7%                             ` Ada.Execution_Time Niklas Holsti
  2010-12-26 10:25 11%                               ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-27 22:11  4%                               ` Randy Brukardt
  2010-12-29 12:48  8%                                 ` Ada.Execution_Time Niklas Holsti
  1 sibling, 1 reply; 170+ results
From: Randy Brukardt @ 2010-12-27 22:11 UTC (permalink / raw)


"Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
news:8nm30fF7r9U1@mid.individual.net...
> Dmitry A. Kazakov wrote:
>> On Sat, 18 Dec 2010 23:20:20 +0200, Niklas Holsti wrote:
...
>> On a such platform the implementation would be as perverse as RM D.14
>> is. But the perversion is only because of the interpretation.
>
> Bah. I think that when RM D.14 says "time", it really means time. You
> think it means something else, perhaps a CPU cycle count. I think the 
> burden of proof is on you.
>
> It seems evident to me that the text in D.14 must be interpreted using
> the concepts in D.2.1, "The Task Dispatching Model", which clearly
> specifies real-time points when a processor starts to execute a task and
> stops executing a task. To me, and I believe to most readers of the RM,
> the execution time of a task is the sum of these time slices, thus a
> physical, real time.

For the record, I agree more with Dmitry than Niklas here. At least the 
interpretation *I* had when this package was proposed was that it had only a 
slight relationship to real-time. My understanding was that it was intended 
to provide a window into whatever facilities the underlying system had for 
execution "time" counting. That had no defined relationship with what Ada 
calls "time". As usch, I think the name "execution time" is misleading, (and 
I recall some discussions about that in the ARG), but no one had a better 
name that made any sense at all.

In particular, there is no requirement in the RM or anywhere else that these 
"times" sum to any particular answer. I don't quite see how there could be, 
unless you were going to require a tailored Ada target system (which is 
definitely not going to be a requirement).

Perhaps the proposers (from the IRTAW meetings) had something else in mind, 
but if so, they communicated it very poorly.

                             Randy.





^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-27 17:24  5%                                 ` Ada.Execution_Time Robert A Duff
@ 2010-12-27 22:02  5%                                   ` Randy Brukardt
  2010-12-27 22:43  4%                                     ` Ada.Execution_Time Robert A Duff
  0 siblings, 1 reply; 170+ results
From: Randy Brukardt @ 2010-12-27 22:02 UTC (permalink / raw)


"Robert A Duff" <bobduff@shell01.TheWorld.com> wrote in message 
news:wcc39pj86y4.fsf@shell01.TheWorld.com...
> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
>
>> Nope, one of the killer arguments ARG people deploy to reject most
>> reasonable AI's is: too difficult to implement on some obscure platform 
>> for
>> which Ada never existed and never will. (:-))
>
> The ARG and others have been guilty of that sort of argument in the
> past, although I think "most reasonable AI's" is an exaggeration.
> I think that line of reasoning is wrong -- I think it's just fine to have
> things like Ada.Directories, even though many embedded systems don't
> have directories.  It means that there's some standardization across
> systems that DO have directories.  Those that don't can either
> provide some minimal/useless implementation, or else appeal
> to RM-1.1.3(6).

They're supposed to provide a useless implementation that raises Use_Error 
for most of the operations. There are a pair of Notes to that effect 
(A.16(129-130)). I don't think features should ever be designed so that 
implementations have to appeal to 1.1.3(6) - my preference would be that 
that paragraph not exist with the language itself sufficiently flexible 
where it matters. (Otherwise, implementations could leave out anything that 
they want and appeal to 1.1.3(6). I think that both interfaces and 
coextensions are "impractical" to implement for the benefit gained, so does 
that mean I can ignore them and still have a complete Ada compiler??)
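
(For instance, on a target with no file system, the body of each operation
in such an implementation might amount to no more than this sketch:

   function Current_Directory return String is
   begin
      raise Use_Error with "no file system on this target";
      return "";  -- not reached
   end Current_Directory;

and the package would still count as provided.)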

                                 Randy.





^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-27 20:11  9%                                     ` Ada.Execution_Time Niklas Holsti
  2010-12-27 21:34  5%                                       ` Ada.Execution_Time Simon Wright
@ 2010-12-27 21:53  3%                                       ` Dmitry A. Kazakov
  2010-12-28 14:14  9%                                         ` Ada.Execution_Time Simon Wright
  2010-12-28 14:46  9%                                         ` Ada.Execution_Time Niklas Holsti
  1 sibling, 2 replies; 170+ results
From: Dmitry A. Kazakov @ 2010-12-27 21:53 UTC (permalink / raw)


On Mon, 27 Dec 2010 22:11:18 +0200, Niklas Holsti wrote:

> Dmitry A. Kazakov wrote:
>> Technically, CPU_Time is not a number in any sense. It is not a numeric Ada
>> type and it is not a model of a mathematical number (not even additive).
> 
> RM D.14(12/2): "The type CPU_Time represents the execution time of a 
> task. The set of values of this type corresponds one-to-one with an 
> implementation-defined range of mathematical integers". Thus, a number.

This is true for any value of any type due to finiteness. I think
D.14(12/2) refers to an ability to implement Split. But that is irrelevant.
CPU_Time is not declared numeric, it does not have +.

>>>> it is the Ada.Execution_Time implementation, which needs some
>>>> input from the scheduler.
>>> We must be precise about our terms, here. Using terms as defined in 
>>> http://en.wikipedia.org/wiki/Task_scheduling, Ada.Execution_Time needs 
>>> input from the task *dispatcher* -- the part of the kernel that suspends 
>>> and resumes tasks.
>> 
>> Let's call it dispatcher. Time sharing needs some measure of consumed CPU
>> time. My points stand:
>> 
>> 1. Time sharing has little to do with real-time systems.
> 
> What do you mean by "time sharing"? The classical mainframe system used 
> interactively by many terminals? What on earth does that have to do with 
> our discussion? Such a system of course must have concurrent tasks or 
> processes in some form, but so what?

Because this is the only case where execution time might be at all relevant
to the algorithm of task switching.

> If by "time sharing algorithms" (below in your point 2) you mean what is 
> usually called "task scheduling algorithms", where several tasks 
> time-share the same processor by a task-switching (dispatching) 
> mechanism, your point 1 is bizarre. Priority-scheduled task switching is 
> the canonical architecture for real-time systems.

, which architecture makes execution time irrelevant to switching
decisions.

>> 2. It would be extremely ill-advised to use Ada.Execution_Time instead of
>> direct measures for an implementation of time sharing algorithms.
> 
> If by "direct measures" you mean the use of some external measuring 
> device such as an oscilloscope or logic analyzer, such measures are 
> available only externally, to the developers, not within the Ada program 
> itself.

You forgot one external device called real time clock.

> The whole point of Ada.Execution_Time is that it is available to 
> the Ada program itself, enabling run-time decisions based on the actual 
> execution times of tasks.

, which would be ill-advised to do.

>>>>> You have
>>>>> not given any arguments, based on the RM text, to support your position.
>>>> I am not a language lawyer to interpret the RM texts. My argument was to
>>>> common sense.
>>> To me it seems that your argument is based on the difficulty (in your 
>>> opinion) of implementing Ada.Execution_Time in some OSes such as MS 
>>> Windows, if the RM word "time" is taken to mean real time.
>>>
>>> It is common sense that some OSes are not designed (or not well 
>>> designed) for real-time systems. Even a good real-time OS may not 
>>> support all real-time methodologies, for example scheduling algorithms 
>>> that depend on actual execution times.
>> 
>> I disagree with almost everything here. To start with, comparing real-time
>> clock services of Windows and of VxWorks, we would notice that Windows is
>> far superior in both accuracy and precision.
> 
> So what? Real-time systems need determinism. The clock only has to be 
> accurate enough.

Sorry, are you saying that lesser accuracy of real-time clock is a way to
achieve determinism?
 
> If your tasks suffer arbitrary millisecond-scale suspensions or 
> dispatching delays (as is rumored for Windows) a microsecond-level clock 
> accuracy is no help.

And conversely, the catastrophic accuracy of the VxWorks real-time clock
service does not hinder its usability for real-time application. Which is
my point. You don't need good real-time clock in so many real-time
applications, and you never need execution time there.

> Anyway, your point has to do with the "time" that *activates* tasks, not 
> with the measurement of task-specific execution times.

Exactly

> So this is irrelevant.

No, it is the execution time which is irrelevant for real-time systems,
because of the way tasks are activated there.

>> I don't
>> care how much processor time my control loop takes so long it manages to
>> write the outputs when the actuators expect them.
> 
> You should care, if the processor must also have time for some other 
> tasks of lower priority, which are preempted by the control-loop task.

Why? It is straightforward: the task of higher priority level owns the
processor.

>> Measuring the CPU time
>> would bring me nothing. It is useless before the run, because it is not a
>> proof that the deadlines will be met.
> 
> In some cases (simple code or extensive tests, deterministic processor) 
> CPU-time measurements can be used to prove that deadlines are met.

It is difficult to imagine. This is done either statically or else by
running tests. Execution time has drawbacks of both approaches and
advantages of none.

> For example, assume that the computation in a control algorithm consists 
> of two consecutive stages where the first stage processes the inputs 
> into a state model and the second stage computes the control outputs 
> from the state model. Using Ada.Execution_Time or 
> Ada.Execution_Time.Timers the program could detect an unexpectedly high 
> CPU usage in the first stage, and fall back to a simpler, faster 
> algorithm in the second stage, to ensure that some control outputs are 
> computed before the deadline.

No, in our systems we use a different scheme. The "fall back" values are
always evaluated first. They must become ready at the end of each cycle. A
finer estimate is evaluated in the background and used when ready. Actually,
according to your scheme, the finer estimate would always "fail", because it
is guaranteed to be too complex for one cycle; it takes at least 10-100
cycles to compute. So your scheme would not work. In general, real-time
systems are usually designed for the worst-case scenario, because when
something unanticipated does happen you may have no time to do anything else.
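
In outline, the pattern is (a sketch with invented names):

   type Estimate is new Float;  -- stand-in for the real state/estimate type

   protected Latest_Estimate is
      procedure Publish (E : Estimate);   -- called by the background task
      procedure Get (E : out Estimate; Fresh : out Boolean);  -- cyclic task
   private
      Value : Estimate := 0.0;
      Valid : Boolean  := False;
   end Latest_Estimate;

   protected body Latest_Estimate is
      procedure Publish (E : Estimate) is
      begin
         Value := E;
         Valid := True;
      end Publish;

      procedure Get (E : out Estimate; Fresh : out Boolean) is
      begin
         E     := Value;
         Fresh := Valid;
         Valid := False;  -- each refined estimate is consumed at most once
      end Get;
   end Latest_Estimate;

The cyclic task computes the cheap values every cycle and calls Get; it uses
the refined estimate only when Fresh is True. The background task calls
Publish whenever its long computation completes.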
 
>>> Is your point that Ada.Execution_Time was accepted only because the ARG 
>>> decided that the word "time" in RM D.14 should not be understood to mean 
>>> real time? I doubt that very much... Surely such an unusual meaning of 
>>> "time" should have been explained in the RM.
>> 
>> It is explained by its name: "execution time." Execution means not real,
>> unreal time (:-)).
> 
> Nonsense. I spend some part of my time asleep, some time awake. Both 
> "sleeping time" and "awake time" are (pieces of) real time. A task 
> spends some of its time being executed, some of its time not being 
> executed (waiting or ready).

A very good example. Now consider your perception of time. Does it
correspond to real time? No, it does not. The time spent asleep can feel
anything from very short to very long. This felt time is an analogue of the
task execution time. You had better not use this subjective time to decide
when to have your next meal; that could end in obesity.

>>>> If that was the intent, then I really do not understand why CPU_Time was
>>>> introduced in addition to Ada.Real_Time.Time / Time_Span.
>>> Because (as I understand it) different processors/OSes have different 
>>> mechanisms for measuring execution times and real times, and the 
>>> mechanism most convenient for CPU_Time may use a different numerical 
>>> type (range, scale, and precision) than the mechanisms and types used 
>>> for Ada.Real_Time.Time, Time_Span, and Duration.
>> 
>> I see no single reason why this could happen. Obviously, if talking about a
>> real-time system as you insist, the only possible choice for CPU_Time is
>> Time_Span, because to be consistent with the interpretation you propose it
>> must be derived from Ada.Real_Time clock.
> 
> I have only said that the sum of the CPU_Times of all tasks executing on 
> the same processor should be close to the real elapsed time, since the 
> CPU's time is shared between the tasks. This does not mean that 
> Ada.Execution_Time.CPU_Time and Ada.Real_Time.Time must have a common 
> time source, only that both time sources must approximate physical, real 
> time.

What is the reason to use different sources?

>> My point is that RM intentionally leaves it up to the implementation to
>> choose a CPU_Time source independent on Ada.Real_Time.Clock. This why
>> different range and precision come into consideration.
> 
> I agree. But in both cases the intent is to approximate physical, real 
> time, not some "simulation time" where one "simulation second" could be 
> one year of real time.

Certainly the latter. Consider a system with n processors. The execution
time second will be 1/n of the real time second. With shared memory it will
be f*1/n, where f is some unknown factor. That works in both directions: on
a single-processor board with memory connected over some bus system it could
be that f < 1, because external devices might block the CPU (and thus your
task) while accessing the memory. Note that the error is systematic; it is
not an approximation of real time.

>> Under some conditions (e.g. no task switching) an execution time interval
>> could be numerically equal to a real time interval.
> 
> Yes! Therefore, under these conditions, CPU_Time (when converted to a 
> Time_Span or Duration) does have a physical meaning. So we agree. At last.

Since these conditions are never met, the model you have in mind is
inadequate (wrong).

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 3%]

* Re: Ada.Execution_Time
  2010-12-27 20:11  9%                                     ` Ada.Execution_Time Niklas Holsti
@ 2010-12-27 21:34  5%                                       ` Simon Wright
  2010-12-28 10:01  4%                                         ` Ada.Execution_Time Niklas Holsti
  2010-12-27 21:53  3%                                       ` Ada.Execution_Time Dmitry A. Kazakov
  1 sibling, 1 reply; 170+ results
From: Simon Wright @ 2010-12-27 21:34 UTC (permalink / raw)


Niklas Holsti <niklas.holsti@tidorum.invalid> writes:

> Nonsense. I spend some part of my time asleep, some time awake. Both
> "sleeping time" and "awake time" are (pieces of) real time. A task
> spends some of its time being executed, some of its time not being
> executed (waiting or ready).

And, just to be clear, CPU_Time corresponds to the "awake time"?

I thought I understood pretty much what was intended in the execution
time annex, even if it didn't seem to have much relevance to my work,
but this discussion has managed to confuse me thoroughly.

A minor aside -- as a user, I find the use of Time_Span here and in
Ada.Real_Time very annoying. It's perfectly clear that what's meant is
Duration.



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-27 15:28  6%                                   ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-27 20:11  9%                                     ` Niklas Holsti
  2010-12-27 21:34  5%                                       ` Ada.Execution_Time Simon Wright
  2010-12-27 21:53  3%                                       ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 2 replies; 170+ results
From: Niklas Holsti @ 2010-12-27 20:11 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Mon, 27 Dec 2010 14:44:53 +0200, Niklas Holsti wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> On Sat, 25 Dec 2010 13:31:27 +0200, Niklas Holsti wrote:
>>>
>>>> Dmitry A. Kazakov wrote:
>>>>> On Sat, 18 Dec 2010 23:20:20 +0200, Niklas Holsti wrote:
>>>> ...
>>>>>> The concept and measurement of "the execution time of a task" does
>>>>>> become problematic in complex processors that have hardware 
>>>>>> multi-threading and can run several tasks in more or less parallel
>>>>>> fashion, without completely isolating the tasks from each other.
>>>>> No, the concept is just fine,
>>>> Fine for what? For schedulability analysis, fine for on-line scheduling,
>>>> ... ?
>>> Applicability of a concept is not what makes it wrong or right.
>> I think it does. This concept, "the execution time of a task", stands 
>> for a number. A number is useless if it has no application in a 
>> calculation.
> 
> Technically, CPU_Time is not a number in any sense. It is not a numeric Ada
> type and it is not a model of a mathematical number (not even additive).

RM D.14(12/2): "The type CPU_Time represents the execution time of a 
task. The set of values of this type corresponds one-to-one with an 
implementation-defined range of mathematical integers". Thus, a number.

However, the sub-thread above was not about CPU_Time in 
Ada.Execution_Time, but about the general concept "the execution time of 
a task". (Serves me right for introducing side issues, although my 
intentions were good, I think.)

> Anyway, the execution time can be used in calculations independently on
> whether and how you could apply the results of such calculations.
> 
> You can add the height of your house to the distance to the Moon, the
> interpretation is up to you.

You are being absurd.

Dmitry, your arguments are becoming so weird that I am starting to think 
that you are just trolling or goading me.

>>> it is the Ada.Execution_Time implementation, which needs some
>>> input from the scheduler.
>> We must be precise about our terms, here. Using terms as defined in 
>> http://en.wikipedia.org/wiki/Task_scheduling, Ada.Execution_Time needs 
>> input from the task *dispatcher* -- the part of the kernel that suspends 
>> and resumes tasks.
> 
> Let's call it dispatcher. Time sharing needs some measure of consumed CPU
> time. My points stand:
> 
> 1. Time sharing has little to do with real-time systems.

What do you mean by "time sharing"? The classical mainframe system used 
interactively by many terminals? What on earth does that have to do with 
our discussion? Such a system of course must have concurrent tasks or 
processes in some form, but so what?

If by "time sharing algorithms" (below in your point 2) you mean what is 
usually called "task scheduling algorithms", where several tasks 
time-share the same processor by a task-switching (dispatching) 
mechanism, your point 1 is bizarre. Priority-scheduled task switching is 
the canonical architecture for real-time systems.

> 2. It would be extremely ill-advised to use Ada.Execution_Time instead of
> direct measures for an implementation of time sharing algorithms.

If by "direct measures" you mean the use of some external measuring 
device such as an oscilloscope or logic analyzer, such measures are 
available only externally, to the developers, not within the Ada program 
itself. The whole point of Ada.Execution_Time is that it is available to 
the Ada program itself, enabling run-time decisions based on the actual 
execution times of tasks.

> Real-time systems work with real time; real-time intervals (durations)
> are of much less interest. Execution time is of no interest, because a
> real-time system does not care to balance the CPU load.

I am made speechless (or should I say "typing-less"). If that is your 
view, there is no point in continuing this discussion because we do not 
agree on what a real-time program is.

>>>> You have
>>>> not given any arguments, based on the RM text, to support your position.
>>> I am not a language lawyer to interpret the RM texts. My argument was to
>>> common sense.
>> To me it seems that your argument is based on the difficulty (in your 
>> opinion) of implementing Ada.Execution_Time in some OSes such as MS 
>> Windows, if the RM word "time" is taken to mean real time.
>>
>> It is common sense that some OSes are not designed (or not well 
>> designed) for real-time systems. Even a good real-time OS may not 
>> support all real-time methodologies, for example scheduling algorithms 
>> that depend on actual execution times.
> 
> I disagree with almost everything here. To start with, comparing real-time
> clock services of Windows and of VxWorks, we would notice that Windows is
> far superior in both accuracy and precision.

So what? Real-time systems need determinism. The clock only has to be 
accurate enough.

If your tasks suffer arbitrary millisecond-scale suspensions or 
dispatching delays (as is rumored for Windows) a microsecond-level clock 
accuracy is no help.

> Yet Windows is a half-baked
> time-sharing OS, while VxWorks is one of the leading real-time OSes. Why is
> it so? Because real-time applications do not need clock much. They are
> real-time because their sources of time are *real*. These are hardware
> interrupts, while timer interrupts are of a much lesser interest.

Both are important. Many control systems are driven by timers that 
trigger periodic tasks. In my experience (admittedly limited), it is 
rare for sensors to generate periodic input streams on their own, they 
must usually be sampled by periodic reads. You are right, however, that 
some systems, such as automobile engine control units, have external 
triggers, such as shaft-rotation interrupts.

Anyway, your point has to do with the "time" that *activates* tasks, not 
with the measurement of task-specific execution times. So this is 
irrelevant.

> I don't
> care how much processor time my control loop takes so long it manages to
> write the outputs when the actuators expect them.

You should care, if the processor must also have time for some other 
tasks of lower priority, which are preempted by the control-loop task.

> Measuring the CPU time
> would bring me nothing. It is useless before the run, because it is not a
> proof that the deadlines will be met.

In some cases (simple code or extensive tests, deterministic processor) 
CPU-time measurements can be used to prove that deadlines are met. But 
of course static analysis of the worst-case execution time is better.

> It is useless at run-time because there
> are easier and safer ways to detect faults.

If you mean "deadline missed" or "task overrun" faults, you are right 
that there are other detection methods. Still, Ada.Execution_Time may 
help to *anticipate*, and thus mitigate, such faults.

For example, assume that the computation in a control algorithm consists 
of two consecutive stages where the first stage processes the inputs 
into a state model and the second stage computes the control outputs 
from the state model. Using Ada.Execution_Time or 
Ada.Execution_Time.Timers the program could detect an unexpectedly high 
CPU usage in the first stage, and fall back to a simpler, faster 
algorithm in the second stage, to ensure that some control outputs are 
computed before the deadline.
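
In code the check itself is small; a sketch, with the subprogram names, the
state object and the 2 ms budget invented for illustration:

   declare
      use Ada.Execution_Time;  -- CPU_Time, Clock, "-"
      use Ada.Real_Time;       -- Time_Span, Milliseconds, ">"
      Start  : constant CPU_Time  := Clock;
      Budget : constant Time_Span := Milliseconds (2);
   begin
      Update_State_Model (Inputs);              -- stage 1
      if Clock - Start > Budget then
         Compute_Outputs_Simple (State_Model);  -- cheap fallback
      else
         Compute_Outputs_Full (State_Model);    -- normal algorithm
      end if;
   end;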

But you are again ignoring other run-time uses of execution-time 
measurements, such as advanced scheduling algorithms.

>> Is your point that Ada.Execution_Time was accepted only because the ARG 
>> decided that the word "time" in RM D.14 should not be understood to mean 
>> real time? I doubt that very much... Surely such an unusual meaning of 
>> "time" should have been explained in the RM.
> 
> It is explained by its name: "execution time." Execution means not real,
> unreal time (:-)).

Nonsense. I spend some part of my time asleep, some time awake. Both 
"sleeping time" and "awake time" are (pieces of) real time. A task 
spends some of its time being executed, some of its time not being 
executed (waiting or ready).

>>> If that was the intent, then I really do not understand why CPU_Time was
>>> introduced in addition to Ada.Real_Time.Time / Time_Span.
>> Because (as I understand it) different processors/OSes have different 
>> mechanisms for measuring execution times and real times, and the 
>> mechanism most convenient for CPU_Time may use a different numerical 
>> type (range, scale, and precision) than the mechanisms and types used 
>> for Ada.Real_Time.Time, Time_Span, and Duration.
> 
> I see no single reason why this could happen. Obviously, if talking about a
> real-time system as you insist, the only possible choice for CPU_Time is
> Time_Span, because to be consistent with the interpretation you propose it
> must be derived from Ada.Real_Time clock.

I have only said that the sum of the CPU_Times of all tasks executing on 
the same processor should be close to the real elapsed time, since the 
CPU's time is shared between the tasks. This does not mean that 
Ada.Execution_Time.CPU_Time and Ada.Real_Time.Time must have a common 
time source, only that both time sources must approximate physical, real 
time.
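
Expressed as a check, the claim is simply this (a sketch; Task_List and the
recorded per-task starting values Start_CPU are assumed bookkeeping, not
language-defined):

   declare
      use Ada.Execution_Time, Ada.Real_Time;
      Total : Time_Span := Time_Span_Zero;
   begin
      for I in Task_List'Range loop
         Total := Total + (Clock (Task_List (I)) - Start_CPU (I));
      end loop;
      --  On one processor, Total should be close to the real time
      --  elapsed since the Start_CPU values were recorded.
   end;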

> My point is that RM intentionally leaves it up to the implementation to
> choose a CPU_Time source independent on Ada.Real_Time.Clock. This why
> different range and precision come into consideration.

I agree. But in both cases the intent is to approximate physical, real 
time, not some "simulation time" where one "simulation second" could be 
one year of real time.

> Under some conditions (e.g. no task switching) an execution time interval
> could be numerically equal to a real time interval.

Yes! Therefore, under these conditions, CPU_Time (when converted to a 
Time_Span or Duration) does have a physical meaning. So we agree. At last.

And under the task dispatching model in RM D.2.1, these conditions can 
be extended to task switching scenarios with the result that the sum of 
the CPU_Times of the tasks (for one processor) will be numerically close 
to the elapsed real time interval.

> But in my view the execution time is not even a simulation time of some
> ideal (real) clock. It is a simulation time of some lax recurrent process,
> e.g. scheduling activity, of which frequency is not even considered
> constant.

You may well have this view, but I don't see that your view has anything 
to do with Ada.Execution_Time as defined in the Ada RM.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 9%]

* An Example for Ada.Execution_Time
@ 2010-12-27 18:26 12% anon
  2010-12-28  2:31 14% ` BrianG
  0 siblings, 1 reply; 170+ results
From: anon @ 2010-12-27 18:26 UTC (permalink / raw)


You ask for an example.

Here is an example for packages (Tested using MaRTE):
  Ada.Execution_Time         -- Defines Time type and main operations as
                             -- well as the primary Clock for this type

  Ada.Execution_Time.Timers  -- Links to Timers 


Altering this example one could use a number of timers to be set 
using different times. Or testing a number of algorithms using 
the timer.

Also, I do have a Real_Time non-Task version that this example was 
based on.


-------------------------------
-- Work.adb -- Main program

with Ada.Integer_Text_IO ;
with Ada.Text_IO ;
with Work_Algorithm ;         -- Contains worker algorithm
with Work_Execution_Time ;    -- Contains execution Timing routines

procedure Work is

  use Ada.Integer_Text_IO ;
  use Ada.Text_IO ;
  use Work_Execution_Time ; 

  Task_0 : Work_Task ;

begin -- Work
  Initialize ( Work_Algorithm'Access ) ;
  Task_0.Start ( False ) ;

  -- Prints results of Test

  New_Line ;
  Put ( "Event occured " ) ;
  Put ( Item => Counter, Width => 3 ) ;
  Put_Line ( " Times." ) ;
  New_Line ;

end Work ;

-------------------------------
-- Work_Algorithm.ads

procedure Work_Algorithm ;

-------------------------------
-- Work_Algorithm.adb

with Ada.Integer_Text_IO ;
with Ada.Text_IO ;

procedure Work_Algorithm is
    use Ada.Integer_Text_IO ;
    use Ada.Text_IO ;
  begin
    for Index in 0 .. 15 loop
      Put ( "Paused =>" ) ;
      Put ( Index ) ;
      New_Line ;
      delay 1.0 ;
    end loop ;
  end Work_Algorithm ;


-------------------------------
-- Work_Execution_Time.ads

with Ada.Execution_Time ;
with Ada.Real_Time ;

package Work_Execution_Time is

  type Algorithm_Type is access procedure ;

  task type Work_Task is
    entry Start ( Active : in Boolean ) ;
  end Work_Task ;


  Counter    : Natural ;

  procedure Initialize ( A : in Algorithm_Type ) ;

private 

  Algorithm  : Algorithm_Type ;

  Start_Time : Ada.Execution_Time.CPU_Time ;
  At_Time    : Ada.Execution_Time.CPU_Time ;
  In_Time    : Ada.Real_Time.Time_Span ;

end Work_Execution_Time ;

-------------------------------
-- Work_Execution_Time.adb
with Ada.Integer_Text_IO ;
with Ada.Text_IO ;
with Ada.Execution_Time ;
with Ada.Execution_Time.Timers ;
with Ada.Real_Time ;
with Ada.Task_Identification ;

package body Work_Execution_Time is


    use Ada.Execution_Time ;
    use Ada.Real_Time ;
    use Timers ;

    package D_IO is new Ada.Text_IO.Fixed_IO ( Duration ) ;
    package S_IO is new Ada.Text_IO.Integer_IO
                                      ( Ada.Real_Time.Seconds_Count ) ;

  protected Handlers is
    -- Handler: Single Event
      procedure Handler_1 ( TM : in out Timer ) ;

    -- Handler: Multple Event
      procedure Handler_2 ( TM : in out Timer ) ;
  end Handlers ;



  task body Work_Task is

      use Ada.Task_Identification ;

    ID         : aliased Task_Id := Current_Task ;
    TM         : Timers.Timer ( ID'Access ) ;

    Cancelled  : Boolean := False ;

  begin
    Counter := 0 ;
    loop
      select
        Accept Start ( Active : in Boolean ) do
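          -- Active = True  : arm the timer at an absolute CPU time
          -- Active = False : arm the timer a CPU-time interval from now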
          if Active then
            Start_Time := Ada.Execution_Time.Clock ;
            At_Time := Start_Time + Milliseconds ( 5 ) ;
            Set_Handler ( TM, AT_Time, Handlers.Handler_2'Access ) ;
          else
            Start_Time := Ada.Execution_Time.Clock ;
            In_Time := Milliseconds ( 10 ) ;
            Set_Handler ( TM, In_Time, Handlers.Handler_2'Access ) ;
          end if ;

          Algorithm.all ;  -- Execute Test algorithm

          Timers.Cancel_Handler ( TM, Cancelled ) ;
        end Start ;
      or
        terminate ;
      end select ;
    end loop ;
  end Work_Task ;


  --
  -- Timer Event Routines
  --
  protected body Handlers is 

    -- Handler: Single Event

    procedure Handler_1 ( TM : in out Timer ) is

        Value     : Time_Span ;
        Cancelled : Boolean ;

      begin
        Value := Time_Remaining ( TM ) ;
        Ada.Text_IO.Put ( "Timing Event Occured at " ) ;
        D_IO.Put ( To_Duration ( Value ) ) ;
        Ada.Text_IO.New_Line ;
        Counter := Counter + 1 ;
        Cancel_Handler ( TM, Cancelled ) ;
      end Handler_1 ;

    -- Handler: Multple Event

    procedure Handler_2 ( TM : in out Timer ) is

        Value   : Time_Span ;

      begin
        Value := Time_Remaining ( TM ) ;
        Ada.Text_IO.Put ( "Timing Event Occured at " ) ;
        D_IO.Put ( To_Duration ( Value ) ) ;
        Ada.Text_IO.New_Line ;
        Counter := Counter + 1 ;

        Start_Time := Ada.Execution_Time.Clock ;
        In_Time := Ada.Real_Time.Milliseconds ( 10 ) ;
        Set_Handler ( TM, In_Time, Handlers.Handler_2'Access ) ;
      end Handler_2 ;

  end Handlers ;


  -- Initialize: Set Algorithm and Counter

  procedure Initialize ( A : in Algorithm_Type ) is
    begin
      Algorithm := A ;
      Counter := 0 ;
    end Initialize ;

end Work_Execution_Time ;




^ permalink raw reply	[relevance 12%]

* Re: Ada.Execution_Time
  2010-12-26 10:25 11%                               ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-27 12:44 10%                                 ` Ada.Execution_Time Niklas Holsti
@ 2010-12-27 17:24  5%                                 ` Robert A Duff
  2010-12-27 22:02  5%                                   ` Ada.Execution_Time Randy Brukardt
  1 sibling, 1 reply; 170+ results
From: Robert A Duff @ 2010-12-27 17:24 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> Nope, one of the killer arguments ARG people deploy to reject most
> reasonable AI's is: too difficult to implement on some obscure platform for
> which Ada never existed and never will. (:-))

The ARG and others have been guilty of that sort of argument in the
past, although I think "most reasonable AI's" is an exaggeration.
I think that line of reasoning is wrong -- I think it's just fine to have
things like Ada.Directories, even though many embedded systems don't
have directories.  It means that there's some standardization across
systems that DO have directories.  Those that don't can either
provide some minimal/useless implementation, or else appeal
to RM-1.1.3(6).

I think today's ARG is less inclined to follow that wrong line
of reasoning.

(I don't much like the design of Ada.Directories, and I think
you (Dmitry) agree with me about that, but it's beside the
point.)

- Bob



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-27 12:44 10%                                 ` Ada.Execution_Time Niklas Holsti
@ 2010-12-27 15:28  6%                                   ` Dmitry A. Kazakov
  2010-12-27 20:11  9%                                     ` Ada.Execution_Time Niklas Holsti
  0 siblings, 1 reply; 170+ results
From: Dmitry A. Kazakov @ 2010-12-27 15:28 UTC (permalink / raw)


On Mon, 27 Dec 2010 14:44:53 +0200, Niklas Holsti wrote:

> Dmitry A. Kazakov wrote:
>> On Sat, 25 Dec 2010 13:31:27 +0200, Niklas Holsti wrote:
>> 
>>> Dmitry A. Kazakov wrote:
>>>> On Sat, 18 Dec 2010 23:20:20 +0200, Niklas Holsti wrote:
>>> ...
>>>>> The concept and measurement of "the execution time of a task" does
>>>>> become problematic in complex processors that have hardware 
>>>>> multi-threading and can run several tasks in more or less parallel
>>>>> fashion, without completely isolating the tasks from each other.
>>>> No, the concept is just fine,
>>> Fine for what? For schedulability analysis, fine for on-line scheduling,
>>> ... ?
>> 
>> Applicability of a concept is not what makes it wrong or right.
> 
> I think it does. This concept, "the execution time of a task", stands 
> for a number. A number is useless if it has no application in a 
> calculation.

Technically, CPU_Time is not a number in any sense. It is not a numeric Ada
type and it is not a model of a mathematical number (not even additive).

Anyway, the execution time can be used in calculations independently on
whether and how you could apply the results of such calculations.

You can add the height of your house to the distance to the Moon, the
interpretation is up to you.

>>> However, this is a side issue, since we are (or at least I am)
>>> discussing what the RM intends with Ada.Execution_Time, which must be
>>> read in the context of D.2.1, which assumes that there is a clearly
>>> defined set of "processors" and each processor executes exactly one
>>> task at a time.
>> 
>> Why?
> 
> Because RM D.14 uses terms defined in RM D.2.1, for example "executing".

These terms stay valid for non-real time systems.

>> Scheduling does not need Ada.Execution_Time,
> 
> The standard schedulers defined in the RM do not need 
> Ada.Execution_Time but, as remarked earlier in this thread, one of the 
> purposes of Ada.Execution_Time is to support the implementation of 
> non-standard scheduling algorithms that may make on-line scheduling 
> decisions that depend on the actual execution times of tasks. For 
> example, scheduling based on "slack time".

Even if somebody wanted to undertake such an adventure, he could also use
Elementary_Functions in the scheduler. That would not make the nature of
Elementary_Functions any different.

>> it is the Ada.Execution_Time implementation, which needs some
>> input from the scheduler.
> 
> We must be precise about our terms, here. Using terms as defined in 
> http://en.wikipedia.org/wiki/Task_scheduling, Ada.Execution_Time needs 
> input from the task *dispatcher* -- the part of the kernel that suspends 
> and resumes tasks.

Let's call it dispatcher. Time sharing needs some measure of consumed CPU
time. My points stand:

1. Time sharing has little to do with real-time systems.

2. It would be extremely ill-advised to use Ada.Execution_Time instead of
direct measures for an implementation of time sharing algorithms.

>> How do you explain that CPU_Time, a thing about time sharing, appears in
>> the real-time systems annex D?
> 
> You don't think that execution time is important for real-time systems?

Real-time systems work with real time; real-time intervals (durations) are
of much less interest. Execution time is of no interest, because a real-time
system does not care to balance the CPU load.

> In my view, CPU_Time is a measure of "real time", so its place in annex 
> D is natural. In your view, CPU_Time is not "real time", which should 
> make *you* surprised that it appears in annex D.

It does not surprise me, because there is no "time-sharing systems" annex,
or better one titled "it is not what you think" or "if you think you need
this, you are wrong." There are some other Ada features we could move there.
(:-))

>>> You have
>>> not given any arguments, based on the RM text, to support your position.
>> 
>> I am not a language lawyer to interpret the RM texts. My argument was to
>> common sense.
> 
> To me it seems that your argument is based on the difficulty (in your 
> opinion) of implementing Ada.Execution_Time in some OSes such as MS 
> Windows, if the RM word "time" is taken to mean real time.
> 
> It is common sense that some OSes are not designed (or not well 
> designed) for real-time systems. Even a good real-time OS may not 
> support all real-time methodologies, for example scheduling algorithms 
> that depend on actual execution times.

I disagree with almost everything here. To start with, comparing real-time
clock services of Windows and of VxWorks, we would notice that Windows is
far superior in both accuracy and precision. Yet Windows is a half-baked
time-sharing OS, while VxWorks is one of the leading real-time OSes. Why is
it so? Because real-time applications do not need clock much. They are
real-time because their sources of time are *real*. These are hardware
interrupts, while timer interrupts are of a much lesser interest. I don't
care how much processor time my control loop takes so long it manages to
write the outputs when the actuators expect them. Measuring the CPU time
would bring me nothing. It is useless before the run, because it is not a
proof that the deadlines will be met. It is useless at run-time because there
are easier and safer ways to detect faults.

>>>>>> I am not a language lawyer, but I bet that an implementation of 
>>>>>> Ada.Execution_Time.Split that ignores any CPU frequency changes
>>>>>> when summing up processor ticks consumed by the task would be
>>>>>> legal.
>>>>> Whether or not such an implementation is formally legal, that would
>>>>> require very perverse interpretations of the text in RM D.14.
>>>> RM D.14 defines CPU_Tick constant, of which physical equivalent (if
>>>> we tried to enforce your interpretation) is not constant for many
>>>> CPU/OS combinations.
>>> The behaviour of some CPU/OS is irrelevant to the intent of the RM.
>> 
>> Nope, one of the killer arguments ARG people deploy to reject most
>> reasonable AI's is: too difficult to implement on some obscure platform for
>> which Ada never existed and never will. (:-))
> 
> Apparently such arguments, if any were made in this case, were not valid 
> enough to prevent the addition of Ada.Execution_Time to the RM.

That is because the ARG didn't intend to reject it! Somebody wanted it no
matter what (like interfaces, limited results and asserts then, and, I am
afraid, if-operators now). The rest was minimizing the damage...

> Is your point that Ada.Execution_Time was accepted only because the ARG 
> decided that the word "time" in RM D.14 should not be understood to mean 
> real time? I doubt that very much... Surely such an unusual meaning of 
> "time" should have been explained in the RM.

It is explained by its name: "execution time." Execution means not real,
unreal time (:-)).

>>> As already said, an Ada implementation on such CPU/OS could
>>> use its own mechanisms for execution-time measurements.
>> 
>> Could or must? Does GNAT this?
> 
> I don't much care, it is irrelevant for understanding what the RM means. 
> Perhaps the next version of MS Windows will have better support for 
> measuring real task execution times; would that change the intent of the 
> RM? Of course not.

You suggested that Ada implementations would/could attempt to be consistent
with your interpretation of CPU_Time. But it seems that at least one of the
leading Ada vendors does not care. Is it laziness on their side, or maybe
you just expected too much?

>>> It seems evident to me that the text in D.14 must be interpreted using
>>> the concepts in D.2.1, "The Task Dispatching Model", which clearly
>>> specifies real-time points when a processor starts to execute a task and
>>> stops executing a task. To me, and I believe to most readers of the RM,
>>> the execution time of a task is the sum of these time slices, thus a
>>> physical, real time.
>> 
>> If that was the intent, then I really do not understand why CPU_Time was
>> introduced in addition to Ada.Real_Time.Time / Time_Span.
> 
> Because (as I understand it) different processors/OSes have different 
> mechanisms for measuring execution times and real times, and the 
> mechanism most convenient for CPU_Time may use a different numerical 
> type (range, scale, and precision) than the mechanisms and types used 
> for Ada.Real_Time.Time, Time_Span, and Duration.

I see no single reason why this could happen. Obviously, if talking about a
real-time system as you insist, the only possible choice for CPU_Time is
Time_Span, because to be consistent with the interpretation you propose it
must be derived from Ada.Real_Time clock.

My point is that RM intentionally leaves it up to the implementation to
choose a CPU_Time source independent on Ada.Real_Time.Clock. This why
different range and precision come into consideration.

>>> And you still have not defined what you mean by "simulation time", and
>>> how you come there from the RM text.
>> 
>> Simulation time is a model of real time a physical process might have under
>> certain conditions.
> 
> Thank you. But I still do not see how your definition could be applied 
> in this context, so we are back at the start of the post... :-)

Under some conditions (e.g. no task switching) an execution time interval
could be numerically equal to a real time interval.

But in my view the execution time is not even a simulation time of some
ideal (real) clock. It is a simulation time of some lax recurrent process,
e.g. scheduling activity, of which frequency is not even considered
constant. It can be any garbage, and it likely is.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 6%]

* Re: Ada.Execution_Time
  2010-12-26 10:25 11%                               ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-27 12:44 10%                                 ` Niklas Holsti
  2010-12-27 15:28  6%                                   ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-27 17:24  5%                                 ` Ada.Execution_Time Robert A Duff
  1 sibling, 1 reply; 170+ results
From: Niklas Holsti @ 2010-12-27 12:44 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Sat, 25 Dec 2010 13:31:27 +0200, Niklas Holsti wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> On Sat, 18 Dec 2010 23:20:20 +0200, Niklas Holsti wrote:
>> ...
>>>> The concept and measurement of "the execution time of a task" does
>>>> become problematic in complex processors that have hardware 
>>>> multi-threading and can run several tasks in more or less parallel
>>>> fashion, without completely isolating the tasks from each other.
>>> No, the concept is just fine,
>> Fine for what? For schedulability analysis, fine for on-line scheduling,
>> ... ?
> 
> Applicability of a concept is not what makes it wrong or right.

I think it does. This concept, "the execution time of a task", stands 
for a number. A number is useless if it has no application in a 
calculation. I don't know of any other meaning of "wrongness" for a 
numerical concept.

>> However, this is a side issue, since we are (or at least I am)
>> discussing what the RM intends with Ada.Execution_Time, which must be
>> read in the context of D.2.1, which assumes that there is a clearly
>> defined set of "processors" and each processor executes exactly one
>> task at a time.
> 
> Why?

Because RM D.14 uses terms defined in RM D.2.1, for example "executing".

> Scheduling does not need Ada.Execution_Time,

The standard schedulers defined in the RM do not need 
Ada.Execution_Time but, as remarked earlier in this thread, one of the 
purposes of Ada.Execution_Time is to support the implementation of 
non-standard scheduling algorithms that may make on-line scheduling 
decisions that depend on the actual execution times of tasks. For 
example, scheduling based on "slack time".
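
For instance, the slack of a task is exactly the kind of quantity this
package makes computable at run time (a sketch; T is the task of interest,
and Deadline, Budget and Released_CPU are assumed scheduler bookkeeping,
not language-defined):

   declare
      use Ada.Execution_Time, Ada.Real_Time;
      Used  : constant Time_Span := Clock (T) - Released_CPU (T);
      Slack : constant Time_Span :=
        (Deadline (T) - Ada.Real_Time.Clock) - (Budget (T) - Used);
   begin
      null;  -- dispatch the ready task with the smallest non-negative Slack
   end;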

> it is the Ada.Execution_Time implementation, which needs some
> input from the scheduler.

We must be precise about our terms, here. Using terms as defined in 
http://en.wikipedia.org/wiki/Task_scheduling, Ada.Execution_Time needs 
input from the task *dispatcher* -- the part of the kernel that suspends 
and resumes tasks. It does not need input from the *scheduler*, which is 
the kernel part that selects the next task to be executed from among the 
ready tasks.

The term "dispatching" is defined differently in RM D.2.1 to mean the 
same as "scheduling" in the Wikipedia entry.

> How do you explain that CPU_Time, a thing about time sharing, appears in
> the real-time systems annex D?

You don't think that execution time is important for real-time systems?

In my view, CPU_Time is a measure of "real time", so its place in annex 
D is natural. In your view, CPU_Time is not "real time", which should 
make *you* surprised that it appears in annex D.

>> You have
>> not given any arguments, based on the RM text, to support your position.
> 
> I am not a language lawyer to interpret the RM texts. My argument was to
> common sense.

To me it seems that your argument is based on the difficulty (in your 
opinion) of implementing Ada.Execution_Time in some OSes such as MS 
Windows, if the RM word "time" is taken to mean real time.

It is common sense that some OSes are not designed (or not well 
designed) for real-time systems. Even a good real-time OS may not 
support all real-time methodologies, for example scheduling algorithms 
that depend on actual execution times.

>>>>> I am not a language lawyer, but I bet that an implementation of 
>>>>> Ada.Execution_Time.Split that ignores any CPU frequency changes
>>>>> when summing up processor ticks consumed by the task would be
>>>>> legal.
>>>> Whether or not such an implementation is formally legal, that would
>>>> require very perverse interpretations of the text in RM D.14.
>>> RM D.14 defines CPU_Tick constant, of which physical equivalent (if
>>> we tried to enforce your interpretation) is not constant for many
>>> CPU/OS combinations.
>> The behaviour of some CPU/OS is irrelevant to the intent of the RM.
> 
> Nope, one of the killer arguments ARG people deploy to reject most
> reasonable AI's is: too difficult to implement on some obscure platform for
> which Ada never existed and never will. (:-))

Apparently such arguments, if any were made in this case, were not valid 
enough to prevent the addition of Ada.Execution_Time to the RM.

Is your point that Ada.Execution_Time was accepted only because the ARG 
decided that the word "time" in RM D.14 should not be understood to mean 
real time? I doubt that very much... Surely such an unusual meaning of 
"time" should have been explained in the RM.

>> As already said, an Ada implementation on such CPU/OS could
>> use its own mechanisms for execution-time measurements.
> 
> Could or must? Does GNAT this?

I don't much care, it is irrelevant for understanding what the RM means. 
Perhaps the next version of MS Windows will have better support for 
measuring real task execution times; would that change the intent of the 
RM? Of course not.

>> It seems evident to me that the text in D.14 must be interpreted using
>> the concepts in D.2.1, "The Task Dispatching Model", which clearly
>> specifies real-time points when a processor starts to execute a task and
>> stops executing a task. To me, and I believe to most readers of the RM,
>> the execution time of a task is the sum of these time slices, thus a
>> physical, real time.
> 
> If that was the intent, then I really do not understand why CPU_Time was
> introduced in addition to Ada.Real_Time.Time / Time_Span.

Because (as I understand it) different processors/OSes have different 
mechanisms for measuring execution times and real times, and the 
mechanism most convenient for CPU_Time may use a different numerical 
type (range, scale, and precision) than the mechanisms and types used 
for Ada.Real_Time.Time, Time_Span, and Duration. This is evident in the 
different minimum ranges and precisions defined in the RM for these 
types. Randy remarked on this earlier in this thread.

>> And you still have not defined what you mean by "simulation time", and
>> how you come there from the RM text.
> 
> Simulation time is a model of real time a physical process might have under
> certain conditions.

Thank you. But I still do not see how your definition could be applied 
in this context, so we are back at the start of the post... :-)

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 10%]

* Task execution time test
@ 2010-12-26 10:25  7% Dmitry A. Kazakov
  0 siblings, 0 replies; 170+ results
From: Dmitry A. Kazakov @ 2010-12-26 10:25 UTC (permalink / raw)


Here is a small test for task execution time.

Five worker tasks are used to generate background CPU load. When the
measured task enters a 0.1 ms delay (on a system where delay is non-busy) it
should give up the CPU before its scheduling quantum expires.

Under at least some Windows systems the test might fail because Windows'
per-thread performance counters advance in CPU quanta of 1 ms or 10 ms,
depending on settings.

Under VxWorks the test may fail because the real-time clock there is driven by
timer interrupts, so it is impossible to have a non-busy wait of 0.1 ms.

(I cannot say anything about Linux, because I have never used it for RT
applications; maybe other people could comment.)

----------------------------------------------------
with Ada.Execution_Time;  use Ada.Execution_Time;
with Ada.Real_Time;       use Ada.Real_Time;
with Ada.Text_IO;         use Ada.Text_IO;

with Ada.Numerics.Elementary_Functions;

procedure Execution_Time_Test is
   task type Measured;
   task body Measured is
      Count    : Seconds_Count;
      Fraction : Time_Span;
   begin
      for I in 1..1_000 loop
         delay 0.000_1;
      end loop;
      Split (Ada.Execution_Time.Clock, Count, Fraction);
      Put_Line
      (  "Seconds" & Seconds_Count'Image (Count) &
         " Fraction" & Duration'Image (To_Duration (Fraction))
      );
   end Measured;

   task type Worker; -- Used to generate CPU load
   task body Worker is
      use Ada.Numerics.Elementary_Functions;
      X : Float;
   begin
      for I in Positive'Range loop
         X := sin (Float (I));
      end loop;
   end Worker;
   
begin
   delay 0.1; -- This might be needed for some buggy versions of GNAT
   declare
      Workers : array (1..5) of Worker;
      Test    : Measured;
   begin
      null;
   end;
end Execution_Time_Test;
-----------------------------------------------------
On my Windows XP SP3 the test yields 0-0.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 7%]

* Re: Ada.Execution_Time
  2010-12-25 11:31  7%                             ` Ada.Execution_Time Niklas Holsti
@ 2010-12-26 10:25 11%                               ` Dmitry A. Kazakov
  2010-12-27 12:44 10%                                 ` Ada.Execution_Time Niklas Holsti
  2010-12-27 17:24  5%                                 ` Ada.Execution_Time Robert A Duff
  2010-12-27 22:11  4%                               ` Ada.Execution_Time Randy Brukardt
  1 sibling, 2 replies; 170+ results
From: Dmitry A. Kazakov @ 2010-12-26 10:25 UTC (permalink / raw)


On Sat, 25 Dec 2010 13:31:27 +0200, Niklas Holsti wrote:

> Dmitry A. Kazakov wrote:
>> On Sat, 18 Dec 2010 23:20:20 +0200, Niklas Holsti wrote:
> ...
>>> The concept and measurement of "the execution time of a task" does
>>> become problematic in complex processors that have hardware 
>>> multi-threading and can run several tasks in more or less parallel
>>> fashion, without completely isolating the tasks from each other.
>> 
>> No, the concept is just fine,
> 
> Fine for what? For schedulability analysis, fine for on-line scheduling,
> ... ?

Applicability of a concept is not what makes it wrong or right.

> However, this is a side issue, since we are (or at least I am)
> discussing what the RM intends with Ada.Execution_Time, which must be
> read in the context of D.2.1, which assumes that there is a clearly
> defined set of "processors" and each processor executes exactly one
> task at a time.

Why? Scheduling does not need Ada.Execution_Time, it is the
Ada.Execution_Time implementation, which needs some input from the
scheduler.

How do you explain that CPU_Time, a thing about time sharing, appears in
the real-time systems annex D?

> You have
> not given any arguments, based on the RM text, to support your position.

I am not a language lawyer to interpret the RM texts. My argument was to
common sense.

>>>> I am not a language lawyer, but I bet that an implementation of 
>>>> Ada.Execution_Time.Split that ignores any CPU frequency changes
>>>> when summing up processor ticks consumed by the task would be
>>>> legal.
>>> Whether or not such an implementation is formally legal, that would
>>> require very perverse interpretations of the text in RM D.14.
>> 
>> RM D.14 defines CPU_Tick constant, of which physical equivalent (if
>> we tried to enforce your interpretation) is not constant for many
>> CPU/OS combinations.
> 
> The behaviour of some CPU/OS is irrelevant to the intent of the RM.

Nope, one of the killer arguments ARG people deploy to reject most
reasonable AI's is: too difficult to implement on some obscure platform for
which Ada never existed and never will. (:-))

> As 
> already said, an Ada implementation on such CPU/OS could use its own 
> mechanisms for execution-time measurements.

Could or must? Does GNAT do this?

>> On such a platform the implementation would be as perverse as RM D.14
>> is. But the perversion is only because of the interpretation.
> 
> Bah. I think that when RM D.14 says "time", it really means time. You
> think it means something else, perhaps a CPU cycle count. I think the 
> burden of proof is on you.
> 
> It seems evident to me that the text in D.14 must be interpreted using
> the concepts in D.2.1, "The Task Dispatching Model", which clearly
> specifies real-time points when a processor starts to execute a task and
> stops executing a task. To me, and I believe to most readers of the RM,
> the execution time of a task is the sum of these time slices, thus a
> physical, real time.

If that was the intent, then I really do not understand why CPU_Time was
introduced in addition to Ada.Real_Time.Time / Time_Span.

> And you still have not defined what you mean by "simulation time", and
> how you come there from the RM text.

Simulation time is a model of real time a physical process might have under
certain conditions.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 11%]

* Re: Ada.Execution_Time
  2010-12-19  9:57  3%                           ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-25 11:31  7%                             ` Niklas Holsti
  2010-12-26 10:25 11%                               ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-27 22:11  4%                               ` Ada.Execution_Time Randy Brukardt
  0 siblings, 2 replies; 170+ results
From: Niklas Holsti @ 2010-12-25 11:31 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Sat, 18 Dec 2010 23:20:20 +0200, Niklas Holsti wrote:
...
>> The concept and measurement of "the execution time of a task" does
>> become problematic in complex processors that have hardware 
>> multi-threading and can run several tasks in more or less parallel
>> fashion, without completely isolating the tasks from each other.
> 
> No, the concept is just fine,

Fine for what? For schedulability analysis, fine for on-line scheduling,
... ?

However, this is a side issue, since we are (or at least I am)
discussing what the RM intends with Ada.Execution_Time, which must be
read in the context of D.2.1, which assumes that there is a clearly
defined set of "processors" and each processor executes exactly one
task at a time.

> it is the interpretation of the measured values in the way you
> wanted, which causes problems. That is the core of my point. The
> measure is not real time.

I still disagree, if we are talking about the intent of the RM. You have
not given any arguments, based on the RM text, to support your position.

>>> I am not a language lawyer, but I bet that an implementation of 
>>> Ada.Execution_Time.Split that ignores any CPU frequency changes
>>> when summing up processor ticks consumed by the task would be
>>> legal.
>> Whether or not such an implementation is formally legal, that would
>> require very perverse interpretations of the text in RM D.14.
> 
> RM D.14 defines CPU_Tick constant, of which physical equivalent (if
> we tried to enforce your interpretation) is not constant for many
> CPU/OS combinations.

The behaviour of some CPU/OS is irrelevant to the intent of the RM. As 
already said, an Ada implementation on such CPU/OS could use its own 
mechanisms for execution-time measurements.

> On such a platform the implementation would be as perverse as RM D.14
> is. But the perversion is only because of the interpretation.

Bah. I think that when RM D.14 says "time", it really means time. You
think it means something else, perhaps a CPU cycle count. I think the 
burden of proof is on you.

It seems evident to me that the text in D.14 must be interpreted using
the concepts in D.2.1, "The Task Dispatching Model", which clearly
specifies real-time points when a processor starts to execute a task and
stops executing a task. To me, and I believe to most readers of the RM,
the execution time of a task is the sum of these time slices, thus a
physical, real time.

>> (In fact, some variable-frequency scheduling methods may prefer to
>> measure task "execution times" in units of processor ticks, not in
>> real-time units like seconds.)
> 
> Exactly. As a simulation time RM D.14 is perfectly OK.

I put my comment, above, in parentheses because it is a side issue.

And you still have not defined what you mean by "simulation time", and
how you come there from the RM text.

> It can be used for CPU load estimations,

How, if it has "no physical meaning", as you claim?

> while the "real time" implementation could not.

Why not? That is the way ordinary schedulability analysis works. And
again, it is irrelevant that some CPU/OS do not provide a good way to
measure the physical execution time of tasks.

> BTW, even for measurements people usually have in mind (e.g.
> comparing resources consumed by tasks), simulation time would be more
> fair.

What is "simulation time"?

> The problem is with I/O, because I/O is a "real" thing.

I assume by "I/O" you mean time consumed in waiting for some external
event. I agree that such I/O is a (solvable) problem in schedulability 
analysis, but it is not relevant for understanding the intent of RM D.14.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 7%]

* Re: Ada.Execution_Time
  2010-12-22 14:30 14%                 ` Ada.Execution_Time anon
@ 2010-12-22 20:09  4%                   ` BrianG
  0 siblings, 0 replies; 170+ results
From: BrianG @ 2010-12-22 20:09 UTC (permalink / raw)


anon@att.net wrote:
> In <iemhm8$4up$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>> anon@att.net wrote:
>> I have no problem with what Execution_Time does (as evidenced by the 
>> fact that I asked a question about its use) - it measures exactly what I 
>> want, an estimate of the CPU time used by a task.  My problem is with 
>> the way it is defined - it provides, by itself, no "value" of that that 
>> a using program can make use of to print or calculate (i.e. you also 
>> need Real_Time for that, which is silly - and I disagree that that is 
>> necessarily required in any case - my program didn't need it otherwise).
>> --BrianG
> There have been many third-party versions of this package over the years, 
> and most of them included a Linux version. The Windows version is specific 
> to GNAT and, just like GNAT itself, it is not complete either.
My comments have been about the RM-defined package and its lack of 
usability (if that wasn't already clear).  What may or may not be 
provided by implementation-specific packages is irrelevant.

> 
> For Execution_Time there are three packages. GNAT has 
>   Ada.Execution_Time                    for both Linux-MaRTE and Windows
>   Ada.Execution_Time.Timers             only for Linux-MaRTE
>   Ada.Execution_Time.Group_Budgets      Unimplemented Yet
> 
When you say "GNAT" here you should probably specify what version you 
mean.  I haven't looked in this area, but I was under the impression 
that GNAT has implemented all of Ada'05 for quite a while now (in Pro; 
maybe it has not yet all reached GCC or GPL releases?).

> 
> 
> In using the Linux-MaRTE version ( 2 packages ) 
>   Ada.Execution_Time 
>   Ada.Execution_Time.Timers 
See below.
> 
> Of course in this example you could use ( Note: they are fully implemented )
>   Ada.Real_Time
>   Ada.Real_Time.Timing_Events
So, instead of using functionality already provided (with a slightly 
kludgy issue), I should implement it entirely myself?  I still don't see 
how this could get me CPU_Time (even a "simulation" value).

> 
> 
> An algorithm comparison program might look like:
> 
> with Ada.Execution_Time ;
> with Ada.Execution_Time.Timers ;
Given the below program, please add some of the missing details to show 
how this can be useful without also "with Ada.Real_Time".  Neither 
Execution_Time nor Execution_Time.Timers provides any value that can be 
used directly.

> 
> procedure Compare_Algorithm is
> 
>     task type Algorithm_1 is
>       use Ada.Execution_Time.Timers ;
>     begin
>       begin
>         Set_Handler ( ... ) ; -- start timer
>         Algorithm_1 ;
>         Time_Remaining ( ... ) ; -- sample timer
>       end ;
>       Cancel_Handler ( ... ) ; release timer
>     end ;    
> 
>     task type Algorithm_2 is
>       use Ada.Execution_Time.Timers ;
>     begin
>       begin
>         Set_Handler ( ... ) ; -- start timer
>         Algorithm_2 ;
>         Time_Remaining ( ... ) ; -- sample timer
>       end ;
>       Cancel_Handler ( ... ) ; release timer
>     end ;    
> 
>   use Ada.Execution_Time ;
> 
> begin
>   -- Start tasks
>   -- wait until all tasks finish
>   -- compare times using Execution_Time
Provide details here (line above and line below)- without relying on 
CPU_Time or Time_Span, which are both private.
>   -- Print summary of comparison
> end ;
> 
> 
--BrianG



^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-20  3:14  4%               ` Ada.Execution_Time BrianG
@ 2010-12-22 14:30 14%                 ` anon
  2010-12-22 20:09  4%                   ` Ada.Execution_Time BrianG
  0 siblings, 1 reply; 170+ results
From: anon @ 2010-12-22 14:30 UTC (permalink / raw)


In <iemhm8$4up$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>anon@att.net wrote:
>> In <iejsu9$lct$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>>> anon@att.net wrote:
>>>> In <ie91co$cko$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>>>>> Georg Bauhaus wrote:
>>>>>> On 12/12/10 10:59 PM, BrianG wrote:
>>> ....
>> 
>> Kind of funny, you cutting down "top", which can be found on most Linux 
>> boxes and which allows anyone with the CPU options turned on to see both the 
>> Real_Time and CPU_Time.
>Please provide any comment I made for/against "top".  Since my original 
>question was based on Windows and I've stated that my (current) Linux 
>doesn't support this package, that seems unlikely (even if I didn't 
>trust my memory).
>> 
>....
>> Note: Not sure of the name for the Windows version of Linux's top.
>There is (that I know of) no real equiv to "top" - as in a command-line 
>program.  The equiv to GNOME's "System Monitor" (for example - a gui 
>program) would be Task Manager (and I made no comment about that either).
>> 
>....
>> 
>> How to you think these or any other web hosting company could measure the 
>> complete system resources without measuring the CPU Execution Time on a 
>> given user or application! In Ada, the routines can now use the package 
>> called "Ada.CPU_Execution_Time".
>> 
>I have no problem with what Execution_Time does (as evidenced by the 
>fact that I asked a question about its use) - it measures exactly what I 
>want, an estimate of the CPU time used by a task.  My problem is with 
>the way it is defined - it provides, by itself, no "value" of that that 
>a using program can make use of to print or calculate (i.e. you also 
>need Real_Time for that, which is silly - and I disagree that that is 
>necessarily required in any case - my program didn't need it otherwise).
>> 
>--BrianG
There have been many third-party versions of this package over the years, 
and most of them included a Linux version. The Windows version is specific 
to GNAT and, just like GNAT itself, it is not complete either.

For Execution_Time there are three packages. GNAT has 
  Ada.Execution_Time                    for both Linux-MaRTE and Windows
  Ada.Execution_Time.Timers             only for Linux-MaRTE
  Ada.Execution_Time.Group_Budgets      Unimplemented Yet



In using the Linux-MaRTE version ( 2 packages ) 
  Ada.Execution_Time 
  Ada.Execution_Time.Timers 

Of course in this example you could use ( Note: they are fully implemented )
  Ada.Real_Time
  Ada.Real_Time.Timing_Events


An algorithm comparison program might look like:

with Ada.Execution_Time ;
with Ada.Execution_Time.Timers ;

procedure Compare_Algorithm is

    task type Algorithm_1 is
      use Ada.Execution_Time.Timers ;
    begin
      begin
        Set_Handler ( ... ) ; -- start timer
        Algorithm_1 ;
        Time_Remaining ( ... ) ; -- sample timer
      end ;
      Cancel_Handler ( ... ) ; release timer
    end ;    

    task type Algorithm_2 is
      use Ada.Execution_Time.Timers ;
    begin
      begin
        Set_Handler ( ... ) ; -- start timer
        Algorithm_2 ;
        Time_Remaining ( ... ) ; -- sample timer
      end ;
      Cancel_Handler ( ... ) ; release timer
    end ;    

  use Ada.Execution_Time ;

begin
  -- Start tasks
  -- wait until all tasks finish
  -- compare times using Execution_Time
  -- Print summary of comparison
end ;
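
A bare-bones variant of the comparison step that compiles as it stands, using
only the Clock and "<" operations exported by Ada.Execution_Time (the work
loops below are just placeholders for the real algorithms):

with Ada.Execution_Time;  use Ada.Execution_Time;
with Ada.Text_IO;         use Ada.Text_IO;

procedure Compare_Algorithms is

   Time_1, Time_2 : CPU_Time;   -- each written once by the tasks below

begin
   declare
      task Algorithm_1_Runner;
      task Algorithm_2_Runner;

      task body Algorithm_1_Runner is
         X : Long_Float := 0.0;
      begin
         for I in 1 .. 10_000_000 loop   -- placeholder work load
            X := X + Long_Float (I);
         end loop;
         Time_1 := Clock;                -- CPU time consumed by this task
      end Algorithm_1_Runner;

      task body Algorithm_2_Runner is
         X : Long_Float := 1.0;
      begin
         for I in 1 .. 10_000_000 loop   -- placeholder work load
            X := X * 1.000_000_1;
         end loop;
         Time_2 := Clock;
      end Algorithm_2_Runner;
   begin
      null;
   end;   -- both tasks have terminated here

   --  CPU_Time is private, but "<" comes from Ada.Execution_Time itself,
   --  so this comparison needs nothing from Ada.Real_Time.  Printing the
   --  actual amounts would need Split plus Ada.Real_Time.To_Duration.
   if Time_1 < Time_2 then
      Put_Line ("Algorithm_1 used less CPU time");
   else
      Put_Line ("Algorithm_2 used less CPU time");
   end if;
end Compare_Algorithms;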





^ permalink raw reply	[relevance 14%]

* Re: Ada.Execution_Time
  2010-12-21 17:19  5%                         ` Ada.Execution_Time Robert A Duff
@ 2010-12-21 17:43  5%                           ` Dmitry A. Kazakov
  0 siblings, 0 replies; 170+ results
From: Dmitry A. Kazakov @ 2010-12-21 17:43 UTC (permalink / raw)


On Tue, 21 Dec 2010 12:19:47 -0500, Robert A Duff wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> 
>> On Mon, 20 Dec 2010 22:23:12 -0500, Robert A Duff wrote:
>>
>>> Anyway, the strong syntactic separation between declarations
>>> and statements makes no sense in a language where declarations
>>> are executable code.  I think it's just wrong-headed thinking
>>> inherited from Pascal.
>>
>> It still does have sense in a language with lexical scopes. You need some
>> syntactically recognizable point where all things of the same scope become
>> usable.
> 
> Yes, you need such a "syntactically recognizable point".
> But that doesn't require any separation of declarations
> from statements.  If My_Assert is just a regular
> user-defined procedure, then I see nothing wrong with:
> 
>     procedure P is
>         X : T1 := ...;
>         My_Assert(Is_Good(X)); -- Not Ada!
>         Y : T2 := ...;
>         My_Assert(not Is_Evil(Y));
>         ...
> 
> without sprinkling "declares" and "begins" all over.

There are too many issues which are wrong here.

1. The exceptions from My_Assert cannot be handled in P.

2. Checking an instance of T1 is not bound to the type. It is done upon
some arbitrary usage of T1 in an arbitrary procedure P.

3. Assuming that checking is really bound to the procedure P, then I want
to be sure that checking is not premature, that P has elaborated all stuff
belonging there, *before* I am starting to check things.

4. Ada *cannot* handle errors upon initialization and finalization. We have
to fix that first before even considering checks in such contexts.

etc.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-21  8:04  5%                       ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-21 17:19  5%                         ` Robert A Duff
  2010-12-21 17:43  5%                           ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 170+ results
From: Robert A Duff @ 2010-12-21 17:19 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> On Mon, 20 Dec 2010 22:23:12 -0500, Robert A Duff wrote:
>
>> Anyway, the strong syntactic separation between declarations
>> and statements makes no sense in a language where declarations
>> are executable code.  I think it's just wrong-headed thinking
>> inherited from Pascal.
>
> It still does have sense in a language with lexical scopes. You need some
> syntactically recognizable point where all things of the same scope become
> usable.

Yes, you need such a "syntactically recognizable point".
But that doesn't require any separation of declarations
from statements.  If My_Assert is just a regular
user-defined procedure, then I see nothing wrong with:

    procedure P is
        X : T1 := ...;
        My_Assert(Is_Good(X)); -- Not Ada!
        Y : T2 := ...;
        My_Assert(not Is_Evil(Y));
        ...

without sprinkling "declares" and "begins" all over.

- Bob



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-21  3:23  5%                     ` Ada.Execution_Time Robert A Duff
@ 2010-12-21  8:04  5%                       ` Dmitry A. Kazakov
  2010-12-21 17:19  5%                         ` Ada.Execution_Time Robert A Duff
  0 siblings, 1 reply; 170+ results
From: Dmitry A. Kazakov @ 2010-12-21  8:04 UTC (permalink / raw)


On Mon, 20 Dec 2010 22:23:12 -0500, Robert A Duff wrote:

> Anyway, the strong syntactic separation between declarations
> and statements makes no sense in a language where declarations
> are executable code.  I think it's just wrong-headed thinking
> inherited from Pascal.

It still does have sense in a language with lexical scopes. You need some
syntactically recognizable point where all things of the same scope become
usable.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-20 21:28  5%                   ` Ada.Execution_Time Keith Thompson
@ 2010-12-21  3:23  5%                     ` Robert A Duff
  2010-12-21  8:04  5%                       ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 170+ results
From: Robert A Duff @ 2010-12-21  3:23 UTC (permalink / raw)


Keith Thompson <kst-u@mib.org> writes:

> So add an assert operator that always yields True:
>
>     declare
>         Dummy: constant Boolean := assert some_expression; -- assert operator
>     begin
>         assert some_other_expression; -- assert statement
>     end;

If asserts had their own syntax, we could allow them wherever
we like -- "assert blah;" could be both a statement and
a declarative_item.

I really don't like having to declare dummy booleans.
It gets even more annoying when you have several.
What are you going to call them?  Dummy_1, Dummy_2,
and Dummy_3?  And then after some maintenance,
Dummy_1, Dummy_2_and_a_half, and Dummy_3?
Seems like an awful lot of noise -- assertions should
be easy!  (Of course, you might remember me complaining
that the declare/begin/end is just noise, too.)

Anyway, the strong syntactic separation between declarations
and statements makes no sense in a language where declarations
are executable code.  I think it's just wrong-headed thinking
inherited from Pascal.

> Though the use of "Assert" as an identifier in existing code is
> certainly an issue.

I've certainly written procedures called Assert that do
what you might expect.  This was quite common before
pragma Assert existed.

- Bob



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-21  0:37  5%                 ` Ada.Execution_Time Randy Brukardt
@ 2010-12-21  1:20  5%                   ` Jeffrey Carter
  0 siblings, 0 replies; 170+ results
From: Jeffrey Carter @ 2010-12-21  1:20 UTC (permalink / raw)


On 12/20/2010 05:37 PM, Randy Brukardt wrote:
>
> The rules for Duration were chosen so that it would not require more than a
> 32-bit type. Not all embedded processors are set up to handle 64-bit numbers
> efficiently...

Not much of a difference on an 8-bit processor, surely?

-- 
Jeff Carter
"Crucifixion's a doddle."
Monty Python's Life of Brian
82



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-19 11:00  6%               ` Ada.Execution_Time Niklas Holsti
@ 2010-12-21  0:37  5%                 ` Randy Brukardt
  2010-12-21  1:20  5%                   ` Ada.Execution_Time Jeffrey Carter
  0 siblings, 1 reply; 170+ results
From: Randy Brukardt @ 2010-12-21  0:37 UTC (permalink / raw)


"Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message 
news:8n66ucFnavU1@mid.individual.net...
...
> A problem that you don't mention is that the use of Duration may cause 
> loss of precision. Duration'Small may be as large as 20 milliseconds (RM 
> 9.6(27)), although at most 100 microseconds are advised (RM 9.6(30)), 
> while the Time_Span resolution must be 20 microseconds or better (RM 
> D.8(30)). Perhaps Annex D should require better Duration resolution?

The rules for Duration were chosen so that it would not require more than a 
32-bit type. Not all embedded processors are set up to handle 64-bit numbers 
efficiently...

(As time moves on, this is less of a consideration than it used to be, but 
it still seems like a possible problem. Time_Span itself doesn't suffer from 
the problem since as a private type it can be represented as a record with 
several components. Duration is a visible fixed point type.) 





^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-19  4:01  8%             ` Ada.Execution_Time Vinzent Hoefler
  2010-12-19 11:00  6%               ` Ada.Execution_Time Niklas Holsti
  2010-12-19 12:27  5%               ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-21  0:32 13%               ` Randy Brukardt
  2 siblings, 0 replies; 170+ results
From: Randy Brukardt @ 2010-12-21  0:32 UTC (permalink / raw)


"Vinzent Hoefler" <0439279208b62c95f1880bf0f8776eeb@t-domaingrabbing.de> 
wrote in message news:op.vnxz4kmplzeukk@jellix.jlfencey.com...
...
> Yes, but so what? The intention of Ada.Execution_Time wasn't to provide 
> the
> user with means to instrument the software and to Text_IO some mostly
> meaningless values (any decent profiler can do that for you), but rather a
> way to implement user-defined schedulers based on actual CPU usage. You 
> may
> want to take a look at the child packages Timers and Group_Budgets to see 
> the
> intended usage.

Probably, but I wanted to use Ada.Execution_Time to *write* a "decent 
profiler" for our Ada programs. Since Janus/Ada doesn't use the system 
tasking facilities (mostly for historical reasons), existing profilers don't 
do a good job if the program includes any tasks. I've used various hacks 
based on Windows facilities to do part of the job, but Ada.Execution_Time 
would provide better information (especially for tasks).

Similarly, anyone that wanted portable profiling information probably would 
prefer to use Ada.Execution_Time rather than to invent something new (and 
necessarily not as portable). I found this sort of usage at least as 
compelling as the real-time usages.
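
The sampling core of such a profiler is small; a rough sketch (the task name,
the workload, the sampling period and the output format are all placeholders,
and I have not compiled this):

with Ada.Execution_Time;      use Ada.Execution_Time;
with Ada.Real_Time;           use Ada.Real_Time;
with Ada.Task_Identification;
with Ada.Text_IO;             use Ada.Text_IO;

procedure Mini_Profiler is

   task Worker;                         -- stands in for the profiled task(s)

   task body Worker is
      X : Long_Float := 0.0;
   begin
      for I in 1 .. 50_000_000 loop     -- placeholder workload
         X := X + 1.0;
      end loop;
   end Worker;

   Period : constant Time_Span := Milliseconds (100);
   Next   : Time := Ada.Real_Time.Clock;
   SC     : Seconds_Count;
   TS     : Time_Span;

begin
   --  Sample Worker's CPU time every 100 ms until it terminates.
   while not Worker'Terminated loop
      Next := Next + Period;
      delay until Next;
      Split (Ada.Execution_Time.Clock (Worker'Identity), SC, TS);
      Put_Line ("Worker CPU time:" & Seconds_Count'Image (SC) & " s +"
                & Duration'Image (To_Duration (TS)) & " s");
   end loop;
end Mini_Profiler;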

                                       Randy.





^ permalink raw reply	[relevance 13%]

* Re: Ada.Execution_Time
  2010-12-17  0:44  5%                 ` Ada.Execution_Time Randy Brukardt
  2010-12-17 17:54  5%                   ` Ada.Execution_Time Warren
@ 2010-12-20 21:28  5%                   ` Keith Thompson
  2010-12-21  3:23  5%                     ` Ada.Execution_Time Robert A Duff
  1 sibling, 1 reply; 170+ results
From: Keith Thompson @ 2010-12-20 21:28 UTC (permalink / raw)


"Randy Brukardt" <randy@rrsoftware.com> writes:
> "Adam Beneschan" <adam@irvine.com> wrote in message 
> news:dfcf048b-bb6e-4993-b62a-9147bad3a6ff@j32g2000prh.googlegroups.com...
> On Dec 15, 2:52 pm, Keith Thompson <ks...@mib.org> wrote:
> ...
>>> So, um, why is Assert a pragma rather than a statement?
>>>
>>> if Debug_Mode then
>>> assert Is_Good(X);
>>> end if;
> ...
>>Somebody on the ARG might have a more authoritative answer.  My
>>reading of AI95-286 is that a number of Ada compilers had already
>>implemented the Assert pragma and there was a lot of code using it.
>>Of course, those compilers couldn't have added "assert" as a statement
>>on their own, but adding an implementation-defined pragma is OK.
>
> That's one reason. The other is that you can't put a statement into a 
> declarative part (well, you can, but you need to use a helper generic and a 
> helper procedure, along with an instantiation, which is insane -- although 
> it is not that unusual to see that done in a program). A lot of asserts fit 
> most naturally into the declarative part (precondition ones, for instance, 
> although those will be better defined separately in Ada 2012).

So add an assert operator that always yields True:

    declare
        Dummy: constant Boolean := assert some_expression; -- assert operator
    begin
        assert some_other_expression; -- assert statement
    end;

Though the use of "Assert" as an identifier in existing code is
certainly an issue.
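
For what it's worth, most of this can be had today with an ordinary
Boolean-valued function; a sketch (all names here are invented, and the dummy
object in the declarative part is still needed):

with Ada.Text_IO; use Ada.Text_IO;

procedure Declarative_Assert_Demo is

   Assertion_Failed : exception;

   --  "Assert operator": a plain function, so it can be called both in a
   --  declarative part and in a statement.
   function My_Assert (Condition : Boolean;
                       Message   : String := "") return Boolean is
   begin
      if not Condition then
         raise Assertion_Failed with Message;
      end if;
      return True;
   end My_Assert;

   function Is_Good (X : Integer) return Boolean is
   begin
      return X > 0;
   end Is_Good;

   X : constant Integer := 42;

   --  Among the declarations, at the cost of a dummy object:
   Dummy : constant Boolean := My_Assert (Is_Good (X), "X must be positive");

begin
   --  Among the statements, by giving the result somewhere to go:
   if My_Assert (X < 100, "X out of range") then
      Put_Line ("both assertions passed");
   end if;
end Declarative_Assert_Demo;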

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Nokia
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-19 22:54  3%             ` Ada.Execution_Time anon
@ 2010-12-20  3:14  4%               ` BrianG
  2010-12-22 14:30 14%                 ` Ada.Execution_Time anon
  0 siblings, 1 reply; 170+ results
From: BrianG @ 2010-12-20  3:14 UTC (permalink / raw)


anon@att.net wrote:
> In <iejsu9$lct$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>> anon@att.net wrote:
>>> In <ie91co$cko$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>>>> Georg Bauhaus wrote:
>>>>> On 12/12/10 10:59 PM, BrianG wrote:
>> ....
> 
> Kind of funny, you cutting down "top", which can be found on most Linux 
> boxes and which allows anyone with the CPU options turned on to see both the 
> Real_Time and CPU_Time.
Please provide any comment I made for/against "top".  Since my original 
question was based on Windows and I've stated that my (current) Linux 
doesn't support this package, that seems unlikely (even if I didn't 
trust my memory).
> 
...
> Note: Not sure of the name for the Windows version of Linux's top.
There is (that I know of) no real equiv to "top" - as in a command-line 
program.  The equiv to GNOME's "System Monitor" (for example - a gui 
program) would be Task Manager (and I made no comment about that either).
> 
...
> 
> How do you think these or any other web hosting company could measure the 
> complete system resources without measuring the CPU Execution Time on a 
> given user or application! In Ada, the routines can now use the package 
> called "Ada.CPU_Execution_Time".
> 
I have no problem with what Execution_Time does (as evidenced by the 
fact that I asked a question about its use) - it measures exactly what I 
want, an estimate of the CPU time used by a task.  My problem is with 
the way it is defined - it provides, by itself, no "value" of that that 
a using program can make use of to print or calculate (i.e. you also 
need Real_Time for that, which is silly - and I disagree that that is 
necessarily required in any case - my program didn't need it otherwise).
> 
--BrianG



^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-19  3:07  5%           ` Ada.Execution_Time BrianG
  2010-12-19  4:01  8%             ` Ada.Execution_Time Vinzent Hoefler
@ 2010-12-19 22:54  3%             ` anon
  2010-12-20  3:14  4%               ` Ada.Execution_Time BrianG
  1 sibling, 1 reply; 170+ results
From: anon @ 2010-12-19 22:54 UTC (permalink / raw)


In <iejsu9$lct$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>anon@att.net wrote:
>> In <ie91co$cko$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>>> Georg Bauhaus wrote:
>>>> On 12/12/10 10:59 PM, BrianG wrote:
>....
>[Lots of meaningless comments deleted.]
>> For the average Ada programmer, its another Ada package that most will 
>> never use because they will just use Ada.Real_Time.  
>[etc.]
>>  In some cases the Ada.Execution_Time
>> package can replace the Ada.Real_Time with only altering the with/use 
>> statements.
>If you mean that they both define a Clock and a Split, maybe.  If you mean
>any program that actually does anything, that's not possible.  That was my
>original comment:  Execution_Time does not provide any types/operations
>useful, without also 'with'ing Real_Time.
>> 
>[etc.]
>(Don't know why I bother w/ this msg.)


Kind of funny, you cutting down "top", which can be found on most Linux 
boxes and which allows anyone with the CPU options turned on to see both the 
Real_Time and CPU_Time.

An example: just use your favorite music/video player and play a 2 
min song.  Real_Time will show the 2 min of the song, while CPU_Time 
will show only 1 to 3 seconds. 

And I know one professor, now stationed in Poland, who enjoyed 
giving his students assignments to decrease the amount of CPU execution 
time that an algorithm used, which normally meant a student had to 
rewrite the algorithm. His students just loved him for that!


Note: Not sure of the name for the Windows version of Linux's top.


As for "Pragmas" that may effect the "CPU Execution Time. 
Three may pragmas that have been disabled in GNAT and they are:

    pragma Optimize     :  Gnat uses gcc command line -O(0,1,2,3.4).
                           Which does effect compiler language 
                           translation.

    pragma System_Name  :  Gnat uses default or gcc command line to 
    pragma Storage_Unit :  determine target. Target processor and its 
                           normal data size can effect CPU time.



And for web servers:

Check any major web host's "Terms" or "Agreement".  In 
the document you will see a paragraph stating that a web site using in 
excess of 10 to 25% of the resources will be shut down.

Examples: 

    From Host: http://www.micfo.com/agreement

    6 SERVER RESOURCE USAGE


        The Client agrees to utilize "micfo's" Server Resources as set 
        out in clauses 6.2.1 and 6.2.2:

    6.2.1
        Shared Hosting; 7% of the CPU in any given twenty two (22) 
        Business Days.
    6.2.2
        Reseller Hosting; 10% of the CPU in any given twenty two 
        (22) Business Days.


        Also: from Host: http://www.hostgator.com/tos/tos.php

        User may not: 

        1) Use 25% or more of system resources for longer than 90 
           seconds. There are numerous activities that could cause 
           such problems; these include: CGI scripts, FTP, PHP, 
           HTTP, etc.

       12) Only use https protocol when necessary; encrypting and 
           decrypting communications is noticeably more CPU-intensive 
           than unencrypted communications.


How do you think these or any other web hosting company could measure the 
complete system resources without measuring the CPU Execution Time on a 
given user or application! In Ada, the routines can now use the package 
called "Ada.CPU_Execution_Time".





^ permalink raw reply	[relevance 3%]

* Re: Ada.Execution_Time
  2010-12-19  4:01  8%             ` Ada.Execution_Time Vinzent Hoefler
  2010-12-19 11:00  6%               ` Ada.Execution_Time Niklas Holsti
@ 2010-12-19 12:27  5%               ` Dmitry A. Kazakov
  2010-12-21  0:32 13%               ` Ada.Execution_Time Randy Brukardt
  2 siblings, 0 replies; 170+ results
From: Dmitry A. Kazakov @ 2010-12-19 12:27 UTC (permalink / raw)


On Sun, 19 Dec 2010 05:01:22 +0100, Vinzent Hoefler wrote:

> And well, if you're ranting about CPU_Time, Real_Time.Time_Span is not much
> better. It's a pain in the ass to convert an Ada.Real_Time.Time_Span to another
> type to interface with OS-specific time types (like time_t) if you're opting
> for speed, portability and accuracy.

Well, speaking from my experience, the OS-specific time types are hardly
usable because of the OS services working with these types. The problem is
not conversion, but the implementations, which *do* use these [broken]
services. They might have a catastrophic accuracy. For example, under
VxWorks we had to replace the AdaCore implementation of Ada.Real_Time with
our own implementation. This stuff is inherently non-portable.

But the interface of Ada.Real_Time is portable, so I see absolutely no
point in replacing it with something OS-specific. It won't get either
portability or accuracy.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-19  4:01  8%             ` Ada.Execution_Time Vinzent Hoefler
@ 2010-12-19 11:00  6%               ` Niklas Holsti
  2010-12-21  0:37  5%                 ` Ada.Execution_Time Randy Brukardt
  2010-12-19 12:27  5%               ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-21  0:32 13%               ` Ada.Execution_Time Randy Brukardt
  2 siblings, 1 reply; 170+ results
From: Niklas Holsti @ 2010-12-19 11:00 UTC (permalink / raw)


Vinzent Hoefler wrote:
> BrianG wrote:
> 
>> If you mean that they both define a Clock and a Split, maybe.  If you 
>> meanany program that actually does anything, that's not possible.
>> That was my original comment:  Execution_Time does not provide any
>>types/operations useful, without also 'with'ing Real_Time.
> 
> Yes, but so what? The intention of Ada.Execution_Time wasn't to provide the
> user with means to instrument the software and to Text_IO some mostly
> meaningless values (any decent profiler can do that for you), but rather a
> way to implement user-defined schedulers based on actual CPU usage.

That is also my understanding of the intention. Moreover, since task 
scheduling for real-time systems unavoidably deals both with execution 
times and with real times, I think it is natural that both 
Ada.Execution_Time and Ada.Real_Time are required.

> And well, if you're ranting about CPU_Time, Real_Time.Time_Span is not much
> better. It's a pain in the ass to convert an Ada.Real_Time.Time_Span to another
> type to interface with OS-specific time types (like time_t) if you're 
> opting for speed, portability and accuracy.

If your target type is OS-specific, it seems harsh to require full 
portability of the conversion.

> BTW, has anyone any better ideas to convert a Time_Span into a record 
> containing seconds and nanoseconds than this:

I may not have better ideas, but I do have some comments on your code.

>    function To_Interval (TS : in Ada.Real_Time.Time_Span)
>                          return ASAAC_Types.TimeInterval is

The following are constants independent of the parameters:

>       Nanos_Per_Sec : constant                         := 1_000_000_000.0;
>       One_Second    : constant Ada.Real_Time.Time_Span :=
>                         Ada.Real_Time.Milliseconds (1000);

(Why not ... := Ada.Real_Time.Seconds (1) ?)

>       Max_Interval  : constant Ada.Real_Time.Time_Span :=
>                         Integer (ASAAC_Types.Second'Last) * One_Second;

... so I would move the above declarations into the surrounding package, 
at least for One_Second and Max_Interval. Of course a compiler might do 
that optimization in the code, too. (By the way, Max_Interval is a bit 
less than the largest value of TimeInterval, since the above expression 
has no NSec part.)

>       Seconds       : ASAAC_Types.Second;
>       Nano_Seconds  : ASAAC_Types.Nanosec;
>    begin
>       declare
>          Sub_Seconds : Ada.Real_Time.Time_Span;
>       begin

The following tests for ranges seem unavoidable in any conversion 
between types defined by different sources. I don't see how 
Ada.Real_Time can be blamed for this.  Of course I cannot judge if the 
result (saturation at 'Last or 'First) is right for your application. As 
you say, there are potential overflow problems, already in the 
computation of Max_Interval above.

>          if TS >= Max_Interval then
>             Seconds      := ASAAC_Types.Second'Last;
>             Nano_Seconds := ASAAC_Types.Nanosec'Last;

An alternative approach to the over-range condition TS >= Max_Interval 
is to make the definition of the application-specific type 
ASAAC_Types.Second depend on the actual range of Ada.Real_Time.Time_Span 
so that over-range becomes impossible. Unfortunately I don't see how 
this could be done portably by static expressions in the declaration of 
ASAAC_Types.Second, so it would have to be a subtype declaration with an 
upper bound of To_Duration(Time_Span_Last)-1.0. This raises 
Constraint_Error at elaboration if the base type is too narrow.
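
Concretely, the declaration might look like this sketch (the package name and
the base type are invented; with a 32-bit base type the conversion below is
where the Constraint_Error would come from on implementations with a wide
Time_Span range):

with Ada.Real_Time;

package ASAAC_Types_Adjusted is

   type Second_Base is range 0 .. 2 ** 48;   -- assumed wide enough

   subtype Second is Second_Base range
     0 .. Second_Base
            (Ada.Real_Time.To_Duration (Ada.Real_Time.Time_Span_Last) - 1.0);

end ASAAC_Types_Adjusted;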

>          elsif TS < Ada.Real_Time.Time_Span_Zero then
>             Seconds      := ASAAC_Types.Second'First;
>             Nano_Seconds := ASAAC_Types.Nanosec'First;

The above under-range test seems to be forced by the fact that 
ASAAC_Types.TimeInterval is unable to represent negative time intervals, 
  while Ada.Real_Time.Time_Span can do that. This problem is hardly a 
shortcoming in Ada.Real_Time.

>          else
>             Seconds      := ASAAC_Types.Second (TS / One_Second);
>             Sub_Seconds  := TS - (Integer (Seconds) * One_Second);
>             Nano_Seconds :=
>               ASAAC_Types.Nanosec
>                 (Nanos_Per_Sec * Ada.Real_Time.To_Duration (Sub_Seconds));

An alternative method converts the whole TS to Duration and then 
extracts the seconds and nanoseconds:

    TS_Dur : Duration;

    TS_Dur := To_Duration (TS);
    Seconds := ASAAC_Types.Second (TS_Dur - 0.5);
    Nano_Seconds := ASAAC_Types.Nanosec (
       Nanos_Per_Sec * (TS_Dur - Duration (Seconds)));

This, too, risks overflow in the multiplication, since the guaranteed 
range of Duration only extends to 86_400. Moreover, using Duration may 
lose precision (see below).

>          end if;
>       end;
> 
>       return
>         ASAAC_Types.TimeInterval'(Sec  => Seconds,
>                                   NSec => Nano_Seconds);
>    end To_Interval;
> 
> The solution I came up with here generally works, but suffers some 
> potential overflow problems

I think they are unavoidable unless you take care to make the range of 
the application-defined types (ASAAC_Types) depend on the range of the 
implementations of the standard types and also do the multiplication in 
some type with sufficient range, that you define.

> and doesn't look very efficient to me (although that's a minor
> problem given the context it's usually used in).

Apart from the definition of the constants (which can be moved out of 
the function), and the range checks (which depend on the application 
types in ASAAC_Types), the real conversion consists of a division, a 
subtraction, two multiplications and one call of To_Duration. This does 
not seem excessive to me, considering the nature of that target type. 
The alternative method that starts by converting all of TS to Duration 
avoids the division.

Still, this example suggests that Ada.Real_Time perhaps should provide a 
Split operation that divides a Time_Span into an integer number of 
Seconds and a sub-second Duration.
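
Such an operation is easy to provide on top of the existing interface; a
possible sketch (the names are mine, and negative spans are not handled):

with Ada.Real_Time; use Ada.Real_Time;

package Time_Span_Splitting is

   --  Split a non-negative Time_Span into whole seconds and a
   --  sub-second remainder.
   procedure Split
     (TS    : in  Time_Span;
      Whole : out Natural;
      Rest  : out Duration);

end Time_Span_Splitting;

package body Time_Span_Splitting is

   One_Second : constant Time_Span := Seconds (1);

   procedure Split
     (TS    : in  Time_Span;
      Whole : out Natural;
      Rest  : out Duration)
   is
   begin
      Whole := TS / One_Second;    -- "/" (Time_Span, Time_Span) return Integer
      Rest  := To_Duration (TS - Whole * One_Second);
   end Split;

end Time_Span_Splitting;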

A problem that you don't mention is that the use of Duration may cause 
loss of precision. Duration'Small may be as large as 20 milliseconds (RM 
9.6(27)), although at most 100 microseconds are advised (RM 9.6(30)), 
while the Time_Span resolution must be 20 microseconds or better (RM 
D.8(30)). Perhaps Annex D should require better Duration resolution?

Loss of precision could be avoided by doing the multiplication in 
Time_Span instead of in Duration:

    Nano_Seconds := ASAAC_Types.Nanosec (
       To_Duration (Integer (Nanos_Per_Sec) * Sub_Seconds));

but the overflow risk is perhaps larger, since Time_Span_Last is only 
required to be at least 3600 seconds (RM D.8(31)).

I have met with similar tricky problems in conversions between types of 
different origins in other contexts, too. I don't think that these 
problems mean that Ada.Real_Time is defective.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 6%]

* Re: Ada.Execution_Time
  2010-12-18 21:20  7%                         ` Ada.Execution_Time Niklas Holsti
@ 2010-12-19  9:57  3%                           ` Dmitry A. Kazakov
  2010-12-25 11:31  7%                             ` Ada.Execution_Time Niklas Holsti
  0 siblings, 1 reply; 170+ results
From: Dmitry A. Kazakov @ 2010-12-19  9:57 UTC (permalink / raw)


On Sat, 18 Dec 2010 23:20:20 +0200, Niklas Holsti wrote:

> Dmitry A. Kazakov wrote:
>> On Fri, 17 Dec 2010 13:50:26 +0200, Niklas Holsti wrote:
>> 
> Next, we can argue if quarks have physical meaning, or if only hadrons 
> do... :-)

I thought about it! (:-)) But the example of a "half of a car" might be
better.

> The concept and measurement of "the execution time of a task" does 
> become problematic in complex processors that have hardware 
> multi-threading and can run several tasks in more or less parallel 
> fashion, without completely isolating the tasks from each other. 

No, the concept is just fine, it is the interpretation of the measured
values in the way you wanted, which causes problems. That is the core of my
point. The measure is not real time.

>>> By "CPU frequency slowdowns" I assume you mean a system that varies the 
>>> CPU clock frequency, for example to reduce energy consumption when load 
>>> is low. This do not necessarily conflict with Ada.Execution_Time and the 
>>> physical meaning of CPU_Time, although it may make implementation 
>>> harder. One implementation could be to drive the CPU-time counter by a 
>>> fixed clock (a timer clock), not by the CPU clock.
>> 
>> I am not a language lawyer, but I bet that an implementation of
>> Ada.Execution_Time.Split that ignores any CPU frequency changes when
>> summing up processor ticks consumed by the task would be legal.
> 
> Whether or not such an implementation is formally legal, that would 
> require very perverse interpretations of the text in RM D.14.

RM D.14 defines CPU_Tick constant, of which physical equivalent (if we
tried to enforce your interpretation) is not constant for many CPU/OS
combinations. On such a platform the implementation would be as perverse as
RM D.14 is. But the perversion is only because of the interpretation.

> (In fact, some variable-frequency 
> scheduling methods may prefer to measure task "execution times" in units 
> of processor ticks, not in real-time units like seconds.)

Exactly. As a simulation time RM D.14 is perfectly OK. It can be used for
CPU load estimations, while the "real time" implementation could not. BTW,
even for measurements people usually have in mind (e.g. comparing resources
consumed by tasks), simulation time would be more fair. The problem is with
I/O, because I/O is a "real" thing.
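
(To illustrate the load estimation point: a crude per-task CPU load figure
over an observation window needs nothing more than the two clocks; the task,
its workload and the window length below are placeholders, untested.)

with Ada.Execution_Time;      use Ada.Execution_Time;
with Ada.Real_Time;           use Ada.Real_Time;
with Ada.Task_Identification;
with Ada.Text_IO;             use Ada.Text_IO;

procedure CPU_Load_Sample is

   task Worker;                          -- placeholder workload

   task body Worker is
      X : Long_Float := 0.0;
   begin
      for I in 1 .. 2_000 loop
         for J in 1 .. 20_000 loop
            X := X + 1.0;
         end loop;
         delay 0.001;                    -- partly idle, so load stays below 100%
      end loop;
   end Worker;

   Window  : constant Time_Span := Seconds (1);
   T0_Wall : constant Time      := Ada.Real_Time.Clock;
   T0_CPU  : constant CPU_Time  := Ada.Execution_Time.Clock (Worker'Identity);
   Load    : Float;

begin
   delay until T0_Wall + Window;

   Load := Float (To_Duration (Ada.Execution_Time.Clock (Worker'Identity) - T0_CPU))
           / Float (To_Duration (Ada.Real_Time.Clock - T0_Wall));

   Put_Line ("Worker CPU load over the window:"
             & Integer'Image (Integer (100.0 * Load)) & " %");
end CPU_Load_Sample;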

> But the 
> implementation could not implement the function "-" (Left, Right : 
> CPU_Time) return Time_Span to give a meaningful result, with the normal 
> meaning of Time_Span, since the result would be the same Time_Span for a 
> high CPU frequency and for a low one.

The result is not meaningful as real time, but it is as simulation time.

> Moreover, an Ada RTS running on 
> Windows could of course use another clock or timer to measure execution 
> time, if the Windows functionality is unsuitable.

I read one study of Java RTS, unfortunately I lost the link. They faced
this problem. In order to measure the real (statistically unbiased etc) CPU
time, they implemented a Windows driver or service (I don't remember if
they also had some hardware), which monitored, frequently enough, the
thread occupying the processor. Ada RTS could use a similar technique.
However even this were not enough, one should really hook on the OS
scheduler to get fair real CPU time.

BTW, I never checked Linux or VxWorks for that. Does anybody know whether it
is possible to implement RM D.14 in this interpretation under these OSes?

The requirement is that the OS scheduler accumulate the differences
between the RT clock readings taken when the task loses and gains the processor.
From what I know about VxWorks, I doubt it very much.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 3%]

* Re: Ada.Execution_Time
  2010-12-19  3:07  5%           ` Ada.Execution_Time BrianG
@ 2010-12-19  4:01  8%             ` Vinzent Hoefler
  2010-12-19 11:00  6%               ` Ada.Execution_Time Niklas Holsti
                                 ` (2 more replies)
  2010-12-19 22:54  3%             ` Ada.Execution_Time anon
  1 sibling, 3 replies; 170+ results
From: Vinzent Hoefler @ 2010-12-19  4:01 UTC (permalink / raw)


BrianG wrote:

> If you mean that they both define a Clock and a Split, maybe.  If you mean
> any program that actually does anything, that's not possible.  That was my
> original comment:  Execution_Time does not provide any types/operations
> useful, without also 'with'ing Real_Time.

Yes, but so what? The intention of Ada.Execution_Time wasn't to provide the
user with means to instrument the software and to Text_IO some mostly
meaningless values (any decent profiler can do that for you), but rather a
way to implement user-defined schedulers based on actual CPU usage. You may
want to take a look at the child packages Timers and Group_Budgets to see the
intended usage.
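
For instance, a minimal CPU-budget watchdog built on Ada.Execution_Time.Timers
could look like the untested sketch below (all names and the 10 ms budget are
made up, and it of course only works on targets where that child package is
actually implemented):

--  The handler's protected object has to be at library level, otherwise
--  'Access for Timer_Handler would be illegal.

with Ada.Execution_Time.Timers;

package Budget_Watchdogs is

   protected Watchdog is
      pragma Interrupt_Priority;   --  ceiling at least Min_Handler_Ceiling
      procedure Overrun (TM : in out Ada.Execution_Time.Timers.Timer);
      function Fired return Boolean;
   private
      Overrun_Seen : Boolean := False;
   end Watchdog;

end Budget_Watchdogs;

package body Budget_Watchdogs is

   protected body Watchdog is

      procedure Overrun (TM : in out Ada.Execution_Time.Timers.Timer) is
      begin
         Overrun_Seen := True;   --  a real handler might lower the task's priority, log, ...
      end Overrun;

      function Fired return Boolean is
      begin
         return Overrun_Seen;
      end Fired;

   end Watchdog;

end Budget_Watchdogs;

with Ada.Execution_Time.Timers;
with Ada.Real_Time;
with Ada.Task_Identification;
with Ada.Text_IO;
with Budget_Watchdogs;

procedure Budget_Demo is
   use Ada.Execution_Time;
begin
   declare
      task Worker;

      Worker_Id : aliased constant Ada.Task_Identification.Task_Id :=
        Worker'Identity;

      Budget : Timers.Timer (Worker_Id'Access);

      task body Worker is
         X : Long_Float := 0.0;
      begin
         for I in 1 .. 100_000_000 loop   --  deliberately burns CPU time
            X := X + 1.0;
         end loop;
      end Worker;

   begin
      --  Fire the handler once Worker has consumed 10 ms of CPU time.
      Timers.Set_Handler
        (TM      => Budget,
         In_Time => Ada.Real_Time.Milliseconds (10),
         Handler => Budget_Watchdogs.Watchdog.Overrun'Access);
   end;   --  waits here until Worker terminates

   Ada.Text_IO.Put_Line
     ("CPU budget exceeded: "
      & Boolean'Image (Budget_Watchdogs.Watchdog.Fired));
end Budget_Demo;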

And well, if you're ranting about CPU_Time, Real_Time.Time_Span is not much
better. It's a pain in the ass to convert an Ada.Real_Time.Time_Span to another
type to interface with OS-specific time types (like time_t) if you're opting
for speed, portability and accuracy.

BTW, has anyone any better ideas to convert a Time_Span into a record containing
seconds and nanoseconds than this:

    function To_Interval (TS : in Ada.Real_Time.Time_Span)
                          return ASAAC_Types.TimeInterval is
       Nanos_Per_Sec : constant                         := 1_000_000_000.0;
       One_Second    : constant Ada.Real_Time.Time_Span :=
                         Ada.Real_Time.Milliseconds (1000);
       Max_Interval  : constant Ada.Real_Time.Time_Span :=
                         Integer (ASAAC_Types.Second'Last) * One_Second;
       Seconds       : ASAAC_Types.Second;
       Nano_Seconds  : ASAAC_Types.Nanosec;
    begin
       declare
          Sub_Seconds : Ada.Real_Time.Time_Span;
       begin
          if TS >= Max_Interval then
             Seconds      := ASAAC_Types.Second'Last;
             Nano_Seconds := ASAAC_Types.Nanosec'Last;
          elsif TS < Ada.Real_Time.Time_Span_Zero then
             Seconds      := ASAAC_Types.Second'First;
             Nano_Seconds := ASAAC_Types.Nanosec'First;
          else
             Seconds      := ASAAC_Types.Second (TS / One_Second);
             Sub_Seconds  := TS - (Integer (Seconds) * One_Second);
             Nano_Seconds :=
               ASAAC_Types.Nanosec
                 (Nanos_Per_Sec * Ada.Real_Time.To_Duration (Sub_Seconds));
          end if;
       end;

       return
         ASAAC_Types.TimeInterval'(Sec  => Seconds,
                                   NSec => Nano_Seconds);
    end To_Interval;

The solution I came up with here generally works, but suffers some potential
overflow problems and doesn't look very efficient to me (although that's a minor
problem given the context it's usually used in).


Vinzent.

-- 
You know, we're sitting on four million pounds of fuel, one nuclear weapon,
and a thing that has 270,000 moving parts built by the lowest bidder.
Makes you feel good, doesn't it?
   --  Rockhound, "Armageddon"



^ permalink raw reply	[relevance 8%]

* Re: Ada.Execution_Time
  2010-12-17  8:59 10%         ` Ada.Execution_Time anon
@ 2010-12-19  3:07  5%           ` BrianG
  2010-12-19  4:01  8%             ` Ada.Execution_Time Vinzent Hoefler
  2010-12-19 22:54  3%             ` Ada.Execution_Time anon
  0 siblings, 2 replies; 170+ results
From: BrianG @ 2010-12-19  3:07 UTC (permalink / raw)


anon@att.net wrote:
> In <ie91co$cko$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>> Georg Bauhaus wrote:
>>> On 12/12/10 10:59 PM, BrianG wrote:
...
[Lots of meaningless comments deleted.]
> For the average Ada programmer, its another Ada package that most will 
> never use because they will just use Ada.Real_Time.  
[etc.]
>  In some cases the Ada.Execution_Time
> package can replace the Ada.Real_Time with only altering the with/use 
> statements.
If you mean that they both define a Clock and a Split, maybe.  If you mean
any program that actually does anything, that's not possible.  That was my
original comment:  Execution_Time does not provide any types/operations
useful, without also 'with'ing Real_Time.
> 
[etc.]
(Don't know why I bother w/ this msg.)



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-17 13:10 10%                       ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-18 21:20  7%                         ` Niklas Holsti
  2010-12-19  9:57  3%                           ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 170+ results
From: Niklas Holsti @ 2010-12-18 21:20 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Fri, 17 Dec 2010 13:50:26 +0200, Niklas Holsti wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> On Fri, 17 Dec 2010 10:49:26 +0200, Niklas Holsti wrote:
>>>
>>>> Dmitry A. Kazakov wrote:
>>>>> CPU_Time has no physical meaning. 2s might be 2.5s
>>>>> real time or 1 year real time.
>>>> CPU_Time values have physical meaning after being summed over all tasks. 
>>>> The sum should be the real time, as closely as possible.
>>> 1. Not tasks, but threads + kernel services + kernel drivers + CPU
>>> frequency slowdowns => a) wrong; b) out of Ada scope => cannot be mandated
>> I believe we are talking about the intended meaning of 
>> Ada.Execution_Time.CPU_Time, not about how far it can be precisely 
>> "mandated" (standardized).
>>
>> Appendix D is about real-time systems, and I believe it is aimed in 
>> particular at systems built with Ada tasks and the Ada RTS. In such 
>> systems there may or may not be CPU time -- "overhead" -- that is not 
>> included in the CPU_Time of any task. See the last sentence in RM D.14 
>> 11/2: "It is implementation defined which task, if any, is charged the 
>> execution time that is consumed by interrupt handlers and run-time 
>> services on behalf of the system". In most systems there will be some 
>> such non-task overhead, but in a "pure Ada" system it should be small 
>> relative to the total CPU_Time of the tasks.
> 
> Yes, this is what I meant. CPU_Time does not have the meaning:
> 
> "CPU_Time values have physical meaning after being summed over all tasks. 
> The sum should be the real time, as closely as possible."

I said "as closely as possible", and I don't expect to find many systems 
in which they are exactly equal. But I still think that this (ideally 
exact, in practice approximate) relationship reflects the intended 
physical meaning of Ada.Execution_Time.CPU_Time: the elapsed real time 
(Time_Span) is divided into task execution times (CPU_Time) through task 
scheduling.

> Anyway, even if the sum of components has a physical meaning that does not
> imply that the components have it.

If you have only one task, the sum is identical to the term, so their 
physical meanings are the same. Generalize for the case of many tasks.

Next, we can argue if quarks have physical meaning, or if only hadrons 
do... :-)

The concept and measurement of "the execution time of a task" does 
become problematic in complex processors that have hardware 
multi-threading and can run several tasks in more or less parallel 
fashion, without completely isolating the tasks from each other. 
Schedulability analysis in such systems is difficult since the 
"execution time" of a task depends on which other tasks are running at 
the same time.

>> By "CPU frequency slowdowns" I assume you mean a system that varies the 
>> CPU clock frequency, for example to reduce energy consumption when load 
>> is low. This do not necessarily conflict with Ada.Execution_Time and the 
>> physical meaning of CPU_Time, although it may make implementation 
>> harder. One implementation could be to drive the CPU-time counter by a 
>> fixed clock (a timer clock), not by the CPU clock.
> 
> I am not a language lawyer, but I bet that an implementation of
> Ada.Execution_Time.Split that ignores any CPU frequency changes when
> summing up processor ticks consumed by the task would be legal.

Whether or not such an implementation is formally legal, that would 
require very perverse interpretations of the text in RM D.14.  You would 
have to argue that a system with a lowered CPU clock frequency, running 
a single task with no interrupts, is only "executing the task" for a 
small part of each clock cycle, and the rest of each clock cycle is 
spent on some kind of system overhead. I don't think that is what the RM 
authors intended.

You may be right that the RM has no formal requirement that would 
prevent such an implementation. (In fact, some variable-frequency 
scheduling methods may prefer to measure task "execution times" in units 
of processor ticks, not in real-time units like seconds.) But the 
implementation could not implement the function "-" (Left, Right : 
CPU_Time) return Time_Span to give a meaningful result, with the normal 
meaning of Time_Span, since the result would be the same Time_Span for a 
high CPU frequency and for a low one.

>>> 2. Not so anyway under many OSes => again, cannot be mandated
>> Whether or not all OSes support the concepts of Ada.Execution_Time is 
>> irrelevant to a discussion of the intended meaning of CPU_Time.
> 
> ARM usually does not intend what would be impossible to implement.

Not all OSes are designed for real-time systems. As I understand it, the 
ARM sensibly intends Annex D to be implemented in real-time OSes or in 
bare-Ada-RTS systems, not under Windows.

Even under Windows, as I understand earlier posts in this thread, 
problems arise only if task interruptions, suspensions, and preemptions 
are so frequent that the "quant" truncation is a significant part of the 
typical uninterrupted execution time. Moreover, an Ada RTS running on 
Windows could of course use another clock or timer to measure execution 
time, if the Windows functionality is unsuitable.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 7%]

* Re: Ada.Execution_Time
  2010-12-17  0:44  5%                 ` Ada.Execution_Time Randy Brukardt
@ 2010-12-17 17:54  5%                   ` Warren
  2010-12-20 21:28  5%                   ` Ada.Execution_Time Keith Thompson
  1 sibling, 0 replies; 170+ results
From: Warren @ 2010-12-17 17:54 UTC (permalink / raw)


Randy Brukardt expounded in news:ieebpl$rt1$1@munin.nbi.dk:

> "Adam Beneschan" <adam@irvine.com> wrote in message 
> news:dfcf048b-bb6e-4993-b62a-9147bad3a6ff@j32g2000prh.google
> groups.com... On Dec 15, 2:52 pm, Keith Thompson
> <ks...@mib.org> wrote: ...
>>> So, um, why is Assert a pragma rather than a statement?
>>>
>>> if Debug_Mode then
>>> assert Is_Good(X);
>>> end if;
> ...
>>Somebody on the ARG might have a more authoritative answer.
>> My reading of AI95-286 is that a number of Ada compilers
>>had already implemented the Assert pragma and there was a
>>lot of code using it. Of course, those compilers couldn't
>>have added "assert" as a statement on their own, but adding
>>an implementation-defined pragma is OK. 
> 
> That's one reason. The other is that you can't put a
> statement into a declarative part (well, you can, but you
> need to use a helper generic and a helper procedure, along
> with an instantiation, which is insane -- although it is
> not that unusual to see that done in a program). A lot of
> asserts fit most naturally into the declarative part
> (precondition ones, for instance, although those will be
> better defined separately in Ada 2012). 
> 
>                                                   Randy.

I've never even thought to try that. I'll have to remember 
that.

Warren



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-17 11:50 11%                     ` Ada.Execution_Time Niklas Holsti
@ 2010-12-17 13:10 10%                       ` Dmitry A. Kazakov
  2010-12-18 21:20  7%                         ` Ada.Execution_Time Niklas Holsti
  0 siblings, 1 reply; 170+ results
From: Dmitry A. Kazakov @ 2010-12-17 13:10 UTC (permalink / raw)


On Fri, 17 Dec 2010 13:50:26 +0200, Niklas Holsti wrote:

> Dmitry A. Kazakov wrote:
>> On Fri, 17 Dec 2010 10:49:26 +0200, Niklas Holsti wrote:
>> 
>>> Dmitry A. Kazakov wrote:
>>>> CPU_Time has no physical meaning. 2s might be 2.5s
>>>> real time or 1 year real time.
>>> CPU_Time values have physical meaning after being summed over all tasks. 
>>> The sum should be the real time, as closely as possible.
>> 
>> 1. Not tasks, but threads + kernel services + kernel drivers + CPU
>> frequency slowdowns => a) wrong; b) out of Ada scope => cannot be mandated
> 
> I believe we are talking about the intended meaning of 
> Ada.Execution_Time.CPU_Time, not about how far it can be precisely 
> "mandated" (standardized).
> 
> Appendix D is about real-time systems, and I believe it is aimed in 
> particular at systems built with Ada tasks and the Ada RTS. In such 
> systems there may or may not be CPU time -- "overhead" -- that is not 
> included in the CPU_Time of any task. See the last sentence in RM D.14 
> 11/2: "It is implementation defined which task, if any, is charged the 
> execution time that is consumed by interrupt handlers and run-time 
> services on behalf of the system". In most systems there will be some 
> such non-task overhead, but in a "pure Ada" system it should be small 
> relative to the total CPU_Time of the tasks.

Yes, this is what I meant. CPU_Time does not have the meaning:

"CPU_Time values have physical meaning after being summed over all tasks. 
The sum should be the real time, as closely as possible."

Anyway, even if the sum of components has a physical meaning that does not
imply that the components have it.

> By "CPU frequency slowdowns" I assume you mean a system that varies the 
> CPU clock frequency, for example to reduce energy consumption when load 
> is low. This does not necessarily conflict with Ada.Execution_Time and the 
> physical meaning of CPU_Time, although it may make implementation 
> harder. One implementation could be to drive the CPU-time counter by a 
> fixed clock (a timer clock), not by the CPU clock.

I am not a language lawyer, but I bet that an implementation of
Ada.Execution_Time.Split that ignores any CPU frequency changes when
summing up processor ticks consumed by the task would be legal.

>> 2. Not so anyway under many OSes => again, cannot be mandated
> 
> Whether or not all OSes support the concepts of Ada.Execution_Time is 
> irrelevant to a discussion of the intended meaning of CPU_Time.

ARM usually does not intend what would be impossible to implement.

> "Simulation", "projection"... convey no meaning to me.

http://en.wikipedia.org/wiki/Discrete_event_simulation

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 10%]

* Re: Ada.Execution_Time
  2010-12-17  9:32  5%                   ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-17 11:50 11%                     ` Niklas Holsti
  2010-12-17 13:10 10%                       ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 170+ results
From: Niklas Holsti @ 2010-12-17 11:50 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Fri, 17 Dec 2010 10:49:26 +0200, Niklas Holsti wrote:
> 
>> Dmitry A. Kazakov wrote:
>>> CPU_Time has no physical meaning. 2s might be 2.5s
>>> real time or 1 year real time.
>> CPU_Time values have physical meaning after being summed over all tasks. 
>> The sum should be the real time, as closely as possible.
> 
> 1. Not tasks, but threads + kernel services + kernel drivers + CPU
> frequency slowdowns => a) wrong; b) out of Ada scope => cannot be mandated

I believe we are talking about the intended meaning of 
Ada.Execution_Time.CPU_Time, not about how far it can be precisely 
"mandated" (standardized).

Appendix D is about real-time systems, and I believe it is aimed in 
particular at systems built with Ada tasks and the Ada RTS. In such 
systems there may or may not be CPU time -- "overhead" -- that is not 
included in the CPU_Time of any task. See the last sentence in RM D.14 
11/2: "It is implementation defined which task, if any, is charged the 
execution time that is consumed by interrupt handlers and run-time 
services on behalf of the system". In most systems there will be some 
such non-task overhead, but in a "pure Ada" system it should be small 
relative to the total CPU_Time of the tasks.

By "CPU frequency slowdowns" I assume you mean a system that varies the 
CPU clock frequency, for example to reduce energy consumption when load 
is low. This does not necessarily conflict with Ada.Execution_Time and the 
physical meaning of CPU_Time, although it may make implementation 
harder. One implementation could be to drive the CPU-time counter by a 
fixed clock (a timer clock), not by the CPU clock.

> 2. Not so anyway under many OSes => again, cannot be mandated

Whether or not all OSes support the concepts of Ada.Execution_Time is 
irrelevant to a discussion of the intended meaning of CPU_Time.

> 3. The intended purpose of CPU_Time has nothing to do with this constraint.
> Nobody is interested in knowing if the actual sum is close or not to the
> real time duration.

Real-time task scheduling and schedulability analysis is *all* about 
adding up task execution times (CPU_Time values, in principle) and 
comparing the sums to real-time deadlines (durations). I do believe 
there are some people, here and there, who are interested in such things...

In practice, since tasks in real-time Ada systems are usually created 
once at system start and are thereafter repeatedly activated (triggered) 
for each job (each deadline), the total CPU_Time of a task is less 
relevant for scheduling decisions than is the increase in CPU_Time since 
the last activation of the task. Using the services of 
Ada.Execution_Time, that increment is represented as a Time_Span. From 
this point of view, it is understandable that Ada.Execution_Time does 
not provide an addition operation "+" (Left, Right : CPU_Time) return 
CPU_Time.

> It is a simulation time, which *could* be projected to
> the real time in order to estimate potential CPU load.

"Simulation", "projection"... convey no meaning to me.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 11%]

* Re: Ada.Execution_Time
  2010-12-17  8:49 10%                 ` Ada.Execution_Time Niklas Holsti
@ 2010-12-17  9:32  5%                   ` Dmitry A. Kazakov
  2010-12-17 11:50 11%                     ` Ada.Execution_Time Niklas Holsti
  0 siblings, 1 reply; 170+ results
From: Dmitry A. Kazakov @ 2010-12-17  9:32 UTC (permalink / raw)


On Fri, 17 Dec 2010 10:49:26 +0200, Niklas Holsti wrote:

> Dmitry A. Kazakov wrote:
>> CPU_Time has no physical meaning. 2s might be 2.5s
>> real time or 1 year real time.
> 
> CPU_Time values have physical meaning after being summed over all tasks. 
> The sum should be the real time, as closely as possible.

1. Not tasks, but threads + kernel services + kernel drivers + CPU
frequency slowdowns => a) wrong; b) out of Ada scope => cannot be mandated

2. Not so anyway under many OSes => again, cannot be mandated

3. The intended purpose of CPU_Time has nothing to do with this constraint.
Nobody is interested in knowing if the actual sum is close or not to the
real time duration. It is a simulation time, which *could* be projected to
the real time in order to estimate potential CPU load. And the result
depends on the premises made.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-15  0:16 10%       ` Ada.Execution_Time BrianG
                           ` (2 preceding siblings ...)
  2010-12-15 22:05  3%         ` Ada.Execution_Time Randy Brukardt
@ 2010-12-17  8:59 10%         ` anon
  2010-12-19  3:07  5%           ` Ada.Execution_Time BrianG
  3 siblings, 1 reply; 170+ results
From: anon @ 2010-12-17  8:59 UTC (permalink / raw)


In <ie91co$cko$1@news.eternal-september.org>, BrianG <briang000@gmail.com> writes:
>Georg Bauhaus wrote:
>> On 12/12/10 10:59 PM, BrianG wrote:
>> 
>> 
>>> But my question still remains: What's the intended use of 
>>> Ada.Execution_Time? Is there an intended use where its content 
>>> (CPU_Time, Seconds_Count and Time_Span, "+", "<", etc.) is useful?
>> 
>> I think that your original posting mentions a use that is quite
>> consistent with what the rationale says: each task has its own time.
>> Points in time objects can be split into values suitable for
>> arithmetic, using Time_Span objects.  Then, from the result of
>> arithmetic, produce an object suitable for print, as desired.
>> 
>> 
>> While this seems like having to write a bit much,
>> it makes things explicit, like Ada forces one to be
>> in many cases.   That' how I explain the series of
>> steps to myself.
>> 
>> Isn't it just like "null;" being required to express
>> the null statement?  It seems to me to be a logical
>> consequence of requiring that intents must be stated
>> explicitly.
>> 
>I have no problem with verbosity or explicitness, and that's not what I 
>was asking about.
>
>My problem is that what is provided in the package in question does not 
>provide any "values suitable for arithmetic" or provide "an object 
>suitable for print" (unless all you care about is the number of whole 
>seconds with no information about the (required) fraction, which seems 
>rather limiting).  Time_Span is a private type, defined in another 
>package.  If all I want is CPU_Time (in some form), why do I need 
>Ada.Real_Time?  Also, why are "+" and "-" provided as they are defined? 
>  (And why Time_Span?  I thought that was the difference between two 
>times, not the fractional part of time.)
>
>Given the rest of this thread, I would guess my answer is "No, no one 
>actually uses Ada.Execution_Time".
>
>--BrianG

Ada.Execution_Time is used for performance and reliability monitoring of the 
CPU resource, aka an MCP (Tron). The control program can monitor the CPU 
usage of each task and decide which task needs to give up the CPU for the 
next task. 

With a shared server system it can monitor which web site is overusing the 
CPU and shut that web site down temporarily or permanently. One example: too 
many Java Servlets on a web site.

For Ada 2012, it will most likely be used in the Ada runtime to balance 
the load for an Ada partition on multiple cores, i.e. as a job scheduler for 
multiple tasks on multiple CPUs.

For the average Ada programmer, it's another Ada package that most will 
never use because they will just use Ada.Real_Time.  The only problem 
is that Ada.Real_Time measures an accumulation of times.  A small list of 
these times includes VS swapping, IO processing, any CPU-handled interrupts, 
the time the task itself executes, and the time the task sleeps while 
other tasks are executing.  In some cases the Ada.Execution_Time 
package can replace Ada.Real_Time by altering only the with/use 
statements.

Some programmers might use this package to try to improve performance of 
an algorithm.

And a few might use this package for debugging, for example to prevent tasks 
from running away with the CPU resources, such as stopping this type of 
logical condition from occurring at run time:

  with x86 ; -- defines x86 instructions subset
  use  x86 ;

  task body run is

    begin
      Disable_Interrupts ;
      loop            -- Endless loop. 
        null ;        -- Which optimizes to one jump instruction
      end loop ;
    end run ;


Which, when optimized, can shut down a CPU or computer system, and requires 
either a non-maskable reset or a full power cold restart, without being able 
to save critical data or close files.

Also, historically this kind of package would be used to calculate the CPU 
usage charges for a customer.




^ permalink raw reply	[relevance 10%]

* Re: Ada.Execution_Time
  2010-12-16 17:52  4%               ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-17  8:49 10%                 ` Niklas Holsti
  2010-12-17  9:32  5%                   ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 170+ results
From: Niklas Holsti @ 2010-12-17  8:49 UTC (permalink / raw)



>>> On Wed, 15 Dec 2010 16:05:16 -0600, Randy Brukardt wrote:
>>>
>>>> I think you are missing the point of CPU_Time. It is an abstract 
>>>> representation of some underlying counter. There is no requirement that this 
>>>> counter have any particular value -- in particular it is not necessarily 
>>>> zero when a task is created.

Are you sure, Randy? RM D.14 13/2 says "For each task, the execution 
time value is set to zero at the creation of the task."

>>>> So the only operations that are meaningful on a 
>>>> value of type CPU_Time are comparisons and differences. Arguably, CPU_Time 
>>>> is misnamed, because it is *not* some sort of time type.

Additions of CPU_Time values should also be meaningful. As I understand 
it, Ada.Execution_Time is meant for use in task scheduling, where it is 
essential that when several tasks start at the same time on one 
processor, the sum of their CPU_Time values at any later instant is 
close to the real elapsed time, Ada.Real_Time.Time_Span. Assuming that 
CPU_Time starts at zero for each task, see above.

Dmitry A. Kazakov wrote:
> CPU_Time has no physical meaning. 2s might be 2.5s
> real time or 1 year real time.

CPU_Time values have physical meaning after being summed over all tasks. 
The sum should be the real time, as closely as possible.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .



^ permalink raw reply	[relevance 10%]

* Re: Ada.Execution_Time
  2010-12-16  3:54 12%             ` Ada.Execution_Time jpwoodruff
@ 2010-12-17  7:11  5%               ` Stephen Leake
  0 siblings, 0 replies; 170+ results
From: Stephen Leake @ 2010-12-17  7:11 UTC (permalink / raw)


jpwoodruff <jpwoodruff@gmail.com> writes:

> Here's a flippant suggestion: maybe there should be a pragma to set
> stack size.  

If it set a _minimum_ stack size, that might be useful. It doesn't know
what else I have in my task, so it can't set the
_actual_ stack size!

-- 
-- Stephe



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-15 23:14  5%               ` Ada.Execution_Time Adam Beneschan
@ 2010-12-17  0:44  5%                 ` Randy Brukardt
  2010-12-17 17:54  5%                   ` Ada.Execution_Time Warren
  2010-12-20 21:28  5%                   ` Ada.Execution_Time Keith Thompson
  0 siblings, 2 replies; 170+ results
From: Randy Brukardt @ 2010-12-17  0:44 UTC (permalink / raw)


"Adam Beneschan" <adam@irvine.com> wrote in message 
news:dfcf048b-bb6e-4993-b62a-9147bad3a6ff@j32g2000prh.googlegroups.com...
On Dec 15, 2:52 pm, Keith Thompson <ks...@mib.org> wrote:
...
>> So, um, why is Assert a pragma rather than a statement?
>>
>> if Debug_Mode then
>> assert Is_Good(X);
>> end if;
...
>Somebody on the ARG might have a more authoritative answer.  My
>reading of AI95-286 is that a number of Ada compilers had already
>implemented the Assert pragma and there was a lot of code using it.
>Of course, those compilers couldn't have added "assert" as a statement
>on their own, but adding an implementation-defined pragma is OK.

That's one reason. The other is that you can't put a statement into a 
declarative part (well, you can, but you need to use a helper generic and a 
helper procedure, along with an instantiation, which is insane -- although 
it is not that unusual to see that done in a program). A lot of asserts fit 
most naturally into the declarative part (precondition ones, for instance, 
although those will be better defined separately in Ada 2012).

                                                  Randy.





^ permalink raw reply	[relevance 5%]

* Re: New AdaIC site (was: Ada.Execution_Time)
  2010-12-16 11:37  5%             ` Ada.Execution_Time Simon Wright
  2010-12-16 17:24  5%               ` Ada.Execution_Time BrianG
@ 2010-12-17  0:35  5%               ` Randy Brukardt
  1 sibling, 0 replies; 170+ results
From: Randy Brukardt @ 2010-12-17  0:35 UTC (permalink / raw)


"Simon Wright" <simon@pushface.org> wrote in message 
news:m24oaerlsi.fsf@pushface.org...
...
> I see the standards have moved, time to update my bookmarks!

I'd wait a day or two while we get the glitches out of these new sites. The 
domains are pointing at a mix of old and new servers at the moment...

                                    Randy.





^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-16 17:45  4%                 ` Ada.Execution_Time Adam Beneschan
@ 2010-12-16 21:13  5%                   ` Jeffrey Carter
  0 siblings, 0 replies; 170+ results
From: Jeffrey Carter @ 2010-12-16 21:13 UTC (permalink / raw)


On 12/16/2010 10:45 AM, Adam Beneschan wrote:
>
> It should probably say "language-defined or implementation-defined".
> D.8(18) does say that Real_Time.Time is one of those "time types".

That's a horrible way to phrase it. I would say, "some other time type".

-- 
Jeff Carter
"Have you gone berserk? Can't you see that that man is a ni?"
Blazing Saddles
38



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-16  1:14  4%           ` Ada.Execution_Time BrianG
                               ` (2 preceding siblings ...)
  2010-12-16 13:08  5%             ` Ada.Execution_Time Peter C. Chapin
@ 2010-12-16 18:17  5%             ` Jeffrey Carter
  3 siblings, 0 replies; 170+ results
From: Jeffrey Carter @ 2010-12-16 18:17 UTC (permalink / raw)


On 12/15/2010 06:14 PM, BrianG wrote:
>
> One of my problems is that difference (and sum) isn't provided between
> CPU_Time's, only with a Time_Span. But you can only convert a portion of a
> CPU_Time to Time_Span. When is that useful (as opposed to Splitting both
> CPU_Times)?
>
> A function "-" (L, R : CPU_Time) return Time_Span (or better, Duration) would be
> required for what you describe above (actually, if what you say is true, then
> that and Clock are all that's required).

> But that is not provided - that would require a "-" between two CPU_Time's
> returning a Time_Span. Unless all CPU_Time's are always less than a second, you
> can't get there easily.

 From ARM D.14:

function "-"  (Left : CPU_Time; Right : CPU_Time)  return Time_Span;

-- 
Jeff Carter
"Clear? Why, a 4-yr-old child could understand this
report. Run out and find me a 4-yr-old child. I can't
make head or tail out of it."
Duck Soup
94



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-16 16:49  4%             ` Ada.Execution_Time BrianG
@ 2010-12-16 17:52  4%               ` Dmitry A. Kazakov
  2010-12-17  8:49 10%                 ` Ada.Execution_Time Niklas Holsti
  0 siblings, 1 reply; 170+ results
From: Dmitry A. Kazakov @ 2010-12-16 17:52 UTC (permalink / raw)


On Thu, 16 Dec 2010 11:49:13 -0500, BrianG wrote:

> Dmitry A. Kazakov wrote:
>> On Wed, 15 Dec 2010 16:05:16 -0600, Randy Brukardt wrote:
>> 
>>> I think you are missing the point of CPU_Time. It is an abstract 
>>> representation of some underlying counter. There is no requirement that this 
>>> counter have any particular value -- in particular it is not necessarily 
>>> zero when a task is created. So the only operations that are meaningful on a 
>>> value of type CPU_Time are comparisons and differences. Arguably, CPU_Time 
>>> is misnamed, because it is *not* some sort of time type.
>> 
>> Any computer time is a representation of some counter. I think the point is
>> CPU_Time is not a real time, i.e. a time (actually the process driving the
>> corresponding counter) related to what people used to call "time" in the
>> external world. CPU_Time is what is usually called "simulation time." One
>> could use Duration or Time_Span in place of CPU_Time, but the concern is
>> that on some architectures, with multiple time sources, this would
>> introduce an additional inaccuracy. Another argument against it is that
>> there could be no fair translation from the CPU usage counter to
>> Duration/Time_Span (which is the case for Windows).
>> 
> Isn't any "time" related to a computer nothing but a "simulation time"? 

No, Ada.Calendar.Time and Ada.Real_Time.Time are derived from quartz
generators, which are physical devices. CPU_Time is derived from the time
the task owned a processor. This is a task time, the time in a simulated
Universe where nothing but the task exists. This Universe is not real, so
its time is not.

Or to put it otherwise, when Time has a value T, then under certain
conditions this has some meaning invariant to the program and the task
being run. For Ada.Real_Time.Time it is only the time differences T2 - T1,
which have this meaning. CPU_Time has no physical meaning. 2s might be 2.5s
real time or 1 year real time.

> Yes, some times may be intended to emulate clock-on-the-wall-time, but 
> that doesn't mean they're a very good emulation (ever measure the 
> accuracy of a PC that's not synched to something?

But the intent was to emulate the real time, whatever accuracy the result
might have.

> CPU_Time is 
> obviously an approximation, dependent on the RTS, OS, task scheduler, etc.

An approximation of what?

> What's so particularly bad about Windows (aside from the normal Windows 
> things)?

Windows counts full quants. It means that if the task (thread) enters a
non-busy wait, e.g. for I/O or for some other event, *before* it has spent
its quant, the quant is not counted (if I remember correctly). In effect,
you theoretically could have 0 CPU time with 99% processor load. Using the
task manager, you might frequently observe the effect of this: moderate CPU
load, but everything is frozen.

(I haven't checked this behavior since Windows Server 2003; maybe they fixed
it in Vista, 7, etc.)

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-16 17:24  5%               ` Ada.Execution_Time BrianG
@ 2010-12-16 17:45  4%                 ` Adam Beneschan
  2010-12-16 21:13  5%                   ` Ada.Execution_Time Jeffrey Carter
  0 siblings, 1 reply; 170+ results
From: Adam Beneschan @ 2010-12-16 17:45 UTC (permalink / raw)


On Dec 16, 9:24 am, BrianG <briang...@gmail.com> wrote:
> Simon Wright wrote:
> > BrianG <briang...@gmail.com> writes:
>
> >>> Arguably, CPU_Time is misnamed, because it is *not*
> >>> some sort of time type.
> >> Then the package is misnamed too - How is "Execution_Time" not a time?
>
> > A 'time type'
> > [http://www.adaic.com/resources/add_content/standards/05rm/html/RM-9-6...]
> > (6) can be used as the argument for a delay statement. Wouldn't make a
> > lot of sense for an execution time! (well, perhaps one could think of
> > some obscure use ...)
>
> (Actually for a delay_until.)  That paragraph seems to contradict the
> previous one which says "any nonlimited type".

No, not really.  The rule that says "any nonlimited type" is a Name
Resolution Rule.  Those rules control how the language resolves
possibly ambiguous statements.  It's important to realize that Name
Resolution Rules are not legality rules, and it's possible for
something to be illegal and still satisfy the Name Resolution Rules,
which means that a "possible interpretation" can still cause an
ambiguity even if it's illegal.  Example:

   type Int is new Integer;
   function Overloaded (N : Integer) return Integer;
   function Overloaded (N : Integer) return Character;

   X : Int := Int (Overloaded (5));

This last call to Overloaded is ambiguous (and therefore illegal) even
though one definition of Overloaded returns a Character which cannot
legally be converted to Int.  The type conversion from a Character-
returning function to Int still satisfies the Name Resolution Rules
(4.6(6)).  Moral: Don't look at Name Resolution Rules if you're trying
to figure out whether something is legal.  (Other than when trying to
figure out whether something is unambiguous.)

> Shouldn't (6) define
> Real_Time.Time since it's not Calendar.Time and isn't
> implementation-defined?  

It should probably say "language-defined or implementation-defined".
D.8(18) does say that Real_Time.Time is one of those "time types".

                                 -- Adam



^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-16 13:08  5%             ` Ada.Execution_Time Peter C. Chapin
@ 2010-12-16 17:32  5%               ` BrianG
  0 siblings, 0 replies; 170+ results
From: BrianG @ 2010-12-16 17:32 UTC (permalink / raw)


Peter C. Chapin wrote:
> On 2010-12-15 20:14, BrianG wrote:
> 
>> Then the package is misnamed too - How is "Execution_Time" not a time?
>> Wouldn't tying it explicitly to Real_Time imply some relation to "real
>> time" (whether that makes sense or not)?  Using Duration could help
>> that, since it's implementation-defined.
> 
> To me "execution time" sounds like a measure of how long a program has
> run (in some sense). In other words it sounds like some kind of time
> interval. "The execution time of this process was 10.102 seconds."
"CPU_Time" makes it more clear - it's not the execution time of the 
program, but the amount of CPU it has used.  If there's time-sharing, it 
can be less than the elapsed time used.  Or if there are multiple cores, 
it may be greater (although that may be unlikely at the task level).
> 
> However, people often use "time" to refer to some sort of absolute
> clock. "What time is it? It is now 8:07am on 2010-12-16." The basic
> confusion is that the term "time" is extremely ambiguous in ordinary
> usage. Not only is it used both for time intervals and absolute time
> values, but there are several different kinds of time one might talk about.
> 
> Peter



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-16 11:37  5%             ` Ada.Execution_Time Simon Wright
@ 2010-12-16 17:24  5%               ` BrianG
  2010-12-16 17:45  4%                 ` Ada.Execution_Time Adam Beneschan
  2010-12-17  0:35  5%               ` New AdaIC site (was: Ada.Execution_Time) Randy Brukardt
  1 sibling, 1 reply; 170+ results
From: BrianG @ 2010-12-16 17:24 UTC (permalink / raw)


Simon Wright wrote:
> BrianG <briang000@gmail.com> writes:
> 
>>> Arguably, CPU_Time is misnamed, because it is *not*
>>> some sort of time type.
>> Then the package is misnamed too - How is "Execution_Time" not a time?
> 
> A 'time type'
> [http://www.adaic.com/resources/add_content/standards/05rm/html/RM-9-6.html]
> (6) can be used as the argument for a delay statement. Wouldn't make a
> lot of sense for an execution time! (well, perhaps one could think of
> some obscure use ...)
(Actually for a delay_until.)  That paragraph seems to contradict the 
previous one which says "any nonlimited type".  Shouldn't (6) define 
Real_Time.Time since it's not Calendar.Time and isn't 
implementation-defined?  At least now Randy's comments make sense - I 
hadn't realized there was a language-defined concept of special 
"time-types" that had special uses (I hadn't realized the standard 
defines a use for certain private types that is not explicitly evident 
in the code - any other "magic" uses like this?)

> 
> I see the standards have moved, time to update my bookmarks!



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-16  8:45  5%           ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-16 16:49  4%             ` BrianG
  2010-12-16 17:52  4%               ` Ada.Execution_Time Dmitry A. Kazakov
  0 siblings, 1 reply; 170+ results
From: BrianG @ 2010-12-16 16:49 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On Wed, 15 Dec 2010 16:05:16 -0600, Randy Brukardt wrote:
> 
>> I think you are missing the point of CPU_Time. It is an abstract 
>> representation of some underlying counter. There is no requirement that this 
>> counter have any particular value -- in particular it is not necessarily 
>> zero when a task is created. So the only operations that are meaningful on a 
>> value of type CPU_Time are comparisons and differences. Arguably, CPU_Time 
>> is misnamed, because it is *not* some sort of time type.
> 
> Any computer time is a representation of some counter. I think the point is
> CPU_Time is not a real time, i.e. a time (actually the process driving the
> corresponding counter) related to what people used to call "time" in the
> external world. CPU_Time is what is usually called "simulation time." One
> could use Duration or Time_Span in place of CPU_Time, but the concern is
> that on some architectures, with multiple time sources, this would
> introduce an additional inaccuracy. Another argument against it is that
> there could be no fair translation from the CPU usage counter to
> Duration/Time_Span (which is the case for Windows).
> 
Isn't any "time" related to a computer nothing but a "simulation time"? 
  Yes, some times may be intended to emulate clock-on-the-wall-time, but 
that doesn't mean they're a very good emulation (ever measure the 
accuracy of a PC that's not synched to something?  You can get a watch 
free in a box of cereal that's orders of magnitude better.).  That's why 
we have Calendar, Real_Time, and CPU_Time - they're meant to be 
different things, but they are all "time" in some sense.  CPU_Time is 
obviously an approximation, dependent on the RTS, OS, task scheduler, etc.

What's so particularly bad about Windows (aside from the normal Windows 
things)?  Granted, I'm only doing simple prototyping (for non-Windows 
eventual use), but it seems a "fair" approximation.  When I added a 
1-second loop doing nonsense work (to get any measured value), it reads 
about 1 second, within about 5% (which is at least as good as I would 
have expected, given the 'normal' jitter on Delay).

--BrianG



^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-16  5:46  5%             ` Ada.Execution_Time Jeffrey Carter
@ 2010-12-16 16:13  5%               ` BrianG
  0 siblings, 0 replies; 170+ results
From: BrianG @ 2010-12-16 16:13 UTC (permalink / raw)


Jeffrey Carter wrote:
>  From ARM D.14:
> 
> function "-"  (Left : CPU_Time; Right : CPU_Time)  return Time_Span;
> 

Must be the "draft" I'm still using.  Time to find GNAT's adainclude on 
that computer.



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-16  1:14  4%           ` Ada.Execution_Time BrianG
  2010-12-16  5:46  5%             ` Ada.Execution_Time Jeffrey Carter
  2010-12-16 11:37  5%             ` Ada.Execution_Time Simon Wright
@ 2010-12-16 13:08  5%             ` Peter C. Chapin
  2010-12-16 17:32  5%               ` Ada.Execution_Time BrianG
  2010-12-16 18:17  5%             ` Ada.Execution_Time Jeffrey Carter
  3 siblings, 1 reply; 170+ results
From: Peter C. Chapin @ 2010-12-16 13:08 UTC (permalink / raw)


On 2010-12-15 20:14, BrianG wrote:

> Then the package is misnamed too - How is "Execution_Time" not a time?
> Wouldn't tying it explicitly to Real_Time imply some relation to "real
> time" (whether that makes sense or not)?  Using Duration could help
> that, since it's implementation-defined.

To me "execution time" sounds like a measure of how long a program has
run (in some sense). In other words it sounds like some kind of time
interval. "The execution time of this process was 10.102 seconds."

However, people often use "time" to refer to some sort of absolute
clock. "What time is it? It is now 8:07am on 2010-12-16." The basic
confusion is that the term "time" is extremely ambiguous in ordinary
usage. Not only is it used both for time intervals and absolute time
values, but there are several different kinds of time one might talk about.

Peter



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-16  1:14  4%           ` Ada.Execution_Time BrianG
  2010-12-16  5:46  5%             ` Ada.Execution_Time Jeffrey Carter
@ 2010-12-16 11:37  5%             ` Simon Wright
  2010-12-16 17:24  5%               ` Ada.Execution_Time BrianG
  2010-12-17  0:35  5%               ` New AdaIC site (was: Ada.Execution_Time) Randy Brukardt
  2010-12-16 13:08  5%             ` Ada.Execution_Time Peter C. Chapin
  2010-12-16 18:17  5%             ` Ada.Execution_Time Jeffrey Carter
  3 siblings, 2 replies; 170+ results
From: Simon Wright @ 2010-12-16 11:37 UTC (permalink / raw)


BrianG <briang000@gmail.com> writes:

>> Arguably, CPU_Time is misnamed, because it is *not*
>> some sort of time type.
> Then the package is misnamed too - How is "Execution_Time" not a time?

A 'time type'
[http://www.adaic.com/resources/add_content/standards/05rm/html/RM-9-6.html]
(6) can be used as the argument for a delay statement. Wouldn't make a
lot of sense for an execution time! (well, perhaps one could think of
some obscure use ...)

I see the standards have moved, time to update my bookmarks!



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-15 22:05  3%         ` Ada.Execution_Time Randy Brukardt
  2010-12-16  1:14  4%           ` Ada.Execution_Time BrianG
@ 2010-12-16  8:45  5%           ` Dmitry A. Kazakov
  2010-12-16 16:49  4%             ` Ada.Execution_Time BrianG
  1 sibling, 1 reply; 170+ results
From: Dmitry A. Kazakov @ 2010-12-16  8:45 UTC (permalink / raw)


On Wed, 15 Dec 2010 16:05:16 -0600, Randy Brukardt wrote:

> I think you are missing the point of CPU_Time. It is an abstract 
> representation of some underlying counter. There is no requirement that this 
> counter have any particular value -- in particular it is not necessarily 
> zero when a task is created. So the only operations that are meaningful on a 
> value of type CPU_Time are comparisons and differences. Arguably, CPU_Time 
> is misnamed, because it is *not* some sort of time type.

Any computer time is a representation of some counter. I think the point is
CPU_Time is not a real time, i.e. a time (actually the process driving the
corresponding counter) related to what people used to call "time" in the
external world. CPU_Time is what is usually called "simulation time." One
could use Duration or Time_Span in place of CPU_Time, but the concern is
that on some architectures, with multiple time sources, this would
introduce an additional inaccuracy. Another argument against it is that
there could be no fair translation from the CPU usage counter to
Duration/Time_Span (which is the case for Windows).

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-16  1:14  4%           ` Ada.Execution_Time BrianG
@ 2010-12-16  5:46  5%             ` Jeffrey Carter
  2010-12-16 16:13  5%               ` Ada.Execution_Time BrianG
  2010-12-16 11:37  5%             ` Ada.Execution_Time Simon Wright
                               ` (2 subsequent siblings)
  3 siblings, 1 reply; 170+ results
From: Jeffrey Carter @ 2010-12-16  5:46 UTC (permalink / raw)


On 12/15/2010 06:14 PM, BrianG wrote:
>
> One of my problems is that difference (and sum) isn't provided between
> CPU_Time's, only with a Time_Span. But you can only convert a portion of a
> CPU_Time to Time_Span. When is that useful (as opposed to Splitting both
> CPU_Times)?
>
> A function "-" (L, R : CPU_Time) return Time_Span (or better, Duration) would be
> required for what you describe above (actually, if what you say is true, then
> that and Clock are all that's required).

> But that is not provided - that would require a "-" between two CPU_Time's
> returning a Time_Span. Unless all CPU_Time's are always less than a second, you
> can't get there easily.

 From ARM D.14:

function "-"  (Left : CPU_Time; Right : CPU_Time)  return Time_Span;

-- 
Jeff Carter
"Clear? Why, a 4-yr-old child could understand this
report. Run out and find me a 4-yr-old child. I can't
make head or tail out of it."
Duck Soup
94



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-15 21:42  5%           ` Ada.Execution_Time Pascal Obry
@ 2010-12-16  3:54 12%             ` jpwoodruff
  2010-12-17  7:11  5%               ` Ada.Execution_Time Stephen Leake
  0 siblings, 1 reply; 170+ results
From: jpwoodruff @ 2010-12-16  3:54 UTC (permalink / raw)


I learn that Ada.Execution_Time isn't widely implemented, so there
isn't a lot of point to pursue a portable abstraction.  I'm thinking I
might as well revert to Gautier's Windows implementation.  The draft I
posted y'day is probably no more portable than that one.


On Dec 15, 2:42 pm, Pascal Obry <pas...@obry.net> wrote:

>
> Probably because the stack size is smaller in the context of tasking
> runtime. Just increase the stack (see corresponding linker option) for
> the environment task. Nothing really blocking or I did I miss your point?
>

That is clearly the case.

Still, I'm non-plussed that I can't write a service - CPU.Counter in
the example - that can hide its implementation from the host program.

It occurs to me that the designers of the D.14 specification for
Ada.Execution_Time did not consider the prospect of measuring a single
environment task's performance.  Otherwise the package might be
factored so that function Clock did not presume multiple tasks.

Here's a flippant suggestion: maybe there should be a pragma to set
stack size.  I'd bury such a pragma inside package CPU so the
Ada.Execution_Time doesn't get linked into too-small an executable.

If I could do that, my user doesn't get a stack splat from an
instrument that worked correctly while running smaller tests.

John



^ permalink raw reply	[relevance 12%]

* Re: Ada.Execution_Time
  2010-12-15 22:05  3%         ` Ada.Execution_Time Randy Brukardt
@ 2010-12-16  1:14  4%           ` BrianG
  2010-12-16  5:46  5%             ` Ada.Execution_Time Jeffrey Carter
                               ` (3 more replies)
  2010-12-16  8:45  5%           ` Ada.Execution_Time Dmitry A. Kazakov
  1 sibling, 4 replies; 170+ results
From: BrianG @ 2010-12-16  1:14 UTC (permalink / raw)


Randy Brukardt wrote:
> "BrianG" <briang000@gmail.com> wrote in message 
> news:ie91co$cko$1@news.eternal-september.org...
> ...
>> My problem is that what is provided in the package in question does not 
>> provide any "values suitable for arithmetic" or provide "an object 
>> suitable for print" (unless all you care about is the number of whole 
>> seconds with no information about the (required) fraction, which seems 
>> rather limiting).
> 
> Having missed your original question, I'm confused as to where you are 
> finding the quoted text above. I don't see anything like that in the 
> Standard. Since it is not in the standard, there is no reason to expect 
> those statements to be true. (Even the standard is wrong occasionally, 
> other materials are wrong a whole lot more often.)
> 
The quoted text was from the post I responded to.  It was Georg's 
attempt to explain the package.  I agree that they are not in the RM; my 
original question was what is the intended purpose of the package - the 
content doesn't seem useful for any use I can think of.

>>  Time_Span is a private type, defined in another package.  If all I want 
>> is CPU_Time (in some form), why do I need Ada.Real_Time?  Also, why are 
>> "+" and "-" provided as they are defined? (And why Time_Span?  I thought 
>> that was the difference between two times, not the fractional part of 
>> time.)
> 
> I think you are missing the point of CPU_Time. It is an abstract 
> representation of some underlying counter. There is no requirement that this 
> counter have any particular value -- in particular it is not necessarily 
> zero when a task is created. So the only operations that are meaningful on a 
> value of type CPU_Time are comparisons and differences. Arguably, CPU_Time 
> is misnamed, because it is *not* some sort of time type.
Then the package is misnamed too - How is "Execution_Time" not a time? 
Wouldn't tying it explicitly to Real_Time imply some relation to "real 
time" (whether that makes sense or not)?  Using Duration could help 
that, since it's implementation-defined.

One of my problems is that difference (and sum) isn't provided between 
CPU_Time's, only with a Time_Span.  But you can only convert a portion 
of a CPU_Time to Time_Span.  When is that useful (as opposed to 
Splitting both CPU_Times)?

A function "-" (L, R : CPU_Time) return Time_Span (or better, Duration) 
would be required for what you describe above (actually, if what you say 
is true, then that and Clock are all that's required).

> 
> The package uses Ada.Real_Time because no one wanted to invent a new kind of 
> time. The only alternative would have been to use Calendar, which does not 
> have to be as accurate. (Of course, the real accuracy depends on the 
> underlying target; CPU_Time has to be fairly inaccurate on Windows simply 
> because the underlying counters are not very accurate, at least in the 
> default configuration.)
(I think that is inherent in anything of this type, but I'd think it's 
hard to specify that in the RM:)
> 
> My guess is that no one thought about the fact that Time_Span is only an 
> alias for Duration; it's definitely something that I didn't know until you 
> complained. (I know I've confused Time and Time_Span before, must have done 
> that here, too). So there probably was no good reason that Time_Span was 
> used instead of Duration in the package. But that seems to indicate a flaw 
> in Ada.Real_Time, not one for execution time.
I wasn't aware that it was an alias.  I had assumed it was there in case 
Duration didn't have the range or precision required for Time_Span (or 
something like that).

The other part of my problem is that I can only convert to another 
private type (for part of the value).  It seems to me equivalent to 
defining Sequential_IO and Direct_IO (etc) without File_Type - requiring 
the use of Text_IO any time you want to Open, Close, etc a file.  :-)

> 
> In any case, the presumption is that interesting CPU_Time differences are 
> relatively short, so that Time_Span is sufficient (as it will hold at least 
> one day).
But that is not provided - that would require a "-" between two 
CPU_Time's returning a Time_Span.  Unless all CPU_Time's are always less 
than a second, you can't get there easily.

> 
>> Given the rest of this thread, I would guess my answer is "No, no one 
>> actually uses Ada.Execution_Time".
> 
> Can't answer that. I intended to use it to replace some hacked debugging 
> code, but I've never gotten around to actually implementing it (I did do a 
> design, but there is of course a difference...).
> 
>                                   Randy.
> 



^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-15 21:40  5%         ` Ada.Execution_Time Simon Wright
@ 2010-12-15 23:40  5%           ` BrianG
  0 siblings, 0 replies; 170+ results
From: BrianG @ 2010-12-15 23:40 UTC (permalink / raw)


Simon Wright wrote:
> BrianG <briang000@gmail.com> writes:
> 
>> Given the rest of this thread, I would guess my answer is "No, no one
>> actually uses Ada.Execution_Time".
> 
> Certainly not if they're using Mac OS X:
> 
>    gcc -c cpu.adb
>    Execution_Time is not supported in this configuration
>    compilation abandoned
I get the same with the version of Linux I'm currently on (Ubuntu 9.04 I 
think); I had assumed it's because the version of gcc/gnat is rather old 
- 4.3.3.  (The later Ubuntu versions have problems with my eeepc - and 
introduce stupid interface changes with no easy way to revert.)

--Bg



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-15 22:52  5%             ` Ada.Execution_Time Keith Thompson
@ 2010-12-15 23:14  5%               ` Adam Beneschan
  2010-12-17  0:44  5%                 ` Ada.Execution_Time Randy Brukardt
  0 siblings, 1 reply; 170+ results
From: Adam Beneschan @ 2010-12-15 23:14 UTC (permalink / raw)


On Dec 15, 2:52 pm, Keith Thompson <ks...@mib.org> wrote:
> Robert A Duff <bobd...@shell01.TheWorld.com> writes:
>
>
>
>
>
> > "Vinzent Hoefler" <0439279208b62c95f1880bf0f8776...@t-domaingrabbing.de>
> > writes:
> [...]
> >> I agree with Georg here, this is an unnecessary change with no apparent use,
> >> it doesn't support neither of the three pillars of the Ada language "safety",
> >> "readability", or "maintainability".
>
> > It certainly supports readability.  I find this:
>
> >     if Debug_Mode then
> >         pragma Assert(Is_Good(X));
> >     end if;
>
> > slightly more readable than:
>
> >     if Debug_Mode then
> >         null;
> >         pragma Assert(Is_Good(X));
> >     end if;
>
> So, um, why is Assert a pragma rather than a statement?
>
>    if Debug_Mode then
>       assert Is_Good(X);
>    end if;
>
> As somebody pointed out, it was defined that way in Ada 80.
>
> Or am I opening a huge can of worms by asking that question?

Somebody on the ARG might have a more authoritative answer.  My
reading of AI95-286 is that a number of Ada compilers had already
implemented the Assert pragma and there was a lot of code using it.
Of course, those compilers couldn't have added "assert" as a statement
on their own, but adding an implementation-defined pragma is OK.

I'm guessing that there was probably code out there that used Assert
as a procedure, so adding this as a reserved word would have caused
problems.

                              -- Adam



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-14 15:53  4%           ` Ada.Execution_Time Robert A Duff
  2010-12-14 17:17  5%             ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-15 22:52  5%             ` Keith Thompson
  2010-12-15 23:14  5%               ` Ada.Execution_Time Adam Beneschan
  1 sibling, 1 reply; 170+ results
From: Keith Thompson @ 2010-12-15 22:52 UTC (permalink / raw)


Robert A Duff <bobduff@shell01.TheWorld.com> writes:
> "Vinzent Hoefler" <0439279208b62c95f1880bf0f8776eeb@t-domaingrabbing.de>
> writes:
[...]
>> I agree with Georg here, this is an unnecessary change with no apparent use,
>> it doesn't support neither of the three pillars of the Ada language "safety",
>> "readability", or "maintainability".
>
> It certainly supports readability.  I find this:
>
>     if Debug_Mode then
>         pragma Assert(Is_Good(X));
>     end if;
>
> slightly more readable than:
>
>     if Debug_Mode then
>         null;
>         pragma Assert(Is_Good(X));
>     end if;

So, um, why is Assert a pragma rather than a statement?

   if Debug_Mode then
      assert Is_Good(X);
   end if;

As somebody pointed out, it was defined that way in Ada 80.

Or am I opening a huge can of worms by asking that question?

-- 
Keith Thompson (The_Other_Keith) kst-u@mib.org  <http://www.ghoti.net/~kst>
Nokia
"We must do something.  This is something.  Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-15  0:16 10%       ` Ada.Execution_Time BrianG
  2010-12-15 19:17 10%         ` Ada.Execution_Time jpwoodruff
  2010-12-15 21:40  5%         ` Ada.Execution_Time Simon Wright
@ 2010-12-15 22:05  3%         ` Randy Brukardt
  2010-12-16  1:14  4%           ` Ada.Execution_Time BrianG
  2010-12-16  8:45  5%           ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-17  8:59 10%         ` Ada.Execution_Time anon
  3 siblings, 2 replies; 170+ results
From: Randy Brukardt @ 2010-12-15 22:05 UTC (permalink / raw)


"BrianG" <briang000@gmail.com> wrote in message 
news:ie91co$cko$1@news.eternal-september.org...
...
> My problem is that what is provided in the package in question does not 
> provide any "values suitable for arithmetic" or provide "an object 
> suitable for print" (unless all you care about is the number of whole 
> seconds with no information about the (required) fraction, which seems 
> rather limiting).

Having missed your original question, I'm confused as to where you are 
finding the quoted text above. I don't see anything like that in the 
Standard. Since it is not in the standard, there is no reason to expect 
those statements to be true. (Even the standard is wrong occasionally, 
other materials are wrong a whole lot more often.)

>  Time_Span is a private type, defined in another package.  If all I want 
> is CPU_Time (in some form), why do I need Ada.Real_Time?  Also, why are 
> "+" and "-" provided as they are defined? (And why Time_Span?  I thought 
> that was the difference between two times, not the fractional part of 
> time.)

I think you are missing the point of CPU_Time. It is an abstract 
representation of some underlying counter. There is no requirement that this 
counter have any particular value -- in particular it is not necessarily 
zero when a task is created. So the only operations that are meaningful on a 
value of type CPU_Time are comparisons and differences. Arguably, CPU_Time 
is misnamed, because it is *not* some sort of time type.

The package uses Ada.Real_Time because no one wanted to invent a new kind of 
time. The only alternative would have been to use Calendar, which does not 
have to be as accurate. (Of course, the real accuracy depends on the 
underlying target; CPU_Time has to be fairly inaccurate on Windows simply 
because the underlying counters are not very accurate, at least in the 
default configuration.)

My guess is that no one thought about the fact that Time_Span is only an 
alias for Duration; it's definitely something that I didn't know until you 
complained. (I know I've confused Time and Time_Span before, must have done 
that here, too). So there probably was no good reason that Time_Span was 
used instead of Duration in the package. But that seems to indicate a flaw 
in Ada.Real_Time, not one for execution time.

In any case, the presumption is that interesting CPU_Time differences are 
relatively short, so that Time_Span is sufficient (as it will hold at least 
one day).

> Given the rest of this thread, I would guess my answer is "No, no one 
> actually uses Ada.Execution_Time".

Can't answer that. I intended to use it to replace some hacked debugging 
code, but I've never gotten around to actually implementing it (I did do a 
design, but there is of course a difference...).

                                  Randy.





^ permalink raw reply	[relevance 3%]

* Re: Ada.Execution_Time
  2010-12-15 19:17 10%         ` Ada.Execution_Time jpwoodruff
@ 2010-12-15 21:42  5%           ` Pascal Obry
  2010-12-16  3:54 12%             ` Ada.Execution_Time jpwoodruff
  0 siblings, 1 reply; 170+ results
From: Pascal Obry @ 2010-12-15 21:42 UTC (permalink / raw)
  To: jpwoodruff

On 15/12/2010 20:17, jpwoodruff wrote:
> Unfortunately introduction of Ada.Real_Time can  cause an otherwise
> successful program to raise STORAGE_ERROR :EXCEPTION_STACK_OVERFLOW.
> This happens even if the program formerly ran only in an environment
> task.
> 
> 
> with Ada.Real_Time ;   -- This context leads to failure
> Procedure Stack_Splat is
>    Too_Big : array (1..1_000_000) of Float ;
> begin
>    null ;
> end Stack_Splat ;

Probably because the stack size is smaller in the context of the tasking
runtime. Just increase the stack (see the corresponding linker option) for
the environment task. Nothing really blocking, or did I miss your point?

Pascal.

-- 

--|------------------------------------------------------
--| Pascal Obry                           Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--|    http://www.obry.net  -  http://v2p.fr.eu.org
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver keys.gnupg.net --recv-key F949BD3B




^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-15  0:16 10%       ` Ada.Execution_Time BrianG
  2010-12-15 19:17 10%         ` Ada.Execution_Time jpwoodruff
@ 2010-12-15 21:40  5%         ` Simon Wright
  2010-12-15 23:40  5%           ` Ada.Execution_Time BrianG
  2010-12-15 22:05  3%         ` Ada.Execution_Time Randy Brukardt
  2010-12-17  8:59 10%         ` Ada.Execution_Time anon
  3 siblings, 1 reply; 170+ results
From: Simon Wright @ 2010-12-15 21:40 UTC (permalink / raw)


BrianG <briang000@gmail.com> writes:

> Given the rest of this thread, I would guess my answer is "No, no one
> actually uses Ada.Execution_Time".

Certainly not if they're using Mac OS X:

   gcc -c cpu.adb
   Execution_Time is not supported in this configuration
   compilation abandoned



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-15  0:16 10%       ` Ada.Execution_Time BrianG
@ 2010-12-15 19:17 10%         ` jpwoodruff
  2010-12-15 21:42  5%           ` Ada.Execution_Time Pascal Obry
  2010-12-15 21:40  5%         ` Ada.Execution_Time Simon Wright
                           ` (2 subsequent siblings)
  3 siblings, 1 reply; 170+ results
From: jpwoodruff @ 2010-12-15 19:17 UTC (permalink / raw)


BrianG's discussion spurred my interest, so now I am able to
contradict his

On Dec 14, 5:16 pm, BrianG <briang...@gmail.com> wrote:
>
> Given the rest of this thread, I would guess my answer is "No, no one
> actually uses Ada.Execution_Time".
>

Let me describe my experiment, which ends in a disappointing
observation about Ada.Execution_Time.

For some years I've had a package that defines a "Counter" object that
resembles a stop-watch.  Made curious by BrianG's question, I
re-implemented the abstraction over Ada.Execution_Time.

Unfortunately introduction of Ada.Real_Time can  cause an otherwise
successful program to raise STORAGE_ERROR :EXCEPTION_STACK_OVERFLOW.
This happens even if the program formerly ran only in an environment
task.


with Ada.Real_Time ;   -- This context leads to failure
Procedure Stack_Splat is
   Too_Big : array (1..1_000_000) of Float ;
begin
   null ;
end Stack_Splat ;


I haven't found the documentation that explains my observation, but
it's pretty clear that Ada.Real_Time in the context clause implies a
substantially different run-time memory strategy.  I suppose there are
compile options to affect this; it can be an exercise for a later day.

Here is the package CPU, which is potentially useful in programs that
use stack gently.

-- PACKAGE FOR CPU TIME CALCULATIONS.
-----------------------------------------------------------------------
--
--              Author:         John P Woodruff
--                              jpwoodruff@gmail.com
--

-- This package owes a historic debt to the similarly named service
-- Created 19-SEP-1986 by Mats Weber.  This specification is largely
-- defined by that work.

--  15-apr-2004: However, Weber's package was implemented by unix calls
--  (alternatively VMS calls).  JPW has adapted this package
--  specification twice: first to use the Ada.Calendar.Clock
--  functions, then later (when I discovered deMontmollin's WIN-PAQ)
--  to use windows32 low-level calls. Now only the thinnest of Mats
--  Weber's traces can be seen.

--  December 2010: the ultimate version is possible now that Ada2005
--  gives us Ada.Execution_Time.  The object CPU_Counter times a task
--  (presently limited to the current task), and a Report about a
--  CPU_Counter might make interesting reading. Unhappily when this
--  package is introduced, gnat allocates smaller space for the
--  environment task than in the absence of tasking.  Therefore
--  instrumented programs may blow stack in cases that uninstrumented
--  programs do not.

with Ada.Execution_Time ;

package CPU is

   -- Type of a CPU time counter.  Each object of this type is an
   --  independent stop-watch that times the task in which it is
   --  declared.  Possible enhancement: bind a CPU_Counter to a
   --  Task_ID different from Current_Task.

   type CPU_Counter is limited private;

   ------------------------------------------------------------------
   -- The operations for a counter are Start, Stop and Clear.
   -- A counter that is Stopped retains the time already accrued,
   -- until it is Cleared.

   --  A stopped counter can be started and will add to the time
   --  already accrued - just as though it had run continuously.
   --  It is not necessary to stop a counter in order to read
   --  its value.

   procedure Start_Counter (The_Counter : in out CPU_Counter);
   procedure Stop_Counter  (The_Counter : in out CPU_Counter);
   procedure Clear_Counter (The_Counter : in out CPU_Counter);

   ------------------------------------------------------------------
   -- There are two groups of reporting functions:

   --  CPU_Time returns the time used while the counter has
   --  been running.


   function CPU_Time (Of_Counter : CPU_Counter) return Duration ;

   -- Process_Lifespan returns the total CPU since the process
   --  started.  (Does not rely on any CPU_counter having been
   --  started.)

   function Process_Lifespan  return Duration ;


   Counter_Not_Started : exception;

   --------------------------------------------------------------
   --  The other reporting functions produce printable reports for
   --  a counter, or for the process as a whole (the procedure
   --  writes to standard output). Reports do not affect the
   --  counter.  Use a prefix string to label the output according
   --  to the activity being timed.

   procedure Report_Clock (Watch      : in CPU_Counter;
                           Prefix     : in String := "") ;

   function  Report_Clock (Watch      : in CPU_Counter) return String ;


   procedure Report_Process_Lifespan ;

   function  Report_Process_Lifespan return String ;

private

   type CPU_Counter is
      record
         Identity       : Natural := 0 ;
         Accrued_Time   : Ada.Execution_Time.CPU_Time
                        := Ada.Execution_Time.CPU_Time_First ;
         Running        : Boolean := False;
         Start_CPU      : Ada.Execution_Time.Cpu_Time
                        := Ada.Execution_Time.Clock ;
      end record;

end CPU ;

---------------------------------
-- Creation : 19-SEP-1986 by Mats Weber.
-- Revision : 16-Jul-1992 by Mats Weber, enhanced portability by adding
--                                       separate package System_Interface.
-- JPW 23 Sep 00 use ada.calendar.clock (because it works on Windows)
-- jpw 14apr04 *found* the alternative.  Montmollin's win-paq product
--     defines win32_timing: the operating system function.
-- jpw 14dec10  reimplement and substantially simplify using Ada.Execution_Time


with Ada.Text_Io;
with Ada.Real_Time ;

package body CPU is
   ----------------
   use type Ada.Execution_Time.Cpu_Time ;

   Next_Counter_Identity : Natural := 0 ;

   function To_Duration (Time : Ada.Execution_Time.CPU_Time) return Duration is
      -- thanks to Jeff Carter comp.lang.ada 11dec10
      Seconds  : Ada.Real_Time.Seconds_Count;
      Fraction : Ada.Real_Time.Time_Span;
   begin     -- To_Duration
      Ada.Execution_Time.Split (Time, Seconds, Fraction);
      return Duration (Seconds) + Ada.Real_Time.To_Duration (Fraction);
   end To_Duration;


   procedure Start_Counter (The_Counter : in out CPU_Counter) is
   begin
      if The_Counter.Identity > 0 then
         --  This is a restart:  identity and accrued times remain
         if not The_Counter.Running then
            The_Counter.Start_CPU := Ada.Execution_Time.Clock ;
         end if ;
         The_Counter.Running   := True ;
      else   -- this clock has never started before
         Next_Counter_Identity       := Next_Counter_Identity + 1;
         The_Counter.Identity        := Next_Counter_Identity ;
         The_Counter.Running         := True ;
         The_Counter.Start_CPU       := Ada.Execution_Time.Clock ;
      end if ;
   end Start_Counter;


   procedure Stop_Counter  (The_Counter : in out CPU_Counter)is
      Now : Ada.Execution_Time.Cpu_Time
          := Ada.Execution_Time.Clock ;
   begin
      if  The_Counter.Identity > 0 and The_Counter.Running then
         -- accrue time observed up to now
         The_Counter.Accrued_Time := The_Counter.Accrued_Time +
           (Now - The_Counter.Start_CPU) ;
         The_Counter.Running := False ;
      end if ;
   end Stop_Counter ;


   procedure Clear_Counter  (The_Counter : in out CPU_Counter) is
      -- the counter becomes "as new" ready to start.
   begin
      The_Counter.Running := False ;
      The_Counter.Accrued_Time := Ada.Execution_Time.CPU_Time_First ;
   end Clear_Counter ;


   function CPU_Time (Of_Counter  : CPU_Counter) return Duration is
      Now : Ada.Execution_Time.Cpu_Time
          := Ada.Execution_Time.Clock ;
   begin
      if Of_Counter.Identity <= 0 then
         raise Counter_Not_Started ;
      end if;
      if not Of_Counter.Running then
         return To_Duration (Of_Counter.Accrued_Time) ;
      else
         return To_Duration (Of_Counter.Accrued_Time +
                             (Now - Of_Counter.Start_CPU)) ;
      end if ;
   end CPU_Time;

   function Process_Lifespan return Duration is
   begin
      return To_Duration (Ada.Execution_Time.Clock) ;
   end Process_Lifespan ;

   function Report_Duration (D : in Duration) return String is
   begin
      if D < 1.0 then
         declare
            Millisec : String := Duration'Image (1_000.0 * D);
         begin
            return  MilliSec(MilliSec'First .. MilliSec'Last-6) & " msec" ;
         end ;
      elsif D < 60.0 then
         declare Sec : String :=  Duration'Image (D);
         begin
            return Sec(Sec'First .. Sec'Last-6)  & " sec" ;  -- fewer significant figs
         end ;
      else
         declare
            Minutes : Integer := Integer(D) / 60 ;
            Seconds : Duration := D - Duration(Minutes) * 60.0 ;
            Sec : String := Duration'Image (Seconds) ;
         begin
            return Integer'Image (Minutes) & " min " &
              Sec(Sec'First .. Sec'Last-6) & " sec" ;
         end ;
      end if ;
   end Report_Duration ;


   procedure Report_Clock (Watch      : in CPU_Counter;
                           Prefix     : in String := "") is
      use Ada.Text_IO ;
   begin
      Put (Prefix & Report_Clock (Watch)) ;
      New_Line ;
   end Report_Clock;


   function Report_Clock (Watch      : in CPU_Counter) return String is
   begin
      return
        " <" & Integer'Image(Watch.Identity) & "> " &
        Report_Duration (CPU_Time (Watch)) ;
   end Report_Clock ;


   procedure Report_Process_Lifespan is
      use Ada.Text_IO ;
   begin
      Put (Report_Process_Lifespan) ;
      New_Line ;
   end Report_Process_Lifespan ;


   function  Report_Process_Lifespan return String is
      use Ada.Text_IO ;
   begin
      return "Process Lifespan: " & Report_Duration
(Process_Lifespan) ;
   end Report_Process_Lifespan ;

end CPU ;
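
For completeness, a small usage sketch (not compiled here; the loop is
just a stand-in for real work and all names below are invented):

with Ada.Text_IO;
with CPU;
procedure CPU_Demo is
   Watch : CPU.CPU_Counter;
   X     : Float := 0.0;
begin
   CPU.Start_Counter (Watch);
   for I in 1 .. 10_000_000 loop       -- pretend workload
      X := X + Float (I);
   end loop;
   CPU.Stop_Counter (Watch);
   CPU.Report_Clock (Watch, Prefix => "busy loop: ");
   CPU.Report_Process_Lifespan;
   Ada.Text_IO.Put_Line (Float'Image (X));  -- keep X from being optimized away
end CPU_Demo;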



^ permalink raw reply	[relevance 10%]

* Re: Ada.Execution_Time
  2010-12-13  9:28  4%     ` Ada.Execution_Time Georg Bauhaus
  2010-12-13 22:25  3%       ` Ada.Execution_Time Randy Brukardt
@ 2010-12-15  0:16 10%       ` BrianG
  2010-12-15 19:17 10%         ` Ada.Execution_Time jpwoodruff
                           ` (3 more replies)
  1 sibling, 4 replies; 170+ results
From: BrianG @ 2010-12-15  0:16 UTC (permalink / raw)


Georg Bauhaus wrote:
> On 12/12/10 10:59 PM, BrianG wrote:
> 
> 
>> But my question still remains: What's the intended use of 
>> Ada.Execution_Time? Is there an intended use where its content 
>> (CPU_Time, Seconds_Count and Time_Span, "+", "<", etc.) is useful?
> 
> I think that your original posting mentions a use that is quite
> consistent with what the rationale says: each task has its own time.
> Points in time objects can be split into values suitable for
> arithmetic, using Time_Span objects.  Then, from the result of
> arithmetic, produce an object suitable for print, as desired.
> 
> 
> While this seems like having to write a bit much,
> it makes things explicit, like Ada forces one to be
> in many cases.   That' how I explain the series of
> steps to myself.
> 
> Isn't it just like "null;" being required to express
> the null statement?  It seems to me to be a logical
> consequence of requiring that intents must be stated
> explicitly.
> 
I have no problem with verbosity or explicitness, and that's not what I 
was asking about.

My problem is that what is provided in the package in question does not 
provide any "values suitable for arithmetic" or provide "an object 
suitable for print" (unless all you care about is the number of whole 
seconds with no information about the (required) fraction, which seems 
rather limiting).  Time_Span is a private type, defined in another 
package.  If all I want is CPU_Time (in some form), why do I need 
Ada.Real_Time?  Also, why are "+" and "-" provided as they are defined? 
  (And why Time_Span?  I thought that was the difference between two 
times, not the fractional part of time.)

Given the rest of this thread, I would guess my answer is "No, no one 
actually uses Ada.Execution_Time".

--BrianG



^ permalink raw reply	[relevance 10%]

* Re: Ada.Execution_Time
  2010-12-14 18:23  3%                 ` Ada.Execution_Time Adam Beneschan
@ 2010-12-14 21:02  4%                   ` Randy Brukardt
  0 siblings, 0 replies; 170+ results
From: Randy Brukardt @ 2010-12-14 21:02 UTC (permalink / raw)


"Adam Beneschan" <adam@irvine.com> wrote in message 
news:10872143-12a2-4d25-bb08-e236b15d2c18@o14g2000prn.googlegroups.com...
...
>Even in Ada 83, there were at least three different flavors of
>pragmas.  Some (LIST, PAGE) had no effect on the operation of the
>resulting code.  Some (OPTIMIZE, INLINE, PACK) could affect the
>compiler's choice of what kind of code to generate, but the code would
>produce the same results (unless the code explicitly did something to
>create a dependency on the compiler's choice, such as relying on 'SIZE
>of a record that may or may not be packed).  And others (ELABORATE,
>PRIORITY, SHARED, INTERFACE) definitely affected the results---the
>program's behavior would potentially be different (or, in the case of
>INTERFACE, be illegal) if the pragma were missing.  I'm having trouble
>figuring out a common thread that ties all these kinds of pragmas into
>one unified concept---except, perhaps, that they are things that the
>language designers found it PRAGMAtic to shove into the "pragma"
>statement instead of inventing new syntax.  :) :) :)

I think you've got it. None of these things (in the last category) ought to 
have been pragmas in the first place.

Note that pragmas are one of the few ways that implementers have to 
represent implementation-defined information, so in practice, we have lots 
of things that ought to never have been pragmas.

At least Ada 2012 has finally come to grips with this, in that the aspect 
clause will be able to be used rather than almost all of the existing 
pragmas. (But not Elaborate, as the syntax doesn't work in a context clause, 
and no one has the energy to invent some other syntax solely for that 
purpose.) Note, however, that there will still be uses for the old pragmas 
(if you want to hide the aspects in the private part, for instance). But 
they should be used much less often.

                                      Randy.





^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-14 20:36  5%               ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-14 20:48  5%                 ` Jeffrey Carter
  0 siblings, 0 replies; 170+ results
From: Jeffrey Carter @ 2010-12-14 20:48 UTC (permalink / raw)


On 12/14/2010 01:36 PM, Dmitry A. Kazakov wrote:
>
> Once I suggested:
>
>     raise <exception> when <condition>;

Yes, I suggested that, too. Also

return [<expression>] [when <condition>];

You could also argue for

goto <label> [when <condition>];

but I'd rather make goto as unattractive to use as possible.

-- 
Jeff Carter
"From this day on, the official language of San Marcos will be Swedish."
Bananas
28



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-14 19:10  5%             ` Ada.Execution_Time Warren
@ 2010-12-14 20:36  5%               ` Dmitry A. Kazakov
  2010-12-14 20:48  5%                 ` Ada.Execution_Time Jeffrey Carter
  0 siblings, 1 reply; 170+ results
From: Dmitry A. Kazakov @ 2010-12-14 20:36 UTC (permalink / raw)


On Tue, 14 Dec 2010 19:10:44 +0000 (UTC), Warren wrote:

> Robert A Duff expounded in
> news:wccpqt4cqdr.fsf@shell01.TheWorld.com: 
> 
>> Jeffrey Carter <spam.jrcarter.not@spam.not.acm.org> writes:
> ..
>>> Given the argument that the null statement is needed when
>>> there is no other SOS, that SOS refers to executable
>>> statements, and that neither a pragma nor a label are
>>> considered such,... 
>> 
>> Well, a pragma Assert is a lot like a statement.
>> I find it really annoying to have to write "null;"
>> before or after some Asserts.  Pure noise, IMHO.
>> (Again, no big deal.)
> 
> Personally I think an Assert "statement" (non pragma) could be 
> added. Then the assertion _is_ a "statement".

Once I suggested:

   raise <exception> when <condition>;

> I would further 
> suggest that _that_ would be active unless explicitly defeated 
> by compile option(s) or by pragma <grin>.

However implemented, the idea is bad.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-14 19:43  4%           ` Ada.Execution_Time anon
@ 2010-12-14 20:09  5%             ` Adam Beneschan
  0 siblings, 0 replies; 170+ results
From: Adam Beneschan @ 2010-12-14 20:09 UTC (permalink / raw)


On Dec 14, 11:43 am, a...@att.net wrote:

> In Ada 95 .. 2012 states:
>
>                          Implementation Requirements
>
> 13    The implementation shall give a warning message for an unrecognized
> pragma name.
>
>                          Implementation Permissions
>
> 15    An implementation may ignore an unrecognized pragma even if it violates
> some of the Syntax Rules, if detecting the syntax error is too complex.
>
>                             Implementation Advice
>
> 16    Normally, implementation-defined pragmas should have no semantic effect
> for error-free programs; that is, if the implementation-defined pragmas are
> removed from a working program, the program should still be legal, and should
> still have the same semantics.
>
> In Ada 83 the unrecognized pragmas was syntactically check and skipped
> with an optional  simple warning that the compiler will skip that pragma.  
> But in Ada 94 .. 2012 it is a question to what the Implementation will do.
> It kinds of kills the Ada concept of "predictable" and that's a shame for all
> who controls the design of Ada.

They did add "pragma Restrictions(No_Implementation_Pragmas)".  So
those users who want that predictability back can have it.

                             -- Adam





^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-14  8:17  5%         ` Ada.Execution_Time Vinzent Hoefler
  2010-12-14 15:51  4%           ` Ada.Execution_Time Adam Beneschan
  2010-12-14 15:53  4%           ` Ada.Execution_Time Robert A Duff
@ 2010-12-14 19:43  4%           ` anon
  2010-12-14 20:09  5%             ` Ada.Execution_Time Adam Beneschan
  2 siblings, 1 reply; 170+ results
From: anon @ 2010-12-14 19:43 UTC (permalink / raw)


In <op.vno2nwchlzeukk@jellix.jlfencey.com>, "Vinzent Hoefler" <0439279208b62c95f1880bf0f8776eeb@t-domaingrabbing.de> writes:
>Randy Brukardt wrote:
>
>
>I believe, back in the old days, there was a requirement that the presence or
>absence of a pragma shall have no effect on the legality of the program, wasn't
>it?

Actually, it still does: Ada 83 through 2012 states in Chapter 2 

2.8 Pragmas

In Ada 83 it states:

    A pragma that is not language-defined has no effect if  its  identifier  is
    not  recognized  by  the  (current)  implementation.  Furthermore, a pragma
    (whether language-defined or implementation-defined) has no effect  if  its
    placement  or  its  arguments  do not correspond to what is allowed for the
    pragma.  The region of text over which a pragma has an  effect  depends  on
    the pragma. 

    Note: 

    It  is  recommended  (but not required) that implementations issue warnings
    for pragmas that are not recognized and therefore ignored. 


In Ada 95 .. 2012 it states:


                         Implementation Requirements

13    The implementation shall give a warning message for an unrecognized
pragma name.

                         Implementation Permissions

15    An implementation may ignore an unrecognized pragma even if it violates
some of the Syntax Rules, if detecting the syntax error is too complex.

                            Implementation Advice

16    Normally, implementation-defined pragmas should have no semantic effect
for error-free programs; that is, if the implementation-defined pragmas are
removed from a working program, the program should still be legal, and should
still have the same semantics.




In Ada 83 unrecognized pragmas were syntactically checked and skipped,
with an optional simple warning that the compiler will skip that pragma.
But in Ada 95 .. 2012 it is a question of what the implementation will do.
It kind of kills the Ada concept of "predictable", and that's a shame for all
who control the design of Ada.





^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-14 15:42  3%           ` Ada.Execution_Time Robert A Duff
  2010-12-14 16:17  5%             ` Ada.Execution_Time Jeffrey Carter
@ 2010-12-14 19:10  5%             ` Warren
  2010-12-14 20:36  5%               ` Ada.Execution_Time Dmitry A. Kazakov
  1 sibling, 1 reply; 170+ results
From: Warren @ 2010-12-14 19:10 UTC (permalink / raw)


Robert A Duff expounded in
news:wccpqt4cqdr.fsf@shell01.TheWorld.com: 

> Jeffrey Carter <spam.jrcarter.not@spam.not.acm.org> writes:
..
>> Given the argument that the null statement is needed when
>> there is no other SOS, that SOS refers to executable
>> statements, and that neither a pragma nor a label are
>> considered such,... 
> 
> Well, a pragma Assert is a lot like a statement.
> I find it really annoying to have to write "null;"
> before or after some Asserts.  Pure noise, IMHO.
> (Again, no big deal.)

Personally I think an Assert "statement" (non pragma) could be 
added. Then the assertion _is_ a "statement". I would further 
suggest that _that_ would be active unless explicitly defeated 
by compile option(s) or by pragma <grin>.

Warren



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-14 17:45  5%               ` Ada.Execution_Time Robert A Duff
@ 2010-12-14 18:23  3%                 ` Adam Beneschan
  2010-12-14 21:02  4%                   ` Ada.Execution_Time Randy Brukardt
  0 siblings, 1 reply; 170+ results
From: Adam Beneschan @ 2010-12-14 18:23 UTC (permalink / raw)


On Dec 14, 9:45 am, Robert A Duff <bobd...@shell01.TheWorld.com>
wrote:
> "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de> writes:
>
> > Because Elaborate_All should never become a pragma. Two wrongs don't make
> > one right.
>
> Elaborate_All is a pragma because Elaborate is a pragma.  ;-)

And I guess Elaborate shouldn't have been a pragma, either, since it
affects the semantics; you can write a program that is not guaranteed
to work correctly unless you use this pragma.

When I was trying to look into whether such things really fit the
definition, I ran into trouble finding a clear non-language-specific
definition of what a "pragma" should be.  The Ada RM's have very clear
definitions.  In RM83: "A pragma is used to convey information to the
compiler".  That's crystal clear, except that in my view the entire
text of the program conveys information to the compiler, so it's not
really clear what distinguishes a "pragma" from any other line in the
source.  Ada 95 changed this to "A pragma is a compiler directive".
Which of course clears everything up.  Of course, my twisted mind
thinks an assignment statement is a compiler directive, since it
directs the compiler to generate code that assigns something to
something else, so I'm not sure that this is a useful definition.

Even in Ada 83, there were at least three different flavors of
pragmas.  Some (LIST, PAGE) had no effect on the operation of the
resulting code.  Some (OPTIMIZE, INLINE, PACK) could affect the
compiler's choice of what kind of code to generate, but the code would
produce the same results (unless the code explicitly did something to
create a dependency on the compiler's choice, such as relying on 'SIZE
of a record that may or may not be packed).  And others (ELABORATE,
PRIORITY, SHARED, INTERFACE) definitely affected the results---the
program's behavior would potentially be different (or, in the case of
INTERFACE, be illegal) if the pragma were missing.  I'm having trouble
figuring out a common thread that ties all these kinds of pragmas into
one unified concept---except, perhaps, that they are things that the
language designers found it PRAGMAtic to shove into the "pragma"
statement instead of inventing new syntax.  :) :) :)

And personally, I'm fine with that.  We can argue and discuss and
strive endlessly to come up with a flawless language; but during the
time it takes to perfect the language (and then for implementors to
implement it), everyone else is stuck using C++ and Java while they're
waiting for us, which cannot be good for the world.  So a bit of
artistic inelegance is, to me, a small price to pay.  I guess that's
because I'm too much of a ... well ... a pragma-tist?

                                   -- Adam



^ permalink raw reply	[relevance 3%]

* Re: Ada.Execution_Time
  2010-12-14 17:17  5%             ` Ada.Execution_Time Dmitry A. Kazakov
@ 2010-12-14 17:45  5%               ` Robert A Duff
  2010-12-14 18:23  3%                 ` Ada.Execution_Time Adam Beneschan
  0 siblings, 1 reply; 170+ results
From: Robert A Duff @ 2010-12-14 17:45 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> Because Elaborate_All should never become a pragma. Two wrongs don't make
> one right.

Elaborate_All is a pragma because Elaborate is a pragma.  ;-)

- Bob



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-14 15:53  4%           ` Ada.Execution_Time Robert A Duff
@ 2010-12-14 17:17  5%             ` Dmitry A. Kazakov
  2010-12-14 17:45  5%               ` Ada.Execution_Time Robert A Duff
  2010-12-15 22:52  5%             ` Ada.Execution_Time Keith Thompson
  1 sibling, 1 reply; 170+ results
From: Dmitry A. Kazakov @ 2010-12-14 17:17 UTC (permalink / raw)


On Tue, 14 Dec 2010 10:53:40 -0500, Robert A Duff wrote:

> "Vinzent Hoefler" <0439279208b62c95f1880bf0f8776eeb@t-domaingrabbing.de>
> writes:
> 
>> I believe, back in the old days, there was a requirement that the presence or
>> absence of a pragma shall have no effect on the legality of the program, wasn't
>> it?
> 
> Or for a language-defined one, try erasing all the pragmas
> Elaborate_All.  Either way, your (still-legal) program
> will be completely broken.

Because Elaborate_All should never become a pragma. Two wrongs don't make
one right.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-14 15:42  3%           ` Ada.Execution_Time Robert A Duff
@ 2010-12-14 16:17  5%             ` Jeffrey Carter
  2010-12-14 19:10  5%             ` Ada.Execution_Time Warren
  1 sibling, 0 replies; 170+ results
From: Jeffrey Carter @ 2010-12-14 16:17 UTC (permalink / raw)


On 12/14/2010 08:42 AM, Robert A Duff wrote:
>
> Almost everything that changed in Ada 95, 2005, and 2012 is
> contrary to the original intent.  Indeed, during the
> Ada 9X project, Jean Ichbiah was quite angry that
> we were scribbling graffiti all over his near-perfect
> work of art.  So be it.

I know. Good thing he didn't see some of the current changes.

> I know.  I've seen your code with "-- null;" in empty declarative
> parts.  I'm sure you realize that in this case, yours is a minority
> opinion.

Of course.

-- 
Jeff Carter
"From this day on, the official language of San Marcos will be Swedish."
Bananas
28



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-14  8:17  5%         ` Ada.Execution_Time Vinzent Hoefler
  2010-12-14 15:51  4%           ` Ada.Execution_Time Adam Beneschan
@ 2010-12-14 15:53  4%           ` Robert A Duff
  2010-12-14 17:17  5%             ` Ada.Execution_Time Dmitry A. Kazakov
  2010-12-15 22:52  5%             ` Ada.Execution_Time Keith Thompson
  2010-12-14 19:43  4%           ` Ada.Execution_Time anon
  2 siblings, 2 replies; 170+ results
From: Robert A Duff @ 2010-12-14 15:53 UTC (permalink / raw)


"Vinzent Hoefler" <0439279208b62c95f1880bf0f8776eeb@t-domaingrabbing.de>
writes:

> I believe, back in the old days, there was a requirement that the presence or
> absence of a pragma shall have no effect on the legality of the program, wasn't
> it?

Yes, something like that.  It applied only to implementation-defined
pragmas, which smells fishy right there.

Anyway, it was a pretty silly rule.  So you can erase all the pragmas,
and your program is still legal.  But the program now does something
different (i.e. wrong) at run time.  How is this beneficial?

Try erasing all the pragmas Abort_Defer from your program!
Or for a language-defined one, try erasing all the pragmas
Elaborate_All.  Either way, your (still-legal) program
will be completely broken.

> Well, even if it just was that it "shall have no effect on a legal program", I
> still wonder why it is so necessary to introduce the possibility to turn an
> illegal program (without the null statement) into a legal one merely by adding
> some random pragma where a "sequence of statements" was expected.

People don't add "random" pragmas.  They add useful ones.

>...A pragma is
> /not/ a statement.

True, but a sequence_of_statements can contain pragmas.  Huh.
A pragma can act as a statement, but can't BE a statement.

> I agree with Georg here, this is an unnecessary change with no apparent use,
> it doesn't support neither of the three pillars of the Ada language "safety",
> "readability", or "maintainability".

It certainly supports readability.  I find this:

    if Debug_Mode then
        pragma Assert(Is_Good(X));
    end if;

slightly more readable than:

    if Debug_Mode then
        null;
        pragma Assert(Is_Good(X));
    end if;

- Bob



^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-14  8:17  5%         ` Ada.Execution_Time Vinzent Hoefler
@ 2010-12-14 15:51  4%           ` Adam Beneschan
  2010-12-14 15:53  4%           ` Ada.Execution_Time Robert A Duff
  2010-12-14 19:43  4%           ` Ada.Execution_Time anon
  2 siblings, 0 replies; 170+ results
From: Adam Beneschan @ 2010-12-14 15:51 UTC (permalink / raw)


On Dec 14, 12:17 am, "Vinzent Hoefler"
<0439279208b62c95f1880bf0f8776...@t-domaingrabbing.de> wrote:

> > The logic is that you need a "null;" statement when there is nothing in some
> > list of statements. A pragma (or label) is not "nothing", so the requirement
> > for "null;" is illogical in those cases.
>
> I believe, back in the old days, there was a requirement that the presence or
> absence of a pragma shall have no effect on the legality of the program, wasn't
> it?

RM83 2.8(8): "An implementation is not allowed to define pragmas whose
presence or absence influences the legality of the text outside such
pragmas."  But note that this applied only to *implementation-defined*
pragmas; language-defined pragmas could influence legality (in
particular, the INTERFACE pragma could make an illegal program, i.e.
one in which a subprogram declaration didn't have a corresponding
body, legal).  I don't think this was intended to be a statement about
the *syntax* rules, since the syntax rules are defined by the language
and can't be changed by the implementation, although I suppose that
this rule could have been a reflection of an unstated principle that
was used when the syntax rules were designed.

The current version of this rule is in 2.8(16-19) and is only
Implementation Advice.

                                    -- Adam



>
> Well, even if it just was that it "shall have no effect on a legal program", I
> still wonder why it is so necessary to introduce the possibility to turn an
> illegal program (without the null statement) into a legal one merely by adding
> some random pragma where a "sequence of statements" was expected. A pragma is
> /not/ a statement.
>
> I agree with Georg here, this is an unnecessary change with no apparent use,
> it doesn't support neither of the three pillars of the Ada language "safety",
> "readability", or "maintainability".
>
> Vinzent.
>
> --
> Beaten by the odds since 1974.




^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-14  3:31  4%         ` Ada.Execution_Time Jeffrey Carter
@ 2010-12-14 15:42  3%           ` Robert A Duff
  2010-12-14 16:17  5%             ` Ada.Execution_Time Jeffrey Carter
  2010-12-14 19:10  5%             ` Ada.Execution_Time Warren
  0 siblings, 2 replies; 170+ results
From: Robert A Duff @ 2010-12-14 15:42 UTC (permalink / raw)


Jeffrey Carter <spam.jrcarter.not@spam.not.acm.org> writes:

> The logic I recall from watching videos of Ichbiah, Barnes, and Firth
> presenting Ada (80) at the Ada Launch was that a null statement
> indicates that the sequence of statements (SOS) was intentionally null;
> it was contrasted to the single semicolon used for the null statement in
> some other languages, which is easily missed when reading, easily
> accidentally deleted when editing, and generally considered a Bad
> Thing.

It's kind of a "belt and suspenders" solution.

In C, you can say:

    for (<stuff-with-side-effects>)
        ;

and it's indeed easy to miss the empty statement,
especially if the ";" is on the same line as the "for",
and/or the following code is mis-indented.

Ada solves this two ways -- you have to write "end loop;"
and you also have to write "null;".  The "end loop;"
already solves the problem.

Similar issue with dangling "else" -- Ada doesn't have them
because of "end if".  (Amazingly, in 2010, people continue
to design programming languages with the dangling "else"
problem.  No excuse for it!)

> As such, a pragma might be "something" and as such not require a null
> statement, but I would disagree about a label. A label by itself would
> make me wonder what happened to the statement it labels.

Conceptually, a label does not label a statement -- it labels
a place in the code.  The Ada syntax rules are confused in
this regard.  I mean, when you say "goto L;" you don't mean
to execute the statement labeled <<L>> (and then come back here),
you mean to jump to the place marked <<L>>, and continue
on from there.

So if you want to jump to the end of a statement list, e.g.

    ...loop
        ...
        if ... then
            ...
            goto Continue;
        end if;
        ...
        <<Continue>>
    end loop;

it's just noise to put "null;" after <<Continue>>.

It's no big deal, of course, since gotos are rare.

> Given the argument that the null statement is needed when there is no
> other SOS, that SOS refers to executable statements, and that neither a
> pragma nor a label are considered such,...

Well, a pragma Assert is a lot like a statement.
I find it really annoying to have to write "null;"
before or after some Asserts.  Pure noise, IMHO.
(Again, no big deal.)

>... I would guess this is contrary
> to the intention of the original language designers.

Almost everything that changed in Ada 95, 2005, and 2012 is
contrary to the original intent.  Indeed, during the
Ada 9X project, Jean Ichbiah was quite angry that
we were scribbling graffiti all over his near-perfect
work of art.  So be it.

> I so like the idea that something explicit is required when a region
> deliberately contains nothing that I'd like to see "null;" as a
> declaration that is required when a declarative region contains nothing
> else.

I know.  I've seen your code with "-- null;" in empty declarative
parts.  I'm sure you realize that in this case, yours is a minority
opinion.

It's another belt and suspenders thing.  If you forgot to
declare anything in the declarative part, you'll likely
get errors when you refer to those missing declarations.
Unless, of course, you forgot the code as well.  When
we see:

    procedure P is
    begin
        null;
    end P;

how do we know the programmer didn't REALLY mean:

    procedure P is
        Message: constant String := "Hello, world.";
    begin
        Put_Line (Message);
    end P;

?

Should we write:

    procedure P is
        Message: constant String := "Hello, world.";
    begin
        Put_Line (Message);
        null;
    end P;

to indicate that we really did NOT want to do anything after
the Put_Line?

;-)

- Bob



^ permalink raw reply	[relevance 3%]

* Re: Ada.Execution_Time
  2010-12-13 22:25  3%       ` Ada.Execution_Time Randy Brukardt
  2010-12-13 22:42  5%         ` Ada.Execution_Time J-P. Rosen
  2010-12-14  3:31  4%         ` Ada.Execution_Time Jeffrey Carter
@ 2010-12-14  8:17  5%         ` Vinzent Hoefler
  2010-12-14 15:51  4%           ` Ada.Execution_Time Adam Beneschan
                             ` (2 more replies)
  2 siblings, 3 replies; 170+ results
From: Vinzent Hoefler @ 2010-12-14  8:17 UTC (permalink / raw)


Randy Brukardt wrote:

> "Georg Bauhaus" <rm-host.bauhaus@maps.futureapps.de> wrote in message
> news:4d05e737$0$6980$9b4e6d93@newsspool4.arcor-online.net...
> ...
>> <rant>
>> Some found the explicit null statement to be unusual,
>> bothersome, and confusing in the presence of a pragma.
>> Thus it was dropped by the language designers.
>
> The logic is that you need a "null;" statement when there is nothing in some
> list of statements. A pragma (or label) is not "nothing", so the requirement
> for "null;" is illogical in those cases.

I believe, back in the old days, there was a requirement that the presence or
absence of a pragma shall have no effect on the legality of the program, wasn't
it?

Well, even if it just was that it "shall have no effect on a legal program", I
still wonder why it is so necessary to introduce the possibility to turn an
illegal program (without the null statement) into a legal one merely by adding
some random pragma where a "sequence of statements" was expected. A pragma is
/not/ a statement.

I agree with Georg here, this is an unnecessary change with no apparent use;
it supports none of the three pillars of the Ada language: "safety",
"readability", or "maintainability".


Vinzent.

-- 
Beaten by the odds since 1974.



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-13 22:25  3%       ` Ada.Execution_Time Randy Brukardt
  2010-12-13 22:42  5%         ` Ada.Execution_Time J-P. Rosen
@ 2010-12-14  3:31  4%         ` Jeffrey Carter
  2010-12-14 15:42  3%           ` Ada.Execution_Time Robert A Duff
  2010-12-14  8:17  5%         ` Ada.Execution_Time Vinzent Hoefler
  2 siblings, 1 reply; 170+ results
From: Jeffrey Carter @ 2010-12-14  3:31 UTC (permalink / raw)


On 12/13/2010 03:25 PM, Randy Brukardt wrote:
>
> The logic is that you need a "null;" statement when there is nothing in some
> list of statements. A pragma (or label) is not "nothing", so the requirement
> for "null;" is illogical in those cases.

The logic I recall from watching videos of Ichbiah, Barnes, and Firth presenting 
Ada (80) at the Ada Launch was that a null statement indicates that the sequence 
of statements (SOS) was intentionally null; it was contrasted to the single 
semicolon used for the null statement in some other languages, which is easily 
missed when reading, easily accidentally deleted when editing, and generally 
considered a Bad Thing. Another justification is that it prevents the reader 
from wondering if something was accidentally deleted.

As such, a pragma might be "something" and as such not require a null statement, 
but I would disagree about a label. A label by itself would make me wonder what 
happened to the statement it labels.

Given the argument that the null statement is needed when there is no other SOS, 
that SOS refers to executable statements, and that neither a pragma nor a label 
are considered such, I would guess this is contrary to the intention of the 
original language designers.

I so like the idea that something explicit is required when a region 
deliberately contains nothing that I'd like to see "null;" as a declaration that 
is required when a declarative region contains nothing else.

-- 
Jeff Carter
"He didn't get that nose from playing ping-pong."
Never Give a Sucker an Even Break
110



^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-13 22:25  3%       ` Ada.Execution_Time Randy Brukardt
@ 2010-12-13 22:42  5%         ` J-P. Rosen
  2010-12-14  3:31  4%         ` Ada.Execution_Time Jeffrey Carter
  2010-12-14  8:17  5%         ` Ada.Execution_Time Vinzent Hoefler
  2 siblings, 0 replies; 170+ results
From: J-P. Rosen @ 2010-12-13 22:42 UTC (permalink / raw)


Le 13/12/2010 23:25, Randy Brukardt a écrit :
>> The little learning it took, the few words of explanation,
>> explicitness of intent dropped in favor of a special case in Ada
>> 2012 which lets one use a pragma in place of a null statement.
> 
> Yes. This is primarily an issue for a pragma Assert.
> 
[...]
> (Note that this was not a change I cared about much in either direction. The 
> use of pragmas for executable things is bad language design IMHO, and in any 
And to think that "assert" was a statement in (preliminary) Ada 1980...

-- 
---------------------------------------------------------
           J-P. Rosen (rosen@adalog.fr)
Adalog a déménagé / Adalog has moved:
2 rue du Docteur Lombard, 92441 Issy-les-Moulineaux CEDEX
Tel: +33 1 45 29 21 52, Fax: +33 1 45 29 25 00



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-13  9:28  4%     ` Ada.Execution_Time Georg Bauhaus
@ 2010-12-13 22:25  3%       ` Randy Brukardt
  2010-12-13 22:42  5%         ` Ada.Execution_Time J-P. Rosen
                           ` (2 more replies)
  2010-12-15  0:16 10%       ` Ada.Execution_Time BrianG
  1 sibling, 3 replies; 170+ results
From: Randy Brukardt @ 2010-12-13 22:25 UTC (permalink / raw)


"Georg Bauhaus" <rm-host.bauhaus@maps.futureapps.de> wrote in message 
news:4d05e737$0$6980$9b4e6d93@newsspool4.arcor-online.net...
...
> <rant>
> Some found the explicit null statement to be unusual,
> bothersome, and confusing in the presence of a pragma.
> Thus it was dropped by the language designers.

The logic is that you need a "null;" statement when there is nothing in some 
list of statements. A pragma (or label) is not "nothing", so the requirement 
for "null;" is illogical in those cases.

> The little learning it took, the few words of explanation,
> explicitness of intent dropped in favor of a special case in Ada
> 2012 which lets one use a pragma in place of a null statement.

Yes. This is primarily an issue for a pragma Assert.

> (And re-introduce "null;" once rewriting / removing debug stuff
> / etc is taking place.)

True, but that is always the case when removing debug stuff. The change here 
has no real effect on that. Most of mine is some sort of logging:

       if Something then
            <Lot of code>
       else
            Log_Item ("Something is False");
       end if;

If I remove the logging, I have to add a "null;" or remove the "else".

The situation for a pragma in Ada 2012 is no different:

       if Something then
            <Lot of code>
       else
            pragma Assert (not Something_Else);
       end if;

And it seems less likely that you would change this than the first form, and 
neither seems that likely.

(Note that this was not a change I cared about much in either direction. The 
use of pragmas for executable things is bad language design IMHO, and in any 
case I simply don't use them for that sort of purpose, because they are much too 
limited to be of much use in a complex system.)

> Let's hope we can buy support tools in the future to
> help us ensure the effects of language special casing can
> be bridled per project.

I'd suggest you stick with Ada 2005. Ada 2012 is all about easier ways to 
write Ada code: not just these tweaks, but also conditional expressions, 
expression functions, iterator syntax, indexing of containers, the reference 
aspect (giving automatic dereferencing) are all "syntax sugar". That is, 
they're all about making it easier to write (and in most cases, read) Ada 
code in a style that is closer to the problem rather than the solution. (One 
could also put all of the contract stuff into this category, as you can 
write preconditions, postconditions, invariants, and predicates using pragma 
Assert -- it's just a lot more reliable for the compiler to decide where 
they need to go.)

                                               Randy.





^ permalink raw reply	[relevance 3%]

* Re: Ada.Execution_Time
  2010-12-12 21:59 11%   ` Ada.Execution_Time BrianG
  2010-12-12 22:08  5%     ` Ada.Execution_Time BrianG
@ 2010-12-13  9:28  4%     ` Georg Bauhaus
  2010-12-13 22:25  3%       ` Ada.Execution_Time Randy Brukardt
  2010-12-15  0:16 10%       ` Ada.Execution_Time BrianG
  1 sibling, 2 replies; 170+ results
From: Georg Bauhaus @ 2010-12-13  9:28 UTC (permalink / raw)


On 12/12/10 10:59 PM, BrianG wrote:


> But my question still remains: What's the intended use of Ada.Execution_Time? Is there an intended use where its content (CPU_Time, Seconds_Count and Time_Span, "+", "<", etc.) is useful?

I think that your original posting mentions a use that is quite
consistent with what the rationale says: each task has its own time.
Points in time objects can be split into values suitable for
arithmetic, using Time_Span objects.  Then, from the result of
arithmetic, produce an object suitable for print, as desired.
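
Spelled out (just a sketch, not compiled here; the procedure name and
the output format are invented for illustration), those steps might
look like:

   with Ada.Text_IO;
   with Ada.Real_Time;
   with Ada.Execution_Time;
   procedure Show_My_Time is
      use Ada.Real_Time;
      Seconds  : Seconds_Count;
      Fraction : Time_Span;
   begin
      --  Split the current task's CPU time into whole seconds
      --  and a sub-second Time_Span.
      Ada.Execution_Time.Split
        (Ada.Execution_Time.Clock, Seconds, Fraction);
      --  Convert the fraction to Duration and print both parts.
      Ada.Text_IO.Put_Line
        ("CPU time:" & Seconds_Count'Image (Seconds) & " s +"
         & Duration'Image (To_Duration (Fraction)) & " s");
   end Show_My_Time;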


While this seems like having to write a bit much,
it makes things explicit, like Ada forces one to be
in many cases.  That's how I explain the series of
steps to myself.

Isn't it just like "null;" being required to express
the null statement?  It seems to me to be a logical
consequence of requiring that intents must be stated
explicitly.

<rant>
Some found the explicit null statement to be unusual,
bothersome, and confusing in the presence of a pragma.
Thus it was dropped by the language designers.

The little learning it took, the few words of explanation,
explicitness of intent dropped in favor of a special case in Ada
2012 which lets one use a pragma in place of a null statement.
(And re-introduce "null;" once rewriting / removing debug stuff
/ etc is taking place.)

Let's hope we can buy support tools in the future to
help us ensure the effects of language special casing can
be bridled per project.
</rant>



^ permalink raw reply	[relevance 4%]

* Re: Ada.Execution_Time
  2010-12-12 21:59 11%   ` Ada.Execution_Time BrianG
@ 2010-12-12 22:08  5%     ` BrianG
  2010-12-13  9:28  4%     ` Ada.Execution_Time Georg Bauhaus
  1 sibling, 0 replies; 170+ results
From: BrianG @ 2010-12-12 22:08 UTC (permalink / raw)


BrianG wrote:
> Jeffrey Carter wrote:
>> On 12/11/2010 09:19 PM, BrianG wrote:
> ...
>                   (BTW, where is To_Duration defined for integer types? 
>  The only one in the Index is for Time_Span.)
OOPS, forget that part.  I misread your code.



^ permalink raw reply	[relevance 5%]

* Re: Ada.Execution_Time
  2010-12-12 16:56 13% ` Ada.Execution_Time Jeffrey Carter
@ 2010-12-12 21:59 11%   ` BrianG
  2010-12-12 22:08  5%     ` Ada.Execution_Time BrianG
  2010-12-13  9:28  4%     ` Ada.Execution_Time Georg Bauhaus
  0 siblings, 2 replies; 170+ results
From: BrianG @ 2010-12-12 21:59 UTC (permalink / raw)


Jeffrey Carter wrote:
> On 12/11/2010 09:19 PM, BrianG wrote:
>>
...
> I think you're over complicating things.
> 
> function To_Duration (Time : Ada.Execution_Time.CPU_Time) return 
> Duration is
>    Seconds  : Ada.Real_Time.Seconds_Count;
>    Fraction : Ada.Real_Time.Time_Span;
> begin -- To_Duration
>    Ada.Execution_Time.Split (Time, Seconds, Fraction);
> 
>    return Duration (Seconds) + Ada.Real_Time.To_Duration (Fraction);
> end To_Duration;
> 
That's what I get for evolving thru several iterations before coming up 
with a function.  (BTW, where is To_Duration defined for integer types? 
  The only one in the Index is for Time_Span.)

But my question still remains:  What's the intended use of 
Ada.Execution_Time?  Is there an intended use where its content 
(CPU_Time, Seconds_Count and Time_Span, "+", "<", etc.) is useful?

--BrianG



^ permalink raw reply	[relevance 11%]

* Re: Ada.Execution_Time
  2010-12-12  4:19 11% Ada.Execution_Time BrianG
  2010-12-12  5:27 13% ` Ada.Execution_Time Jeffrey Carter
@ 2010-12-12 16:56 13% ` Jeffrey Carter
  2010-12-12 21:59 11%   ` Ada.Execution_Time BrianG
  1 sibling, 1 reply; 170+ results
From: Jeffrey Carter @ 2010-12-12 16:56 UTC (permalink / raw)


On 12/11/2010 09:19 PM, BrianG wrote:
>
> function Task_CPU_Time return Duration (T : ...) is
> Sec : Ada.Execution_Time.Seconds_Count;
> Fraction : Ada.Real_Time.Time_Span;
> begin
> Ada.Execution_Time.Split(Ada.Execution_Time.Clock(T), Sec, Fraction);
> return To_Duration(Ada.Real_Time.Seconds(Integer(Sec)))
> + To_Duration(Fraction);
> end Task_CPU_Time;

I think you're over complicating things.

function To_Duration (Time : Ada.Execution_Time.CPU_Time) return Duration is
    Seconds  : Ada.Real_Time.Seconds_Count;
    Fraction : Ada.Real_Time.Time_Span;
begin -- To_Duration
    Ada.Execution_Time.Split (Time, Seconds, Fraction);

    return Duration (Seconds) + Ada.Real_Time.To_Duration (Fraction);
end To_Duration;

-- 
Jeff Carter
"This school was here before you came,
and it'll be here before you go."
Horse Feathers
48



^ permalink raw reply	[relevance 13%]

* Re: Ada.Execution_Time
  2010-12-12  4:19 11% Ada.Execution_Time BrianG
@ 2010-12-12  5:27 13% ` Jeffrey Carter
  2010-12-12 16:56 13% ` Ada.Execution_Time Jeffrey Carter
  1 sibling, 0 replies; 170+ results
From: Jeffrey Carter @ 2010-12-12  5:27 UTC (permalink / raw)


On 12/11/2010 09:19 PM, BrianG wrote:
>
> function Task_CPU_Time return Duration (T : ...) is
> Sec : Ada.Execution_Time.Seconds_Count;
> Fraction : Ada.Real_Time.Time_Span;
> begin
> Ada.Execution_Time.Split(Ada.Execution_Time.Clock(T), Sec, Fraction);
> return To_Duration(Ada.Real_Time.Seconds(Integer(Sec)))
> + To_Duration(Fraction);
> end Task_CPU_Time;

I think you're over complicating things.

function To_Duration (Time : Ada.Execution_Time.CPU_Time) return Duration is
    Seconds  : Ada.Real_Time.Seconds_Count;
    Fraction : Ada.Real_Time.Time_Span;
begin -- To_Duration
    Ada.Execution_Time.Split (Time, Seconds, Fraction);

    return Duration (Seconds) + Ada.Real_Time.To_Duration (Fraction);
end To_Duration;

-- 
Jeff Carter
"This school was here before you came,
and it'll be here before you go."
Horse Feathers
48



^ permalink raw reply	[relevance 13%]

* Ada.Execution_Time
@ 2010-12-12  4:19 11% BrianG
  2010-12-12  5:27 13% ` Ada.Execution_Time Jeffrey Carter
  2010-12-12 16:56 13% ` Ada.Execution_Time Jeffrey Carter
  0 siblings, 2 replies; 170+ results
From: BrianG @ 2010-12-12  4:19 UTC (permalink / raw)


Has anyone actually used Ada.Execution_Time?  How is it supposed to be used?

I tried to use it for two (what I thought would be) simple uses: 
display the execution time of one task and sum the of execution time of 
a group of related tasks.

In both cases, I don't see anything in that package (or in 
Ada.Real_Time, which appears to be needed to use it) that provides any 
straightforward way to use the value reported.

For display, you can't use CPU_Time directly, since it's private.  You 
can Split it, but you get a Time_Span, which is also private.  So the 
best you can do is Split it, and then convert the parts to a type that 
can be used (like duration).

For summing, there is "+", but only between CPU_Time and Time_Span, so 
you can't add two CPU_Times.  Perhaps you can use Split, sum the 
seconds, and then use "+" to add the fractions to the next Clock (before 
repeating Split/add/"+" with it, then you need to figure out what to do 
with the last fractional second), but that seems an odd intended use.

The best I could come up with was to create my own function, like this 
(using the same definition for T as in Clock), which can be used for both:

function Task_CPU_Time (T : ...) return Duration is
    Sec      : Ada.Execution_Time.Seconds_Count;
    Fraction : Ada.Real_Time.Time_Span;
begin
    Ada.Execution_Time.Split(Ada.Execution_Time.Clock(T), Sec, Fraction);
    return To_Duration(Ada.Real_Time.Seconds(Integer(Sec)))
         + To_Duration(Fraction);
end Task_CPU_Time;

Wouldn't it make sense to put something like this into that package? 
Then, at least, there'd be something that's directly available to use - 
and you wouldn't need another package.  (I'm not sure about the 
definitions of CPU_Time and Duration, and whether the conversions would 
be guaranteed to work.)

--BrianG



^ permalink raw reply	[relevance 11%]

* Re: What about a glob standard method in Ada.Command_Line ?
  2010-08-22 19:30  5%     ` Yannick Duchêne (Hibou57)
@ 2010-08-22 19:46  0%       ` Dmitry A. Kazakov
  0 siblings, 0 replies; 170+ results
From: Dmitry A. Kazakov @ 2010-08-22 19:46 UTC (permalink / raw)


On Sun, 22 Aug 2010 21:30:35 +0200, Yannick Duchêne (Hibou57) wrote:

> Le Sat, 21 Aug 2010 11:11:39 +0200, Pascal Obry <pascal@obry.net> a écrit:
>> I see the argument but there is already some APIs that cannot be
>> implemented in some OS (like Ada.Execution_Time.Timers on Win32 for
>> example).
> Will have to check about what Ada.Execution_Time.Timers is exactly,  
> because I know there is a timer provided in the Windows API, and for long.

Under Windows there is no way to reliably determine the time a thread owned
the CPU. Using existing APIs you can count quanta, not the time. The problem is
that when the thread releases the processor before it has consumed the whole
quantum, e.g. by entering non-busy waiting, the pending quantum is not counted.
Theoretically you could have 99% CPU usage with 0% indicated in the task
manager.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 0%]

* Re: What about a glob standard method in Ada.Command_Line ?
  2010-08-21  9:11  5%   ` Pascal Obry
  2010-08-22 19:00  0%     ` J-P. Rosen
@ 2010-08-22 19:30  5%     ` Yannick Duchêne (Hibou57)
  2010-08-22 19:46  0%       ` Dmitry A. Kazakov
  1 sibling, 1 reply; 170+ results
From: Yannick Duchêne (Hibou57) @ 2010-08-22 19:30 UTC (permalink / raw)


Le Sat, 21 Aug 2010 11:11:39 +0200, Pascal Obry <pascal@obry.net> a écrit:
> I see the argument but there is already some APIs that cannot be
> implemented in some OS (like Ada.Execution_Time.Timers on Win32 for
> example).
Will have to check about what Ada.Execution_Time.Timers is exactly,  
because I know there is a timer provided in the Windows API, and for long.



^ permalink raw reply	[relevance 5%]

* Re: What about a glob standard method in Ada.Command_Line ?
  2010-08-21  9:11  5%   ` Pascal Obry
@ 2010-08-22 19:00  0%     ` J-P. Rosen
  2010-08-22 19:30  5%     ` Yannick Duchêne (Hibou57)
  1 sibling, 0 replies; 170+ results
From: J-P. Rosen @ 2010-08-22 19:00 UTC (permalink / raw)


Le 21/08/2010 11:11, Pascal Obry a écrit :
> 
> Jean-Pierre,
> 
>> 2) Make a good proposal, and make sure it is compiler independent,
>> system independent, powerful and easy to use. Then propose it to the
>> ARG, and the ARG will tell you why it doesn't work ;-)
> 
> I see the argument but there is already some APIs that cannot be
> implemented in some OS (like Ada.Execution_Time.Timers on Win32 for
> example). That's not a reason to dismiss it. A glob support for
> Command_Line (in a child package) would be most welcomed even if not
> supported (or supported but with some slight differences) on some OS I
> would say.
> 
Sure. And the whole history of Ada is full of stuff where we first said
it was not possible to define in a system independent manner, and later
we changed our mind in the name of usability. Command_Line was dismissed
for Ada 83 and provided in Ada 95. Directory_Operations was dismissed in
Ada 95 and introduced in Ada 2005.

However, it IS too late for Ada 2012, and I'm serious when I say please
propose it; a good proposal is never lost, but it may be harder than you
think.
-- 
---------------------------------------------------------
           J-P. Rosen (rosen@adalog.fr)
Visit Adalog's web site at http://www.adalog.fr



^ permalink raw reply	[relevance 0%]

* Re: What about a glob standard method in Ada.Command_Line ?
  @ 2010-08-21  9:11  5%   ` Pascal Obry
  2010-08-22 19:00  0%     ` J-P. Rosen
  2010-08-22 19:30  5%     ` Yannick Duchêne (Hibou57)
  0 siblings, 2 replies; 170+ results
From: Pascal Obry @ 2010-08-21  9:11 UTC (permalink / raw)



Jean-Pierre,

> 2) Make a good proposal, and make sure it is compiler independent,
> system independent, powerful and easy to use. Then propose it to the
> ARG, and the ARG will tell you why it doesn't work ;-)

I see the argument but there is already some APIs that cannot be
implemented in some OS (like Ada.Execution_Time.Timers on Win32 for
example). That's not a reason to dismiss it. A glob support for
Command_Line (in a child package) would be most welcomed even if not
supported (or supported but with some slight differences) on some OS I
would say.

-- 

--|------------------------------------------------------
--| Pascal Obry                           Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--|    http://www.obry.net  -  http://v2p.fr.eu.org
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver keys.gnupg.net --recv-key F949BD3B




^ permalink raw reply	[relevance 5%]

* Re: gnat: Execution_Time is not supported in this configuration
  2009-12-04 12:10  5% ` Georg Bauhaus
@ 2009-12-07  8:08  0%   ` singo
  0 siblings, 0 replies; 170+ results
From: singo @ 2009-12-07  8:08 UTC (permalink / raw)


On Dec 4, 1:10 pm, Georg Bauhaus <rm.dash-bauh...@futureapps.de>
wrote:

> The reasons are explained in the GNAT source files.  The ones I have show
> a note, after the © box:
> ------------------------------------------------------------------------------
> --                                                                          --
> --                         GNAT RUN-TIME COMPONENTS                         --
> --                                                                          --
> --                   A D A . E X E C U T I O N _ T I M E                    --
> --                                                                          --
> --                                 S p e c                                  --
> --                                                                          --
> -- This specification is derived from the Ada Reference Manual for use with --
> -- GNAT.  In accordance with the copyright of that document, you can freely --
> -- copy and modify this specification,  provided that if you redistribute a --
> -- modified version,  any changes that you have made are clearly indicated. --
> --                                                                          --
> ------------------------------------------------------------------------------
>
> --  This unit is not implemented in typical GNAT implementations that lie on
> --  top of operating systems, because it is infeasible to implement in such
> --  environments.
>
> --  If a target environment provides appropriate support for this package
> --  then the Unimplemented_Unit pragma should be removed from this spec and
> --  an appropriate body provided.
>
> with Ada.Task_Identification;
> with Ada.Real_Time;
>
> package Ada.Execution_Time is
>    pragma Preelaborate;
>
>    pragma Unimplemented_Unit;

Thanks to all of you for your help!

Still I wonder why it is written in the GNAT Reference Manual
that the real-time annex is fully implemented [1].

"Real-Time Systems (Annex D) The Real-Time Systems Annex is fully
implemented."

According to the ARM 'Execution Time' is part of the real-time annex
[2], so it should be implemented.

So, does "fully implemented" mean that it only in principle is fully
implemented, but that the underlying OS/hardware (in my case 64-bit
Ubuntu-Linux (9.10) on an Intel QuadCore) has to support this features
as well?

Or how should I read "fully implemented"?

Best regards

Ingo

[1] http://gcc.gnu.org/onlinedocs/gnat_rm/Specialized-Needs-Annexes.html#Specialized-Needs-Annexes
[2] http://www.adaic.org/standards/05rm/html/RM-D-14.html




^ permalink raw reply	[relevance 0%]

* Re: gnat: Execution_Time is not supported in this configuration
  2009-12-04 19:01  6%   ` Dmitry A. Kazakov
  2009-12-04 21:50  7%     ` John B. Matthews
@ 2009-12-05  2:59  0%     ` Randy Brukardt
  1 sibling, 0 replies; 170+ results
From: Randy Brukardt @ 2009-12-05  2:59 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:1wjhklygzok25.t79koxbbtlcj$.dlg@40tude.net...
...
> This package heavily depends on the OS services at least when the tasks 
> are
> mapped onto the OS scheduling items (like threads).
>
> As far as I know it is impossible to implement it reasonably under 
> Windows,
> because the corresponding service (used by the Task Manager too) counts
> time quants instead of the time. This causes a massive systematic error if
> tasks are switched before they consume their quants. I.e. *always* when 
> you
> do I/O or communicate to other tasks. The bottom line is that under Windows
> Ada.Execution_Time can be used only for tasks that do lengthy computations
> interrupted only by the scheduler, so that all counted quants were
> consumed and no time was spent in uncounted quants.

Obviously this depends on the purpose. For many profiling tasks, the Windows 
implementation is just fine. The quants seem to be short enough that most 
tasks run long enough to be counted. (And I/O has probably already been
reduced to the minimum before even applying a profiler; if not, you're
probably profiling the I/O first, not the CPU.) But I would have to agree
that it isn't all that real-time.

In any case, thanks for the clear explanation of the limitations of the 
Windows services. I'm sure that I'll run into them sooner or later and I'll 
hopefully remember your explanation.

                                   Randy.





^ permalink raw reply	[relevance 0%]

* Re: gnat: Execution_Time is not supported in this configuration
  2009-12-04 19:01  6%   ` Dmitry A. Kazakov
@ 2009-12-04 21:50  7%     ` John B. Matthews
  2009-12-05  2:59  0%     ` Randy Brukardt
  1 sibling, 0 replies; 170+ results
From: John B. Matthews @ 2009-12-04 21:50 UTC (permalink / raw)


In article <1wjhklygzok25.t79koxbbtlcj$.dlg@40tude.net>,
 "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote:

> On Fri, 04 Dec 2009 13:28:24 -0500, John B. Matthews wrote:
> 
> > In article 
> > <5e5d6fb5-e719-4195-925c-d1286699393d@f16g2000yqm.googlegroups.com>,
> >  singo <sander.ingo@gmail.com> wrote:
> > 
[...]
> > I defer to Dmitry A. Kazakov about Windows, but this variation 
> > produces similar results on MacOS 10.5 & Ubuntu 9.10 using GNAT 
> > 4.3.4:
> > 
> > <code>
> > with Ada.Text_IO; use Ada.Text_IO;
> > with Ada.Real_Time; use Ada.Real_Time;
> > 
> > procedure ExecutionTime is
> >    task T;
> > 
> >    task body T is
> >       Start : Time := Clock;
> >       Interval : Time_Span := Milliseconds(100);
> >    begin
> >       loop
> >          Put_Line(Duration'Image(To_Duration(Clock - Start)));
> >          delay To_Duration(Interval);
> >       end loop;
> >    end T;
> > begin
> >    null;
> > end ExecutionTime;
> > </code>
> > 
> > <console>
> > $ ./executiontime 
> >  0.000008000
> >  0.100168000
> >  0.200289000
> >  0.300409000
> >  0.400527000
> >  0.500575000
> > ...
> > </console>
> 
> Your code counts the wall-clock time. On the contrary,
> Ada.Execution_Time should measure the task time, i.e. the time the task
> actually owned the processor or, maybe, the time the system spent doing
> something on the task's behalf.

Ah, thank you for clarifying this. Indeed, one sees the secular growth 
in the output as overhead accumulates. I meant to suggest that other 
parts of Annex D may be supported on a particular platform, even if 
Ada.Execution_Time is not.

> This package heavily depends on the OS services at least when the 
> tasks are mapped onto the OS scheduling items (like threads).
> 
> As far as I know it is impossible to implement it reasonably under 
> Windows, because the corresponding service (used by the Task Manager 
> too) counts time quants instead of the time. This causes a massive 
> systematic error if tasks are switched before they consume their 
> quants. I.e. *always* when you do I/O or communicate to other tasks. 
> The bottom line is that under Windows Ada.Execution_Time can be used only
> for tasks that do lengthy computations interrupted only by the
> scheduler, so that all counted quants were consumed and no time was
> spent in uncounted quants.
> 
> I don't know if, or how, this works under Linux or Mac OS.

I should have mentioned that both systems specify "pragma 
Unimplemented_Unit" in Ada.Execution_Time. On Mac OS X 10.5, 
Ada.Real_Time.Time_Span_Unit is 0.000000001, but I'm unaware of a Mac 
clock having better than microsecond resolution, as suggested in the 
output above. I'm running Linux in VirtualBox, so I suspect any results 
reflect the host OS more than anything else.

-- 
John B. Matthews
trashgod at gmail dot com
<http://sites.google.com/site/drjohnbmatthews>



^ permalink raw reply	[relevance 7%]

* Re: gnat: Execution_Time is not supported in this configuration
  2009-12-04 18:28  5% ` John B. Matthews
@ 2009-12-04 19:01  6%   ` Dmitry A. Kazakov
  2009-12-04 21:50  7%     ` John B. Matthews
  2009-12-05  2:59  0%     ` Randy Brukardt
  0 siblings, 2 replies; 170+ results
From: Dmitry A. Kazakov @ 2009-12-04 19:01 UTC (permalink / raw)


On Fri, 04 Dec 2009 13:28:24 -0500, John B. Matthews wrote:

> In article 
> <5e5d6fb5-e719-4195-925c-d1286699393d@f16g2000yqm.googlegroups.com>,
>  singo <sander.ingo@gmail.com> wrote:
> 
>> I have recently become very interested in Ada 2005 and its 
>> real-time annex. However, as a new user of Ada I face some problems 
>> with the software.
>> 
>> I cannot get the package Ada.Execution_Time to work with gnat, 
>> although the gnat documentation says that the real-time annex is 
>> fully supported... I use the gnat version 4.4 on a Ubuntu 9.10 
>> distribution.
>> 
>> The typical error message I get is
>> 
>> gcc -c executiontime.adb
>> Execution_Time is not supported in this configuration
>> compilation abandoned
> 
> Georg Bauhaus has helpfully referred you to comments in 
> Ada.Execution_Time.
> 
>> How can I configure gnat to support the Ada.Execution_Time package?
> 
> I defer to Dmitry A. Kazakov about Windows, but this variation produces 
> similar results on MacOS 10.5 & Ubuntu 9.10 using GNAT 4.3.4:
> 
> <code>
> with Ada.Text_IO; use Ada.Text_IO;
> with Ada.Real_Time; use Ada.Real_Time;
> 
> procedure ExecutionTime is
>    task T;
> 
>    task body T is
>       Start : Time := Clock;
>       Interval : Time_Span := Milliseconds(100);
>    begin
>       loop
>          Put_Line(Duration'Image(To_Duration(Clock - Start)));
>          delay To_Duration(Interval);
>       end loop;
>    end T;
> begin
>    null;
> end ExecutionTime;
> </code>
> 
> <console>
> $ ./executiontime 
>  0.000008000
>  0.100168000
>  0.200289000
>  0.300409000
>  0.400527000
>  0.500575000
> ...
> </console>

Your code counts the wall-clock time. On the contrary, Ada.Execution_Time
should measure the task time, i.e. the time the task actually owned the
processor or, maybe, the time the system spent doing something on the task's
behalf.

This package heavily depends on the OS services at least when the tasks are
mapped onto the OS scheduling items (like threads).

As far as I know it is impossible to implement it reasonably under Windows,
because the corresponding service (used by the Task Manager too) counts
time quants instead of the time. This causes a massive systematic error if
tasks are switched before they consume their quants. I.e. *always* when you
do I/O or communicate to other tasks. The bottom line is that under Windows
Ada.Execution_Time can be used only for tasks that do lengthy computations
interrupted only by the scheduler, so that all counted quants were
consumed and no time was spent in uncounted quants.

I don't know if, or how, this works under Linux or Mac OS.
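
To illustrate the difference, here is a minimal sketch (assuming a compiler
and runtime where Ada.Execution_Time is actually implemented, which is not
the case for the GNAT configurations discussed above): the wall-clock
difference keeps growing across the delay, while the CPU-time difference
should barely move, because "delay" is a non-busy wait.

with Ada.Text_IO;    use Ada.Text_IO;
with Ada.Real_Time;
with Ada.Execution_Time;

procedure Wall_Vs_CPU is
   use type Ada.Real_Time.Time;
   use type Ada.Execution_Time.CPU_Time;

   Wall_Start : constant Ada.Real_Time.Time          := Ada.Real_Time.Clock;
   CPU_Start  : constant Ada.Execution_Time.CPU_Time := Ada.Execution_Time.Clock;
begin
   delay 1.0;  --  non-busy wait: wall time passes, CPU time (almost) does not
   Put_Line ("Wall:" & Duration'Image
     (Ada.Real_Time.To_Duration (Ada.Real_Time.Clock - Wall_Start)));
   Put_Line ("CPU: " & Duration'Image
     (Ada.Real_Time.To_Duration (Ada.Execution_Time.Clock - CPU_Start)));
end Wall_Vs_CPU;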

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 6%]

* Re: gnat: Execution_Time is not supported in this configuration
  2009-12-04 11:09  8% gnat: Execution_Time is not supported in this configuration singo
  2009-12-04 11:26  8% ` Dmitry A. Kazakov
  2009-12-04 12:10  5% ` Georg Bauhaus
@ 2009-12-04 18:28  5% ` John B. Matthews
  2009-12-04 19:01  6%   ` Dmitry A. Kazakov
  2 siblings, 1 reply; 170+ results
From: John B. Matthews @ 2009-12-04 18:28 UTC (permalink / raw)


In article 
<5e5d6fb5-e719-4195-925c-d1286699393d@f16g2000yqm.googlegroups.com>,
 singo <sander.ingo@gmail.com> wrote:

> I have recently become very interested in Ada 2005 and its 
> real-time annex. However, as a new user of Ada I face some problems 
> with the software.
> 
> I cannot get the package Ada.Execution_Time to work with gnat, 
> although the gnat documentation says that the real-time annex is 
> fully supported... I use the gnat version 4.4 on a Ubuntu 9.10 
> distribution.
> 
> The typical error message I get is
> 
> gcc -c executiontime.adb
> Execution_Time is not supported in this configuration
> compilation abandoned

Georg Bauhaus has helpfully referred you to comments in 
Ada.Execution_Time.

> How can I configure gnat to support the Ada.Execution_Time package?

I defer to Dmitry A. Kazakov about Windows, but this variation produces 
similar results on MacOS 10.5 & Ubuntu 9.10 using GNAT 4.3.4:

<code>
with Ada.Text_IO; use Ada.Text_IO;
with Ada.Real_Time; use Ada.Real_Time;

procedure ExecutionTime is
   task T;

   task body T is
      Start : Time := Clock;
      Interval : Time_Span := Milliseconds(100);
   begin
      loop
         Put_Line(Duration'Image(To_Duration(Clock - Start)));
         delay To_Duration(Interval);
      end loop;
   end T;
begin
   null;
end ExecutionTime;
</code>

<console>
$ ./executiontime 
 0.000008000
 0.100168000
 0.200289000
 0.300409000
 0.400527000
 0.500575000
...
</console>

-- 
John B. Matthews
trashgod at gmail dot com
<http://sites.google.com/site/drjohnbmatthews>



^ permalink raw reply	[relevance 5%]

* Re: gnat: Execution_Time is not supported in this configuration
  2009-12-04 11:09  8% gnat: Execution_Time is not supported in this configuration singo
  2009-12-04 11:26  8% ` Dmitry A. Kazakov
@ 2009-12-04 12:10  5% ` Georg Bauhaus
  2009-12-07  8:08  0%   ` singo
  2009-12-04 18:28  5% ` John B. Matthews
  2 siblings, 1 reply; 170+ results
From: Georg Bauhaus @ 2009-12-04 12:10 UTC (permalink / raw)


singo wrote:

> I cannot get the package Ada.Execution_Time to work with gnat,
> although the gnat documentation says that the real-time annex is fully
> supported... I use the gnat version 4.4 on a Ubuntu 9.10 distribution.

The reasons are explained in the GNAT source files.  The ones I have show
a note, after the © box:
------------------------------------------------------------------------------
--                                                                          --
--                         GNAT RUN-TIME COMPONENTS                         --
--                                                                          --
--                   A D A . E X E C U T I O N _ T I M E                    --
--                                                                          --
--                                 S p e c                                  --
--                                                                          --
-- This specification is derived from the Ada Reference Manual for use with --
-- GNAT.  In accordance with the copyright of that document, you can freely --
-- copy and modify this specification,  provided that if you redistribute a --
-- modified version,  any changes that you have made are clearly indicated. --
--                                                                          --
------------------------------------------------------------------------------

--  This unit is not implemented in typical GNAT implementations that lie on
--  top of operating systems, because it is infeasible to implement in such
--  environments.

--  If a target environment provides appropriate support for this package
--  then the Unimplemented_Unit pragma should be removed from this spec and
--  an appropriate body provided.

with Ada.Task_Identification;
with Ada.Real_Time;

package Ada.Execution_Time is
   pragma Preelaborate;

   pragma Unimplemented_Unit;





^ permalink raw reply	[relevance 5%]

* Re: gnat: Execution_Time is not supported in this configuration
  2009-12-04 11:09  8% gnat: Execution_Time is not supported in this configuration singo
@ 2009-12-04 11:26  8% ` Dmitry A. Kazakov
  2009-12-04 12:10  5% ` Georg Bauhaus
  2009-12-04 18:28  5% ` John B. Matthews
  2 siblings, 0 replies; 170+ results
From: Dmitry A. Kazakov @ 2009-12-04 11:26 UTC (permalink / raw)


On Fri, 4 Dec 2009 03:09:35 -0800 (PST), singo wrote:

> I have recently become very interested in Ada 2005 and its real-time
> annex. However, as a new user of Ada I face some problems with the
> software.
> 
> I cannot get the package Ada.Execution_Time to work with gnat,
> although the gnat documentation says that the real-time annex is fully
> supported... I use the gnat version 4.4 on a Ubuntu 9.10 distribution.
> 
> The typical error message I get is
> 
> gcc -c executiontime.adb
> Execution_Time is not supported in this configuration
> compilation abandoned
> 
> How can I configure gnat to support the Ada.Execution_Time package?
> 
> Hopefully somebody can help me!
> 
> Thanks in advance!
> 
> Ingo
> 
> Below follows an example program that generates the error messge.
> 
> with Ada.Text_IO; use Ada.Text_IO;
> with Ada.Real_Time; use Ada.Real_Time;
> with Ada.Execution_Time;
> 
> procedure ExecutionTime is
>    task T;
> 
>    task body T is
>       Start : CPU_Time;
>       Interval : Time_Span := Milliseconds(100);
>    begin
>       Start := Ada.Execution_Time.Clock;
>       loop
>          Put_Line(Duration'Image(Ada.Execution_Time.Clock - Start));
>          delay To_Duration(Interval);
>       end loop;
>    end T;
> begin
>    null;
> end ExecutionTime;

I cannot tell anything about Ubuntu, but the program you provided contains
language errors.

It also has the problem that "delay" is non-busy in Ada, i.e. the program
will count 0 CPU time for a very long time, at least under Windows, where
the system services that Ada.Execution_Time presumably relies on are
broken.

Anyway, here is the code which works to me:

with Ada.Text_IO;         use Ada.Text_IO;
with Ada.Real_Time;       use Ada.Real_Time;
with Ada.Execution_Time;  use Ada.Execution_Time;

procedure ExecutionTime is
   task T;
   task body T is
      Start : CPU_Time := Clock;
   begin
      loop
         Put_Line (Duration'Image (To_Duration (Clock - Start)));
         for I in 1..40 loop
            Put ("."); -- This does something!
         end loop;
      end loop;
   end T;
begin
   null;
end ExecutionTime;

Under Windows this shows rather poor performance, which again is not
surprising, because as I said there is no way to implement
Ada.Execution_Time under Windows.

Maybe Linux counts CPU time better; I have never investigated this issue.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 8%]

* gnat: Execution_Time is not supported in this configuration
@ 2009-12-04 11:09  8% singo
  2009-12-04 11:26  8% ` Dmitry A. Kazakov
                   ` (2 more replies)
  0 siblings, 3 replies; 170+ results
From: singo @ 2009-12-04 11:09 UTC (permalink / raw)


Dear Ada community,

I have recently become very interested in Ada 2005 and its real-time
annex. However, as a new user of Ada I face some problems with the
software.

I cannot get the package Ada.Execution_Time to work with gnat,
although the gnat documentation says that the real-time annex is fully
supported... I use the gnat version 4.4 on a Ubuntu 9.10 distribution.

The typical error message I get is

gcc -c executiontime.adb
Execution_Time is not supported in this configuration
compilation abandoned

How can I configure gnat to support the Ada.Execution_Time package?

Hopefully somebody can help me!

Thanks in advance!

Ingo

Below follows an example program that generates the error messge.

with Ada.Text_IO; use Ada.Text_IO;
with Ada.Real_Time; use Ada.Real_Time;
with Ada.Execution_Time;

procedure ExecutionTime is
   task T;

   task body T is
      Start : CPU_Time;
      Interval : Time_Span := Milliseconds(100);
   begin
      Start := Ada.Execution_Time.Clock;
      loop
         Put_Line(Duration'Image(Ada.Execution_Time.Clock - Start));
         delay To_Duration(Interval);
      end loop;
   end T;
begin
   null;
end ExecutionTime;




^ permalink raw reply	[relevance 8%]

* Re: Does Ada tasking profit from multi-core cpus?
  2007-03-08  5:21  5%           ` Randy Brukardt
@ 2007-03-08 10:15  5%             ` Dmitry A. Kazakov
  0 siblings, 0 replies; 170+ results
From: Dmitry A. Kazakov @ 2007-03-08 10:15 UTC (permalink / raw)


On Wed, 7 Mar 2007 23:21:04 -0600, Randy Brukardt wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
> news:1lq9zxgrnvfjx$.17ip3w3ei4xdb.dlg@40tude.net...
> ...
>> Just a side note, the Windows API GetThreadTimes (which the viewer
>> apparently uses) is corrupted. It counts complete time quants rather than
>> the  performance counter ticks. So, potentially you could observe 1% under
>> factual 99% CPU load. The bug should appear for threads performing much
>> synchronization, because they leave the processor before the current quant
>> expiration.
> 
> I wouldn't call it "corrupted"; it's just not very accurate (given that it
> can only register time with a granularity of 0.01 sec).

If it were just inaccurate, then the obtained values would be like
ThreadTime + Error, where Error has zero mean. That is just not the case:
ThreadTime has a systematic error, hence in my view it is corrupt. It simply
does not measure what its name suggests.

> I don't think there
> is any other way to find out CPU use, though, as the performance counter
> provides wall time and thus isn't very useful to find out how much a thread
> is running. (I've tried to figure out how to implement Ada.Execution_Time on
> Windows...)

Yes, at the user level there seems to be no way to do it. The performance
counter should be queried at the scheduling points, and the increment
should be accumulated for the thread possessing the processor. Only the OS
kernel could do that.

Ada.Execution_Time looks like quite a problem for Windows...

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[relevance 5%]

* Re: Does Ada tasking profit from multi-core cpus?
  @ 2007-03-08  5:21  5%           ` Randy Brukardt
  2007-03-08 10:15  5%             ` Dmitry A. Kazakov
  0 siblings, 1 reply; 170+ results
From: Randy Brukardt @ 2007-03-08  5:21 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
news:1lq9zxgrnvfjx$.17ip3w3ei4xdb.dlg@40tude.net...
...
> Just a side note, the Windows API GetThreadTimes (which the viewer
> apparently uses) is corrupted. It counts complete time quants rather than
> the  performance counter ticks. So, potentially you could observe 1% under
> factual 99% CPU load. The bug should appear for threads performing much
> synchronization, because they leave the processor before the current quant
> expiration.

I wouldn't call it "corrupted"; it's just not very accurate (given that it
can only register time with a granularity of 0.01 sec). I don't think there
is any other way to find out CPU use, though, as the performance counter
provides wall time and thus isn't very useful to find out how much a thread
is running. (I've tried to figure out how to implement Ada.Execution_Time on
Windows...)

                                       Randy.





^ permalink raw reply	[relevance 5%]

* Re: Task Management
  @ 2005-12-28 12:55  5%   ` Martin Dowie
  0 siblings, 0 replies; 170+ results
From: Martin Dowie @ 2005-12-28 12:55 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> On 28 Dec 2005 03:27:52 -0800, The One Who Rages wrote:
> 
> 
>>I am new to Ada.
> 
> 
> You are welcome, though the following is not an Ada question.
> 
> 
>>I am trying to develop a simple system for managing
>>different (predefined) tasks.
>>I need to know the time consumption of each task (e.g. task with id 1 used
>>213 ms of processor time). Could anyone point me toward a solution?
>>I can estimate it of course, but that does not satisfy me.
>>
>>I work under winXP, with gnat compiler.
> 
> 
> See Win32 API procedure GetThreadTimes in MSDN. GNAT Ada tasks are most
> likely mapped to Windows threads. Call GetCurrentThread once from a task to
> identify it. The result is a pseudo handle. Use DuplicateHandle on it to
> get another (true) handle to the thread. This one can be used outside it
> (in another task.)
> 
> P.S. GNAT has Win32 bindings.

For Ada2005 there will be a new package "Ada.Execution_Time". I have a 
version of this that works with ObjectAda (i.e. Ada95) but not currently 
with GNAT. If anyone is interested in this, please email me. Perhaps 
I'll update my website someday! :-)

I'll see if I can update it to a more general solution using the method
Dmitry describes above.
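
Something along those lines could look like the sketch below, usable from
inside the task whose CPU time is wanted (so the pseudo handle from
GetCurrentThread is enough; use DuplicateHandle as described above if
another task must read the times). The thin imports are hand-written
assumptions, not taken from any particular Win32 binding, so check them
against the MSDN declarations before relying on them:

with Ada.Text_IO;
with Interfaces.C;
with System;

procedure Thread_Time_Sketch is
   use Interfaces.C;

   --  FILETIME counts 100-nanosecond intervals, split into two DWORDs.
   type FILETIME is record
      Low_Date_Time  : unsigned;
      High_Date_Time : unsigned;
   end record;
   pragma Convention (C, FILETIME);

   subtype HANDLE is System.Address;  --  a HANDLE is a pointer-sized value
   type BOOL is new int;

   function GetCurrentThread return HANDLE;
   pragma Import (Stdcall, GetCurrentThread, "GetCurrentThread");

   function GetThreadTimes
     (Thread        : HANDLE;
      Creation_Time : access FILETIME;
      Exit_Time     : access FILETIME;
      Kernel_Time   : access FILETIME;
      User_Time     : access FILETIME) return BOOL;
   pragma Import (Stdcall, GetThreadTimes, "GetThreadTimes");

   function To_Duration (T : FILETIME) return Duration is
      Ticks : constant Long_Long_Integer :=
        Long_Long_Integer (T.High_Date_Time) * 2 ** 32
          + Long_Long_Integer (T.Low_Date_Time);
   begin
      return Duration (Long_Float (Ticks) / 1.0E7);
   end To_Duration;

   Creation, Gone, Kernel, User : aliased FILETIME;
begin
   if GetThreadTimes (GetCurrentThread,
                      Creation'Access, Gone'Access,
                      Kernel'Access, User'Access) /= 0
   then
      Ada.Text_IO.Put_Line
        ("Kernel:" & Duration'Image (To_Duration (Kernel))
         & "  User:" & Duration'Image (To_Duration (User)));
   end if;
end Thread_Time_Sketch;

Wrapping that in the Ada.Execution_Time interface is then mostly a matter
of mapping Ada.Task_Identification.Task_Id values to duplicated thread
handles.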

Cheers

-- Martin



^ permalink raw reply	[relevance 5%]

* Re: GNAT GPL - Anonymous Access Type
  @ 2005-09-30 11:01  5%               ` Martin Dowie
  0 siblings, 0 replies; 170+ results
From: Martin Dowie @ 2005-09-30 11:01 UTC (permalink / raw)


Martin Dowie wrote:
> No, I mean that I have an implementation for
> Ada.Real_Time.Execution_Time for ObjectAda for Windows.

Except it's actually called "Ada.Execution_Time". ;-)






^ permalink raw reply	[relevance 5%]

* Re: [task] i need some reference...
  @ 2004-10-21 10:37  6% ` Martin Dowie
  0 siblings, 0 replies; 170+ results
From: Martin Dowie @ 2004-10-21 10:37 UTC (permalink / raw)


mferracini wrote:
> i'm start to work on Ada task.
>
> i need exemple of schedulers.
> my idea is to write a simple task_manager with a array of 3 or 4
> elements where write the order of task activation and meanage the task
> (run,suspend,stop,terminate) in a "transparent" way for the task.
> thanks

Hmmm, Ada already provides a built-in 'task manager' - are you looking to
try and time how long each task has executed for? If so, then you can wait
for Ada 2005 to provide package "Ada.Execution_Time", or you'll have to write
your own until then.
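
For reference, a minimal sketch of what that package gives you once it is
available (this assumes an Ada 2005 compiler and a runtime that actually
implements Ada.Execution_Time; the package is standard, the rest is just an
example):

with Ada.Text_IO;
with Ada.Real_Time;
with Ada.Execution_Time;

procedure Per_Task_Time is

   task Worker;

   task body Worker is
      use type Ada.Execution_Time.CPU_Time;
      Start : constant Ada.Execution_Time.CPU_Time := Ada.Execution_Time.Clock;
      Sum   : Long_Float := 0.0;
   begin
      for I in 1 .. 10_000_000 loop
         Sum := Sum + Long_Float (I);  --  burn some CPU time to measure
      end loop;
      Ada.Text_IO.Put_Line
        ("Worker used"
         & Duration'Image
             (Ada.Real_Time.To_Duration (Ada.Execution_Time.Clock - Start))
         & " s of CPU time (Sum =" & Long_Float'Image (Sum) & ")");
   end Worker;

begin
   null;
end Per_Task_Time;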

-- Martin






^ permalink raw reply	[relevance 6%]

Results 1-170 of 170
2004-10-21 10:12     [task] i need some reference mferracini
2004-10-21 10:37  6% ` Martin Dowie
2005-09-27 19:41     GNAT GPL - Anonymous Access Type Anh Vo
2005-09-28 10:49     ` Rob Norris
2005-09-28 10:57       ` Martin Dowie
2005-09-29 12:28         ` Rob Norris
2005-09-29 17:31           ` Anh Vo
2005-09-29 18:12             ` Martin Dowie
2005-09-29 20:39               ` Anh Vo
2005-09-30  5:44                 ` Martin Dowie
2005-09-30 11:01  5%               ` Martin Dowie
2005-12-28 11:27     Task Management The One Who Rages
2005-12-28 12:28     ` Dmitry A. Kazakov
2005-12-28 12:55  5%   ` Martin Dowie
2007-01-29 11:57     Does Ada tasking profit from multi-core cpus? Gerd
2007-03-04 17:54     `   jpluto
2007-03-05 10:08       ` Ludovic Brenta
2007-03-05 13:12         ` Dmitry A. Kazakov
2007-03-07  3:58           ` Steve
2007-03-07  8:39             ` Dmitry A. Kazakov
2007-03-08  5:21  5%           ` Randy Brukardt
2007-03-08 10:15  5%             ` Dmitry A. Kazakov
2009-12-04 11:09  8% gnat: Execution_Time is not supported in this configuration singo
2009-12-04 11:26  8% ` Dmitry A. Kazakov
2009-12-04 12:10  5% ` Georg Bauhaus
2009-12-07  8:08  0%   ` singo
2009-12-04 18:28  5% ` John B. Matthews
2009-12-04 19:01  6%   ` Dmitry A. Kazakov
2009-12-04 21:50  7%     ` John B. Matthews
2009-12-05  2:59  0%     ` Randy Brukardt
2010-08-21  4:47     What about a glob standard method in Ada.Command_Line ? Yannick Duchêne (Hibou57)
2010-08-21  6:41     ` J-P. Rosen
2010-08-21  9:11  5%   ` Pascal Obry
2010-08-22 19:00  0%     ` J-P. Rosen
2010-08-22 19:30  5%     ` Yannick Duchêne (Hibou57)
2010-08-22 19:46  0%       ` Dmitry A. Kazakov
2010-12-12  4:19 11% Ada.Execution_Time BrianG
2010-12-12  5:27 13% ` Ada.Execution_Time Jeffrey Carter
2010-12-12 16:56 13% ` Ada.Execution_Time Jeffrey Carter
2010-12-12 21:59 11%   ` Ada.Execution_Time BrianG
2010-12-12 22:08  5%     ` Ada.Execution_Time BrianG
2010-12-13  9:28  4%     ` Ada.Execution_Time Georg Bauhaus
2010-12-13 22:25  3%       ` Ada.Execution_Time Randy Brukardt
2010-12-13 22:42  5%         ` Ada.Execution_Time J-P. Rosen
2010-12-14  3:31  4%         ` Ada.Execution_Time Jeffrey Carter
2010-12-14 15:42  3%           ` Ada.Execution_Time Robert A Duff
2010-12-14 16:17  5%             ` Ada.Execution_Time Jeffrey Carter
2010-12-14 19:10  5%             ` Ada.Execution_Time Warren
2010-12-14 20:36  5%               ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-14 20:48  5%                 ` Ada.Execution_Time Jeffrey Carter
2010-12-14  8:17  5%         ` Ada.Execution_Time Vinzent Hoefler
2010-12-14 15:51  4%           ` Ada.Execution_Time Adam Beneschan
2010-12-14 15:53  4%           ` Ada.Execution_Time Robert A Duff
2010-12-14 17:17  5%             ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-14 17:45  5%               ` Ada.Execution_Time Robert A Duff
2010-12-14 18:23  3%                 ` Ada.Execution_Time Adam Beneschan
2010-12-14 21:02  4%                   ` Ada.Execution_Time Randy Brukardt
2010-12-15 22:52  5%             ` Ada.Execution_Time Keith Thompson
2010-12-15 23:14  5%               ` Ada.Execution_Time Adam Beneschan
2010-12-17  0:44  5%                 ` Ada.Execution_Time Randy Brukardt
2010-12-17 17:54  5%                   ` Ada.Execution_Time Warren
2010-12-20 21:28  5%                   ` Ada.Execution_Time Keith Thompson
2010-12-21  3:23  5%                     ` Ada.Execution_Time Robert A Duff
2010-12-21  8:04  5%                       ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-21 17:19  5%                         ` Ada.Execution_Time Robert A Duff
2010-12-21 17:43  5%                           ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-14 19:43  4%           ` Ada.Execution_Time anon
2010-12-14 20:09  5%             ` Ada.Execution_Time Adam Beneschan
2010-12-15  0:16 10%       ` Ada.Execution_Time BrianG
2010-12-15 19:17 10%         ` Ada.Execution_Time jpwoodruff
2010-12-15 21:42  5%           ` Ada.Execution_Time Pascal Obry
2010-12-16  3:54 12%             ` Ada.Execution_Time jpwoodruff
2010-12-17  7:11  5%               ` Ada.Execution_Time Stephen Leake
2010-12-15 21:40  5%         ` Ada.Execution_Time Simon Wright
2010-12-15 23:40  5%           ` Ada.Execution_Time BrianG
2010-12-15 22:05  3%         ` Ada.Execution_Time Randy Brukardt
2010-12-16  1:14  4%           ` Ada.Execution_Time BrianG
2010-12-16  5:46  5%             ` Ada.Execution_Time Jeffrey Carter
2010-12-16 16:13  5%               ` Ada.Execution_Time BrianG
2010-12-16 11:37  5%             ` Ada.Execution_Time Simon Wright
2010-12-16 17:24  5%               ` Ada.Execution_Time BrianG
2010-12-16 17:45  4%                 ` Ada.Execution_Time Adam Beneschan
2010-12-16 21:13  5%                   ` Ada.Execution_Time Jeffrey Carter
2010-12-17  0:35  5%               ` New AdaIC site (was: Ada.Execution_Time) Randy Brukardt
2010-12-16 13:08  5%             ` Ada.Execution_Time Peter C. Chapin
2010-12-16 17:32  5%               ` Ada.Execution_Time BrianG
2010-12-16 18:17  5%             ` Ada.Execution_Time Jeffrey Carter
2010-12-16  8:45  5%           ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-16 16:49  4%             ` Ada.Execution_Time BrianG
2010-12-16 17:52  4%               ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-17  8:49 10%                 ` Ada.Execution_Time Niklas Holsti
2010-12-17  9:32  5%                   ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-17 11:50 11%                     ` Ada.Execution_Time Niklas Holsti
2010-12-17 13:10 10%                       ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-18 21:20  7%                         ` Ada.Execution_Time Niklas Holsti
2010-12-19  9:57  3%                           ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-25 11:31  7%                             ` Ada.Execution_Time Niklas Holsti
2010-12-26 10:25 11%                               ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-27 12:44 10%                                 ` Ada.Execution_Time Niklas Holsti
2010-12-27 15:28  6%                                   ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-27 20:11  9%                                     ` Ada.Execution_Time Niklas Holsti
2010-12-27 21:34  5%                                       ` Ada.Execution_Time Simon Wright
2010-12-28 10:01  4%                                         ` Ada.Execution_Time Niklas Holsti
2010-12-28 14:17  5%                                           ` Ada.Execution_Time Simon Wright
2010-12-27 21:53  3%                                       ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-28 14:14  9%                                         ` Ada.Execution_Time Simon Wright
2010-12-28 15:08  5%                                           ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-28 16:18  5%                                             ` Ada.Execution_Time Simon Wright
2010-12-28 16:34  5%                                               ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-31  0:40  5%                                             ` Ada.Execution_Time BrianG
2010-12-31  9:09  5%                                               ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-28 14:46  9%                                         ` Ada.Execution_Time Niklas Holsti
2010-12-28 15:42  9%                                           ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-28 16:27  4%                                             ` Ada.Execution_Time (see below)
2010-12-28 16:55  4%                                               ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-28 19:41  5%                                                 ` Ada.Execution_Time (see below)
2010-12-28 20:03  5%                                                   ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-28 22:39  5%                                                     ` Ada.Execution_Time Simon Wright
2010-12-29  9:07  5%                                                       ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-27 17:24  5%                                 ` Ada.Execution_Time Robert A Duff
2010-12-27 22:02  5%                                   ` Ada.Execution_Time Randy Brukardt
2010-12-27 22:43  4%                                     ` Ada.Execution_Time Robert A Duff
2010-12-27 22:11  4%                               ` Ada.Execution_Time Randy Brukardt
2010-12-29 12:48  8%                                 ` Ada.Execution_Time Niklas Holsti
2010-12-29 14:30  3%                                   ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-29 16:19  5%                                     ` Ada.Execution_Time (see below)
2010-12-29 16:51 10%                                       ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-29 19:57  4%                                         ` Ada.Execution_Time (see below)
2010-12-29 21:20  5%                                           ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-30  5:13 11%                                             ` Ada.Execution_Time Randy Brukardt
2010-12-30 13:37  5%                                             ` Ada.Execution_Time Niklas Holsti
2010-12-29 20:32 10%                                     ` Ada.Execution_Time Niklas Holsti
2010-12-29 21:21  5%                                       ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-30 13:34 12%                                         ` Ada.Execution_Time Niklas Holsti
2010-12-30 19:23  5%                                     ` Ada.Execution_Time Niklas Holsti
2010-12-30  5:06  2%                                   ` Ada.Execution_Time Randy Brukardt
2010-12-30 23:49  8%                                     ` Ada.Execution_Time Niklas Holsti
2010-12-31 23:34  8%                                       ` Ada.Execution_Time Randy Brukardt
2011-01-01 13:52 10%                                         ` Ada.Execution_Time Niklas Holsti
2011-01-01 14:42 11%                                           ` Ada.Execution_Time Simon Wright
2011-01-01 16:01  5%                                             ` Ada.Execution_Time Simon Wright
2011-01-01 19:18  5%                                               ` Ada.Execution_Time Niklas Holsti
2011-01-03 21:27  9%                                           ` Ada.Execution_Time Randy Brukardt
2011-01-06 22:55 12%                                             ` Ada.Execution_Time Niklas Holsti
2011-01-07  6:25  8%                                               ` Ada.Execution_Time Randy Brukardt
2011-01-01 15:54  9%                                         ` Ada.Execution_Time Simon Wright
2011-01-03 21:33  4%                                           ` Ada.Execution_Time Randy Brukardt
2011-01-05 15:55 10%                                             ` Ada.Execution_Time Brad Moore
2010-12-17  8:59 10%         ` Ada.Execution_Time anon
2010-12-19  3:07  5%           ` Ada.Execution_Time BrianG
2010-12-19  4:01  8%             ` Ada.Execution_Time Vinzent Hoefler
2010-12-19 11:00  6%               ` Ada.Execution_Time Niklas Holsti
2010-12-21  0:37  5%                 ` Ada.Execution_Time Randy Brukardt
2010-12-21  1:20  5%                   ` Ada.Execution_Time Jeffrey Carter
2010-12-19 12:27  5%               ` Ada.Execution_Time Dmitry A. Kazakov
2010-12-21  0:32 13%               ` Ada.Execution_Time Randy Brukardt
2010-12-19 22:54  3%             ` Ada.Execution_Time anon
2010-12-20  3:14  4%               ` Ada.Execution_Time BrianG
2010-12-22 14:30 14%                 ` Ada.Execution_Time anon
2010-12-22 20:09  4%                   ` Ada.Execution_Time BrianG
2010-12-26 10:25  7% Task execution time test Dmitry A. Kazakov
2010-12-27 18:26 12% An Example for Ada.Execution_Time anon
2010-12-28  2:31 14% ` BrianG
2010-12-28 13:43  9%   ` anon
2010-12-29  3:10  9%   ` Randy Brukardt
2010-12-30 23:51  4%     ` BrianG
2010-12-31  9:11 12%       ` Dmitry A. Kazakov
2010-12-31 12:42  9%         ` Niklas Holsti
2010-12-31 14:15 11%           ` Dmitry A. Kazakov
2010-12-31 18:57  9%             ` Niklas Holsti
2011-01-01 13:39  4%               ` Dmitry A. Kazakov
2011-01-01 20:25 10%                 ` Niklas Holsti
2011-01-03  8:50 11%                   ` Dmitry A. Kazakov
2010-12-31 13:05  4%         ` Simon Wright
2010-12-31 14:14  4%           ` Dmitry A. Kazakov
2010-12-31 14:24  5%           ` Robert A Duff
2010-12-31 22:40  5%           ` Simon Wright
2011-01-01  0:07  3%       ` Randy Brukardt
2010-12-30  8:54  7% Task execution time 2 Dmitry A. Kazakov
2014-07-04 17:23     Benchmark Ada, please Victor Porton
2014-07-05 12:34  7% ` Guillaume Foliard
2014-07-05 13:00  5%   ` Niklas Holsti
2014-07-05 17:00  6%     ` Guillaume Foliard
2014-07-05 18:29  0%       ` Niklas Holsti
2014-12-15 16:52     Access parameters and accessibility Michael B.
2014-12-16  7:45     ` Randy Brukardt
2014-12-16 19:46       ` Michael B.
2014-12-17  2:02  5%     ` Adam Beneschan
2014-12-17 23:18  0%       ` Randy Brukardt
2015-03-25 11:46     Languages don't matter. A mathematical refutation Jean François Martinez
2015-04-03 14:13     ` Dmitry A. Kazakov
2015-04-03 17:34       ` Paul Rubin
2015-04-04  0:41         ` Dennis Lee Bieber
2015-04-04  3:05           ` Paul Rubin
2015-04-04 14:46             ` Dennis Lee Bieber
2015-04-04 19:20               ` Paul Rubin
2015-04-04 20:00                 ` Dmitry A. Kazakov
2015-04-04 20:44                   ` Paul Rubin
2015-04-05  8:00                     ` Dmitry A. Kazakov
2015-04-06 17:07                       ` Paul Rubin
2015-04-06 17:41                         ` Dmitry A. Kazakov
2015-04-06 18:35                           ` Paul Rubin
2015-04-06 21:46                             ` Randy Brukardt
2015-04-06 22:12                               ` Paul Rubin
2015-04-07 19:07                                 ` Randy Brukardt
2015-04-08  3:53                                   ` Paul Rubin
2015-04-08 21:16                                     ` Randy Brukardt
2015-04-09  8:55                                       ` Georg Bauhaus
2015-04-09  9:38                                         ` Dmitry A. Kazakov
2015-04-09 13:14                                           ` G.B.
2015-04-09 14:35                                             ` Dmitry A. Kazakov
2015-04-09 18:40  4%                                           ` Niklas Holsti
2015-04-09 19:02  0%                                             ` Dmitry A. Kazakov
2017-06-08 10:36  7% GNAT.Sockets Streaming inefficiency masterglob
2019-05-30 13:57     Ada 95 Timers: Any Alternatives Available, Storage, Efficiency, Reliability, Accuracy, Et Cetera Issues? Felix_The_Cat@gmail.com
2019-05-30 15:34     ` Niklas Holsti
2019-05-30 16:20  8%   ` Dmitry A. Kazakov
