comp.lang.ada
From: Brad Moore <brad.moore@shaw.ca>
Subject: Re: Ada and OpenMP
Date: Fri, 08 Mar 2013 14:37:04 -0700
Message-ID: <%Rs_s.191804$uU.99428@newsfe11.iad>
In-Reply-To: <lyehfqomb0.fsf@pushface.org>

On 07/03/2013 3:52 PM, Simon Wright wrote:
> "Rego, P." <pvrego@gmail.com> writes:
>
>> I'm trying some exercises of parallel computing using that pragmas
>> from OpenMP in C, but it would be good to use it also in Ada. Is it
>> possible to use that pragmas from OpenMP in Ada? And...does gnat gpl
>> supports it?
>
> GNAT doesn't support OpenMP pragmas.
>
> But you might take a look at Paraffin:
> http://sourceforge.net/projects/paraffin/
>

To give an example using the Paraffin libraries:

The following code runs the same problem twice, first sequentially and
then in parallel using Paraffin.

with Ada.Real_Time;    use Ada.Real_Time;
with Ada.Command_Line;
with Ada.Text_IO; use Ada.Text_IO;
with Parallel.Iteration.Work_Stealing;

procedure Test_Loops is

    procedure Integer_Loops is new
      Parallel.Iteration.Work_Stealing (Iteration_Index_Type => Integer);

    Start : Time;

    Array_Size : Natural := 50;
    Iterations : Natural := 10_000_000;

begin

    --  Allow first command line parameter to override default iteration count
    if Ada.Command_Line.Argument_Count >= 1 then
       Iterations := Integer'Value (Ada.Command_Line.Argument (1));
    end if;

    --  Allow second command line parameter to override default array size
    if Ada.Command_Line.Argument_Count >= 2 then
       Array_Size := Integer'Value (Ada.Command_Line.Argument (2));
    end if;

    Data_Block : declare
       Data : array (1 .. Array_Size) of Natural := (others => 0);
    begin

       --  Sequential Version of the code, any parallelization must be auto
       --  generated by the compiler

       Start := Clock;

       for I in Data'Range loop
          for J in 1 .. Iterations loop
             Data (I) := Data (I) + 1;
          end loop;
       end loop;

       Put_Line ("Sequential Elapsed=" &
                 Duration'Image (To_Duration (Clock - Start)));

       Data := (others => 0);
       Start := Clock;

       --  Parallel Version of the code, explicitly parallelized using Paraffin
       declare

          procedure Iterate (First : Integer; Last : Integer) is
          begin
             for I in First .. Last loop
                for J in 1 .. Iterations loop
                   Data (I) := Data (I) + 1;
                end loop;
             end loop;
          end Iterate;

       begin
          Integer_Loops (From         => Data'First,
                         To           => Data'Last,
                         Worker_Count => 4,
                         Process      => Iterate'Access);
       end;

       Put_Line ("Parallel Elapsed=" &
                 Duration'Image (To_Duration (Clock - Start)));

    end Data_Block;

end Test_Loops;

When run on my AMD quad-core machine with parameters 100_000 100_000, 
and with full optimization (including -ftree-vectorize) turned on, I get:

Sequential Elapsed= 6.874298000
Parallel Elapsed= 6.287230000

With optimization turned off, I get:

Sequential Elapsed= 32.428908000
Parallel Elapsed= 8.424717000

gcc with GNAT does a good job of optimizing these cases when optimization 
is enabled, as shown, but the difference between relying on compiler 
optimization and using Paraffin can be more pronounced in more complex 
cases, such as loops that involve a reduction (e.g. calculating a sum).
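To sketch what such a reduction involves when done by hand, here is a
plain-tasking version of a parallel sum using only standard Ada (no
Paraffin; the worker count, chunking scheme, and all names here are my
own illustration, not Paraffin's API, which packages this pattern up
for you):

```ada
--  Illustrative sketch only: manual parallel reduction with plain
--  Ada tasks. Each worker accumulates into its own slot of Partial,
--  and the per-worker sums are combined (reduced) after all workers
--  have terminated.
with Ada.Text_IO; use Ada.Text_IO;

procedure Parallel_Sum is

   N            : constant := 10_000_000;
   Worker_Count : constant := 4;
   Chunk        : constant := N / Worker_Count;
   --  Assumes N is evenly divisible by Worker_Count

   Partial : array (1 .. Worker_Count) of Long_Long_Integer :=
     (others => 0);

   task type Worker is
      entry Start (Id : Positive);
   end Worker;

   task body Worker is
      Me  : Positive;
      Sum : Long_Long_Integer := 0;
   begin
      accept Start (Id : Positive) do
         Me := Id;
      end Start;
      for I in (Me - 1) * Chunk + 1 .. Me * Chunk loop
         Sum := Sum + Long_Long_Integer (I);
      end loop;
      Partial (Me) := Sum;  --  own slot only, so no data race
   end Worker;

   Total : Long_Long_Integer := 0;

begin
   declare
      Workers : array (1 .. Worker_Count) of Worker;
   begin
      for I in Workers'Range loop
         Workers (I).Start (I);
      end loop;
   end;  --  leaving the block waits for all workers to terminate

   --  Sequential reduction of the per-worker partial sums
   for I in Partial'Range loop
      Total := Total + Partial (I);
   end loop;

   Put_Line ("Sum =" & Long_Long_Integer'Image (Total));
end Parallel_Sum;
```

The key point is that each worker reduces into private state and the
partial results are only combined after synchronization; a naive port
of the sequential loop where all workers update one shared Total would
have a data race. Paraffin's reducing iteration generics handle this
bookkeeping for you.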

Brad


