comp.lang.ada
* Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing!
@ 2008-03-08  6:04 ME
  2008-03-08 22:11 ` Maciej Sobczak
                   ` (2 more replies)
  0 siblings, 3 replies; 96+ messages in thread
From: ME @ 2008-03-08  6:04 UTC (permalink / raw)


As many of you may have already noticed, there has been a tremendous furor over 
the lack of multicore support in common languages like C and C++.  I have been 
reading articles in EE Times and elsewhere discussing this disaster, with all 
the teeth-gnashing and handwringing acting as though Ada never existed. Robert 
Dewar, our hero, has written an absolutely excellent article with a clever 
intro.
http://www.eetimes.com/news/design/showArticle.jhtml?articleID=206900265 




^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing!
  2008-03-08  6:04 Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing! ME
@ 2008-03-08 22:11 ` Maciej Sobczak
  2008-03-09  1:09   ` Christopher Henrich
                     ` (5 more replies)
  2008-03-14 19:20 ` Mike Silva
  2008-03-22 22:51 ` Florian Weimer
  2 siblings, 6 replies; 96+ messages in thread
From: Maciej Sobczak @ 2008-03-08 22:11 UTC (permalink / raw)


On 8 Mar, 07:04, "ME" <abcd...@nonodock.net> wrote:
> As many of may have already noticed, there has been a tremendous furor over
> the lack of multicore support in the common languages like C and C++.

No, I did not notice it. It is possible that I've just been too busy
writing multithreaded software in C and C++, and that's why I've missed
this furor.

> Robert Dewar ,our hero, has written an absolutely excellent
> article with a clever intro.http://www.eetimes.com/news/design/showArticle.jhtml?articleID=206900265

No, he didn't write anything special. Actually, there is a lot more to
this subject that he didn't mention.
Take, for example, lock-free algorithms: there is no visible research on
these related to Ada, unlike Java and C++ (check
comp.programming.threads).
Ada will most likely miss the "multicore revolution" unless it *really*
focuses on performance; the point is that all this multicore hoopla
revolves around performance, *exclusively*.

--
Maciej Sobczak * www.msobczak.com * www.inspirel.com




* Re: Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing!
  2008-03-08 22:11 ` Maciej Sobczak
@ 2008-03-09  1:09   ` Christopher Henrich
  2008-03-09 13:52     ` Maciej Sobczak
  2008-03-09  1:51   ` Phaedrus
                     ` (4 subsequent siblings)
  5 siblings, 1 reply; 96+ messages in thread
From: Christopher Henrich @ 2008-03-09  1:09 UTC (permalink / raw)


In article 
<89af8399-94fb-42b3-909d-edf3c98d32e5@n75g2000hsh.googlegroups.com>,
 Maciej Sobczak <see.my.homepage@gmail.com> wrote:

> On 8 Mar, 07:04, "ME" <abcd...@nonodock.net> wrote:
> > As many of may have already noticed, there has been a tremendous furor over
> > the lack of multicore support in the common languages like C and C++.
> 
> No, I did not notice it. It is possble that I've been just too busy
> writing multithreaded software in C and C++ and that's why I've missed
> this furor.
> 
> > Robert Dewar ,our hero, has written an absolutely excellent
> > article with a clever 
> > intro.http://www.eetimes.com/news/design/showArticle.jhtml?articleID=2069002
> > 65
> 
> No, he didn't write anything special. Actually, there is a lot more to
> this subject that he didn't mention.
> Take for example lock-free algorithms. There is no visible research on
> this related to Ada, unlike Java and C++ (check on
> comp.programming.threads).
> Ada will most likely miss the "multicore revolution", unless it will
> *really* focus on performance - the point is that all this multicore
> hoopla revolves around performance, *exclusively*.
> 
Not correctness?

-- 
Christopher J. Henrich
chenrich@monmouth.com
http://www.mathinteract.com




* Re: Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing!
  2008-03-08 22:11 ` Maciej Sobczak
  2008-03-09  1:09   ` Christopher Henrich
@ 2008-03-09  1:51   ` Phaedrus
  2008-03-09  3:17     ` Jeffrey R. Carter
  2008-03-09 13:59     ` Maciej Sobczak
  2008-03-09  3:15   ` Jeffrey R. Carter
                     ` (3 subsequent siblings)
  5 siblings, 2 replies; 96+ messages in thread
From: Phaedrus @ 2008-03-09  1:51 UTC (permalink / raw)


Ultimately, the most important performance of all is the performance of your 
developers.  If the product doesn't get to market in time, it might as well 
never have been developed.  That's yet another place where Ada shines.  Sure, 
you COULD write multithreaded software in C and C++; you could even do it in 
assembly or machine code.  (Not actually a lot of difference there, IMHO.) 
But I bet I'll have my tasks up and running FIRST, and they'll be easier to 
debug, too.

By the way, a quick search with Google brought up quite a few results.  (I'd 
suggest a quick look at Anders Gidenstam's page.)  Maybe you're too busy 
writing multithreaded software in C and C++; if you did your work in Ada you 
might have time left over to actually search BEFORE making unsubstantiated 
claims against Ada.

Ada hasn't "missed" the "multicore revolution".  Quite the opposite: Ada has 
had multitasking built in since the mid '80s, and it works just fine on 
multicore platforms.  (I know, I've been there, done that.)  Perhaps someday 
one of those C variants you seem to prefer will have the same kind of 
advanced features.

Just my $0.02 worth.

Brian




"Maciej Sobczak" <see.my.homepage@gmail.com> wrote in message 
news:89af8399-94fb-42b3-909d-edf3c98d32e5@n75g2000hsh.googlegroups.com...
> On 8 Mar, 07:04, "ME" <abcd...@nonodock.net> wrote:
>> As many of may have already noticed, there has been a tremendous furor 
>> over
>> the lack of multicore support in the common languages like C and C++.
>
> No, I did not notice it. It is possble that I've been just too busy
> writing multithreaded software in C and C++ and that's why I've missed
> this furor.
>
>> Robert Dewar ,our hero, has written an absolutely excellent
>> article with a clever 
>> intro.http://www.eetimes.com/news/design/showArticle.jhtml?articleID=206900265
>
> No, he didn't write anything special. Actually, there is a lot more to
> this subject that he didn't mention.
> Take for example lock-free algorithms. There is no visible research on
> this related to Ada, unlike Java and C++ (check on
> comp.programming.threads).
> Ada will most likely miss the "multicore revolution", unless it will
> *really* focus on performance - the point is that all this multicore
> hoopla revolves around performance, *exclusively*.
>
> --
> Maciej Sobczak * www.msobczak.com * www.inspirel.com 






* Re: Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing!
  2008-03-08 22:11 ` Maciej Sobczak
  2008-03-09  1:09   ` Christopher Henrich
  2008-03-09  1:51   ` Phaedrus
@ 2008-03-09  3:15   ` Jeffrey R. Carter
  2008-03-09 13:32     ` Maciej Sobczak
  2008-03-09  8:20   ` Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing! Pascal Obry
                     ` (2 subsequent siblings)
  5 siblings, 1 reply; 96+ messages in thread
From: Jeffrey R. Carter @ 2008-03-09  3:15 UTC (permalink / raw)


Maciej Sobczak wrote:
> 
> No, I did not notice it. It is possble that I've been just too busy
> writing multithreaded software in C and C++ and that's why I've missed
> this furor.

No, you haven't.

You may have been writing multithreaded SW in C plus a threading library, or in 
C++ plus a threading library, but you haven't been writing it in C or C++.

-- 
Jeff Carter
"No one is to stone anyone until I blow this whistle,
do you understand? Even--and I want to make this
absolutely clear--even if they do say, 'Jehovah.'"
Monty Python's Life of Brian
74




* Re: Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing!
  2008-03-09  1:51   ` Phaedrus
@ 2008-03-09  3:17     ` Jeffrey R. Carter
  2008-03-09 13:59     ` Maciej Sobczak
  1 sibling, 0 replies; 96+ messages in thread
From: Jeffrey R. Carter @ 2008-03-09  3:17 UTC (permalink / raw)


Phaedrus wrote:
> 
> Ada hasn't "missed" the "multicore revolution".  Quite the opposite, Ada has 
> had multitasking built-in since the mid 80's and it works just fine on 
> multicore platforms.  (I know, I've been there, done that.)  Perhaps someday 
> one of those C variants you seem to prefer will have the same kind of 
> advanced features.

Ada has had tasking since 1980 (Ada 80, MIL-STD 1815). It was significantly 
revised for Ada 83.

-- 
Jeff Carter
"No one is to stone anyone until I blow this whistle,
do you understand? Even--and I want to make this
absolutely clear--even if they do say, 'Jehovah.'"
Monty Python's Life of Brian
74




* Re: Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing!
  2008-03-08 22:11 ` Maciej Sobczak
                     ` (2 preceding siblings ...)
  2008-03-09  3:15   ` Jeffrey R. Carter
@ 2008-03-09  8:20   ` Pascal Obry
  2008-03-09  9:39     ` Georg Bauhaus
                       ` (2 more replies)
  2008-03-10 21:24   ` Randy Brukardt
  2008-04-29  7:15   ` Ivan Levashew
  5 siblings, 3 replies; 96+ messages in thread
From: Pascal Obry @ 2008-03-09  8:20 UTC (permalink / raw)
  To: Maciej Sobczak

Maciej Sobczak wrote:
> Ada will most likely miss the "multicore revolution", unless it will
> *really* focus on performance - the point is that all this multicore
> hoopla revolves around performance, *exclusively*.

Probably, and that's why C++ code using threading, OpenMP or MPI is just a 
mess. It is impossible to maintain because C++ hackers seem to prefer working 
hard for six months to gain 2% in performance instead of buying a new 
computer with more cores or adding some nodes to a cluster. Sorry, but 
I've seen it: a horrible mess, just because hacking C++ code seems fun to 
many people.

I prefer using Ada, even losing 10% performance initially, to have a 
clean object-oriented design (using the distributed annex and Ada tasking). 
Then, when the application is done, just buy the new multicore box and 
you get back the performance you lost initially.

The cherry on top of the cake is that your application can be ported to 
a new architecture without much trouble. I've seen people spend 
months porting an application from one machine to another to "tweak" it 
as best as possible. A big loss of money when you compare the salaries 
against the price of a new machine. Especially since afterwards the application 
is full of #define and twisted code that makes it simply unmaintainable.

I hope at some point people will understand that... All this man-power 
burned stupidly!

Pascal.

-- 

--|------------------------------------------------------
--| Pascal Obry                           Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--|              http://www.obry.net
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595




* Re: Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing!
  2008-03-09  8:20   ` Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing! Pascal Obry
@ 2008-03-09  9:39     ` Georg Bauhaus
  2008-03-09 12:40     ` Vadim Godunko
  2008-03-09 13:50     ` Maciej Sobczak
  2 siblings, 0 replies; 96+ messages in thread
From: Georg Bauhaus @ 2008-03-09  9:39 UTC (permalink / raw)


Pascal Obry wrote:
> Maciej Sobczak wrote:
>> Ada will most likely miss the "multicore revolution", unless it will
>> *really* focus on performance - the point is that all this multicore
>> hoopla revolves around performance, *exclusively*.
> 
> Probably, that's why C++ code using threading, OpenMP or MPI are just a
> mess. Impossible to maintain because C++ hackers seems to prefer working
> hard 6 month for gaining 2% of performance instead of buying a new
> computer with more core or adding some node on a cluster. Sorry, but
> I've seen that, horrible mess just because hacking C++ code seems fun to
> many people.

Then again, multicore chips have atomic updates built in, which offer
some opportunities that happen to be part of the Ada language already.
Now this language feature becomes visible as part of the top-selling
CPUs...
There is research on employing these CPU mechanisms for Ada, but, IIUC,
it is not *visible* on many of the sites seen by those interested in
how to use the new multicore CPUs.
And inclusion in an Ada RTS/library is not done yet, or is it?

Multicore algorithms can continue the great academic tradition of
efficient algorithms. Aren't lock-free ones a natural starting point?
They also have their uses.
IIRC, ready-made Communicating Sequential Processes has lower
visibility in CS than the basic critical-section model.

Ada's tasking implementations are not currently known to be the best
choice when an algorithm is about using a multicore CPU with word-sized
memory efficiently.  The tasking protocol as implemented for x86 to
date turns out to be too heavyweight; the cost is far beyond 10%.



> The cherry on top of the cake is that your application can be ported to
> a new architecture without much trouble.

Gidenstam has ported his Primitives (Ada packages hiding the
CPU's atomic updates) to at least Intel, Sparc, and MIPS.
Built on top of the Primitives, he has a lock-free bounded buffer
in queue mode...




* Re: Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing!
  2008-03-09  8:20   ` Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing! Pascal Obry
  2008-03-09  9:39     ` Georg Bauhaus
@ 2008-03-09 12:40     ` Vadim Godunko
  2008-03-09 13:37       ` Dmitry A. Kazakov
                         ` (2 more replies)
  2008-03-09 13:50     ` Maciej Sobczak
  2 siblings, 3 replies; 96+ messages in thread
From: Vadim Godunko @ 2008-03-09 12:40 UTC (permalink / raw)


On Mar 9, 11:20 am, Pascal Obry <pas...@obry.net> wrote:
>
> I prefer using Ada, even losing 10% performance initially.
I usually see a 4x+ penalty for an Ada program using controlled objects
for memory allocation/deallocation and protected objects for
thread-safe operations, in comparison with the equivalent C++
program. :-(

#include <QString>
#include <QTime>

void test(unsigned length)
{
        uint x[length];
        QString a[1024];
        QString b[1024];
        QTime timer;

        timer.start();

        for (int i = 0; i < 1024; i++)
        {
                a[i] = QString::fromUcs4(x, length);
        }

        qDebug("Init %d %lf", length, (double)timer.elapsed() / 1000);

        timer.restart();

        for (int i = 0; i < 1000; i++)
        {
                if (i % 2)
                {
                        for (int j = 0; j < 1024; j++)
                        {
                                a[j] = b[j];
                        }
                }
                else
                {
                        for (int j = 0; j < 1024; j++)
                        {
                                b[j] = a[j];
                        }
                }
        }

        qDebug("Copy %d %lf", length, (double)timer.elapsed() / 1000);
}

int main()
{
        test (128);
        test (1024);
        return 0;
}

with Ada.Calendar;
with Ada.Wide_Wide_Text_IO;

with League.Strings;

procedure Speed_Test_League is

   type Universal_String_Array is
     array (Positive range <>) of League.Strings.Universal_String;

   procedure Test (Length : in Positive);

   procedure Test (Length : in Positive) is
      use type Ada.Calendar.Time;

      X : constant Wide_Wide_String (1 .. Length) := (others => ' ');
      A : Universal_String_Array (1 .. 1_024);
      B : Universal_String_Array (1 .. 1_024);
      S : Ada.Calendar.Time;

   begin
      S := Ada.Calendar.Clock;

      for J in A'Range loop
         A (J) := League.Strings.To_Universal_String (X);
      end loop;

      Ada.Wide_Wide_Text_IO.Put_Line
       ("Init"
          & Positive'Wide_Wide_Image (Length)
          & Duration'Wide_Wide_Image (Ada.Calendar.Clock - S));

      S := Ada.Calendar.Clock;

      for J in 1 .. 1_000 loop
         if J mod 2 = 1 then
            B := A;

         else
            A := B;
         end if;
      end loop;

      Ada.Wide_Wide_Text_IO.Put_Line
       ("Copy"
          & Positive'Wide_Wide_Image (Length)
          & Duration'Wide_Wide_Image (Ada.Calendar.Clock - S));
   end Test;

begin
   Test (128);
   Test (1_024);
end Speed_Test_League;




* Re: Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing!
  2008-03-09  3:15   ` Jeffrey R. Carter
@ 2008-03-09 13:32     ` Maciej Sobczak
  2008-03-09 14:02       ` Dmitry A. Kazakov
                         ` (3 more replies)
  0 siblings, 4 replies; 96+ messages in thread
From: Maciej Sobczak @ 2008-03-09 13:32 UTC (permalink / raw)


On 9 Mar, 04:15, "Jeffrey R. Carter" <spam.jrcarter....@spam.acm.org>
wrote:

> You may have been writing multithreaded SW in C plus a threading library, or in
> C++ plus a threading library, but you haven't been writing it in C or C++.

You are right.
In the same way, nobody ever wrote a GUI program in Ada, nor a
networking program; nobody did any cryptography in Ada, etc. These
languages are almost completely useless nowadays, sigh.
I accept this way of reasoning - but it does not change anything in
the industry practice.

--
Maciej Sobczak * www.msobczak.com * www.inspirel.com




* Re: Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing!
  2008-03-09 12:40     ` Vadim Godunko
@ 2008-03-09 13:37       ` Dmitry A. Kazakov
  2008-03-09 14:41         ` Vadim Godunko
  2008-03-10  9:56         ` Ole-Hjalmar Kristensen
  2008-03-11 13:58       ` george.priv
  2008-03-11 22:09       ` gpriv
  2 siblings, 2 replies; 96+ messages in thread
From: Dmitry A. Kazakov @ 2008-03-09 13:37 UTC (permalink / raw)


On Sun, 9 Mar 2008 05:40:53 -0700 (PDT), Vadim Godunko wrote:

> On Mar 9, 11:20 am, Pascal Obry <pas...@obry.net> wrote:
>>
>> I prefer using Ada, even losing 10% performance initially.
>
> I usually have 4+ times penalty for Ada program with controlled object
> for memory allocation/deallocation control and protected objects for
> thread safe operations in comparison with equivalent C++ program. :-(

[...]
 
Controlled objects' performance is of course an issue in Ada when you need
fast initialization/finalization. Something should be redesigned here: not
the controlled types, but the other, by-value types, in order to support
initialization/finalization and classes.

It is unclear in which context you are using protected objects. I don't
see why a protected object should be slower than, say, a critical section +
operation.

I don't see protected actions in your code. You probably use them for
locking somewhere inside. That would be an abstraction inversion, bad
design. (However, one could attribute this to another Ada problem: protected
objects aren't tagged/controlled.)

>       for J in 1 .. 1_000 loop
>          if J mod 2 = 1 then
>             B := A;

Is this a deep copy? What is controlled: the array, the elements, or both?
What is the locking policy: per container, per element, or both?

As for multi-cores, I guess that protected objects would not be a good
concurrency primitive for such architectures. In the long term, we could
see the pendulum swing back to tasks.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing!
  2008-03-09  8:20   ` Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing! Pascal Obry
  2008-03-09  9:39     ` Georg Bauhaus
  2008-03-09 12:40     ` Vadim Godunko
@ 2008-03-09 13:50     ` Maciej Sobczak
  2008-03-09 14:54       ` Pascal Obry
  2 siblings, 1 reply; 96+ messages in thread
From: Maciej Sobczak @ 2008-03-09 13:50 UTC (permalink / raw)


On 9 Mar, 09:20, Pascal Obry <pas...@obry.net> wrote:

> > Ada will most likely miss the "multicore revolution", unless it will
> > *really* focus on performance - the point is that all this multicore
> > hoopla revolves around performance, *exclusively*.
>
> Probably, that's why C++ code using threading, OpenMP or MPI are just a
> mess.

That is not a general truth.
I've seen C++ programs that are well designed and easy to port (there
are portable libraries, actually, so there is nothing to port).
I've also seen a complete Ada mess with rendezvous and ATC. Actually,
Ravenscar forbids some tasking features; there must be a reason for
this, right?
If you have a good multithreading *design*, it can be equally easy to
express it in Ada as in C++, especially if you take only the Ravenscar
subset of Ada.

> Impossible to maintain because C++ hackers seems to prefer working
> hard 6 month for gaining 2% of performance instead of buying a new
> computer with more core or adding some node on a cluster. Sorry, but
> I've seen that, horrible mess just because hacking C++ code seems fun to
> many people.

I agree with you. There are many C++ hackers.
But there also exist C++ software engineers.

> I prefer using Ada, even losing 10% performance initially

I've seen an 80x (eighty times) penalty when comparing Ada's protected
objects with basic usage of mutexes in C++.
80x is not something to be taken lightly.

> The cherry on top of the cake is that your application can be ported to
> a new architecture without much trouble. I've seen people spending
> months porting an application from one machine to another to "tweak" it
> as best as possible. A big lost of money when you compare the salary
> against the price of a new machine. Especially since now the application
> if full of #define and twisted code that make it just unmaintainable.

I've seen things like this as well, but there are ways to do it much
more cleanly.

I don't have problems writing well-structured and portable C++
multithreaded software, and I'm very much concerned about the
performance penalty imposed by Ada tasking features compared to what
can be accomplished with C++.

--
Maciej Sobczak * www.msobczak.com * www.inspirel.com




* Re: Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing!
  2008-03-09  1:09   ` Christopher Henrich
@ 2008-03-09 13:52     ` Maciej Sobczak
  0 siblings, 0 replies; 96+ messages in thread
From: Maciej Sobczak @ 2008-03-09 13:52 UTC (permalink / raw)


On 9 Mar, 02:09, Christopher Henrich <chenr...@monmouth.com> wrote:

> > the point is that all this multicore
> > hoopla revolves around performance, *exclusively*.
>
> Not correctness?

No. Nobody builds multicore CPUs *because of* correctness.
It is performance that is the motivator here.

Correctness is the domain of programming languages, not multicore
architectures.

--
Maciej Sobczak * www.msobczak.com * www.inspirel.com




* Re: Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing!
  2008-03-09  1:51   ` Phaedrus
  2008-03-09  3:17     ` Jeffrey R. Carter
@ 2008-03-09 13:59     ` Maciej Sobczak
  1 sibling, 0 replies; 96+ messages in thread
From: Maciej Sobczak @ 2008-03-09 13:59 UTC (permalink / raw)


On 9 Mar, 02:51, "Phaedrus" <phaedrus...@hotmail.com> wrote:
> Ultimately, the most important performance of all is the performance of your
> developers.

Yes. Do you think that multicore CPUs will improve it, apart from, say,
faster builds?

>  If the product doesn't get to market in time then it might as
> well never be developed.

Yes.

> That's yet another place where Ada shines.

I find it difficult to understand. The productivity of any particular
language depends mainly on the availability of reusable components
(libraries). Ada does not shine in this department, which is a real
pity.

> Sure,
> you COULD write multithreaded software in C and C++

"Could" has a conditional meaning in English (I think so).
I actually DO write such software.

> you could even do it in
> assembly or machine code.

Yes, but that WOULD be a perversion.

> But I bet I'll have my tasks up and running FIRST, and they'll be easier to
> debug, too.

Why? What makes you think so?

> I'd
> suggest a quick look at Anders Gidenstam's page.

Thank you for this reference.

--
Maciej Sobczak * www.msobczak.com * www.inspirel.com




* Re: Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing!
  2008-03-09 13:32     ` Maciej Sobczak
@ 2008-03-09 14:02       ` Dmitry A. Kazakov
  2008-03-09 18:26       ` Phaedrus
                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 96+ messages in thread
From: Dmitry A. Kazakov @ 2008-03-09 14:02 UTC (permalink / raw)


On Sun, 9 Mar 2008 06:32:29 -0700 (PDT), Maciej Sobczak wrote:

> On 9 Mar, 04:15, "Jeffrey R. Carter" <spam.jrcarter....@spam.acm.org>
> wrote:
> 
>> You may have been writing multithreaded SW in C plus a threading library, or in
>> C++ plus a threading library, but you haven't been writing it in C or C++.
> 
> You are right.
> In the same way, nobody every wrote a GUI program in Ada, nor a
> networking program, nobody did any cryptography in Ada, etc. These
> languages are almost completely useless nowadays, sigh.

There is a danger of making a logical error here. Cryptography is
computable (except for true random generators); tasking is not. Real-time
clocks aren't either. GUI and networking are border cases: broadly, they are
much less troublesome because their primitives, once wrapped into
subprograms, compose well. Concurrency primitives don't. That's IMO the
reason why I/O was removed from programming languages after PL/1.

Arguably, GUIs are now close to making a comeback. GUI libraries are just
so awful that one might consider language primitives for them. As a response,
there exist lots of languages for GUI design (LabVIEW, for example).

Certainly everybody would agree that sockets should be a standard Ada
library, and this is easier to do because Ada has tasking support.

> I accept this way of reasoning - but it does not change anything in
> the industry practice.

Right. In industry one first chooses a low-cost processor, and chances are
extremely high that there is no Ada support for it. Industry does not care
about threading in particular or software development in general.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Robert Dewar's great article about the Strengths of Ada over other languages in multiprocessing!
  2008-03-09 13:37       ` Dmitry A. Kazakov
@ 2008-03-09 14:41         ` Vadim Godunko
  2008-03-10 20:51           ` Randy Brukardt
  2008-03-10  9:56         ` Ole-Hjalmar Kristensen
  1 sibling, 1 reply; 96+ messages in thread
From: Vadim Godunko @ 2008-03-09 14:41 UTC (permalink / raw)


On Mar 9, 4:37 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:
>
> It is unclear in which the context you are using protected objects. I don't
> see why a protected object should be slower than, say, critical section +
> operation.
>
I have attached all the source code. A protected object is used for atomic
reference counting (the design may look strange; I plan to replace the
protected object with inline assembler code). The C++ class uses inline
assembler for the same purpose. Both the C++ class and the Ada tagged type
share the internal string data.

> >       for J in 1 .. 1_000 loop
> >          if J mod 2 = 1 then
> >             B := A;
>
> Is this a deep copy?
No.

> What is controlled, array, elements, both?
The array's elements are controlled.

> What is the locking policy, per container, per element, both?
Per element.

------->-8--------

private with Ada.Finalization;

private with League.Internals.Atomics;

package League.Strings is

   pragma Preelaborate;

   type Universal_String is tagged private;

   function To_Universal_String (Item : in Wide_Wide_String)
     return Universal_String;

   function To_Wide_Wide_String (Self : in Universal_String'Class)
     return Wide_Wide_String;

   function "=" (Left  : in Universal_String;
                 Right : in Universal_String)
     return Boolean;

private

   type Utf16_String is new Wide_String;

   type Utf16_String_Access is access all Utf16_String;

   type Private_Data is record
      Counter : aliased League.Internals.Atomics.Counter;
      String  : Utf16_String_Access;
      Last    : Natural := 0;
      Length  : Natural := 0;
   end record;

   type Private_Data_Access is access all Private_Data;

   Empty_String : aliased Utf16_String := "";

   Shared_Empty : aliased Private_Data
     := (String => Empty_String'Access,
         others => <>);

   type Universal_String is new Ada.Finalization.Controlled with record
      Data : Private_Data_Access := Shared_Empty'Access;
   end record;

   overriding
   procedure Initialize (Self : in out Universal_String);

   overriding
   procedure Adjust (Self : in out Universal_String);

   overriding
   procedure Finalize (Self : in out Universal_String);

end League.Strings;

with Ada.Unchecked_Deallocation;

package body League.Strings is

   Surrogate_First      : constant := 16#D800#;
   High_Surrogate_First : constant := 16#D800#;
   High_Surrogate_Last  : constant := 16#DBFF#;
   Low_Surrogate_First  : constant := 16#DC00#;
   Low_Surrogate_Last   : constant := 16#DFFF#;
   Surrogate_Last       : constant := 16#DFFF#;

   subtype Surrogate_Wide_Character is Wide_Character
     range Wide_Character'Val (Surrogate_First)
             .. Wide_Character'Val (Surrogate_Last);

   subtype High_Surrogate_Wide_Character is Surrogate_Wide_Character
     range Wide_Character'Val (High_Surrogate_First)
             .. Wide_Character'Val (High_Surrogate_Last);

   subtype Low_Surrogate_Wide_Character is Surrogate_Wide_Character
     range Wide_Character'Val (Low_Surrogate_First)
             .. Wide_Character'Val (Low_Surrogate_Last);

   procedure Free is
     new Ada.Unchecked_Deallocation (Private_Data, Private_Data_Access);

   procedure Free is
     new Ada.Unchecked_Deallocation (Utf16_String, Utf16_String_Access);

   function "=" (Left  : in Universal_String;
                 Right : in Universal_String)
     return Boolean
   is
   begin
      raise Program_Error;
      return False;
   end "=";

   overriding
   procedure Adjust (Self : in out Universal_String) is
   begin
      League.Internals.Atomics.Increment (Self.Data.Counter'Access);
   end Adjust;

   overriding
   procedure Finalize (Self : in out Universal_String) is
   begin
      if League.Internals.Atomics.Decrement (Self.Data.Counter'Access) then
         pragma Assert (Self.Data /= Shared_Empty'Access);

         Free (Self.Data.String);
         Free (Self.Data);
      end if;
   end Finalize;

   overriding
   procedure Initialize (Self : in out Universal_String) is
   begin
      League.Internals.Atomics.Increment (Self.Data.Counter'Access);
   end Initialize;

   function To_Universal_String (Item : in Wide_Wide_String)
     return Universal_String
   is
      Aux  : Utf16_String_Access
        := new Utf16_String (1 .. Item'Length * 2);
      --  Reserve memory on the assumption that every character will be
      --  encoded as a surrogate pair.
      Last : Natural := 0;

   begin
      for J in Item'Range loop
         if Item (J) > Wide_Wide_Character'Val (Code_Point'Last)
           or else Item (J) in
             Wide_Wide_Character'Val (Surrogate_First)
               .. Wide_Wide_Character'Val (Surrogate_Last)
         then
            raise Constraint_Error
              with "Wide_Wide_Character is not a valid Unicode code point";
         end if;

         declare
            C : constant Code_Point
              := Wide_Wide_Character'Pos (Item (J));

         begin
            if C <= 16#FFFF# then
               Last := Last + 1;
               Aux (Last) := Wide_Character'Val (C);

            else
               Last := Last + 1;
               Aux (Last) :=
                 Wide_Character'Val
                  (High_Surrogate_First + (C - 16#1_0000#) / 16#400#);

               Last := Last + 1;
               Aux (Last) :=
                 Wide_Character'Val (Low_Surrogate_First + C mod 16#400#);
            end if;
         end;
      end loop;

      return
       (Ada.Finalization.Controlled with
          Data =>
            new Private_Data'
                 (Counter => League.Internals.Atomics.One,
                  String  => Aux,
                  Last    => Last,
                  Length  => Item'Length));

   exception
      when others =>
         Free (Aux);

         raise;
   end To_Universal_String;

   function To_Wide_Wide_String (Self : in Universal_String'Class)
     return Wide_Wide_String
   is
      Current : Positive := 1;

   begin
      return Result : Wide_Wide_String (1 .. Self.Data.Length) do
         for J in Result'Range loop
            if Self.Data.String (Current) in Surrogate_Wide_Character then
               if Current < Self.Data.Last
                 and then Self.Data.String (Current)
                            in High_Surrogate_Wide_Character
                 and then Self.Data.String (Current + 1)
                            in Low_Surrogate_Wide_Character
               then
                  Result (J) :=
                    Wide_Wide_Character'Val
                     ((Wide_Character'Pos (Self.Data.String (Current))
                         - High_Surrogate_First) * 16#400#
                        + (Wide_Character'Pos
                            (Self.Data.String (Current + 1))
                            - Low_Surrogate_First)
                        + 16#1_0000#);
                  Current := Current + 2;

               else
                  raise Constraint_Error
                    with "Ill-formed UTF-16 string: invalid surrogate pair";
               end if;

            else
               Result (J) :=
                 Wide_Wide_Character'Val
                  (Wide_Character'Pos (Self.Data.String (Current)));
               Current := Current + 1;
            end if;
         end loop;

         pragma Assert (Current = Self.Data.Last + 1);
      end return;
   end To_Wide_Wide_String;

end League.Strings;
private with Interfaces.C;

package League.Internals.Atomics is

   pragma Preelaborate;

   type Counter is private;

   Zero : constant Counter;
   One  : constant Counter;

   procedure Increment (Self : not null access Counter);
   --  Atomically increment the counter value.

   function Decrement (Self : not null access Counter)
     return Boolean;
   --  Atomically decrement the counter value. Returns True if the counter
   --  is zero after the decrement.

private

   type Counter is record
      Value : Interfaces.C.int := 1;
   end record;

   Zero : constant Counter := (Value => 0);
   One  : constant Counter := (Value => 1);

end League.Internals.Atomics;
------------------------------------------------------------------------------
--  This is the portable version of the package.
------------------------------------------------------------------------------

package body League.Internals.Atomics is

   protected Guard is

      procedure Increment (Self : not null access Counter);

      procedure Decrement (Self : not null access Counter;
                           Zero : out Boolean);

   end Guard;

   function Decrement (Self : not null access Counter)
     return Boolean
   is
      Aux : Boolean;

   begin
      Guard.Decrement (Self, Aux);

      return Aux;
   end Decrement;

   protected body Guard is

      procedure Decrement (Self : not null access Counter;
                           Zero : out Boolean)
      is
         use type Interfaces.C.int;

      begin
         Self.Value := Self.Value - 1;

         Zero := Self.Value = 0;
      end Decrement;

      procedure Increment (Self : not null access Counter) is
         use type Interfaces.C.int;

      begin
         Self.Value := Self.Value + 1;
      end Increment;

   end Guard;

   procedure Increment (Self : not null access Counter) is
   begin
      Guard.Increment (Self);
   end Increment;

end League.Internals.Atomics;
package League.Internals is

   pragma Pure;

end League.Internals;
package League is

   pragma Pure;

   type Code_Point is mod 16#11_0000#;
   for Code_Point'Size use 32;

end League;



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-09 13:50     ` Maciej Sobczak
@ 2008-03-09 14:54       ` Pascal Obry
  0 siblings, 0 replies; 96+ messages in thread
From: Pascal Obry @ 2008-03-09 14:54 UTC (permalink / raw)
  To: Maciej Sobczak


Maciej,

> I've seen 80x (eighty times) penalty when comparing Ada's protected
> objects with basic usage of mutexes in C++.
> 80x is not something to be taken lightly.

I've never seen such a penalty. Maybe in theory, or in a very specific part 
of the code. But let's compare the *final* application speed. I have 
gone down this path in a medium-sized simulation: the Ada implementation 
was slower in some parts, the C++/OpenMP implementation was slower in 
other parts. The final application ran at the same speed in Ada and C++ 
(in fact the Ada implementation was a bit less than 1% faster than 
the C++ one).

Also, one point about the C++/MPI version (we also worked on a 
distributed version; even if I don't have the final data, speed was 
almost comparable) compared to the Ada Annex E version. My co-workers 
were amazed at how fast I was able to re-configure the distributed 
application. Where it took days or weeks to change the MPI implementation, 
it took me hours to change the GLADE configuration file. Also, the 
facility to exchange Ada container objects across partitions was pretty 
amazing. No tweaks, no hacks, clean code, just plain Ada.

I know a group of C++ hackers still trying to come up with a clean 
solution for exchanging objects (class instances) across nodes... It is 
impossible to stream objects properly in C++ unless you code almost all 
of it by hand! All these aspects are far more important to me than pure 
speed. I understand that this tradeoff can be different in some other 
applications, but I won't buy that those are the majority of cases!

Pascal.

-- 

--|------------------------------------------------------
--| Pascal Obry                           Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--|              http://www.obry.net
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-09 13:32     ` Maciej Sobczak
  2008-03-09 14:02       ` Dmitry A. Kazakov
@ 2008-03-09 18:26       ` Phaedrus
  2008-03-10  0:04         ` Ray Blaak
  2008-03-09 22:31       ` Jeffrey R. Carter
  2008-03-10  3:04       ` Robert Dewar's great article about the Strengths of Ada over Larry Kilgallen
  3 siblings, 1 reply; 96+ messages in thread
From: Phaedrus @ 2008-03-09 18:26 UTC (permalink / raw)


> In the same way, nobody ever wrote a GUI program in Ada, nor a
> networking program, nobody did any cryptography in Ada, etc. These
> languages are almost completely useless nowadays, sigh.

I guess that I qualify as "nobody" then, 'cause I chose to do my graphics 
work at UCLA in Ada.  The only piece of legacy code that was allowed was the 
ability to write a dot, not a line, to the screen.  Everything else was 
roll-your-own, and I did.  Wrote a nice matrix package, did all of the 
bilinear interpolation necessary for a nice shading algorithm, and had 3D 
objects spinning on the screen, with shading, and painter's algorithm. 
Looked pretty nice, too.

So, what did I learn from all this?  First, with rare exception you CAN 
write darn near anything in almost any language.  (I'll ignore Lisp for 
now.)  Second, I'd much rather write complicated structures in a language 
that makes it easier to understand those structures later.  (A good reason 
for ignoring Lisp.  And C++, and C#, and Java and assembly and machine 
code...)  If you can't figure out what it's doing at 3am, then it's probably 
too cryptic for general use, and you'd be surprised how much of your work 
will get done around that time.

By the way, I've done robotics projects in Ada, I've done a little crypto in 
Ada, I've done a fair amount of number crunching in Ada, and a LOT of 
realtime, embedded weapons work in Ada.  Oh, and my speciality is realtime 
3d graphics, piping out many frames per second with a nice GUI, and (with 
the exception of OpenGl) it's all in Ada.  My little company uses Ada almost 
exclusively, and we're very happy with the competitive advantage it gives 
us.

Sometimes it's nice being "nobody".

Brian

"Maciej Sobczak" <see.my.homepage@gmail.com> wrote in message 
news:53e0fda7-a536-4899-a115-9d4e137ac698@13g2000hsb.googlegroups.com...
> On 9 Mar, 04:15, "Jeffrey R. Carter" <spam.jrcarter....@spam.acm.org>
> wrote:
>
>> You may have been writing multithreaded SW in C plus a threading library, 
>> or in
>> C++ plus a threading library, but you haven't been writing it in C or 
>> C++.
>
> You are right.
> In the same way, nobody ever wrote a GUI program in Ada, nor a
> networking program, nobody did any cryptography in Ada, etc. These
> languages are almost completely useless nowadays, sigh.
> I accept this way of reasoning - but it does not change anything in
> the industry practice.
>
> --
> Maciej Sobczak * www.msobczak.com * www.inspirel.com 





^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-09 13:32     ` Maciej Sobczak
  2008-03-09 14:02       ` Dmitry A. Kazakov
  2008-03-09 18:26       ` Phaedrus
@ 2008-03-09 22:31       ` Jeffrey R. Carter
  2008-03-10  3:53         ` gpriv
  2008-03-10  3:04       ` Robert Dewar's great article about the Strengths of Ada over Larry Kilgallen
  3 siblings, 1 reply; 96+ messages in thread
From: Jeffrey R. Carter @ 2008-03-09 22:31 UTC (permalink / raw)


Maciej Sobczak wrote:
> 
> You are right.
> In the same way, nobody ever wrote a GUI program in Ada, nor a
> networking program, nobody did any cryptography in Ada, etc. These
> languages are almost completely useless nowadays, sigh.
> I accept this way of reasoning - but it does not change anything in
> the industry practice.

I've written a VGA driver/library from scratch entirely in Ada (for DOS) and 
built graphical programs on top of it. I've written cryptographic code in Ada. 
I've worked on projects that implemented networking in Ada. That's hardly 
"nobody"; it's just a question of how much experience one has.

-- 
Jeff Carter
"Crucifixion's a doddle."
Monty Python's Life of Brian
82



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-09 18:26       ` Phaedrus
@ 2008-03-10  0:04         ` Ray Blaak
  2008-03-10  7:49           ` Georg Bauhaus
  2008-03-10  7:53           ` Phaedrus
  0 siblings, 2 replies; 96+ messages in thread
From: Ray Blaak @ 2008-03-10  0:04 UTC (permalink / raw)


"Phaedrus" <phaedrusalt@hotmail.com> writes:
> So, what did I learn from all this?  First, with rare exception you CAN 
> write darn near anything in almost any language.  (I'll ignore Lisp for 
> now.)  Second, I'd much rather write complicated structures in a language 
> that makes it easier to understand those structures later.  (A good reason 
> for ignoring Lisp.  And C++, and C#, and Java and assembly and machine 
> code...)  If you can't figure out what it's doing at 3am, then it's probably 
> too cryptic for general use, and you'd be surprised how much of your work 
> will get done around that time.

While I agree with your basic point, I think you are dead wrong about Lisp. If
you cannot understand your complicated Lisp structures later, then you simply
are not using it properly.

In fact, in an ideal world I would rather be programming in Lisp/Scheme than
any other language, including Ada. Even stronger, to me Lisp is the ideal pure
computer science language, one that every other language evolves to emulate
over time :-).



-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul.



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over
  2008-03-09 13:32     ` Maciej Sobczak
                         ` (2 preceding siblings ...)
  2008-03-09 22:31       ` Jeffrey R. Carter
@ 2008-03-10  3:04       ` Larry Kilgallen
  2008-03-10  9:23         ` Maciej Sobczak
  3 siblings, 1 reply; 96+ messages in thread
From: Larry Kilgallen @ 2008-03-10  3:04 UTC (permalink / raw)


In article <53e0fda7-a536-4899-a115-9d4e137ac698@13g2000hsb.googlegroups.com>, Maciej Sobczak <see.my.homepage@gmail.com> writes:

> nobody did any cryptography in Ada, etc.

Certainly I implemented SHA-1 in Ada.



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-09 22:31       ` Jeffrey R. Carter
@ 2008-03-10  3:53         ` gpriv
  0 siblings, 0 replies; 96+ messages in thread
From: gpriv @ 2008-03-10  3:53 UTC (permalink / raw)


On Mar 9, 6:31 pm, "Jeffrey R. Carter"
<spam.jrcarter....@spam.acm.org> wrote:
> Maciej Sobczak wrote:
>
> > You are right.
> > In the same way, nobody ever wrote a GUI program in Ada, nor a
> > networking program, nobody did any cryptography in Ada, etc. These
> > languages are almost completely useless nowadays, sigh.
> > I accept this way of reasoning - but it does not change anything in
> > the industry practice.
>
> I've written a VGA driver/library from scratch entirely in Ada (for DOS) and
> built graphical programs on top of it. I've written cryptographic code in Ada.
> I've worked on projects that implemented networking in Ada. That's hardly
> "nobody"; it's just a question of how much experience one has.
>
> --
> Jeff Carter
> "Crucifixion's a doddle."
> Monty Python's Life of Brian
> 82

Sign me into the "nobody club". I am beta testing a Video Network Management
server in Ada (Linux/Windows transparent). It works incredibly well and
took only 3 months from start to beta!

George :-)



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-10  0:04         ` Ray Blaak
@ 2008-03-10  7:49           ` Georg Bauhaus
  2008-03-10 16:48             ` Ray Blaak
  2008-03-10  7:53           ` Phaedrus
  1 sibling, 1 reply; 96+ messages in thread
From: Georg Bauhaus @ 2008-03-10  7:49 UTC (permalink / raw)


Ray Blaak wrote:

> In fact, in an ideal world I would rather be programming in Lisp/Scheme than
> any other language, including Ada.

(only (if (not (were it inverted))))

> Even stronger, to me Lisp is the ideal pure
> computer science language, one that every other language evolves to emulate
> over time :-).

Does your ideal world require controlling expression
of time and storage?



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-10  0:04         ` Ray Blaak
  2008-03-10  7:49           ` Georg Bauhaus
@ 2008-03-10  7:53           ` Phaedrus
  1 sibling, 0 replies; 96+ messages in thread
From: Phaedrus @ 2008-03-10  7:53 UTC (permalink / raw)


Rea((y, I d()n't kn()w why pe()ple can't re())d Lisp!

By the way, didja know that every year, EVERY entrant in the Lisp Obfuscated 
Code contest wins?  *grin*  First prize is a set of new "(" and ")" keys for 
their keyboards!

Brian
*Geek humor, it's not just for breakfast anymore*

"Ray Blaak" <rAYblaaK@STRIPCAPStelus.net> wrote in message 
news:u7igbsaul.fsf@STRIPCAPStelus.net...
> "Phaedrus" <phaedrusalt@hotmail.com> writes:
>> So, what did I learn from all this?  First, with rare exception you CAN
>> write darn near anything in almost any language.  (I'll ignore Lisp for
>> now.)  Second, I'd much rather write complicated structures in a language
>> that makes it easier to understand those structures later.  (A good 
>> reason
>> for ignoring Lisp.  And C++, and C#, and Java and assembly and machine
>> code...)  If you can't figure out what it's doing at 3am, then it's 
>> probably
>> too cryptic for general use, and you'd be surprised how much of your work
>> will get done around that time.
>
> While I agree with your basic point, I think you are dead wrong about 
> Lisp. If
> you cannot understand your complicated Lisp structures later, then you 
> simply
> are not using it properly.
>
> In fact, in an ideal world I would rather be programming in Lisp/Scheme 
> than
> any other language, including Ada. Even stronger, to me Lisp is the ideal 
> pure
> computer science language, one that every other language evolves to 
> emulate
> over time :-).
>
>
>
> -- 
> Cheers,                                        The Rhythm is around me,
>                                               The Rhythm has control.
> Ray Blaak                                      The Rhythm is inside me,
> rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul. 





^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over
  2008-03-10  3:04       ` Robert Dewar's great article about the Strengths of Ada over Larry Kilgallen
@ 2008-03-10  9:23         ` Maciej Sobczak
  2008-03-10 19:01           ` Jeffrey R. Carter
  2008-03-22 21:12           ` Florian Weimer
  0 siblings, 2 replies; 96+ messages in thread
From: Maciej Sobczak @ 2008-03-10  9:23 UTC (permalink / raw)


On 10 Mar, 04:04, Kilgal...@SpamCop.net (Larry Kilgallen) wrote:

> > nobody did any cryptography in Ada, etc.
>
> Certainly I implemented SHA-1 in Ada.

Sorry, but most likely you did not understand the logic of my answer
to Jeffrey.
He claimed that I did not write any multithreaded C or C++ program,
presumably because the C and C++ standards say nothing about threads.
For me this kind of argument is just handwaving, and to show this I
replied with the same logic that nobody did <put-your-pet-application-
here> in Ada, for the simple reason that the AARM says nothing about it.

It is *obvious* to me that Ada, C and C++ (and some other languages -
but not all of them) can be used to write any kind of software. I
apologize if my ironic answer to Jeffrey was misunderstood.

Still, I promise that I will answer like that next time as well. ;-)

--
Maciej Sobczak * www.msobczak.com * www.inspirel.com



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-09 13:37       ` Dmitry A. Kazakov
  2008-03-09 14:41         ` Vadim Godunko
@ 2008-03-10  9:56         ` Ole-Hjalmar Kristensen
  1 sibling, 0 replies; 96+ messages in thread
From: Ole-Hjalmar Kristensen @ 2008-03-10  9:56 UTC (permalink / raw)


>>>>> "DAK" == Dmitry A Kazakov <mailbox@dmitry-kazakov.de> writes:

<snip>

    DAK> As for multi-cores, I guess that protected object would not be a good
    DAK> concurrency primitive for such architectures. In a long term perspective,
    DAK> we could experience the pendulum swinging back to tasks.

    DAK> -- 
    DAK> Regards,
    DAK> Dmitry A. Kazakov
    DAK> http://www.dmitry-kazakov.de

I agree. Sharing data between cores or CPUs will give you a
performance hit with today's architectures, since it typically makes the
caches much less effective, in addition to the synchronization cost
itself. In many cases it is better if you can partition your data at
the beginning of the computation and assign a partition of the problem
to each core, so they can run without synchronization and assemble the
result at the end. Either way, you at least have to think about the cost
associated with exchanging data between tasks or cores.

-- 
   C++: The power, elegance and simplicity of a hand grenade.



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-10  7:49           ` Georg Bauhaus
@ 2008-03-10 16:48             ` Ray Blaak
  0 siblings, 0 replies; 96+ messages in thread
From: Ray Blaak @ 2008-03-10 16:48 UTC (permalink / raw)


Georg Bauhaus <see.reply.to@maps.futureapps.de> writes:
> Ray Blaak wrote:
> > Even stronger, to me Lisp is the ideal pure
> > computer science language, one that every other language evolves to emulate
> > over time :-).
> 
> Does your ideal world require controlling expression
> of time and storage?

Time yes, storage, nah. That's the ideal garbage collector's job.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul.



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over
  2008-03-10  9:23         ` Maciej Sobczak
@ 2008-03-10 19:01           ` Jeffrey R. Carter
  2008-03-10 22:00             ` Maciej Sobczak
  2008-03-22 21:12           ` Florian Weimer
  1 sibling, 1 reply; 96+ messages in thread
From: Jeffrey R. Carter @ 2008-03-10 19:01 UTC (permalink / raw)


Maciej Sobczak wrote:
> He claimed that I did not write any multithreading C or C++ program,
> presumably because the C and C++ standards say nothing about threads.
> For me this kind of argument is just a handwaving and to show this I
> replied with the same logic that nobody did <put-your-pet-application-
> here> in Ada, for the simple reason that AARM says nothing about it.

Balzac!

You claimed you wrote multithreaded SW in C; I said that you wrote it in C plus 
a threading library that you did not write, which is not writing it in C. I 
never said anything about the standards. So your reply *obviously* claimed that 
no one has written the kinds of SW you mentioned in Ada without a library that 
one did not write. Numerous responses have proven that to be untrue.

It's possible to write all of a multithreaded program in C, including the 
threading; that would qualify as writing such SW in C. It would also be a 
complete waste of time, but those who use C by choice have already 
demonstrated that they want to waste their time.

-- 
Jeff Carter
"I spun around, and there I was, face to face with a
six-year-old kid. Well, I just threw my guns down and
walked away. Little bastard shot me in the ass."
Blazing Saddles
40



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-09 14:41         ` Vadim Godunko
@ 2008-03-10 20:51           ` Randy Brukardt
  2008-03-10 22:30             ` Niklas Holsti
  0 siblings, 1 reply; 96+ messages in thread
From: Randy Brukardt @ 2008-03-10 20:51 UTC (permalink / raw)


"Vadim Godunko" <vgodunko@gmail.com> wrote in message
news:ec684efe-61a6-4463-bd43-fb5895e868bc@x30g2000hsd.googlegroups.com...
> On Mar 9, 4:37 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
> >
> > It is unclear in which the context you are using protected objects. I
don't
> > see why a protected object should be slower than, say, critical section
+
> > operation.
> >
> I have attached all the source code. A protected object is used for
> atomic reference counting. (The design may look strange; I plan to
> replace the protected object with inline assembler code.)

If you need an atomic component, why didn't you just declare it as such and
let the compiler handle the mess? (Yes, an implementation is allowed to
reject pragma Atomic if it can't handle such, but that should always be OK
for an integer counter.) A protected object is going to be much heavier than
Atomic (which just uses the hardware support directly).

...
    type Private_Data is record
       --Counter : aliased League.Internals.Atomics.Counter;
       Counter : Natural := 0;
       pragma Atomic (Counter);
       String  : Utf16_String_Access;
       Last    : Natural := 0;
       Length  : Natural := 0;
    end record;

(No, I haven't tried this in your program.)

                                  Randy.





^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-08 22:11 ` Maciej Sobczak
                     ` (3 preceding siblings ...)
  2008-03-09  8:20   ` Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing! Pascal Obry
@ 2008-03-10 21:24   ` Randy Brukardt
  2008-03-11 10:12     ` Alex R. Mosteo
  2008-03-22 22:43     ` Florian Weimer
  2008-04-29  7:15   ` Ivan Levashew
  5 siblings, 2 replies; 96+ messages in thread
From: Randy Brukardt @ 2008-03-10 21:24 UTC (permalink / raw)


"Maciej Sobczak" <see.my.homepage@gmail.com> wrote in message
news:89af8399-94fb-42b3-909d-edf3c98d32e5@n75g2000hsh.googlegroups.com...
...
> Take for example lock-free algorithms. There is no visible research on
> this related to Ada, unlike Java and C++ (check on
> comp.programming.threads).

Perhaps I'm showing my ignorance, but does there need to be any? Ada
supports lock-free threads quite well using pragma Atomic. I find that the
only locks in my recent programs are (1) those for selecting a new job; (2)
those for protecting the logging resources; and (3) those for protecting
complex data structures (such as pattern sets) that can be updated
asynchronously. The vast majority of the code does not contain any locks.

There's no hope of avoiding the locks that protect shared OS resources -
ultimately devices - you could push them into the OS, but that's about it.
The data structure locks might be avoidable in some cases - but that doesn't
seem language-specific (surely if you can write a lock-free queue algorithm
in C you can write it in Ada - and probably with fewer games).

If Ada needs anything, it is more emphasis on tasking safety - that is,
detection of unsafe access to shared objects, livelocks, and deadlocks -
that would make it easier to write correct tasking programs. That's clearly
possible in Ada because the threading is an integral part of the semantics
of the language, and it would add value to the existing facilities of the
language.

                                      Randy.





^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over
  2008-03-10 19:01           ` Jeffrey R. Carter
@ 2008-03-10 22:00             ` Maciej Sobczak
  2008-03-11  0:48               ` Jeffrey R. Carter
  0 siblings, 1 reply; 96+ messages in thread
From: Maciej Sobczak @ 2008-03-10 22:00 UTC (permalink / raw)


On 10 Mar, 20:01, "Jeffrey R. Carter"
<spam.jrcarter....@spam.acm.org>

> You claimed you wrote multithreaded SW in C; I said that you wrote it in C plus
> a threading library that you did not write, which is not writing it in C.

Fine. Let's go on:

Those who use Ada and any external library that they did not write
themselves are not writing in Ada.

Not counting those who target the hardware directly, every programmer
uses some services offered by the operating system. Even things as
obvious as file operations use system services. By your logic, using
files is not writing in Ada, because the related system services are
not written by those who use them. GUI? Not Ada. Network? Not Ada. By
your logic almost every application I can think of is not Ada.

Funny - you can write a multitasking Ada program that is completely
useless. The very moment it tries to become useful by interacting with
the external world through some system services, it is no longer
Ada.

Really, your hand-waving does not move anything forward.

No matter how much you hate this idea, I can write multithreaded
software in C++.

> those who use C by choice have already demonstrated that they want to
> waste their time.

Yes. That's why some use C++. Or Ada.

--
Maciej Sobczak * www.msobczak.com * www.inspirel.com



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-10 20:51           ` Randy Brukardt
@ 2008-03-10 22:30             ` Niklas Holsti
  0 siblings, 0 replies; 96+ messages in thread
From: Niklas Holsti @ 2008-03-10 22:30 UTC (permalink / raw)


Randy Brukardt wrote:
> "Vadim Godunko" <vgodunko@gmail.com> wrote in message
> news:ec684efe-61a6-4463-bd43-fb5895e868bc@x30g2000hsd.googlegroups.com...
>>
>>I have attached all the source code. A protected object is used for
>>atomic reference counting...
> 
> If you need an atomic component, why didn't you just declare it as such and
> let the compiler handle the mess? (Yes, an implementation is allowed to
> reject pragma Atomic if it can't handle such, but that should always be OK
> for an integer counter.) A protected object is going to be much heavier than
> Atomic (which just uses the hardware support directly).
> 
> ...
>     type Private_Data is record
>        --Counter : aliased League.Internals.Atomics.Counter;
>        Counter : Natural := 0;
>        pragma Atomic (Counter);

That won't work if the Counter is used for reference counts. 
Vadim's code has parts like

          Self.Value := Self.Value - 1;
          Zero := Self.Value = 0;

and pragma Atomic will not ensure that the whole sequence of 
statements and several uses of Self.Value are a single critical 
(uninterrupted) section. The pragma only means that the whole of 
Self.Value can be read or written atomically, even if it contains 
more than one storage unit. It does not ensure that Self.Value can 
be updated atomically. The sequence "read; modify; write" can be 
interrupted even with pragma Atomic.

As Randy says in another message, pragma Atomic is good for 
lock-free algorithms (if the memory order semantics are good) but 
it is not enough for lock-based algorithms such as the one Vadim 
is using.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .




* Re: Robert Dewar's great article about the Strengths of Ada over
  2008-03-10 22:00             ` Maciej Sobczak
@ 2008-03-11  0:48               ` Jeffrey R. Carter
  2008-03-11  7:12                 ` Pascal Obry
  2008-03-11  8:59                 ` Maciej Sobczak
  0 siblings, 2 replies; 96+ messages in thread
From: Jeffrey R. Carter @ 2008-03-11  0:48 UTC (permalink / raw)


Maciej Sobczak wrote:
> 
> Those who use Ada and any external library that they did not write
> themselves are not writing in Ada.

Correct. They are writing in Ada plus the library. This is not a problem when 
the library is portable.

> Not counting those who target the hardware directly, every programmer
> uses some services offered by the operating system. Even things such
> obvious as file operations use system services. By your logic, using
> files is not writing in Ada, because the related system services are
> not written by those who use them. GUI? Not Ada. Network? Not Ada. By
> your logic almost every application I can think of is not Ada.

You are apparently not very good at logic. File operations are part of the 
language, so obviously they are Ada.

As for the others, you've already been given examples that were written in Ada, 
but you choose to ignore inconvenient facts.

> No matter how much you hate this idea, I can write multithreaded
> software in C++.

No matter how much you hate the idea, you can but you don't.

> Yes. That's why some use C++. Or Ada.

Those who use C++ by choice have already demonstrated that they want to waste 
their time.

-- 
Jeff Carter
"I spun around, and there I was, face to face with a
six-year-old kid. Well, I just threw my guns down and
walked away. Little bastard shot me in the ass."
Blazing Saddles

* Re: Robert Dewar's great article about the Strengths of Ada over
  2008-03-11  0:48               ` Jeffrey R. Carter
@ 2008-03-11  7:12                 ` Pascal Obry
  2008-03-11  8:59                 ` Maciej Sobczak
  1 sibling, 0 replies; 96+ messages in thread
From: Pascal Obry @ 2008-03-11  7:12 UTC (permalink / raw)
  To: Jeffrey R. Carter


Jeffrey,

> Those who use C++ by choice have already demonstrated that they want to 
> waste their time.

Sorry I do not agree! They are not wasting their own time but time from 
all maintainers who will come in the project after they have move on 
something else. I've lived this! Hence they are also wasting their 
company money...

Pascal.

-- 

--|------------------------------------------------------
--| Pascal Obry                           Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--|              http://www.obry.net
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595




* Re: Robert Dewar's great article about the Strengths of Ada over
  2008-03-11  0:48               ` Jeffrey R. Carter
  2008-03-11  7:12                 ` Pascal Obry
@ 2008-03-11  8:59                 ` Maciej Sobczak
  2008-03-11  9:49                   ` GNAT bug, Assert_Failure at atree.adb:2893 Ludovic Brenta
  2008-03-14 20:03                   ` Robert Dewar's great article about the Strengths of Ada over Ivan Levashew
  1 sibling, 2 replies; 96+ messages in thread
From: Maciej Sobczak @ 2008-03-11  8:59 UTC (permalink / raw)


On 11 Mar, 01:48, "Jeffrey R. Carter" <spam.jrcarter....@spam.acm.org>
wrote:

> > Those who use Ada and any external library that they did not write
> > themselves are not writing in Ada.
>
> Correct. They are writing in Ada plus the library. This is not a problem when
> the library is portable.

Yes. Portable libraries are very important.
For example, C++ programmers have portable libraries for
multithreading (among others).


> You are apparently not very good at logic. File operations are part of the
> language, so obviously they are Ada.

Function calls and class instantiations are part of C++, so obviously
using libraries that manage multithreading via function calls and
class instantiations is C++.

(This conversation is dead boring, but I just cannot stop.)

> As for the others, you've already been given examples that were written in Ada,
> but you choose to ignore inconvenient facts.

Do you need examples of multithreaded C++ programs?
What did you use to send your last post? Looks like Thunderbird. It is
one of the most successful C++ applications out there.

> > No matter how much you hate this idea, I can write multithreaded
> > software in C++.
>
> No matter how much you hate the idea, you can but you don't.

Fortunately, my multithreaded C++ software does not know this.
Otherwise it would have to stop working.

> > Yes. That's why some use C++. Or Ada.
>
> Those who use C++ by choice have already demonstrated that they want to waste
> their time.

I disagree. And probably the only thing I can do now (while being at
least a little bit constructive) is to offer my consulting services on
effective C++ programming.

BTW - this is what I got yesterday:

+===========================GNAT BUG
DETECTED==============================+
| 4.3.0 20070527 (experimental) (i686-apple-darwin8) Assert_Failure
atree.adb:2893|
[...]

It was a basic program like the ones in Ada Gems.
I guess I have to install a newer compiler. ;-)

--
Maciej Sobczak * www.msobczak.com * www.inspirel.com




* Re: GNAT bug, Assert_Failure at atree.adb:2893
  2008-03-11  8:59                 ` Maciej Sobczak
@ 2008-03-11  9:49                   ` Ludovic Brenta
  2008-03-14 20:03                   ` Robert Dewar's great article about the Strengths of Ada over Ivan Levashew
  1 sibling, 0 replies; 96+ messages in thread
From: Ludovic Brenta @ 2008-03-11  9:49 UTC (permalink / raw)


Maciej Sobczak wrote:
> BTW - this is what I got yesterday:
>
> +===========================GNAT BUG
> DETECTED==============================+
> | 4.3.0 20070527 (experimental) (i686-apple-darwin8) Assert_Failure
> atree.adb:2893|
> [...]
>
> It was a basic program like the ones in Ada Gems.
> I guess I have to install a newer compiler. ;-)

Please report this in GCC bugzilla[1] along with your basic program.
Do this even if the bug is absent from the just-released GCC 4.3.0.
This bug may also be present in other versions of GCC, including those
currently in stable distributions. Someone might contribute a
workaround or a fix. The fix can then be backported if necessary.

[1] http://gcc.gnu.org/bugzilla

--
Ludovic Brenta.




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-10 21:24   ` Randy Brukardt
@ 2008-03-11 10:12     ` Alex R. Mosteo
  2008-03-22 22:43     ` Florian Weimer
  1 sibling, 0 replies; 96+ messages in thread
From: Alex R. Mosteo @ 2008-03-11 10:12 UTC (permalink / raw)


Randy Brukardt wrote:

> "Maciej Sobczak" <see.my.homepage@gmail.com> wrote in message
> news:89af8399-94fb-42b3-909d-edf3c98d32e5@n75g2000hsh.googlegroups.com...
> ...
>> Take for example lock-free algorithms. There is no visible research on
>> this related to Ada, unlike Java and C++ (check on
>> comp.programming.threads).
> 
> Perhaps I'm showing my ignorance, but does there need to be any? Ada
> supports lock-free threads quite well using pragma Atomic. I find that the
> only locks in my recent programs are (1) those for selecting a new job; (2)
> those for protecting the logging resources; and (3) those for protecting
> complex data structures (such as pattern sets) that can be updated
> asynchronously. The vast majority of the code does not contain any locks.

This is also my experience. Thanks to Ada's simple and easy-to-understand
tasking, I was routinely writing multithreaded programs many years ago, when
no consumer multi-core existed. I did this simply because some processes are
very naturally separable into tasks, which simplified both implementation and
understanding, even though I couldn't benefit from multiprocessors.

This means that these programs (no longer in use) would benefit from
multi-core today with zero effort on my part, and without dependencies on
external libraries.

I still get shivers when I see C/C++ code with manually P/V-locked semaphores.
C offers no checking and no alternative, and some people writing C++ still
mimic C without taking advantage of the full language. Is there any excuse
for not using automatic critical sections in C++?




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-09 12:40     ` Vadim Godunko
  2008-03-09 13:37       ` Dmitry A. Kazakov
@ 2008-03-11 13:58       ` george.priv
  2008-03-11 15:41         ` Vadim Godunko
  2008-03-11 22:09       ` gpriv
  2 siblings, 1 reply; 96+ messages in thread
From: george.priv @ 2008-03-11 13:58 UTC (permalink / raw)


On Mar 9, 8:40 am, Vadim Godunko <vgodu...@gmail.com> wrote:
> On Mar 9, 11:20 am, Pascal Obry <pas...@obry.net> wrote:
>
> > I prefer using Ada, even losing 10% performance initially.
>
> I usually have 4+ times penalty for Ada program with controlled object
> for memory allocation/deallocation control and protected objects for
> thread safe operations in comparison with equivalent C++ program. :-(
>
> #include <QString>
> #include <QTime>
>
> void test(unsigned length)
> {
>         uint x[length];
>         QString a[1024];
>         QString b[1024];
>         QTime timer;
>
>         timer.start();
>
>         for (int i = 0; i < 1024; i++)
>         {
>                 a[i] = QString::fromUcs4(x, length);
>         }
>
>         qDebug("Init %d %lf", length, (double)timer.elapsed() / 1000);
>
>         timer.restart();
>
>         for (int i = 0; i < 1000; i++)
>         {
>                 if (i % 2)
>                 {
>                         for (int j = 0; j < 1024; j++)
>                         {
>                                 a[j] = b[j];
>                         }
>                 }
>                 else
>                 {
>                         for (int j = 0; j < 1024; j++)
>                         {
>                                 b[j] = a[j];
>                         }
>                 }
>         }
>
>         qDebug("Copy %d %lf", length, (double)timer.elapsed() / 1000);
>
> }
>
> int main()
> {
>         test (128);
>         test (1024);
>         return 0;
>
> }
>
> with Ada.Calendar;
> with Ada.Wide_Wide_Text_IO;
>
> with League.Strings;
>
> procedure Speed_Test_League is
>
>    type Universal_String_Array is
>      array (Positive range <>) of League.Strings.Universal_String;
>
>    procedure Test (Length : in Positive);
>
>    procedure Test (Length : in Positive) is
>       use type Ada.Calendar.Time;
>
>       X : constant Wide_Wide_String (1 .. Length) := (others => ' ');
>       A : Universal_String_Array (1 .. 1_024);
>       B : Universal_String_Array (1 .. 1_024);
>       S : Ada.Calendar.Time;
>
>    begin
>       S := Ada.Calendar.Clock;
>
>       for J in A'Range loop
>          A (J) := League.Strings.To_Universal_String (X);
>       end loop;
>
>       Ada.Wide_Wide_Text_IO.Put_Line
>        ("Init"
>           & Positive'Wide_Wide_Image (Length)
>           & Duration'Wide_Wide_Image (Ada.Calendar.Clock - S));
>
>       S := Ada.Calendar.Clock;
>
>       for J in 1 .. 1_000 loop
>          if J mod 2 = 1 then
>             B := A;
>
>          else
>             A := B;
>          end if;
>       end loop;
>
>       Ada.Wide_Wide_Text_IO.Put_Line
>        ("Copy"
>           & Positive'Wide_Wide_Image (Length)
>           & Duration'Wide_Wide_Image (Ada.Calendar.Clock - S));
>    end Test;
>
> begin
>    Test (128);
>    Test (1_024);
> end Speed_Test_League;

This code is not multi-core safe.  Are you sure that QString has a
Vtab?  If not, the comparison will be unfair.




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-11 13:58       ` george.priv
@ 2008-03-11 15:41         ` Vadim Godunko
  2008-03-12  0:32           ` gpriv
  0 siblings, 1 reply; 96+ messages in thread
From: Vadim Godunko @ 2008-03-11 15:41 UTC (permalink / raw)


On Mar 11, 4:58 pm, george.p...@gmail.com wrote:
>
> This code is not multi-core safe.  Are you sure that QString has
> Vtab?  If not then comparison will be unfair.
What is "Vtab"?

QString is implemented in the same way as Universal_String. Both classes
internally use shared data - the actual string and a reference counter.
Operations on both classes are "reentrant" (see 1) and not "thread-
safe" (see 2). Neither class would even be "reentrant" without atomic
increment/decrement operations.

PS.

(1) A reentrant function can be called simultaneously by multiple
threads provided that each invocation of the function references
unique data.

(2) A thread-safe function can be called simultaneously by multiple
threads when each invocation references shared data. All access to the
shared data is serialized.




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-09 12:40     ` Vadim Godunko
  2008-03-09 13:37       ` Dmitry A. Kazakov
  2008-03-11 13:58       ` george.priv
@ 2008-03-11 22:09       ` gpriv
  2 siblings, 0 replies; 96+ messages in thread
From: gpriv @ 2008-03-11 22:09 UTC (permalink / raw)


On Mar 9, 8:40 am, Vadim Godunko <vgodu...@gmail.com> wrote:
> On Mar 9, 11:20 am, Pascal Obry <pas...@obry.net> wrote:
>
> > I prefer using Ada, even losing 10% performance initially.
>
> I usually have 4+ times penalty for Ada program with controlled object
> for memory allocation/deallocation control and protected objects for
> thread safe operations in comparison with equivalent C++ program. :-(
>
> [... full C++ and Ada test programs snipped; quoted in full upthread ...]

Does QString have a vtable?





* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-11 15:41         ` Vadim Godunko
@ 2008-03-12  0:32           ` gpriv
  2008-03-12 13:33             ` Maciej Sobczak
  0 siblings, 1 reply; 96+ messages in thread
From: gpriv @ 2008-03-12  0:32 UTC (permalink / raw)


On Mar 11, 11:41 am, Vadim Godunko <vgodu...@gmail.com> wrote:
> On Mar 11, 4:58 pm, george.p...@gmail.com wrote:
>
> > This code is not multi-core safe.  Are you sure that QString has
> > Vtab?  If not then comparison will be unfair.
>
> What is it "Vtab"?
>
> QString implemented in the same way as Universal_String. Both classes
> internally use shared data - actual string and reference counter.
> Operations on both classes are "reentrant" (see 1) and not "thread-
> safe" (see 2). Both classes will be even not "reentrant" without
> atomic increment/decrement operations.
>
> PS.
>
> (1) A reentrant function can be called simultaneously by multiple
> threads provided that each invocation of the function references
> unique data.
>
> (2) A thread-safe function can be called simultaneously by multiple
> threads when each invocation references shared data. All access to the
> shared data is serialized.

Sorry for the double post; the first one did not seem to get through.

Thanks for some useful education :-)

VTab is the virtual call table, created when a class has at least one
virtual function. That makes the class similar to a tagged record;
otherwise it is similar to a plain Ada record, and the destructor may
be inlined.  So to be fair, your QString should have at least a virtual
destructor, which forces an indirect call to the actual one.  The
constructors in C++ are not virtual by definition, so to be totally
equal the class should have virtual init and adjust routines called
from the constructor and the copy operator.  That might get rid of
your 4+ advantage.

With C++, to be totally multicore thread-safe, you need to make
"volatile" all the data that has any possibility of being accessed from
different threads. That makes access to these variables much slower.
Otherwise, you may get incorrect readings once in a while. A few years
back I spent months chasing these misreadings when I moved a heavily
multi-threaded app to multi-core.  These bugs are the nastiest, since
they rarely cause big trouble, manifesting themselves only once in a
while (a matter of weeks).


Regards,

George Privalov





* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-12  0:32           ` gpriv
@ 2008-03-12 13:33             ` Maciej Sobczak
  2008-03-12 14:41               ` gpriv
  0 siblings, 1 reply; 96+ messages in thread
From: Maciej Sobczak @ 2008-03-12 13:33 UTC (permalink / raw)


On 12 Mar, 01:32, gp...@axonx.com wrote:


> VTab is virtual call table, created when class has at least one
> virtual function. That will make it similar to tagged record.
> Otherwise it will be similar to a plain Ada record and destructor may
> be inlined.

In C++ the destructor can *always* be inlined, unless the object is
deleted via a pointer to base. In other words, objects with automatic or
static storage duration (local, static and global objects) can have
inlined destructors - no matter whether the destructor is virtual or not.

> With C++ to be totally multicore thread safe, you need to make
> "volatile" all the data that has any possibility to be accessed from
> different threads.

Absolutely incorrect.
In C and C++ "volatile" has nothing to do with threads. It is neither
necessary nor sufficient.

> Otherwise, you may get incorrect readings once in a while.

"volatile" does not prevent it.

> Few years
> back I spent months chasing these misreadings when moved heavily
> mullti-threded app to multi-core.

You still have some months of chasing ahead. :-)
Start with removing all "volatile" keywords from your code. Then make
it right.

--
Maciej Sobczak * www.msobczak.com * www.inspirel.com




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-12 13:33             ` Maciej Sobczak
@ 2008-03-12 14:41               ` gpriv
  2008-03-12 15:22                 ` Vadim Godunko
  2008-03-12 16:28                 ` Maciej Sobczak
  0 siblings, 2 replies; 96+ messages in thread
From: gpriv @ 2008-03-12 14:41 UTC (permalink / raw)


On Mar 12, 9:33 am, Maciej Sobczak <see.my.homep...@gmail.com> wrote:
> On 12 Mar, 01:32, gp...@axonx.com wrote:
>
> > VTab is virtual call table, created when class has at least one
> > virtual function. That will make it similar to tagged record.
> > Otherwise it will be similar to a plain Ada record and destructor may
> > be inlined.
>
> In C++ destructor can be inlined *always*, unless the object is
> deleted via pointer to base. In other words, objects with automatic or
> static storage duration (local, static and global) objects can have
> inlined destructors - no matter whether it is virtual or not.

Yes, if the compiler can make that determination at compile time.  If
not, it will dispatch via the VTab. My point was to make all the
conditions of the comparison equal.  After all, you could consider a
generic model for Ada which eliminates all the dispatch overhead.

>
> > With C++ to be totally multicore thread safe, you need to make
> > "volatile" all the data that has any possibility to be accessed from
> > different threads.
>
> Absolutely incorrect.
> In C and C++ "volatile" has nothing to do with threads. It is neither
> necessary nor sufficient.
>
> > Otherwise, you may get incorrect readings once in a while.
>
> "volatile" does not prevent it.
>
> > Few years
> > back I spent months chasing these misreadings when moved heavily
> > mullti-threded app to multi-core.
>
> You still have some months of chasing ahead. :-)
> Start with removing all "volatile" keywords from your code. Then make
> it right.
>
> --
> Maciej Sobczak *www.msobczak.com*www.inspirel.com

Definition of volatile:

The volatile keyword is a type qualifier used to declare that an
object can be modified in the program by something such as the
operating system, the hardware, or a concurrently executing thread.



Consider the multi-core execution of two concurrent threads.  One will
modify a variable, the other will read it.  One may keep the variable in
a register; the other will reload it from memory.  Volatile will force
flushing to memory; otherwise thread 2 will read erroneous data.

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-12 14:41               ` gpriv
@ 2008-03-12 15:22                 ` Vadim Godunko
  2008-03-13  0:34                   ` gpriv
  2008-03-12 16:28                 ` Maciej Sobczak
  1 sibling, 1 reply; 96+ messages in thread
From: Vadim Godunko @ 2008-03-12 15:22 UTC (permalink / raw)


On 12 мар, 17:41, gp...@axonx.com wrote:
> > In C++ destructor can be inlined *always*, unless the object is
> > deleted via pointer to base. In other words, objects with automatic or
> > static storage duration (local, static and global) objects can have
> > inlined destructors - no matter whether it is virtual or not.
>
> Yes if compiler can make that determination at compile time.  If not
> it will dispatch via VTab. My point was to compare make all the
> conditions equal.  After all you may consider generic model for Ada
> which eliminate all the dispatch overhead.
>
QString is not a polymorphic class - it doesn't have any virtual
functions. Its constructor, copy constructor and destructor are inlined;
its assignment operator is not inlined.

The Ada program uses a tagged but not class-wide type, thus the compiler
never dispatches any call. I compile all Ada modules with -gnatn
(frontend inlining enabled).

Both programs are comparable from this point of view. :-)

PS. I have replaced the protected type with i386-specific atomic
increment/decrement operations and now see only a 1.2x overhead. ;-)




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-12 14:41               ` gpriv
  2008-03-12 15:22                 ` Vadim Godunko
@ 2008-03-12 16:28                 ` Maciej Sobczak
  2008-03-12 17:24                   ` Samuel Tardieu
  2008-03-12 23:54                   ` gpriv
  1 sibling, 2 replies; 96+ messages in thread
From: Maciej Sobczak @ 2008-03-12 16:28 UTC (permalink / raw)


On 12 Mar, 15:41, gp...@axonx.com wrote:

> > In C++ destructor can be inlined *always*, unless the object is
> > deleted via pointer to base. In other words, objects with automatic or
> > static storage duration (local, static and global) objects can have
> > inlined destructors - no matter whether it is virtual or not.
>
> Yes if compiler can make that determination at compile time.

Certainly the compiler knows at compile time whether the object is a
local variable, static or global. This property does not depend on
anything at run-time.
Example:

class MyClass {/* ... */};
void swallow(MyClass * p) { delete p; }

MyClass G; // global (static in disguise)
void foo()
{
    static MyClass S; // static
    MyClass L; // local

    MyClass * D = new MyClass; // dynamic
    swallow(D);
}

Objects G, S and L can have their destructors inlined. This is known at
compile time.
The object denoted by pointer D will most likely have its destructor
called virtually (because function swallow cannot assume anything
about its dynamic type), unless the compiler+linker is smart enough to
do some wider optimizations.

In the most extreme case (global linker optimization), all calls can
be resolved statically.


> Definition of volatile:
[...]

I don't know where you got that definition from, but certainly not
from the standard - and as such it is incorrect.


> Consider the multi-core execution of two concurrent threads.  One will
> modify a variable, another will read.  One will keep the variable in
> the register, another will reload it from the memory.  Volatile will
> force flushing to the memory. Otherwise thread 2 will read erroneous
> data.

No. Your perception of what "memory" is and where it lives is completely
different from that of the CPU(s). The volatile keyword can, at best,
problem is that between "memory address" and "physical memory" there
is a long chain of funny things like asynchronous memory writer in CPU
and a few layers of cache with complicated prefetch functionality. The
cache is transparent so can be ignored, but the memory writer module
and prefetch work *asynchronously* and this means that when you "read"
and "write" your variables, you really don't read and write what you
think - or rather not *when* you think. This happens out of the
compiler's control (it's hardware), so "volatile" cannot help you -
you can as well write it in assembly and still have the same problem.

You need a memory barrier to ensure visibility between CPUs, and
volatile does not provide it (unless you have some funny compiler). To
actually get the barrier, you have to either apply it by hand or use
dedicated system services that do it for you (mutexes, etc.). Now, the
best part - once you do it, volatile gives you nothing.

No, the best part is this: volatile not only gives you nothing, it
actually *slows the program down*, because it prevents the compiler
from applying some obvious optimizations when the given object is used
many times within the critical section.

You don't want to slow your program down, do you? :-)

And last but not least - have you tried to use volatile with some C++
classes, like string, vector, map, etc.? Try it. It will not even
compile, but certainly it is possible to use these classes with many
threads.


Short version: don't use volatile for multithreading. It's not the
correct tool for the job. The correct tools are membars, and these are
part of dedicated system services. Use them.

--
Maciej Sobczak * www.msobczak.com * www.inspirel.com




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-12 16:28                 ` Maciej Sobczak
@ 2008-03-12 17:24                   ` Samuel Tardieu
  2008-03-13  8:41                     ` Maciej Sobczak
  2008-03-12 23:54                   ` gpriv
  1 sibling, 1 reply; 96+ messages in thread
From: Samuel Tardieu @ 2008-03-12 17:24 UTC (permalink / raw)


>>>>> "Maciej" == Maciej Sobczak <see.my.homepage@gmail.com> writes:

Maciej> In the most extreme case (global linker optimization), all
Maciej> calls can be resolved statically.

Not even. Consider a base class B with two child classes D1 and
D2 with their own virtual destructors. Pseudo-code:

  B* o;
  if (user_provided_integer_from_keyboard() == 0)
    o = new D1;
  else
    o = new D2;
  do_something_with (o);
  delete o;

You cannot resolve the "delete" statically (unless you add a new "if"
in it, which is then equivalent to looking up the destructor in the
virtual table).

  Sam
-- 
Samuel Tardieu -- sam@rfc1149.net -- http://www.rfc1149.net/




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-12 16:28                 ` Maciej Sobczak
  2008-03-12 17:24                   ` Samuel Tardieu
@ 2008-03-12 23:54                   ` gpriv
  2008-03-13  9:40                     ` Maciej Sobczak
  1 sibling, 1 reply; 96+ messages in thread
From: gpriv @ 2008-03-12 23:54 UTC (permalink / raw)


On Mar 12, 12:28 pm, Maciej Sobczak <see.my.homep...@gmail.com> wrote:
> On 12 Mar, 15:41, gp...@axonx.com wrote:
>
> > > In C++ destructor can be inlined *always*, unless the object is
> > > deleted via pointer to base. In other words, objects with automatic or
> > > static storage duration (local, static and global) objects can have
> > > inlined destructors - no matter whether it is virtual or not.
>
> > Yes if compiler can make that determination at compile time.
>
> Certainly the compiler knows at compile time whether the object is a
> local variable, static or global. This property does not depend on
> anything at run-time.
> Example:
>
> class MyClass {/* ... */};
> void swallow(MyClass * p) { delete p; }
>
> MyClass G; // global (static in disguise)
> void foo()
> {
>     static MyClass S; // static
>     MyClass L; // local
>
>     MyClass * D = new MyClass; // dynamic
>     swallow(D);
>
> }
>
> Objects G, S and L can have destructors inlined. This is known at
> compile time.
> Object denoted by pointer D will most likely have the destructor
> called virtually (because function swallow cannot assume anything
> about its dynamic type), unless the compiler+linker is smart enough to
> do some wider optimizations.
>
> In the most extreme case (global linker optimization), all calls can
> be resolved statically.
>
> > Definition of volatile:
>
> [...]
>
> I don't know where you got that definition from, but certainly not
> from the standard - and as such it is incorrect.
>
> > Consider the multi-core execution of two concurrent threads.  One will
> > modify a variable, another will read.  One will keep the variable in
> > the register, another will reload it from the memory.  Volatile will
> > force flushing to the memory. Otherwise thread 2 will read erroneous
> > data.
>
> No. Your perception of what and where is the "memory" is completely
> different from that of CPU(s). The volatile keyword can, at best,
> force the compiler to use some "memory address" for access. The
> problem is that between "memory address" and "physical memory" there
> is a long chain of funny things like asynchronous memory writer in CPU
> and a few layers of cache with complicated prefetch functionality. The
> cache is transparent so can be ignored, but the memory writer module
> and prefetch work *asynchronously* and this means that when you "read"
> and "write" your variables, you really don't read and write what you
> think - or rather not *when* you think. This happens out of the
> compiler's control (it's hardware), so "volatile" cannot help you -
> you can as well write it in assembly and still have the same problem.
>
> You need a memory barrier to ensure visibility between CPUs and
> volatile does not provide it (unless you have some funny compiler). To
> actually get the barrier, you have to either apply it by hand or use
> dedicated system services that do it for you (mutexes, etc.). Now, the
> best part - once you do it, volatile gives you nothing.
>
> No, the best part is this: volatile not only gives you nothing, it
> actually *slows the program down*, because it prevents the compiler
> from applying some obvious optimizations when the given object is used
> many times within the critical section.
>
> You don't want to slow your program down, do you? :-)
>
> And last but not least - have you tried to use volatile with some C++
> classes, like string, vector, map, etc.? Try it. It will not even
> compile, but certainly it is possible to use these classes with many
> threads.
>
> Short version: don't use volatile for multithreading. It's not the
> correct tool to do the job. The correct ones are membars and these are
> part of dedicated system services. Use them.
>
> --
> Maciej Sobczak * www.msobczak.com * www.inspirel.com

You are replying to posts without reading them carefully.  The issue
is covered in the thread below:

http://www.codeguru.com/forum/archive/index.php/t-442321.html

Please read carefully before throwing back your expertise, which I am
sure is quite adequate.  Otherwise it is quite hard to maintain a
meaningful discussion.

Cheers,

George




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-12 15:22                 ` Vadim Godunko
@ 2008-03-13  0:34                   ` gpriv
  0 siblings, 0 replies; 96+ messages in thread
From: gpriv @ 2008-03-13  0:34 UTC (permalink / raw)


On Mar 12, 11:22 am, Vadim Godunko <vgodu...@gmail.com> wrote:
> On 12 Mar, 17:41, gp...@axonx.com wrote:
> > > In C++ destructor can be inlined *always*, unless the object is
> > > deleted via pointer to base. In other words, objects with automatic or
> > > static storage duration (local, static and global) can have
> > > inlined destructors - no matter whether it is virtual or not.
>
> > Yes, if the compiler can make that determination at compile time.  If not
> > it will dispatch via VTab. My point was to make all the
> > conditions equal for comparison.  After all, you may consider a generic
> > model for Ada which eliminates all the dispatch overhead.
>
> QString is not a polymorphic class - it doesn't have any virtual
> functions. Its constructor, copy constructor and destructor are inlined.
> Its assignment operator is not inlined.
>
> The Ada program uses a tagged but not class-wide type, thus the compiler
> never dispatches any call. I compiled all Ada modules with -gnatn
> (front-end inlining enabled).
>
> Both programs are comparable from this point of view. :-)
>
> PS. I have replaced the protected type by an i386-specific atomic
> increment/decrement and have only 1.2x overhead. ;-)

That makes sense. Protected objects are guarded by some sort of
system-wide mutexes, so some overhead is unavoidable.

20% overhead is only three months of computer evolution; I can live with
that.

George




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-12 17:24                   ` Samuel Tardieu
@ 2008-03-13  8:41                     ` Maciej Sobczak
  2008-03-13 15:20                       ` Samuel Tardieu
  0 siblings, 1 reply; 96+ messages in thread
From: Maciej Sobczak @ 2008-03-13  8:41 UTC (permalink / raw)


On 12 Mar, 18:24, Samuel Tardieu <s...@rfc1149.net> wrote:

> Maciej> In the most extreme case (global linker optimization), all
> Maciej> calls can be resolved statically.
>
> Not even. Consider a base class B with two child classes D1 and
> D2 with their own virtual destructors. Pseudo-code:
>
>   B* o;
>   if (user_provided_integer_from_keyboard() == 0)
>     o = new D1;
>   else
>     o = new D2;
>   do_something_with (o);
>   delete o;
>
> You cannot resolve "delete" statically

Of course I can.
Note that the above is equivalent to:

B* o;
if (user_provided_integer_from_keyboard() == 0)
{
  o = new D1;
  do_something_with (o);
  delete o;
}
else
{
  o = new D2;
  do_something_with (o);
  delete o;
}

There is no further obstacle in resolving the call statically.

--
Maciej Sobczak * www.msobczak.com * www.inspirel.com




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-12 23:54                   ` gpriv
@ 2008-03-13  9:40                     ` Maciej Sobczak
  2008-03-13 10:49                       ` Peter C. Chapin
  2008-03-13 11:42                       ` gpriv
  0 siblings, 2 replies; 96+ messages in thread
From: Maciej Sobczak @ 2008-03-13  9:40 UTC (permalink / raw)


On 13 Mar, 00:54, gp...@axonx.com wrote:

> You are replying to posts without reading them carefully.  The issue
> is covered in the thread below:
>
> http://www.codeguru.com/forum/archive/index.php/t-442321.html

Unfortunately, most of the participants have no idea about the subject
(they even admit it), so their discussion is not a good authority.
The only exception there is Paul McKenzie, who actually claimed
clearly that *volatile has nothing to do with threads*.

There is a link to Microsoft documentation with an example. This can be
treated as an authoritative source of information related to Microsoft
compilers *only* (and I have doubts even about this). Since the C++
standard does not define the semantics of volatile in the presence of
multiple threads, any guarantee given by Microsoft should be
treated only as a *language extension*. In particular, the same code
example will not work with other compilers.

To be more exact, if Microsoft claims that their example works with
their compiler, it means that references to volatile provide
release-acquire memory consistency, which in turn means that they
involve memory barriers. This is surprising, but let's check the
assembly output from their compiler...

OK, I've checked it with Visual Studio 2005 (I don't have access to
anything newer).
The generated code does *not* involve memory barriers. It will not
work.

A simpler example can explain the problem.
Consider the following two variables:

bool A = false;  // can be as well volatile
bool B = false;  // can be as well volatile

and two threads:

// Thread1:
A = true;
B = true;

// Thread2:
if (B)
{
    assert(A);
}

*From the point of view* of Thread1, only the following states are
visible, as time goes on downwards:

time 1:   A == false && B == false
time 2:   A == true  && B == false
time 3:   A == true  && B == true

In other words, from the point of view of Thread1, the following
situation:

A == false && B == true

can *never* happen, because the order of writes just makes it
impossible. In other words, the condition and assertion from Thread2
would be correct if executed in Thread1.

The problem is that Thread2 can see something different and there the
assertion can fail. This can happen for two reasons:

1. If you write something to memory, you only instruct the CPU that
you *want* something to be stored. It will do it - but not necessarily
immediately and it can actually happen *some time later*.
The problem is that if you store two values (separate store
instructions), the CPU does not know that they are related - and can
store them physically in a different order. The compiler has no control
over it! It is the hardware that can reorder the writes to memory.

2. If you read something from memory, you can actually read something
that is already in the cache - the value may already be there, fetched
*earlier*.

Both these mechanisms can make Thread2 see different order of
modifications than that perceived by Thread1.
Again, the compiler has no control over it. It is all in hardware.

This is also a reason why the Double-Checked Locking pattern does
not work. Please read this:

http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html

It describes the problem for Java, but mentions the same ordering
issues.

Don't use volatile for multithreading. It does not work.

Note: Ada gives guarantees for proper ordering for volatile objects.

--
Maciej Sobczak * www.msobczak.com * www.inspirel.com




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-13  9:40                     ` Maciej Sobczak
@ 2008-03-13 10:49                       ` Peter C. Chapin
  2008-03-13 13:03                         ` Alex R. Mosteo
  2008-03-13 11:42                       ` gpriv
  1 sibling, 1 reply; 96+ messages in thread
From: Peter C. Chapin @ 2008-03-13 10:49 UTC (permalink / raw)


This paper:

	http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf

describes some of these issues in a C++ context and is written by two 
authors who probably know what they are talking about. Their conclusion 
is that it is pretty much impossible to write correct multi-threaded 
code in C++ without (non-standard) compiler assistance. I'm paraphrasing 
here. It is my understanding, however, that this matter will be 
addressed in C++ 0x.

Peter




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-13  9:40                     ` Maciej Sobczak
  2008-03-13 10:49                       ` Peter C. Chapin
@ 2008-03-13 11:42                       ` gpriv
  2008-03-13 16:10                         ` Maciej Sobczak
  1 sibling, 1 reply; 96+ messages in thread
From: gpriv @ 2008-03-13 11:42 UTC (permalink / raw)


On Mar 13, 5:40 am, Maciej Sobczak <see.my.homep...@gmail.com> wrote:
> On 13 Mar, 00:54, gp...@axonx.com wrote:
>
> > You are replying on posts without reading them carefully.  The issue
> > is covered in the thread below:
>
> >http://www.codeguru.com/forum/archive/index.php/t-442321.html
>
> [...]
>
> Don't use volatile for multithreading. It does not work.
>
> Note: Ada gives guarantees for proper ordering for volatile objects.

OK, your point is well taken, so it is up to compiler vendors to
guarantee volatile semantics under MT.   Does it mean that the original
purpose of volatile may not always hold either?

G.




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-13 10:49                       ` Peter C. Chapin
@ 2008-03-13 13:03                         ` Alex R. Mosteo
  2008-03-13 14:02                           ` gpriv
  2008-03-14  1:12                           ` Randy Brukardt
  0 siblings, 2 replies; 96+ messages in thread
From: Alex R. Mosteo @ 2008-03-13 13:03 UTC (permalink / raw)


Peter C. Chapin wrote:

> This paper:
> 
> http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf
> 
> describes some of these issues in a C++ context and is written by two
> authors who probably know what they are talking about. Their conclusion
> is that it is pretty much impossible to write correct multi-threaded
> code in C++ without (non-standard) compiler assistance. I'm paraphrasing
> here. It is my understanding, however, that this matter will be
> addressed in C++ 0x.
> 
> Peter

After reading the pointers given in this thread, I'm under the impression that
indeed you need to use volatile for unprotected variables accessed across
threads? For example, in Ada:

declare
   Global_Exit : Boolean := False;
   --  pragma Volatile (Global_Exit);
   --  pragma Atomic   (Global_Exit);
begin
   --  Lots o'tasks here that monitor Global_Exit for termination.
end;

Here you don't need a protected object, since once someone makes Global_Exit
true, the end arrives and all is well. I would normally mark that as Atomic
which, as it implies volatile, is OK.

But let's say one knows that Boolean is atomic for the architecture, and makes
the (risky) decision of not explicitly saying so to the compiler. I understand
that, without volatile, the compiler could optimize away any check for changes
on Global_Exit in tasks that never write to it, and see it as a constant?
(Indeed I guess GNAT would issue a warning about Global_Exit being constant).

Although in Ada I don't see much (or any?) use for volatile, non-atomic shared
variables outside of protected objects.

Some flaw in my understanding?




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-13 13:03                         ` Alex R. Mosteo
@ 2008-03-13 14:02                           ` gpriv
  2008-03-14  1:12                           ` Randy Brukardt
  1 sibling, 0 replies; 96+ messages in thread
From: gpriv @ 2008-03-13 14:02 UTC (permalink / raw)


On Mar 13, 9:03 am, "Alex R. Mosteo" <devn...@mailinator.com> wrote:
> [...]
> Although in Ada I don't see much (or any?) use for volatile, non-atomic shared
> variables outside of protected objects.
>
> Some flaw in my understanding?

You are correct: you should use volatile. I don't think there is any
need to have volatile inside protected objects.  Access will be
properly serialized anyway.

G.






* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-13  8:41                     ` Maciej Sobczak
@ 2008-03-13 15:20                       ` Samuel Tardieu
  0 siblings, 0 replies; 96+ messages in thread
From: Samuel Tardieu @ 2008-03-13 15:20 UTC (permalink / raw)


Sam> You cannot resolve "delete" statically

Maciej> Of course I can.  Note that the above is equivalent to:

You stripped the end of my paragraph which contained exactly what you
wrote in response, so this is hardly a rebuttal of what I said. Let me
quote myself:

Sam> unless you add a new "if" in it, which is then equivalent to
Sam> looking up the destructor in the virtual table

Which is exactly what you did (add a new "if").

  Sam
-- 
Samuel Tardieu -- sam@rfc1149.net -- http://www.rfc1149.net/




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-13 11:42                       ` gpriv
@ 2008-03-13 16:10                         ` Maciej Sobczak
  2008-03-13 16:16                           ` gpriv
  0 siblings, 1 reply; 96+ messages in thread
From: Maciej Sobczak @ 2008-03-13 16:10 UTC (permalink / raw)


On 13 Mar, 12:42, gp...@axonx.com wrote:

> OK, your point well taken, so it is up to compiler vendors to
> guarantee volatility under MT.

Vendors can (and should) provide you with some way to write correct MT
programs. This is an issue of normal competition on the market.
There are different ways to do it, and a language extension is one
possibility. If Microsoft claims that volatile works on their
compiler, it is a language extension.

On the GNU side of the world the set of tools does not involve
volatile at all. Instead you get a set of POSIX interfaces that give
necessary guarantees related to memory consistency and visibility
(actually, POSIX gives them for C only, but this isn't a problem
except for standard lawyers).
Windows, as an operating system, provides similar guarantees with
similar interfaces (for example, pthread_mutex_lock in POSIX can be
compared to EnterCriticalSection on Windows). If you have a library
that abstracts the interface differences away (like Boost.Thread), you
can write portable software without any use of volatile. You don't
need volatile, because the system interfaces already provide the
necessary memory consistency and visibility guarantees. And since it
is possible to write portable software this way, it is what I would
recommend.

If you start using volatile relying on language extensions provided by
one particular vendor, you will not be able to easily port this
software to other platforms.

> Does it mean that original purpose of
> volatile ma not always hold either?

The original purpose of volatile did not relate to threads, so nothing
changes there.
To be exact (and to complete the discussion), the volatile type
specifier can be used to implement I/O, for information exchange with
signal handlers and for longjmp. None of it is related to threads.

--
Maciej Sobczak * www.msobczak.com * www.inspirel.com




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-13 16:10                         ` Maciej Sobczak
@ 2008-03-13 16:16                           ` gpriv
  2008-03-13 22:01                             ` Simon Wright
  2008-03-13 22:25                             ` Maciej Sobczak
  0 siblings, 2 replies; 96+ messages in thread
From: gpriv @ 2008-03-13 16:16 UTC (permalink / raw)


On Mar 13, 12:10 pm, Maciej Sobczak <see.my.homep...@gmail.com> wrote:
> [...]
>
> To be exact (and to complete the discussion), the volatile type
> specifier can be used to implement I/O, for information exchange with
> signal handlers and for longjmp. None of it is related to threads.

So if you write to IO through volatile and later read from it, you may
get the same value fetched back by "smart" CPU.

G.




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-13 16:16                           ` gpriv
@ 2008-03-13 22:01                             ` Simon Wright
  2008-03-13 22:25                             ` Maciej Sobczak
  1 sibling, 0 replies; 96+ messages in thread
From: Simon Wright @ 2008-03-13 22:01 UTC (permalink / raw)


gpriv@axonx.com writes:

> So if you write to IO through volatile and later read from it, you
> may get the same value fetched back by "smart" CPU.

On the PowerPC/VME hardware I'm using, the memory mapping is arranged
so that VME (== IO) space isn't cached. I would imagine the same would
have to be true of any memory-mapped IO scheme.




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-13 16:16                           ` gpriv
  2008-03-13 22:01                             ` Simon Wright
@ 2008-03-13 22:25                             ` Maciej Sobczak
  2008-03-14  2:07                               ` gpriv
  1 sibling, 1 reply; 96+ messages in thread
From: Maciej Sobczak @ 2008-03-13 22:25 UTC (permalink / raw)


On 13 Mar, 17:16, gp...@axonx.com wrote:

> So if you write to IO through volatile and later read from it, you may
> get the same value fetched back by "smart" CPU.

I would expect that I/O is realized within the range of memory
addresses that is excluded from the "smart" handling (your BIOS setup
can even allow setting this - I remember seeing it somewhere). In any
case, it is the implementation in its entirety that has to handle it -
somehow. We don't need to bother how it's done.
The full meaning of volatile is defined in terms of its relation to so
called sequence points and program side-effects. There is some place
left just for "magic".

The problem is that when you make something volatile in your program,
it has no special meaning to CPU and is therefore subject to "smart"
optimization.

I have seen once a very interesting model for memory in multi-core and
multi-cpu machines. Imagine that each thread (or tasks in Ada) has its
own private memory. There is also one "global" memory area. There is
some "magic" that makes random bits of private memory propagate at
random times to the global part and another that makes random bits of
global memory propagate at random times to private parts of each
thread (task). This model can sound completely crazy, but actually
foresees all bad things that can happen in MT program - including
reordering.
There are some system-level mechanisms that help control all this mess
and that force the memory blocks to synchronize - these include
mutexes, condvars and membars with release-acquire consistency (in
Ada, you get them with protected objects, entry barriers, etc.).
You should write your program with this crazy model in mind. If you
can make it correct, it will work everywhere.

--
Maciej Sobczak * www.msobczak.com * www.inspirel.com




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-13 13:03                         ` Alex R. Mosteo
  2008-03-13 14:02                           ` gpriv
@ 2008-03-14  1:12                           ` Randy Brukardt
  2008-03-14 10:16                             ` Alex R. Mosteo
  1 sibling, 1 reply; 96+ messages in thread
From: Randy Brukardt @ 2008-03-14  1:12 UTC (permalink / raw)


"Alex R. Mosteo" <devnull@mailinator.com> wrote in message
news:63sn1tF29a5oiU1@mid.individual.net...
...
> Although in Ada I don't see much (or any?) use for volatile, non-atomic
shared
> variables outside of protected objects.
>
> Some flaw in my understanding?

Yes, sort of. You're not considering non-tasking uses. The canonical example
for Volatile is reading/writing a memory-mapped hardware device. In that
case, optimizing out the reads/writes can be fatal. It doesn't directly have
anything to do with tasking. (I suppose one could consider the hardware
device as a sort of task, but it would be one with unusual properties.)

Another purpose is for debugging. You can monitor the memory location of a
Volatile object to see what is going on; without the pragma, the compiler can
optimize the entire object away, leaving nothing to monitor. (Of course, the
pragma also changes the code, which might change the bug you are trying to
find.)

Also, you can use Volatile on any object, whereas Atomic only works on objects
the compiler can access indivisibly (usually no larger than the word size of
the machine).

OTOH, for communication between tasks, you have to use Atomic. (Thus large
objects need a protected object or other scheme for synchronizing access.)

                             Randy.








* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-13 22:25                             ` Maciej Sobczak
@ 2008-03-14  2:07                               ` gpriv
  2008-03-14  9:29                                 ` Maciej Sobczak
  2008-03-14 21:54                                 ` Simon Wright
  0 siblings, 2 replies; 96+ messages in thread
From: gpriv @ 2008-03-14  2:07 UTC (permalink / raw)


On Mar 13, 6:25 pm, Maciej Sobczak <see.my.homep...@gmail.com> wrote:
> On 13 Mar, 17:16, gp...@axonx.com wrote:
>
> > So if you write to IO through volatile and later read from it, you may
> > get the same value fetched back by "smart" CPU.
>
> I would expect that I/O is realized within the range of memory
> addresses that is excluded from the "smart" handling (your BIOS setup

I would expect that too, but I am not sure that "competitive" compiler
vendors will share my/your beliefs.


> can even allow setting this - I remember seeing it somewhere). In any
> case, it is the implementation in its entirety that has to handle it -
> somehow. We don't need to bother how it's done.
> The full meaning of volatile is defined in terms of its relation to so
> called sequence points and program side-effects. There is some place
> left just for "magic".

"Magic" is not a good way to define an algorithmic language.

>
> The problem is that when you make something volatile in your program,
> it has no special meaning to CPU and is therefore subject to "smart"
> optimization.

Then why is this keyword still there?  Just drop it and issue the warning
"volatile is obsolete and no longer supported".

>
> I have seen once a very interesting model for memory in multi-core and
> multi-cpu machines. Imagine that each thread (or tasks in Ada) has its
> own private memory. There is also one "global" memory area. There is
> some "magic" that makes random bits of private memory propagate at
> random times to the global part and another that makes random bits of
> global memory propagate at random times to private parts of each
> thread (task). This model can sound completely crazy, but actually
> foresees all bad things that can happen in MT program - including
> reordering.

What would it take to enforce code that ensures memory writes complete and
flushes the L1 cache?  That should be enough for a clear implementation
of volatile (as defined by the standard).  The performance penalty
should not be that significant (in most cases) since a flush is a DMA
process (?).  And again, the standard warns about those performance
penalties, so who gives compiler implementers the right to decide for us
what is good or not?

> There are some system-level mechanisms that help control all this mess
> and that force the memory blocks to synchronize - these include
> mutexes, condvars and membars with release-acquire consistency (in
> Ada, you get them with protected objects, entry barriers, etc.).
> You should write your program with this crazy model in mind. If you
> can make it correct, it will work everywhere.
>
> --
> Maciej Sobczak *www.msobczak.com*www.inspirel.com

Anyway, that sounds like total chaos.  When investing in code
development, clarity and predictability are not the last items on my
list.  Our C++ activity nowadays is limited to embedded DSP platforms,
and you have helped me appreciate the simplicity and predictability of
the architecture that we use (TI DM642).  I am convinced to use C++ no
further than this device (which does not even support floating point),
given that we have no other choice anyway, and I have neither the time,
the desire, nor a reasonable justification to get into the details that
you just described.  For general-purpose programming, it is also a
convincing argument for using Ada as the first choice - the decision
that we made months ago.  Finally, I think we got carried away with C/C++
issues that are totally foreign to Ada while in an Ada forum, and
were somewhat rude to others.  So let's leave it at that before we get
shouted at (although Ada folks are much more tolerant than the C++ crowd
on off-topic issues).

Regards,

George..






* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-14  2:07                               ` gpriv
@ 2008-03-14  9:29                                 ` Maciej Sobczak
  2008-03-14 21:54                                 ` Simon Wright
  1 sibling, 0 replies; 96+ messages in thread
From: Maciej Sobczak @ 2008-03-14  9:29 UTC (permalink / raw)


On 14 Mar, 03:07, gp...@axonx.com wrote:

> "Magic" not a good way to define an algorithmic language.

You need it to explain the interaction of the program with the external
world. This is outside the language, yet the language has to define it -
somehow. Being selectively silent is a standard (pun intended)
practice.


> Than why this keyword is still there, just drop it and issue warning
> "volatile obsolete and no longer supported"?

Volatile is needed to define side effects. But the truth is that
"normal" programmers don't even need to learn this keyword.


> Anyways, that sounds like total chaos.

Yes.

Note that these hardware phenomena also apply to Ada. You *have*
to use the proper tools to control this chaos; otherwise you have exactly
the same problems as in C++.

The advantage of Ada is that it offers the right tools out of the box -
and as such they are readily explained in all educational material.
Get *any* Ada book and you have protected objects explained. Thanks to
this, even newbies can get it right by default. Or at least they have
a better chance.

The problem with C++ is that none of this is explained in any
introductory C++ book. You need separate, additional material, and
the sad truth is that most programmers never reach for anything
additional. Then the misunderstandings propagate through other channels,
including forums, blogs, etc.

This is a technology culture problem, not a language limitation in the
technical sense. You can write correct C++ MT programs, and it is no
more expensive than any alternative. It's just not something you can learn
from "The C++ Programming Language" type of books.

--
Maciej Sobczak * www.msobczak.com * www.inspirel.com




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-14  1:12                           ` Randy Brukardt
@ 2008-03-14 10:16                             ` Alex R. Mosteo
  0 siblings, 0 replies; 96+ messages in thread
From: Alex R. Mosteo @ 2008-03-14 10:16 UTC (permalink / raw)


Randy Brukardt wrote:

> "Alex R. Mosteo" <devnull@mailinator.com> wrote in message
> news:63sn1tF29a5oiU1@mid.individual.net...
> ...
>> Although in Ada I don't see much (or any?) use for volatile, non-atomic
> shared
>> variables outside of protected objects.
>>
>> Some flaw in my understanding?
> 
> Yes, sort of. You're not considering non-tasking uses. The canonical example
> for Volatile is reading/writing a memory-mapped hardware device. In that
> case, optimizing out the reads/writes can be fatal. It doesn't directly have
> anything to do with tasking. (I suppose one could consider the hardware
> device as a sort of task, but it would be done with unusual properties.)

Ah, of course. I was so bent on looking for strange uses that I missed the
obvious one.

> Another purpose is for debugging. You can monitor the memory location of a
> Volatile object to see what is going on, without the pragma the compiler can
> optimize the entire object away leaving nothing to monitor. (Of course, the
> pragma also changes the code, which might change the bug you are trying to
> find.)

Hehe. Happily I only need the debugger once in a trimester or so :)

> (...)

Thanks!




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-08  6:04 Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing! ME
  2008-03-08 22:11 ` Maciej Sobczak
@ 2008-03-14 19:20 ` Mike Silva
  2008-03-14 20:43   ` Ed Falis
  2008-03-22 22:51 ` Florian Weimer
  2 siblings, 1 reply; 96+ messages in thread
From: Mike Silva @ 2008-03-14 19:20 UTC (permalink / raw)


On Mar 8, 2:04 am, "ME" <abcd...@nonodock.net> wrote:
> As many of may have already noticed, there has been a tremendous furor over
> the lack of multicore support in the common languages like C and C++.   I
> have been reading these articles in EE Times and elsewhere discussing this
> disaster with all the teeth gnashing and handwringing acting as though Ada
> never existed. Robert Dewar ,our hero, has written an absolutely excellent
> article with a clever intro.http://www.eetimes.com/news/design/showArticle.jhtml?articleID=206900265

Me, I'm just enjoying (in a melancholy way) the quote from the paper
Dr. Dewar mentioned:

"The 1980s will probably be remembered as the decade in which
programmers took a gigantic step backwards by switching from
secure Pascal-like languages to insecure C-like languages. I have
no rational explanation for this trend."




* Re: Robert Dewar's great article about the Strengths of Ada over
  2008-03-11  8:59                 ` Maciej Sobczak
  2008-03-11  9:49                   ` GNAT bug, Assert_Failure at atree.adb:2893 Ludovic Brenta
@ 2008-03-14 20:03                   ` Ivan Levashew
  1 sibling, 0 replies; 96+ messages in thread
From: Ivan Levashew @ 2008-03-14 20:03 UTC (permalink / raw)


Maciej Sobczak writes:

> BTW - this is what I got yesterday:
> 
> +===========================GNAT BUG
> DETECTED==============================+
> | 4.3.0 20070527 (experimental) (i686-apple-darwin8) Assert_Failure
> atree.adb:2893|

http://www.adacore.com/home/gnatpro/configurations/ :

Mac OS X

x86 Mac OS X – *future availability*
PowerPC Mac OS X




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-14 19:20 ` Mike Silva
@ 2008-03-14 20:43   ` Ed Falis
  0 siblings, 0 replies; 96+ messages in thread
From: Ed Falis @ 2008-03-14 20:43 UTC (permalink / raw)


On Fri, 14 Mar 2008 15:20:55 -0400, Mike Silva <snarflemike@yahoo.com>  
wrote:

> Me, I'm just enjoying (in a melancholy way) the quote from the paper
> Dr. Dewar mentioned:
>
> "The 1980s will probably be remembered as the decade in which
> programmers took a gigantic step backwards by switching from
> secure Pascal-like languages to insecure C-like languages. I have
> no rational explanation for this trend."


Didn't Richard Gabriel from Stanford cover the larger trend in his "Worse  
is Better" paper? http://www.jwz.org/doc/worse-is-better.html




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-14  2:07                               ` gpriv
  2008-03-14  9:29                                 ` Maciej Sobczak
@ 2008-03-14 21:54                                 ` Simon Wright
  2008-03-15  2:29                                   ` gpriv
  1 sibling, 1 reply; 96+ messages in thread
From: Simon Wright @ 2008-03-14 21:54 UTC (permalink / raw)


gpriv@axonx.com writes:

> On Mar 13, 6:25 pm, Maciej Sobczak <see.my.homep...@gmail.com> wrote:
>> On 13 Mar, 17:16, gp...@axonx.com wrote:
>>
>> > So if you write to IO through volatile and later read from it, you may
>> > get the same value fetched back by "smart" CPU.
>>
>> I would expect that I/O is realized within the range of memory
>> addresses that is excluded from the "smart" handling (your BIOS setup
>
> I would expect that too, but not sure that "compatitive" compiler
> vendors will share my/your beliefs.

I think that 'volatile' is only a way to tell the compiler that this
object must be read/written directly each time; don't copy it to a
register and manipulate values there.

Likewise 'atomic' says to read/write the whole thing with one machine
operation. Probably because that's what the IO hardware requires; e.g.,
you must write all 32 bits at once, not just the one byte you've
changed.

The only time this makes sense is with memory-mapped IO where the
system must be designed so that this is sensible.

If you try it with memory with varying levels of hardware cache and
with multiple cores you are going to be disappointed, I think.




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-14 21:54                                 ` Simon Wright
@ 2008-03-15  2:29                                   ` gpriv
  2008-03-15 13:29                                     ` Maciej Sobczak
  0 siblings, 1 reply; 96+ messages in thread
From: gpriv @ 2008-03-15  2:29 UTC (permalink / raw)


On Mar 14, 5:54 pm, Simon Wright <simon.j.wri...@mac.com> wrote:
> gp...@axonx.com writes:
> > On Mar 13, 6:25 pm, Maciej Sobczak <see.my.homep...@gmail.com> wrote:
> >> On 13 Mar, 17:16, gp...@axonx.com wrote:
>
> >> > So if you write to IO through volatile and later read from it, you may
> >> > get the same value fetched back by "smart" CPU.
>
> >> I would expect that I/O is realized within the range of memory
> >> addresses that is excluded from the "smart" handling (your BIOS setup
>
> > I would expect that too, but not sure that "compatitive" compiler
> > vendors will share my/your beliefs.
>
> I think that 'volatile' is only a way to tell the compiler that this
> object must be read/written directly each time; don't copy it to a
> register and manipulate values there.
>
> Likewise 'atomic' says to read/write the whole thing with one machine
> operation. Probably because that's what the IO hardware requires; eg,
> you must write all 32 bits at once, not just the only byte you've
> changed..
>
> The only time this makes sense is with memory-mapped IO where the
> system must be designed so that this is sensible.
>
> If you try it with memory with varying levels of hardware cache and
> with multiple cores you are going to be disappointed, I think.

IMHO it should not keep compiler designers from flushing the
contents of the registers down to the lowest level, which I believe some
compilers do.  Performance penalties should not be a cover for not
doing this.  What also seems alarming is the proliferation of
architecture specificity into programming techniques.  At the device
level it is unavoidable, but at the general systems level it is inexcusable.
What will happen when the architecture changes?

C started as a language for very simple microprocessor architectures.
Now architectures have outgrown what the language was originally
designed for.  Practitioners are simply patching things up, trying to
make a 12-year-old's pants fit a 16-year-old boy.

G.





* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-15  2:29                                   ` gpriv
@ 2008-03-15 13:29                                     ` Maciej Sobczak
  2008-03-15 16:09                                       ` gpriv
  0 siblings, 1 reply; 96+ messages in thread
From: Maciej Sobczak @ 2008-03-15 13:29 UTC (permalink / raw)


On 15 Mar, 03:29, gp...@axonx.com wrote:

> IMHO it should not be keeping compiler designers from flushing the
> contents of the registers down to lowest level

Do you know that it can cost you two orders of magnitude of
performance loss?

> Performance penalties should not be a cover for not
> doing this.

On the contrary. Otherwise it would make no sense to introduce
multicore architectures at all.

> What also seems to be alarming is proliferation of
> architecture specificity into the programming techniques.

No, there is no specificity. I have the impression that you still try
to keep the volatile mess and then declare that the platform specificity
breaks your code, but the truth is exactly the opposite - you have broken
code, period. That's it - don't expect compiler vendors to "fix the
world" and penalize those who do things correctly.
If you write correct code (yes, forget the volatile keyword once and
for all), there is absolutely no architecture specificity to worry
about.

> What will happen when architecture change?

Nothing. Correct programs will still work and broken programs will
still be broken.

> C started as a language for very simple microprocessor architecture
> language.  Now architectures outgrew what the language was originally
> designed for.

No, you can still target modern architectures with this "outdated"
language. Just do it right. Interestingly, this also applies to Ada
(wow! I've managed to keep the discussion on-topic! ;-) ).

> Practitioners are simply patching things up trying to
> make 12-year's old pants to fit on 16-years old boy.

Yes, you are unfortunately right here. Practitioners have to worry
about truckloads of broken code.

--
Maciej Sobczak * www.msobczak.com * www.inspirel.com




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-15 13:29                                     ` Maciej Sobczak
@ 2008-03-15 16:09                                       ` gpriv
  0 siblings, 0 replies; 96+ messages in thread
From: gpriv @ 2008-03-15 16:09 UTC (permalink / raw)


On Mar 15, 9:29 am, Maciej Sobczak <see.my.homep...@gmail.com> wrote:
> On 15 Mar, 03:29, gp...@axonx.com wrote:
>
> > IMHO it should not be keeping compiler designers from flushing the
> > contents of the registers down to lowest level
>
> Do you know that it can cost you two orders of magnitude of
> performance loss?
>
> > Performance penalties should not be a cover for not
> > doing this.
>
> On the contrary. Otherwise it would make no sense to introduce
> multicore architectures at all.

Performance is an issue in less than 1% of our code.  You may be in a
different application area, though (however, I doubt your numbers will be
that different).  We usually locate this 1% and concentrate on it
while still remaining well within the standard.  You would have to be a
total idiot to use volatile there or allow context switches.

>
> > What also seems to be alarming is proliferation of
> > architecture specificity into the programming techniques.
>
> No, there is no specificity. I have an impression that you still try
> to keep the volatile mess and then declare that the platform specifity

Your impression is totally wrong. My use of volatile may amount to 2-3
times within a 50 KLOC program.  And as you may notice, I no longer use
C++ in a preemptive environment, so the issue is over.


> breaks your code, but the truth is totally inverse - you have broken
> code, period. That's it - don't expect compiler vendors to "fix the
> world" and penalize those who do things correctly.
> If you write correct code (yes, forget once and for all the volatile
> keyword), there is absolutely no architecture specificity to worry
> about.
>
> > What will happen when architecture change?
>
> Nothing. Correct programs will still work and broken programs will
> still be broken.

It's harder to formulate correctness, that's all.


>
> > C started as a language for very simple microprocessor architecture
> > language.  Now architectures outgrew what the language was originally
> > designed for.
>
> No, you can still target modern architectures with this "outdated"
> language. Just do it right. Interestingly, this also applies to Ada
> (wow! I've managed to keep the discussion on-topic! ;-) ).
>
> > Practitioners are simply patching things up trying to
> > make 12-year's old pants to fit on 16-years old boy.
>
> Yes, you are unfortunately right here. Practitioners have to worry
> about truckloads of broken code.
>
> --
> Maciej Sobczak *www.msobczak.com*www.inspirel.com





* Re: Robert Dewar's great article about the Strengths of Ada over
  2008-03-10  9:23         ` Maciej Sobczak
  2008-03-10 19:01           ` Jeffrey R. Carter
@ 2008-03-22 21:12           ` Florian Weimer
  1 sibling, 0 replies; 96+ messages in thread
From: Florian Weimer @ 2008-03-22 21:12 UTC (permalink / raw)


* Maciej Sobczak:

> He claimed that I did not write any multithreading C or C++ program,
> presumably because the C and C++ standards say nothing about threads.
> For me this kind of argument is just a handwaving and to show this I
> replied with the same logic that nobody did <put-your-pet-application-
> here> in Ada, for the simple reason that AARM says nothing about it.

Concurrency is a bit different.  It cannot be implemented as a library:

<http://www.hpl.hp.com/personal/Hans_Boehm/misc_slides/pldi05_threads.pdf>
<http://www.hpl.hp.com/techreports/2004/HPL-2004-209.html>




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-10 21:24   ` Randy Brukardt
  2008-03-11 10:12     ` Alex R. Mosteo
@ 2008-03-22 22:43     ` Florian Weimer
  2008-03-26 13:49       ` Ole-Hjalmar Kristensen
  1 sibling, 1 reply; 96+ messages in thread
From: Florian Weimer @ 2008-03-22 22:43 UTC (permalink / raw)


* Randy Brukardt:

> "Maciej Sobczak" <see.my.homepage@gmail.com> wrote in message
> news:89af8399-94fb-42b3-909d-edf3c98d32e5@n75g2000hsh.googlegroups.com...
> ...
>> Take for example lock-free algorithms. There is no visible research on
>> this related to Ada, unlike Java and C++ (check on
>> comp.programming.threads).
>
> Perhaps I'm showing my ignorance, but does there need to be any?

If you actually want to go thoroughly multi-core, you often need
non-blocking algorithms.  It is not possible to write them in Ada
because Ada lacks the required primitives (mainly compare-and-swap or
something equivalent).

> Ada supports lock-free threads quite well using pragma Atomic.

Try implementing Azul's concurrent hashtable in Ada:

<http://blogs.azulsystems.com/cliff/2007/03/a_nonblocking_h.html>

(Portable Java source code is available.)




* Re: Robert  Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-08  6:04 Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing! ME
  2008-03-08 22:11 ` Maciej Sobczak
  2008-03-14 19:20 ` Mike Silva
@ 2008-03-22 22:51 ` Florian Weimer
  2 siblings, 0 replies; 96+ messages in thread
From: Florian Weimer @ 2008-03-22 22:51 UTC (permalink / raw)


* ME:

> As many of may have already noticed, there has been a tremendous furor
> over the lack of multicore support in the common languages like C and
> C++.   I have been reading these articles in EE Times and elsewhere
> discussing this disaster with all the teeth gnashing and handwringing
> acting as though Ada never existed. Robert Dewar ,our hero, has
> written an absolutely excellent article with a clever intro.
> http://www.eetimes.com/news/design/showArticle.jhtml?articleID=206900265 

GNAT indirectly inherits most C/C++ ambiguities regarding concurrency
because Ada code is compiled to GIMPLE, an intermediate language which
is heavily influenced by C/C++ requirements and which lacks a rigorously
specified memory model (among other things).

In fact, for most C/C++ concurrency surprises (for instance, a
conditional write turned into an unconditional one by GCC), Ada test
cases could be produced which showed the same problem.  For Ada, these
are defects in the toolchain.  For C/C++, they are mere
quality-of-implementation issues.  Is this a major difference?  Probably
not, because you may still have to convince your Ada vendor that their
reading of the standard is incorrect, and the concurrency anomaly you're
observing is actually an implementation defect.




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-22 22:43     ` Florian Weimer
@ 2008-03-26 13:49       ` Ole-Hjalmar Kristensen
  2008-03-26 21:27         ` Florian Weimer
  0 siblings, 1 reply; 96+ messages in thread
From: Ole-Hjalmar Kristensen @ 2008-03-26 13:49 UTC (permalink / raw)


If the compiler is smart enough to optimize this case, an entryless
protected object would be a good building block.

The AARM states that "Entryless protected objects are intended to be
treated roughly like atomic objects -- each operation is indivisible
with respect to other operations (unless both are reads), but such
operations cannot be used to synchronize access to other nonvolatile
shared variables"

-- 
 C++: The power, elegance and simplicity of a hand grenade.




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-26 13:49       ` Ole-Hjalmar Kristensen
@ 2008-03-26 21:27         ` Florian Weimer
  2008-03-27  9:31           ` Ole-Hjalmar Kristensen
  0 siblings, 1 reply; 96+ messages in thread
From: Florian Weimer @ 2008-03-26 21:27 UTC (permalink / raw)


* Ole-Hjalmar Kristensen:

> If the compiler is smart enough to optimize this case, an entryless
> protected object would be a good building block.
>
> The AARM states that "Entryless protected objects are intended to be
> treated roughly like atomic objects -- each operation is indivisible
> with respect to other operations (unless both are reads), but such
> operations cannot be used to synchronize access to other nonvolatile
> shared variables"

You need some signaling for a shared hash table; otherwise reading a
freshly-added object from a different thread might not give you the data
you want (strictly speaking, even just comparing them might cause
issues).




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-26 21:27         ` Florian Weimer
@ 2008-03-27  9:31           ` Ole-Hjalmar Kristensen
  2008-03-27 23:10             ` Florian Weimer
  2008-03-28  6:34             ` Randy Brukardt
  0 siblings, 2 replies; 96+ messages in thread
From: Ole-Hjalmar Kristensen @ 2008-03-27  9:31 UTC (permalink / raw)


If you mean that it may be difficult to optimize, I agree, but I
cannot agree that you need *more* than an entryless protected object
to implement a hash table, since it guarantees that each operation on
the object is indivisible. 

The simplest case (for the programmer) is of course to put both key
and value inside the protected object. Then a reader will either see
the key/value pair as it was before the update or as it is after the
update. The problem is that although reads within a procedure may be
optimistic, the compiler probably needs to insert at least a spin lock
during the actual update of the object.

What I was thinking of was to recognize the special case where the
entryless protected object contains only a single entity which can be
updated atomically with a compare and swap. In that case, you could
skip the spin lock in the update phase and use CAS directly. In this
case the key and the value would be in separate protected objects, and
the implementation of the hash table could follow the pattern of the
hash table you mentioned.

On the other hand, I cannot see any reason why Annex C just couldn't
say that intrinsic subprograms for compare-and-swap and similar
machine operations shall be provided *if* they are available on a
platform.
That would at least save me the work of writing the bindings myself.


>>>>> "FW" == Florian Weimer <fw@deneb.enyo.de> writes:

    FW> * Ole-Hjalmar Kristensen:
    >> If the compiler is smart enough to optimize this case, an entryless
    >> protected object would be a good building block.
    >> 
    >> The AARM states that "Entryless protected objects are intended to be
    >> treated roughly like atomic objects -- each operation is indivisible
    >> with respect to other operations (unless both are reads), but such
    >> operations cannot be used to synchronize access to other nonvolatile
    >> shared variables"

    FW> You need some signaling for a shared hash table, otherwise reading a
    FW> freshly-added object from a different thread might not give you the data
    FW> you want (strictly speaking, even just comparing the might cause
    FW> issues).

-- 
   C++: The power, elegance and simplicity of a hand grenade.




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-27  9:31           ` Ole-Hjalmar Kristensen
@ 2008-03-27 23:10             ` Florian Weimer
  2008-03-28  9:51               ` Ole-Hjalmar Kristensen
  2008-03-28  6:34             ` Randy Brukardt
  1 sibling, 1 reply; 96+ messages in thread
From: Florian Weimer @ 2008-03-27 23:10 UTC (permalink / raw)


* Ole-Hjalmar Kristensen:

> If you mean that it may be difficult to optimize, I agree, but I
> cannot agree that you need *more* than an entryless protected object
> to implement a hash table, since it guarantees that each operation on
> the object is indivisible. 

You need some kind of signaling construct.  Suppose you put some access
value as a value into the hash table.  If there's no signaling (the case
of an entryless protected object, it seems), a task that retrieves the
value from the table cannot make any assumptions regarding the object
the access value refers to, unless there is some other form of
synchronization.

> On the other hand, I cannot see any reason why Annex C just couldn't
> say that intrinsic subprograms for compare-amd-swap and similar
> machine operations shall be provided *if* they are available on a
> platform.

CAS alone is not sufficient, you also need some sort of barrier (both
against compiler optimizations and reordering in the silicon).




* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-27  9:31           ` Ole-Hjalmar Kristensen
  2008-03-27 23:10             ` Florian Weimer
@ 2008-03-28  6:34             ` Randy Brukardt
  1 sibling, 0 replies; 96+ messages in thread
From: Randy Brukardt @ 2008-03-28  6:34 UTC (permalink / raw)


"Ole-Hjalmar Kristensen"
<ole-hjalmar.kristensen@substitute_employer_here.com> wrote in message
news:wvbrlk44zf5l.fsf@astra06.norway.sun.com...
...
> On the other hand, I cannot see any reason why Annex C just couldn't
> say that intrinsic subprograms for compare-amd-swap and similar
> machine operations shall be provided *if* they are available on a
platform.
> That would at least save me the work of writing the bindings myself.

It does, actually. See C.1(11-16). It's "only" Implementation Advice, but
that is necessary in any case, because the Standard can't require something
it can't define.

A more interesting question is whether implementations follow that advice
(in any useful manner).

                                Randy.






* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-27 23:10             ` Florian Weimer
@ 2008-03-28  9:51               ` Ole-Hjalmar Kristensen
  2008-03-28 18:12                 ` Florian Weimer
  0 siblings, 1 reply; 96+ messages in thread
From: Ole-Hjalmar Kristensen @ 2008-03-28  9:51 UTC (permalink / raw)


>>>>> "FW" == Florian Weimer <fw@deneb.enyo.de> writes:

    FW> * Ole-Hjalmar Kristensen:
    >> If you mean that it may be difficult to optimize, I agree, but I
    >> cannot agree that you need *more* than an entryless protected object
    >> to implement a hash table, since it guarantees that each operation on
    >> the object is indivisible. 

    FW> You need some kind of signaling construct.  Suppose you put some access
    FW> value as a value into the hash table.  If there's no signaling (the case
    FW> of an entryless protected object, it seems), a task that retrieves the
    FW> value from the table cannot make any assumptions regarding the object
    FW> the access value refers to, unless there is some other form of
    FW> synchronization.

First, I never said it can be used to synchronize something which is
outside the protected object or the hash table. But it will serialize
the access to the protected object, so any task will see consistent
key, value pairs, which is all the synchronization you need for the
hash table itself.

Second, I cannot find anything in the RM which says you can make *any*
assumptions about objects which are outside the protected object so I
cannot see how signaling will help you.

Actually there are two cases here:

1. The task is calling a protected function or procedure to retrieve
   the pointer and accesses the object through the pointer after the
   call. This is obviously bad code unless the object is atomic, which
   in practice means a single word.

2. The task is calling a protected function or procedure and accesses
   the object through the pointer while it is inside the protected
   object. In both cases you are guaranteed that you will see any
   previous updates to the protected object itself and that no one is
   updating the protected object while you are inside, but unless the
   other object is atomic I do not think you have the guarantee that
   you even see previous updates to it, at least in the multiprocessor
   case. You may speculate that since the implementation of a
   protected call typically involves a memory barrier, it will be safe
   to access data outside the protected object, but I would not bet on
   it.
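To make case 1 concrete, here is a hedged sketch in C (C only because its memory model names the orderings explicitly; all identifiers are invented for illustration): publishing a pointer safely needs a release store paired with an acquire load, otherwise a reader on a weakly ordered multiprocessor may see stale fields through the pointer.

```c
#include <stdatomic.h>
#include <stddef.h>

typedef struct { int a; int b; } payload;

static payload data;
static _Atomic(payload *) shared = NULL;

/* Writer: the release store guarantees that the plain writes to
   data.a and data.b are visible to any reader that observes the
   pointer -- analogous to leaving a protected object. */
void publish(void)
{
    data.a = 1;
    data.b = 2;
    atomic_store_explicit(&shared, &data, memory_order_release);
}

/* Reader: the acquire load pairs with the release store above.
   With memory_order_relaxed instead, the fields could be stale
   on a weakly ordered multiprocessor -- exactly case 1 above. */
int consume(void)
{
    payload *p = atomic_load_explicit(&shared, memory_order_acquire);
    return p == NULL ? -1 : p->a + p->b;
}
```

Single-threaded, `publish` followed by `consume` returns 3; the point is that without the release/acquire pair, nothing in the language guarantees that result across tasks on different processors.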

    >> On the other hand, I cannot see any reason why Annex C just couldn't
    >> say that intrinsic subprograms for compare-and-swap and similar
    >> machine operations shall be provided *if* they are available on a
    >> platform.

    FW> CAS alone is not sufficient, you also need some sort of barrier (both
    FW> against compiler optimizations and reordering in the silicon).

Which is why I said compare-and-swap and similar machine operations.

With respect to compiler optimizations, I assume you are thinking of
the kind of reordering described by Boehm in his paper where some
compilers introduced extra reads and writes while the pthread mutex
was not held? This is obviously a problem, the compiler needs to know
whether a variable may be accessed by multiple threads or not to do
sensible optimizations. But I assume that pragma atomic would be
sufficient to take care of such cases (all atomic objects are volatile):

"15   For an atomic object (including an atomic component) all reads and
updates of the object as a whole are indivisible.

16   For a volatile object all reads and updates of the object as a whole are
performed directly to memory.

        16.a   Implementation Note:  This precludes any use of register
        temporaries, caches, and other similar optimizations for that object."

When it comes to reordering in the silicon, at least in the Solaris
case, this is already taken care of in the atomic library operations,
see under NOTES in the excerpt from the man page below. Only if you
need synchronization between *different* variables do you need memory
barriers.
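To illustrate, the same semantics can be sketched with C11 atomics (an assumption on my part; the Solaris library excerpted below predates C11): each read-modify-write is indivisible and globally visible on completion, while ordering of *other* variables around it still needs an explicit fence.

```c
#include <stdatomic.h>

static atomic_int counter = 0;

/* Indivisible increment, like atomic_inc(3C); returns the new value. */
int bump(void)
{
    return atomic_fetch_add(&counter, 1) + 1;
}

/* Compare-and-swap, like atomic_cas(3C): swaps in desired only if the
   variable still holds expected; returns the old value either way. */
int cas(int expected, int desired)
{
    atomic_compare_exchange_strong(&counter, &expected, desired);
    return expected;  /* on failure, expected is updated to the old value */
}

/* Full barrier, like membar_ops(3C): orders surrounding accesses to
   *other* variables relative to this point. */
void full_barrier(void)
{
    atomic_thread_fence(memory_order_seq_cst);
}
```

The atomic operations alone keep `counter` itself consistent; `full_barrier` is only required when unrelated shared variables must be synchronized with it, which matches the NOTES section of the man page below.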


Standard C Library Functions                       atomic_ops(3C)

NAME
     atomic_ops - atomic operations

SYNOPSIS
     #include <atomic.h>

DESCRIPTION
     This collection of functions provides atomic  memory  opera-
     tions. There are 8 different classes of atomic operations:

     atomic_add(3C)  These functions provide an  atomic  addition
                     of a signed value to a variable.

     atomic_and(3C)  These functions provide  an  atomic  logical
                     'and' of a value to a variable.

     atomic_bits(3C) These functions provide atomic  bit  setting
                     and clearing within a variable.

     atomic_cas(3C)  These functions provide an atomic comparison
                     of  a  value  with  a  variable. If the com-
                     parison is equal, then swap in a  new  value
                     for the variable, returning the old value of
                     the variable in either case.

     atomic_dec(3C)  These functions provide an atomic  decrement
                     on a variable.

     atomic_inc(3C)  These functions provide an atomic  increment
                     on a variable.

     atomic_or(3C)   These functions provide  an  atomic  logical
                     'or' of a value to a variable.

     atomic_swap(3C) These functions provide an atomic swap of  a
                     value  with  a  variable,  returning the old
                     value of the variable.

<snip>

NOTES
     Atomic instructions ensure global visibility of  atomically-
     modified  variables on completion.  In a relaxed store order
     system, this does not guarantee that the visibility of other
     variables  will  be  synchronized with the completion of the
     atomic instruction. If  such  synchronization  is  required,
     memory    barrier    instructions    must   be   used.   See
     membar_ops(3C).


-- 
   C++: The power, elegance and simplicity of a hand grenade.



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-28  9:51               ` Ole-Hjalmar Kristensen
@ 2008-03-28 18:12                 ` Florian Weimer
  2008-03-28 21:45                   ` Randy Brukardt
  2008-03-31  7:59                   ` Ole-Hjalmar Kristensen
  0 siblings, 2 replies; 96+ messages in thread
From: Florian Weimer @ 2008-03-28 18:12 UTC (permalink / raw)


* Ole-Hjalmar Kristensen:

> Second, I cannot find anything in the RM which says you can make *any*
> assumptions about objects which are outside the protected object so I
> cannot see how signaling will help you.

I think that's what 9.10 (Ada 95 without TC 1) is about.



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-28 18:12                 ` Florian Weimer
@ 2008-03-28 21:45                   ` Randy Brukardt
  2008-03-31  7:59                   ` Ole-Hjalmar Kristensen
  1 sibling, 0 replies; 96+ messages in thread
From: Randy Brukardt @ 2008-03-28 21:45 UTC (permalink / raw)


"Florian Weimer" <fw@deneb.enyo.de> wrote in message
news:87wsnmg1ju.fsf@mid.deneb.enyo.de...
> * Ole-Hjalmar Kristensen:
>
> > Second, I cannot find anything in the RM which says you can make *any*
> > assumptions about objects which are outside the protected object so I
> > cannot see how signaling will help you.
>
> I think that's what 9.10 (Ada 95 without TC 1) is about.

Yes, that's right. The wording is pretty obscure, but it allows you
to serialize access to anything.

                              Randy.





^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-28 18:12                 ` Florian Weimer
  2008-03-28 21:45                   ` Randy Brukardt
@ 2008-03-31  7:59                   ` Ole-Hjalmar Kristensen
  2008-03-31 13:03                     ` (see below)
  1 sibling, 1 reply; 96+ messages in thread
From: Ole-Hjalmar Kristensen @ 2008-03-31  7:59 UTC (permalink / raw)


>>>>> "FW" == Florian Weimer <fw@deneb.enyo.de> writes:

    FW> * Ole-Hjalmar Kristensen:
    >> Second, I cannot find anything in the RM which says you can make *any*
    >> assumptions about objects which are outside the protected object so I
    >> cannot see how signaling will help you.

    FW> I think that's what 9.10 (Ada 95 without TC 1) is about.

Yes, you are right. Thanks. I looked in the part describing protected
objects without finding it.

But one part of 9.10 puzzles me:

 9.c    Reason:  The point of this distinction is so that on
        multiprocessors with inconsistent caches, the caches only need
        to be refreshed at the beginning of an entry body, and forced
        out at the end of an entry body or protected procedure that
        leaves an entry open.  Protected function calls, and protected
        subprogram calls for entryless protected objects do not require
        full cache consistency.  Entryless protected objects are
        intended to be treated roughly like atomic objects -- each
        operation is indivisible with respect to other operations
        (unless both are reads), but such operations cannot be used to
        synchronize access to other nonvolatile shared variables.

If you do not refresh the cache at the beginning of a protected
procedure, how do you avoid reading stale data within the protected
object on a multiprocessor? And why the wording "cannot be used to
synchronize access to other *nonvolatile* shared variables"? Is the
implication that it *can* be used to synchronize other *volatile*
shared variables?

Btw., I ran a simple test on a SPARC multiprocessor with an entryless
protected object containing a single integer versus an integer
declared with pragma atomic, and as expected the pragma atomic
solution was much (40x) faster. 

-- 
   C++: The power, elegance and simplicity of a hand grenade.



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-31  7:59                   ` Ole-Hjalmar Kristensen
@ 2008-03-31 13:03                     ` (see below)
  2008-03-31 14:17                       ` (see below)
  2008-04-01  9:02                       ` Ole-Hjalmar Kristensen
  0 siblings, 2 replies; 96+ messages in thread
From: (see below) @ 2008-03-31 13:03 UTC (permalink / raw)


On 31/03/2008 08:59, in article wvbrd4pbz5mj.fsf@astra06.norway.sun.com,
"Ole-Hjalmar Kristensen"
<ole-hjalmar.kristensen@substitute_employer_here.com> wrote:

> But one part of 9.10 puzzles me:
> 
>  9.c    Reason:  The point of this distinction is so that on
>         multiprocessors with inconsistent caches, the caches only need
>         to be refreshed at the beginning of an entry body, and forced
>         out at the end of an entry body or protected procedure that
>         leaves an entry open.  Protected function calls, and protected
>         subprogram calls for entryless protected objects do not require
>         full cache consistency.  Entryless protected objects are
>         intended to be treated roughly like atomic objects -- each
>         operation is indivisible with respect to other operations
>         (unless both are reads), but such operations cannot be used to
>         synchronize access to other nonvolatile shared variables.
> 
> If you do not refresh the cache at the beginning of a protected
> procedure, how do you avoid reading stale data within the protected
> object on a multiprocessor? And why the wording "cannot be used to
> synchronize access to other *nonvolatile* shared variables"? Is the
> implication that it *can* be used to synchronize other *volatile*
> shared variables?

I think the wording is trying to cover all the bases.
One clue is the phrase "_full_ cache consistency".
The caches certainly need to be consistent with respect to the
protected data, even for protected functions and procedures,
but only entries ensure global consistency and so provide
synchronization of data that is not local to the protected object.

> Btw., I ran a simple test on a SPARC multiprocessor with an entryless
> protected object containing a single integer versus an integer
> declared with pragma atomic, and as expected the pragma atomic
> solution was much (40x) faster.

Unfortunately, we can't usefully apply that pragma even
to a pair of integers. (I don't mean the pair's components!)

-- 
Bill Findlay
<surname><forename> chez blueyonder.co.uk





^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-31 13:03                     ` (see below)
@ 2008-03-31 14:17                       ` (see below)
  2008-04-01  9:02                       ` Ole-Hjalmar Kristensen
  1 sibling, 0 replies; 96+ messages in thread
From: (see below) @ 2008-03-31 14:17 UTC (permalink / raw)


On 31/03/2008 14:03, in article C4169FA1.E214C%yaldnif.w@blueyonder.co.uk,
"(see below)" <yaldnif.w@blueyonder.co.uk> wrote:

> On 31/03/2008 08:59, in article wvbrd4pbz5mj.fsf@astra06.norway.sun.com,
> "Ole-Hjalmar Kristensen"
> <ole-hjalmar.kristensen@substitute_employer_here.com> wrote:
>> If you do not refresh the cache at the beginning of a protected
>> procedure, how do you avoid reading stale data within the protected
>> object on a multiprocessor? And why the wording "cannot be used to
>> synchronize access to other *nonvolatile* shared variables"? Is the
>> implication that it *can* be used to synchronize other *volatile*
>> shared variables?
> 
> I think the wording is trying to cover all the bases.
> One clue is the phrase "_full_ cache consistency".
> The caches certainly need to be consistent with respect to the
> protected data, even for protected functions and procedures,
> but only entries ensure global consistency and so provide
> synchronization of data that is not local to the protected object.

In practice, I would expect even protected functions and procedures to need
a mutex, and that that would execute a memory barrier operation, so
enforcing global consistency. To verify this I ran some simple tests myself.

I implemented Simpson's algorithm for a lock-free shared variable.
Here is the relevant code:

> package Wait_Free_Atomicity is
> 
>     type an_Atomic is limited private;
>     
>     procedure Update  (Atomic_Item : in out an_Atomic; Item : in  an_Item);
>     procedure Inspect (Atomic_Item : in out an_Atomic; Item : out an_Item);
> 
> private
> 
>     type a_Bistate is new Boolean;
>     pragma Atomic (a_Bistate);
> 
>     type a_Slot_Matrix is array (a_Bistate, a_Bistate) of an_Item;
>     pragma Volatile_Components (a_Slot_Matrix);
>             
>     type a_Full_Column is array (a_Bistate) of a_Bistate;
>     pragma Atomic_Components (a_Full_Column);
>    
>    type an_Atomic is
>       limited record
>             Data_Slot_Matrix    : a_Slot_Matrix;
>             Last_Column_Updated : a_Full_Column  :=
>                   (others => a_Bistate'First);
>             Last_Row_Inspected, Last_Row_Updated : a_Bistate :=
>                   a_Bistate'First;
>             pragma Atomic (Last_Row_Inspected);
>             pragma Atomic (Last_Row_Updated);
>         end record;
> 
> end Wait_Free_Atomicity;
> 
> package body Wait_Free_Atomicity is
>         
>     procedure Update (Atomic_Item : in out an_Atomic; Item : in  an_Item) is
>         Row : constant a_Bistate := not Atomic_Item.Last_Row_Inspected;
>         Col : constant a_Bistate := not Atomic_Item.Last_Column_Updated(Row);
>     begin
>         Atomic_Item.Data_Slot_Matrix(Row, Col) := Item;
>         Atomic_Item.Last_Column_Updated(Row) := Col;
>         Atomic_Item.Last_Row_Updated := Row;
>         -- no explicit membar sync
>     end Update;
>     
>     procedure Inspect (Atomic_Item : in out an_Atomic; Item : out an_Item) is
>         Row : constant a_Bistate := Atomic_Item.Last_Row_Updated;
>         Col : a_Bistate;
>         pragma Atomic (Col);
>     begin
>         Atomic_Item.Last_Row_Inspected := Row;
>         Col  := Atomic_Item.Last_Column_Updated(Row);
>         Item := Atomic_Item.Data_Slot_Matrix(Row, Col);
>         -- no explicit membar sync
>     end Inspect;
> 
> end Wait_Free_Atomicity;

The test was to run it with the Item type being a 5-tuple of consecutive
integers, and checking that the inspecting task received correct tuples.

When run on a single-processor machine (Mac PowerBook) there were no
consistency failures.

When run on a dual-core machine (MacBook Pro with Core 2 Duo CPU) there
were many consistency failures.

When the body was modified as follows:

> package body Wait_Free_Atomicity is
>     
>     protected Mem_Bar is
>        entry Sync;
>     end Mem_Bar;
>     
>     protected body Mem_Bar is
>        entry Sync when Boolean'(True) is
>        begin
>           null;
>        end Sync;
>     end Mem_Bar;
>     
>     procedure Update (Atomic_Item : in out an_Atomic; Item : in  an_Item) is
>         Row : constant a_Bistate := not Atomic_Item.Last_Row_Inspected;
>         Col : constant a_Bistate := not Atomic_Item.Last_Column_Updated(Row);
>     begin
>         Atomic_Item.Data_Slot_Matrix(Row, Col) := Item;
>         Atomic_Item.Last_Column_Updated(Row) := Col;
>         Atomic_Item.Last_Row_Updated := Row;
>         Mem_Bar.Sync;
>     end Update;
>     
>     procedure Inspect (Atomic_Item : in out an_Atomic; Item : out an_Item) is
>         Row : constant a_Bistate := Atomic_Item.Last_Row_Updated;
>         Col : a_Bistate;
>         pragma Atomic (Col);
>     begin
>         Atomic_Item.Last_Row_Inspected := Row;
>         Col  := Atomic_Item.Last_Column_Updated(Row);
>         Item := Atomic_Item.Data_Slot_Matrix(Row, Col);
>         Mem_Bar.Sync;
>     end Inspect;
> 
> end Wait_Free_Atomicity;

Note that the tuples are external to the protected object.

The consistency failures went away.
The same result held when the Mem_Bar.Sync entry was replaced by a protected
procedure; ditto with a protected function.

For the sake of interest, here are the test results, with execution times
(CPU times exceed real times because there are 2 cores working
simultaneously):

20_000_000 updates

with no sync:
3689 consistency failures.
        0.90 real         0.73 user         0.09 sys

with entry sync:
No consistency failures.
      222.90 real       138.52 user       265.78 sys

with procedure sync:
No consistency failures.
      149.36 real       127.65 user       164.96 sys

with function sync:
No consistency failures.
      142.43 real       120.33 user       155.93 sys

-- 
Bill Findlay
<surname><forename> chez blueyonder.co.uk





^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-31 13:03                     ` (see below)
  2008-03-31 14:17                       ` (see below)
@ 2008-04-01  9:02                       ` Ole-Hjalmar Kristensen
  2008-04-01 14:12                         ` (see below)
  1 sibling, 1 reply; 96+ messages in thread
From: Ole-Hjalmar Kristensen @ 2008-04-01  9:02 UTC (permalink / raw)


>>>>> "(b" == (see below) <yaldnif.w@blueyonder.co.uk> writes:

<snip>

    (b> I think the wording is trying to cover all the bases.
    (b> One clue is the phrase "_full_ cache consistency".
    (b> The caches certainly need to be consistent with respect to the
    (b> protected data, even for protected functions and procedures,
    (b> but only entries ensure global consistency and so provide
    (b> synchronization of data that is not local to the protected object.

Yes, that seems reasonable.

    >> Btw., I ran a simple test on a SPARC multiprocessor with an entryless
    >> protected object containing a single integer versus an integer
    >> declared with pragma atomic, and as expected the pragma atomic
    >> solution was much (40x) faster.

    (b> Unfortunately, we can't usefully apply that pragma even
    (b> to a pair of integers. (I don't mean the pair's components!)

Agreed, but you may be able to cheat and pack a pair of integers into
a 64-bit atomic, and a compare-and-swap is also much cheaper than a
protected object it seems.
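A hedged C sketch of that cheat (names and packing scheme are my own for illustration; in Ada this would need an implementation-defined 64-bit atomic type): two 32-bit values live in one 64-bit word, so a single compare-and-swap updates the pair indivisibly.

```c
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t pair = 0;

static uint64_t pack(uint32_t first, uint32_t second)
{
    return ((uint64_t) first << 32) | second;
}

/* Atomically replace (old_first, old_second) with (new_first,
   new_second); fails if another task changed the pair meanwhile. */
int swap_pair(uint32_t old_first, uint32_t old_second,
              uint32_t new_first, uint32_t new_second)
{
    uint64_t expected = pack(old_first, old_second);
    return atomic_compare_exchange_strong(&pair, &expected,
                                          pack(new_first, new_second));
}

uint32_t first_of_pair(void)
{
    return (uint32_t) (atomic_load(&pair) >> 32);
}
```

A failed `swap_pair` tells the caller some other task won the race, which is the usual retry trigger in lock-free loops.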

    (b> -- 
    (b> Bill Findlay
    (b> <surname><forename> chez blueyonder.co.uk



-- 
   C++: The power, elegance and simplicity of a hand grenade.



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-04-01  9:02                       ` Ole-Hjalmar Kristensen
@ 2008-04-01 14:12                         ` (see below)
  2008-04-02  7:22                           ` Ole-Hjalmar Kristensen
  0 siblings, 1 reply; 96+ messages in thread
From: (see below) @ 2008-04-01 14:12 UTC (permalink / raw)


On 01/04/2008 10:02, in article wvbr8wzyymkw.fsf@astra06.norway.sun.com,
"Ole-Hjalmar Kristensen"
<ole-hjalmar.kristensen@substitute_employer_here.com> wrote:

>>>>>> "(b" == (see below) <yaldnif.w@blueyonder.co.uk> writes:
> 
> <snip>
> 
>     (b> I think the wording is trying to cover all the bases.
>     (b> One clue is the phrase "_full_ cache consistency".
>     (b> The caches certainly need to be consistent with respect to the
>     (b> protected data, even for protected functions and procedures,
>     (b> but only entries ensure global consistency and so provide
>     (b> synchronization of data that is not local to the protected object.
> 
> Yes, that seems reasonable.
> 
>>> Btw., I ran a simple test on a SPARC multiprocessor with an entryless
>>> protected object containing a single integer versus an integer
>>> declared with pragma atomic, and as expected the pragma atomic
>>> solution was much (40x) faster.
> 
>     (b> Unfortunately, we can't usefully apply that pragma even
>     (b> to a pair of integers. (I don't mean the pair's components!)

Also, simply declaring a variable atomic does not in itself
ensure global consistency of view. For that you also need
to execute appropriate memory barrier operations for the
architecture. (I'm sure you know that.)
 
> Agreed, but you may be able to cheat and pack a pair of integers into
> a 64-bit atomic, and a compare-and-swap is also much cheaper than a
> protected object it seems.

Yes, but this is completely implementation-dependent.
Not much above the semantic level of assembly.
I would take a lot of convincing that the performance
improvement was actually necessary.

-- 
Bill Findlay
<surname><forename> chez blueyonder.co.uk




^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-04-01 14:12                         ` (see below)
@ 2008-04-02  7:22                           ` Ole-Hjalmar Kristensen
  2008-04-02 14:59                             ` (see below)
  0 siblings, 1 reply; 96+ messages in thread
From: Ole-Hjalmar Kristensen @ 2008-04-02  7:22 UTC (permalink / raw)


>>>>> "(b" == (see below) <yaldnif.w@blueyonder.co.uk> writes:

<snip>

    (b> Also, simply declaring a variable atomic does not in itself
    (b> ensure global consistency of view. For that you also need
    (b> to execute appropriate memory barrier operations for the
    (b> architecture. (I'm sure you know that.)

Yes, I know. So pragma atomic does not help very much by itself in
implementing lock-free algorithms.
 
    >> Agreed, but you may be able to cheat and pack a pair of integers into
    >> a 64-bit atomic, and a compare-and-swap is also much cheaper than a
    >> protected object it seems.

    (b> Yes, but this is completely implementation-dependent.
    (b> Not much above the semantic level of assembly.
    (b> I would take a lot of convincing that the performance
    (b> improvement was actually necessary.

I agree that it rarely should be necessary. It would have been nice to
have a way of implementing lock-free algorithms in Ada efficiently,
but entryless protected objects seem to be the best solution if
you want a portable program.

    (b> -- 
    (b> Bill Findlay
    (b> <surname><forename> chez blueyonder.co.uk


-- 
   C++: The power, elegance and simplicity of a hand grenade.



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-04-02  7:22                           ` Ole-Hjalmar Kristensen
@ 2008-04-02 14:59                             ` (see below)
  2008-04-04  6:36                               ` Ole-Hjalmar Kristensen
  2008-04-15 12:05                               ` Ole-Hjalmar Kristensen
  0 siblings, 2 replies; 96+ messages in thread
From: (see below) @ 2008-04-02 14:59 UTC (permalink / raw)


On 02/04/2008 08:22, in article wvbr4pakzpo1.fsf@astra06.norway.sun.com,
"Ole-Hjalmar Kristensen"
<ole-hjalmar.kristensen@substitute_employer_here.com> wrote:

> Yes, I know. So pragma atomic does not help very much by itself in
> implementing lock-free algorithms.

Yes, it's only one essential aspect.
 
>>> Agreed, but you may be able to cheat and pack a pair of integers into
>>> a 64-bit atomic, and a compare-and-swap is also much cheaper than a
>>> protected object it seems.
> 
>     (b> Yes, but this is completely implementation-dependent.
>     (b> Not much above the semantic level of assembly.
>     (b> I would take a lot of convincing that the performance
>     (b> improvement was actually necessary.
> 
> I agree that it rarely should be necessary. It would have been nice to
> have a way of implementing lock-free algorithms in Ada efficiently,
> but entryless protected objects seem to be the best solution if
> you want a portable program.

It would not take too much - perhaps just a standard library including
read-barrier and write-barrier operations, and a selection of things like
CAS, TAS, EXCH, etc; mapped to m/c codes where these are provided,
and implemented using the architecture's native sync operations where not.
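As a sketch of what such a library's surface might look like, mapped here onto GCC's __atomic builtins rather than machine code directly (the names are hypothetical, not a proposal text):

```c
#include <stdint.h>

/* Read and write barriers, as in membar_ops(3C). */
static inline void read_barrier(void)  { __atomic_thread_fence(__ATOMIC_ACQUIRE); }
static inline void write_barrier(void) { __atomic_thread_fence(__ATOMIC_RELEASE); }

/* CAS: returns nonzero iff *p was changed from expected to desired. */
static inline int cas32(int32_t *p, int32_t expected, int32_t desired)
{
    return __atomic_compare_exchange_n(p, &expected, desired, 0,
                                       __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}

/* EXCH: atomically store v, returning the previous value. */
static inline int32_t exch32(int32_t *p, int32_t v)
{
    return __atomic_exchange_n(p, v, __ATOMIC_SEQ_CST);
}

/* TAS: set the flag, returning its previous state. */
static inline int tas(char *p)
{
    return __atomic_test_and_set(p, __ATOMIC_SEQ_CST);
}
```

On architectures without a native instruction for one of these, the compiler falls back to whatever sync primitive the platform does provide, which is exactly the "best effort" mapping suggested above.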

Is this the basis for an AI?

Following on from my second post on this point, I was reminded of the package
Ada.Synchronous_Task_Control, which must also impose memory barriers if it
is to work reliably on a multiprocessor; so I tried that in my Simpson's
algorithm test as well. Here is the code:

> with Ada.Synchronous_Task_Control;
> package body Wait_Free_Atomicity is
>     
>        procedure Sync is
>           use Ada.Synchronous_Task_Control;
>           Sema : Suspension_Object;
>           Bool : Boolean := Current_State(Sema); -- either this
>        begin
>            -- Set_True(Sema);                     -- or
>            -- Suspend_Until_True(Sema);           -- this
>        end Sync;
>     
>     procedure Update (Atomic_Item : in out an_Atomic; Item : in  an_Item) is
>         Row : constant a_Bistate := not Atomic_Item.Last_Row_Inspected;
>         Col : constant a_Bistate := not Atomic_Item.Last_Column_Updated(Row);
>     begin
>         Atomic_Item.Data_Slot_Matrix(Row, Col) := Item;
>         Atomic_Item.Last_Column_Updated(Row) := Col;
>         Atomic_Item.Last_Row_Updated := Row;
>         Sync;
>     end Update;
>     
>     procedure Inspect (Atomic_Item : in out an_Atomic; Item : out an_Item) is
>         Row : constant a_Bistate := Atomic_Item.Last_Row_Updated;
>         Col : a_Bistate;
>         pragma Atomic (Col);
>     begin
>         Atomic_Item.Last_Row_Inspected := Row;
>         Col  := Atomic_Item.Last_Column_Updated(Row);
>         Item := Atomic_Item.Data_Slot_Matrix(Row, Col);
>         Sync;
>     end Inspect;
> 
> end Wait_Free_Atomicity;

It works nicely, and is an order of magnitude faster than a protected
object:

20_000_000 updates

with Suspend_Until_True sync:
No consistency failures.
        7.30 real        14.14 user         0.04 sys

with Current_State sync:
No consistency failures.
        3.57 real         6.97 user         0.02 sys

-- 
Bill Findlay
<surname><forename> chez blueyonder.co.uk





^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-04-02 14:59                             ` (see below)
@ 2008-04-04  6:36                               ` Ole-Hjalmar Kristensen
  2008-04-04 13:56                                 ` (see below)
  2008-04-15 12:05                               ` Ole-Hjalmar Kristensen
  1 sibling, 1 reply; 96+ messages in thread
From: Ole-Hjalmar Kristensen @ 2008-04-04  6:36 UTC (permalink / raw)


Interesting. I had not thought of Ada.Synchronous_Task_Control.  Apart
from that, I agree that a "best effort" implementation of a standard
library like the membar_ops and atomic_ops that are part of the
Solaris C library would likely be sufficient.


>>>>> "(b" == (see below) <yaldnif.w@blueyonder.co.uk> writes:

    (b> On 02/04/2008 08:22, in article wvbr4pakzpo1.fsf@astra06.norway.sun.com,
    (b> "Ole-Hjalmar Kristensen"
    (b> <ole-hjalmar.kristensen@substitute_employer_here.com> wrote:

    >> Yes, I know. So pragma atomic does not help very much by itself in
    >> implementing lock-free algorithms.

    (b> Yes, it's only one essential aspect.
 
    >>>> Agreed, but you may be able to cheat and pack a pair of integers into
    >>>> a 64-bit atomic, and a compare-and-swap is also much cheaper than a
    >>>> protected object it seems.
    >> 
    >> (b> Yes, but this is completely implementation-dependent.
    >> (b> Not much above the semantic level of assembly.
    >> (b> I would take a lot of convincing that the performance
    >> (b> improvement was actually necessary.
    >> 
    >> I agree that it rarely should be necessary. It would have been nice to
    >> have a way of implementing lock-free algorithms in Ada efficiently,
    >> but entryless protected objects seem to be the best solution if
    >> you want a portable program.

    (b> It would not take too much - perhaps just a standard library including
    (b> read-barrier and write-barrier operations, and a selection of things like
    (b> CAS, TAS, EXCH, etc; mapped to m/c codes where these are provided,
    (b> and implemented using the architecture's native sync operations where not.

    (b> Is this the basis for an AI?

    (b> Following on from my second post on this point, I was reminded of the package
    (b> Ada.Synchronous_Task_Control, which must also impose memory barriers if it
    (b> is to work reliably on a multiprocessor; so I tried that in my Simpson's
    (b> algorithm test as well. Here is the code:

    >> with Ada.Synchronous_Task_Control;
    >> package body Wait_Free_Atomicity is
    >> 
    >> procedure Sync is
    >> use Ada.Synchronous_Task_Control;
    >> Sema : Suspension_Object;
    >> Bool : Boolean := Current_State(Sema); -- either this
    >> begin
    >> -- Set_True(Sema);                     -- or
    >> -- Suspend_Until_True(Sema);           -- this
    >> end Sync;
    >> 
    >> procedure Update (Atomic_Item : in out an_Atomic; Item : in  an_Item) is
    >> Row : constant a_Bistate := not Atomic_Item.Last_Row_Inspected;
    >> Col : constant a_Bistate := not Atomic_Item.Last_Column_Updated(Row);
    >> begin
    >> Atomic_Item.Data_Slot_Matrix(Row, Col) := Item;
    >> Atomic_Item.Last_Column_Updated(Row) := Col;
    >> Atomic_Item.Last_Row_Updated := Row;
    >> Sync;
    >> end Update;
    >> 
    >> procedure Inspect (Atomic_Item : in out an_Atomic; Item : out an_Item) is
    >> Row : constant a_Bistate := Atomic_Item.Last_Row_Updated;
    >> Col : a_Bistate;
    >> pragma Atomic (Col);
    >> begin
    >> Atomic_Item.Last_Row_Inspected := Row;
    >> Col  := Atomic_Item.Last_Column_Updated(Row);
    >> Item := Atomic_Item.Data_Slot_Matrix(Row, Col);
    >> Sync;
    >> end Inspect;
    >> 
    >> end Wait_Free_Atomicity;

    (b> It works nicely, and is an order of magnitude faster than a protected
    (b> object:

    (b> 20_000_000 updates

    (b> with Suspend_Until_True sync:
    (b> No consistency failures.
    (b>         7.30 real        14.14 user         0.04 sys

    (b> with Current_State sync:
    (b> No consistency failures.
    (b>         3.57 real         6.97 user         0.02 sys

    (b> -- 
    (b> Bill Findlay
    (b> <surname><forename> chez blueyonder.co.uk

-- 
   C++: The power, elegance and simplicity of a hand grenade.



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-04-04  6:36                               ` Ole-Hjalmar Kristensen
@ 2008-04-04 13:56                                 ` (see below)
  2008-04-04 17:36                                   ` Georg Bauhaus
  0 siblings, 1 reply; 96+ messages in thread
From: (see below) @ 2008-04-04 13:56 UTC (permalink / raw)


On 04/04/2008 07:36, in article wvbry77uxh19.fsf@astra06.norway.sun.com,
"Ole-Hjalmar Kristensen"
<ole-hjalmar.kristensen@substitute_employer_here.com> wrote:

> Interesting. I had not thought of Ada.Synchronous_Task_Control.  Apart
> from that, I agree that a "best effort" implementation of a standard
> library like the membar_ops and atomic_ops that are part of the
> Solaris C library would likely be sufficient.

Is there online documentation/code for that library?

-- 
Bill Findlay
<surname><forename> chez blueyonder.co.uk





^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-04-04 13:56                                 ` (see below)
@ 2008-04-04 17:36                                   ` Georg Bauhaus
  2008-04-04 17:40                                     ` (see below)
  0 siblings, 1 reply; 96+ messages in thread
From: Georg Bauhaus @ 2008-04-04 17:36 UTC (permalink / raw)



On Fri, 2008-04-04 at 14:56 +0100, (see below) wrote:
> On 04/04/2008 07:36, in article wvbry77uxh19.fsf@astra06.norway.sun.com,
> "Ole-Hjalmar Kristensen"
> <ole-hjalmar.kristensen@substitute_employer_here.com> wrote:
> 
> > Interesting. I had not thought of Ada.Synchronous_Task_Control.  Apart
> > from that, I agree that a "best effort" implementation of standard
> > library like the membar_ops and atomic_ops which are part of the
> > Solaris C library would likely be sufficient.
> 
> Is there online documentation/code for that library?

docs.sun.com has sometimes been a good starting point.
http://docs.sun.com/app/docs/doc/819-2256/membar-ops-9f?a=view





^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-04-04 17:36                                   ` Georg Bauhaus
@ 2008-04-04 17:40                                     ` (see below)
  0 siblings, 0 replies; 96+ messages in thread
From: (see below) @ 2008-04-04 17:40 UTC (permalink / raw)


On 04/04/2008 18:36, in article 1207330618.23925.2.camel@K72, "Georg
Bauhaus" <rm.plus-bug.tsoh@maps.futureapps.de> wrote:

> 
> On Fri, 2008-04-04 at 14:56 +0100, (see below) wrote:
>> On 04/04/2008 07:36, in article wvbry77uxh19.fsf@astra06.norway.sun.com,
>> "Ole-Hjalmar Kristensen"
>> <ole-hjalmar.kristensen@substitute_employer_here.com> wrote:
>> 
>>> Interesting. I had not thought of Ada.Synchronous_Task_Control.  Apart
>>> from that, I agree that a "best effort" implementation of standard
>>> library like the membar_ops and atomic_ops which are part of the
>>> Solaris C library would likely be sufficient.
>> 
>> Is there online documentation/code for that library?
> 
> docs.sun.com has sometimes been a good starting point.
> http://docs.sun.com/app/docs/doc/819-2256/membar-ops-9f?a=view

Thanks, that's handy.

-- 
Bill Findlay
<surname><forename> chez blueyonder.co.uk





^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-04-02 14:59                             ` (see below)
  2008-04-04  6:36                               ` Ole-Hjalmar Kristensen
@ 2008-04-15 12:05                               ` Ole-Hjalmar Kristensen
  2008-04-17  4:46                                 ` Randy Brukardt
  1 sibling, 1 reply; 96+ messages in thread
From: Ole-Hjalmar Kristensen @ 2008-04-15 12:05 UTC (permalink / raw)


After reading your post about use of suspension objects, I looked in
the RM to see what guarantees we have about the effect of sharing
variables between tasks synchronized with suspension objects. I
completely agree that to work reliably on a multiprocessor the
suspension operations *have* to impose a memory barrier, but I was
unable to find any explicit statement about sharing variables
synchronized with suspension objects.

The sections I have reproduced below are the closest I could get. Since
Suspend_Until_True is potentially blocking, it *could* be signaled (as
defined in 9.10), but I find it strange that it is not mentioned
explicitly. Also, the statement "can be used for two-stage suspend
operations and as a simple building block for implementing
higher-level queues" seems to indicate that the intent indeed is to
be able to use suspension objects to synchronize access to shared
variables. Comments, anyone?

From the AARM:

9.10 Shared Variables

 9.a    Reason:  The underlying principle here is that for one action to
        ``signal'' a second, the second action has to follow a
        potentially blocking operation, whose blocking is dependent on
        the first action in some way.  Protected procedures are not
        potentially blocking, so they can only be "signalers," they
        cannot be signaled.

and 

D.10 Synchronous Task Control

1   [This clause describes a language-defined private semaphore (suspension
object), which can be used for two-stage suspend operations and as a simple
building block for implementing higher-level queues.]
...
9   The procedure Suspend_Until_True blocks the calling task until the state
of the object S is true; at that point the task becomes ready and the state
of the object becomes false.

10   {potentially blocking operation [Suspend_Until_True]} {blocking,
potentially [Suspend_Until_True]} {Program_Error (raised by failure of
run-time check)} Program_Error is raised upon calling Suspend_Until_True if
another task is already waiting on that suspension object.
Suspend_Until_True is a potentially blocking operation (see 9.5.1).


>>>>> "(b" == (see below) <yaldnif.w@blueyonder.co.uk> writes:

<snip>

    (b> Following on my second post to the point, I was reminded of the package
    (b> Ada.Synchronous_Task_Control, which must also impose memory barriers if it
    (b> is to work reliably on a multiprocessor; so I tried that in my Simpson's
    (b> algorithm test as well. Here is the code:

    >> with Ada.Synchronous_Task_Control;
    >> package body Wait_Free_Atomicity is
    >> 
    >> procedure Sync is
    >> use Ada.Synchronous_Task_Control;
    >> Sema : Suspension_Object;
    >> Bool : Boolean := Current_State(Sema); -- either this
    >> begin
    >> -- Set_True(Sema);                     -- or
    >> -- Suspend_Until_True(Sema);           -- this
    >> end Sync;
    >> 
    >> procedure Update (Atomic_Item : in out an_Atomic; Item : in  an_Item) is
    >> Row : constant a_Bistate := not Atomic_Item.Last_Row_Inspected;
    >> Col : constant a_Bistate := not Atomic_Item.Last_Column_Updated(Row);
    >> begin
    >> Atomic_Item.Data_Slot_Matrix(Row, Col) := Item;
    >> Atomic_Item.Last_Column_Updated(Row) := Col;
    >> Atomic_Item.Last_Row_Updated := Row;
    >> Sync;
    >> end Update;
    >> 
    >> procedure Inspect (Atomic_Item : in out an_Atomic; Item : out an_Item) is
    >> Row : constant a_Bistate := Atomic_Item.Last_Row_Updated;
    >> Col : a_Bistate;
    >> pragma Atomic (Col);
    >> begin
    >> Atomic_Item.Last_Row_Inspected := Row;
    >> Col  := Atomic_Item.Last_Column_Updated(Row);
    >> Item := Atomic_Item.Data_Slot_Matrix(Row, Col);
    >> Sync;
    >> end Inspect;
    >> 
    >> end Wait_Free_Atomicity;

    (b> It works nicely, and is an order of magnitude faster than a protected
    (b> object:

    (b> 20_000_000 updates

    (b> with Suspend_Until_True sync:
    (b> No consistency failures.
    (b>         7.30 real        14.14 user         0.04 sys

    (b> with Current_State sync:
    (b> No consistency failures.
    (b>         3.57 real         6.97 user         0.02 sys

    (b> -- 
    (b> Bill Findlay
    (b> <surname><forename> chez blueyonder.co.uk



-- 
   C++: The power, elegance and simplicity of a hand grenade.



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-04-15 12:05                               ` Ole-Hjalmar Kristensen
@ 2008-04-17  4:46                                 ` Randy Brukardt
  0 siblings, 0 replies; 96+ messages in thread
From: Randy Brukardt @ 2008-04-17  4:46 UTC (permalink / raw)


"Ole-Hjalmar Kristensen"
<ole-hjalmar.kristensen@substitute_employer_here.com> wrote in message
news:wvbrtzi32uig.fsf@astra06.norway.sun.com...
...
> The sections I have reproduced below are the closest I could get. Since
> Suspend_Until_True is potentially blocking, it *could* be signaled (as
> defined in 9.10), but I find it strange that it is not mentioned
> explicitly. Also, the statement "can be used for two-stage suspend
> operations and as a simple building block for implementing
> higher-level queues", seems to indicate that the intent indeed is to
> be able to use suspension objects to synchronize access to shared
> variables. Comments, anyone?

Looks like a bug in the RM to me. Surely the intent is that these operations
"signal" (what good are they otherwise?), but without some sort of statement
to that effect, you (and implementers) just have to guess. You should submit
a comment to Ada-Comment.

                                  Randy.





^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-03-08 22:11 ` Maciej Sobczak
                     ` (4 preceding siblings ...)
  2008-03-10 21:24   ` Randy Brukardt
@ 2008-04-29  7:15   ` Ivan Levashew
  2008-05-01  2:03     ` Steve Whalen
  5 siblings, 1 reply; 96+ messages in thread
From: Ivan Levashew @ 2008-04-29  7:15 UTC (permalink / raw)


Maciej Sobczak writes:
> Take for example lock-free algorithms. There is no visible research on
> this related to Ada, unlike Java and C++ (check on
> comp.programming.threads).
FYI: http://www.gidenstam.org/Ada/Non-Blocking/index.html

-- 
If you want to get to the top, you have to start at the bottom



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing!
  2008-04-29  7:15   ` Ivan Levashew
@ 2008-05-01  2:03     ` Steve Whalen
  0 siblings, 0 replies; 96+ messages in thread
From: Steve Whalen @ 2008-05-01  2:03 UTC (permalink / raw)


On Apr 29, 12:15 am, Ivan Levashew <octag...@bluebottle.com> wrote:
> Maciej Sobczak writes:
> > Take for example lock-free algorithms. There is no visible research on
> > this related to Ada, unlike Java and C++ (check on
> > comp.programming.threads).
>
> FYI:http://www.gidenstam.org/Ada/Non-Blocking/index.html
>
> --
> If you want to get to the top, you have to start at the bottom

Thanks for the link. Very interesting ...



^ permalink raw reply	[flat|nested] 96+ messages in thread

end of thread, other threads:[~2008-05-01  2:03 UTC | newest]

Thread overview: 96+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-03-08  6:04 Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing! ME
2008-03-08 22:11 ` Maciej Sobczak
2008-03-09  1:09   ` Christopher Henrich
2008-03-09 13:52     ` Maciej Sobczak
2008-03-09  1:51   ` Phaedrus
2008-03-09  3:17     ` Jeffrey R. Carter
2008-03-09 13:59     ` Maciej Sobczak
2008-03-09  3:15   ` Jeffrey R. Carter
2008-03-09 13:32     ` Maciej Sobczak
2008-03-09 14:02       ` Dmitry A. Kazakov
2008-03-09 18:26       ` Phaedrus
2008-03-10  0:04         ` Ray Blaak
2008-03-10  7:49           ` Georg Bauhaus
2008-03-10 16:48             ` Ray Blaak
2008-03-10  7:53           ` Phaedrus
2008-03-09 22:31       ` Jeffrey R. Carter
2008-03-10  3:53         ` gpriv
2008-03-10  3:04       ` Robert Dewar's great article about the Strengths of Ada over Larry Kilgallen
2008-03-10  9:23         ` Maciej Sobczak
2008-03-10 19:01           ` Jeffrey R. Carter
2008-03-10 22:00             ` Maciej Sobczak
2008-03-11  0:48               ` Jeffrey R. Carter
2008-03-11  7:12                 ` Pascal Obry
2008-03-11  8:59                 ` Maciej Sobczak
2008-03-11  9:49                   ` GNAT bug, Assert_Failure at atree.adb:2893 Ludovic Brenta
2008-03-14 20:03                   ` Robert Dewar's great article about the Strengths of Ada over Ivan Levashew
2008-03-22 21:12           ` Florian Weimer
2008-03-09  8:20   ` Robert Dewar's great article about the Strengths of Ada over other langauges in multiprocessing! Pascal Obry
2008-03-09  9:39     ` Georg Bauhaus
2008-03-09 12:40     ` Vadim Godunko
2008-03-09 13:37       ` Dmitry A. Kazakov
2008-03-09 14:41         ` Vadim Godunko
2008-03-10 20:51           ` Randy Brukardt
2008-03-10 22:30             ` Niklas Holsti
2008-03-10  9:56         ` Ole-Hjalmar Kristensen
2008-03-11 13:58       ` george.priv
2008-03-11 15:41         ` Vadim Godunko
2008-03-12  0:32           ` gpriv
2008-03-12 13:33             ` Maciej Sobczak
2008-03-12 14:41               ` gpriv
2008-03-12 15:22                 ` Vadim Godunko
2008-03-13  0:34                   ` gpriv
2008-03-12 16:28                 ` Maciej Sobczak
2008-03-12 17:24                   ` Samuel Tardieu
2008-03-13  8:41                     ` Maciej Sobczak
2008-03-13 15:20                       ` Samuel Tardieu
2008-03-12 23:54                   ` gpriv
2008-03-13  9:40                     ` Maciej Sobczak
2008-03-13 10:49                       ` Peter C. Chapin
2008-03-13 13:03                         ` Alex R. Mosteo
2008-03-13 14:02                           ` gpriv
2008-03-14  1:12                           ` Randy Brukardt
2008-03-14 10:16                             ` Alex R. Mosteo
2008-03-13 11:42                       ` gpriv
2008-03-13 16:10                         ` Maciej Sobczak
2008-03-13 16:16                           ` gpriv
2008-03-13 22:01                             ` Simon Wright
2008-03-13 22:25                             ` Maciej Sobczak
2008-03-14  2:07                               ` gpriv
2008-03-14  9:29                                 ` Maciej Sobczak
2008-03-14 21:54                                 ` Simon Wright
2008-03-15  2:29                                   ` gpriv
2008-03-15 13:29                                     ` Maciej Sobczak
2008-03-15 16:09                                       ` gpriv
2008-03-11 22:09       ` gpriv
2008-03-09 13:50     ` Maciej Sobczak
2008-03-09 14:54       ` Pascal Obry
2008-03-10 21:24   ` Randy Brukardt
2008-03-11 10:12     ` Alex R. Mosteo
2008-03-22 22:43     ` Florian Weimer
2008-03-26 13:49       ` Ole-Hjalmar Kristensen
2008-03-26 21:27         ` Florian Weimer
2008-03-27  9:31           ` Ole-Hjalmar Kristensen
2008-03-27 23:10             ` Florian Weimer
2008-03-28  9:51               ` Ole-Hjalmar Kristensen
2008-03-28 18:12                 ` Florian Weimer
2008-03-28 21:45                   ` Randy Brukardt
2008-03-31  7:59                   ` Ole-Hjalmar Kristensen
2008-03-31 13:03                     ` (see below)
2008-03-31 14:17                       ` (see below)
2008-04-01  9:02                       ` Ole-Hjalmar Kristensen
2008-04-01 14:12                         ` (see below)
2008-04-02  7:22                           ` Ole-Hjalmar Kristensen
2008-04-02 14:59                             ` (see below)
2008-04-04  6:36                               ` Ole-Hjalmar Kristensen
2008-04-04 13:56                                 ` (see below)
2008-04-04 17:36                                   ` Georg Bauhaus
2008-04-04 17:40                                     ` (see below)
2008-04-15 12:05                               ` Ole-Hjalmar Kristensen
2008-04-17  4:46                                 ` Randy Brukardt
2008-03-28  6:34             ` Randy Brukardt
2008-04-29  7:15   ` Ivan Levashew
2008-05-01  2:03     ` Steve Whalen
2008-03-14 19:20 ` Mike Silva
2008-03-14 20:43   ` Ed Falis
2008-03-22 22:51 ` Florian Weimer

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox