comp.lang.ada
* Preferred OS, processor family for running embedded Ada?
@ 2007-02-23  0:59 Mike Silva
  2007-02-23  4:41 ` Steve
                   ` (2 more replies)
  0 siblings, 3 replies; 79+ messages in thread
From: Mike Silva @ 2007-02-23  0:59 UTC (permalink / raw)


First a bit of background.  I'm a longtime embedded programmer and a
dabbler in Ada (I'd use it more if I could get paid for it).  Now I'd
like to play around with Ada on a single-board computer.  I have no
particular goals in mind other than to try something "neat".  So, what
is likely to be the quickest, most foolproof way for me to get from
here to there?

I'm assuming I'll want an OS on the board for the runtime stuff.
Would one of the *BSDs or Linux be the way to go?  If so and given my
intentions, would there be a reason to choose one over the other?  My
contrary side wants to try a *BSD, but I have no experience in any of
them _or_ Linux.

And what about processor family?  I was thinking ARM or Coldfire or
PPC (something in the MPC5xx family maybe).  Again, would there be an
Ada- or OS-related reason to choose one over the others?

I did ask an abbreviated version of this question at the bottom of
another thread, but I'm hoping this thread will have more visibility.
So, is all of this do-able by a mere mortal?  Many thanks for any
advice!





* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-23  0:59 Preferred OS, processor family for running embedded Ada? Mike Silva
@ 2007-02-23  4:41 ` Steve
  2007-02-23 16:00   ` Mike Silva
  2007-02-23  4:49 ` Jeffrey R. Carter
  2007-02-23 13:56 ` Stephen Leake
  2 siblings, 1 reply; 79+ messages in thread
From: Steve @ 2007-02-23  4:41 UTC (permalink / raw)


Two free RTOSes I am aware of are RTEMS and MaRTE:
  http://www.rtems.com/wiki/index.php/RTEMSAda
and:
  http://marte.unican.es/

These both work with GNAT.
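
Once GNAT targets such an RTOS, plain Ada tasking code runs on the
board unchanged.  A minimal sketch of the kind of program I mean
(names and timing invented; nothing in it is RTOS-specific):

   with Ada.Text_IO;
   procedure Blink is
      task Blinker;                         -- one concurrent activity
      task body Blinker is
      begin
         for I in 1 .. 10 loop
            Ada.Text_IO.Put_Line ("tick");  -- stand-in for toggling an LED
            delay 0.5;                      -- mapped onto the RTOS by the runtime
         end loop;
      end Blinker;
   begin
      null;                                 -- main just waits for Blinker to finish
   end Blink;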

Regards,
Steve
(The Duck)

"Mike Silva" <snarflemike@yahoo.com> wrote in message 
news:1172192349.419694.274670@k78g2000cwa.googlegroups.com...
> First a bit of background.  I'm a longtime embedded programmer and a
> dabbler in Ada (I'd use it more if I could get paid for it).  Now I'd
> like to play around with Ada on a single-board computer.  I have no
> particular goals in mind other than to try something "neat".  So, what
> is likely to be the quickest, most foolproof way for me to get from
> here to there?
>
> I'm assuming I'll want an OS on the board for the runtime stuff.
> Would one of the *BSDs or Linux be the way to go?  If so and given my
> intentions, would there be a reason to choose one over the other?  My
> contrary side wants to try a *BSD, but I have no experience in any of
> them _or_ Linux.
>
> And what about processor family?  I was thinking ARM or Coldfire or
> PPC (something in the MPC5xx family maybe).  Again, would there be an
> Ada- or OS-related reason to choose one over the others?
>
> I did ask an abbreviated version of this question at the bottom of
> another thread, but I'm hoping this thread will have more visibility.
> So, is all of this do-able by a mere mortal?  Many thanks for any
> advice!
> 






* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-23  0:59 Preferred OS, processor family for running embedded Ada? Mike Silva
  2007-02-23  4:41 ` Steve
@ 2007-02-23  4:49 ` Jeffrey R. Carter
  2007-02-23 13:13   ` Mike Silva
  2007-02-23 13:56 ` Stephen Leake
  2 siblings, 1 reply; 79+ messages in thread
From: Jeffrey R. Carter @ 2007-02-23  4:49 UTC (permalink / raw)


Asking for the "preferred" anything here is dangerous. You'll provoke 
all sorts of arguments about what is "preferred".

For getting quickly and easily into embedded Ada, you could try Lego 
Mindstorms. There's a free Ada => NQC compiler available. This is not 
the preferred way, of course, but it is a way.

-- 
Jeff Carter
"Your mother was a hamster and your father smelt of elderberries."
Monty Python & the Holy Grail
06




* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-23  4:49 ` Jeffrey R. Carter
@ 2007-02-23 13:13   ` Mike Silva
  0 siblings, 0 replies; 79+ messages in thread
From: Mike Silva @ 2007-02-23 13:13 UTC (permalink / raw)


On Feb 22, 11:49 pm, "Jeffrey R. Carter" <jrcar...@acm.org> wrote:
> Asking for the "preferred" anything here is dangerous. You'll provoke
> all sorts of arguments about what is "preferred".

I can believe that.  I was asking in terms of what OSes and/or
processors might have better support, or fewer gotchas.  What I want
to avoid is the situation where some port or feature X has not been
kept up to date, or is known to have problems.  For example, I believe
I remember some years back that there was a problem or poor
performance with some version of Linux threads.  That kind of thing.
What I want to do is not accidentally drift so far out of the
mainstream that I cause myself grief.
>
> For getting quickly and easily into embedded Ada, you could try Lego
> Mindstorms. There's a free Ada => NQC compiler available. This is not
> the preferred way, of course, but it is a way.

I appreciate that suggestion, but I'd like to work with a 32-bit
mainstream processor family.  While my goal now is just to play
around, if I could use what I learn in some real products down the
line so much the better.  It's that hopeful thing about maybe getting
paid to do Ada (after I first have some fun with it).





* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-23  0:59 Preferred OS, processor family for running embedded Ada? Mike Silva
  2007-02-23  4:41 ` Steve
  2007-02-23  4:49 ` Jeffrey R. Carter
@ 2007-02-23 13:56 ` Stephen Leake
  2007-02-23 14:10   ` Mike Silva
  2 siblings, 1 reply; 79+ messages in thread
From: Stephen Leake @ 2007-02-23 13:56 UTC (permalink / raw)


"Mike Silva" <snarflemike@yahoo.com> writes:

> First a bit of background.  I'm a longtime embedded programmer and a
> dabbler in Ada (I'd use it more if I could get paid for it).  Now I'd
> like to play around with Ada on a single-board computer.  I have no
> particular goals in mind other than to try something "neat".  So, what
> is likely to be the quickest, most foolproof way for me to get from
> here to there?
>
> I'm assuming I'll want an OS on the board for the runtime stuff.
> Would one of the *BSDs or Linux be the way to go?  If so and given my
> intentions, would there be a reason to choose one over the other?  My
> contrary side wants to try a *BSD, but I have no experience in any of
> them _or_ Linux.
>
> And what about processor family?  I was thinking ARM or Coldfire or
> PPC (something in the MPC5xx family maybe).  Again, would there be an
> Ada- or OS-related reason to choose one over the others?
>
> I did ask an abbreviated version of this question at the bottom of
> another thread, but I'm hoping this thread will have more visibility.
> So, is all of this do-able by a mere mortal?  Many thanks for any
> advice!

I responded to this before. There are several readily available
solutions to the requirements as you state them.

They all cost money, several thousand dollars. You don't say how
much money you are willing to spend; is $10k too much?

-- 
-- Stephe




* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-23 13:56 ` Stephen Leake
@ 2007-02-23 14:10   ` Mike Silva
  2007-02-24 10:45     ` Stephen Leake
  2007-02-24 13:59     ` Jacob Sparre Andersen
  0 siblings, 2 replies; 79+ messages in thread
From: Mike Silva @ 2007-02-23 14:10 UTC (permalink / raw)


On Feb 23, 8:56 am, Stephen Leake <stephen_le...@stephe-leake.org>
wrote:
> "Mike Silva" <snarflem...@yahoo.com> writes:
> > First a bit of background.  I'm a longtime embedded programmer and a
> > dabbler in Ada (I'd use it more if I could get paid for it).  Now I'd
> > like to play around with Ada on a single-board computer.  I have no
> > particular goals in mind other than to try something "neat".  So, what
> > is likely to be the quickest, most foolproof way for me to get from
> > here to there?
>
> > I'm assuming I'll want an OS on the board for the runtime stuff.
> > Would one of the *BSDs or Linux be the way to go?  If so and given my
> > intentions, would there be a reason to choose one over the other?  My
> > contrary side wants to try a *BSD, but I have no experience in any of
> > them _or_ Linux.
>
> > And what about processor family?  I was thinking ARM or Coldfire or
> > PPC (something in the MPC5xx family maybe).  Again, would there be an
> > Ada- or OS-related reason to choose one over the others?
>
> > I did ask an abbreviated version of this question at the bottom of
> > another thread, but I'm hoping this thread will have more visibility.
> > So, is all of this do-able by a mere mortal?  Many thanks for any
> > advice!
>
> I responded to this before. There are several readily available
> solutions to the requirements as you state them.
>
> They all cost money, several thousand dollars. You don't say how
> much money you are willing to spend; is $10k too much?

Yes, you did respond and I was grateful for your comments.  As this is
just a hobby/learning thing at the moment, $10k is way, way too much.
I'd like to keep the cost including SBC under, say, $1000.  Do I dream
the impossible dream?  I hope not, because I'd really like to give
this a try and perhaps learn enough to use embedded Ada commercially
down the line (at which time somebody else could fork up the $10k).





* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-23  4:41 ` Steve
@ 2007-02-23 16:00   ` Mike Silva
  0 siblings, 0 replies; 79+ messages in thread
From: Mike Silva @ 2007-02-23 16:00 UTC (permalink / raw)


On Feb 22, 11:41 pm, "Steve" <nospam_steve...@comcast.net> wrote:
> Two free RTOSes I am aware of are RTEMS and MaRTE:
>   http://www.rtems.com/wiki/index.php/RTEMSAda
> and:
>   http://marte.unican.es/
>
> These both work with GNAT.

Thanks for mentioning these, especially RTEMS.  I have looked at it a
bit in the past, and looking again now, the documentation at least
seems comprehensive, and it has ports to all the processors I'd be
interested in.  I will definitely look closely at RTEMS this time.





* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-23 14:10   ` Mike Silva
@ 2007-02-24 10:45     ` Stephen Leake
  2007-02-24 12:27       ` Jeffrey Creem
  2007-02-24 19:11       ` Mike Silva
  2007-02-24 13:59     ` Jacob Sparre Andersen
  1 sibling, 2 replies; 79+ messages in thread
From: Stephen Leake @ 2007-02-24 10:45 UTC (permalink / raw)


"Mike Silva" <snarflemike@yahoo.com> writes:

> As this is just a hobby/learning thing at the moment, $10k is way,
> way too much. I'd like to keep the cost including SBC under, say,
> $1000. Do I dream the impossible dream? I hope not, because I'd
> really like to give this a try and perhaps learn enough to use
> embedded Ada commercially down the line (at which time somebody else
> could fork up the $10k).

My main job at work is building a satellite simulator (GDS;
http://fsw.gsfc.nasa.gov/gds/). It's a hard real-time system. Some
people would say it's not "embedded" because it has an ethernet
connection to a sophisticated user interface, but that's another
discussion.

I develop all of the software for GDS on Windows. I've written
emulation packages for some of the hardware. I do this because it's
easier to debug top level code without the hardware getting in the
way, and the development tools (Emacs, GNAT, gdb) work better on
Windows than on the target OS (Lynx). Once it's working on the
emulator, then I run it on the real hardware. Sometimes it Just Works,
sometimes I have to get out the scope and see what's going on. In that
case, I try to fix the emulator so I won't have to use the scope again
:). Using the scope can be fun, but it's always way slower than using
gdb or higher-level tests.

So I suggest you take a similar approach. Make up some hardware that
you'd like to play with, and write an emulator for it. Then write some
code to make that hardware dance.

You can do all of that on free software and cheap hardware.
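
The shape of it is one spec with two interchangeable bodies; here is a
minimal sketch of that pattern (package and register names invented):

   package Hardware_IF is
      subtype Reg_Addr is Natural range 0 .. 255;
      procedure Write_Register (Addr : Reg_Addr; Value : Natural);
      function  Read_Register  (Addr : Reg_Addr) return Natural;
   end Hardware_IF;

   package body Hardware_IF is  -- emulator body: runs on the desktop
      Regs : array (Reg_Addr) of Natural := (others => 0);
      procedure Write_Register (Addr : Reg_Addr; Value : Natural) is
      begin
         Regs (Addr) := Value;  -- back the "registers" with plain memory
      end Write_Register;
      function Read_Register (Addr : Reg_Addr) return Natural is
      begin
         return Regs (Addr);
      end Read_Register;
   end Hardware_IF;

The target build swaps in a body that touches the real memory-mapped
registers; everything above the spec stays identical in both builds.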

If I was hiring (which I'm not), I'd look for someone who can
implement algorithms from simple problem descriptions. That's my
biggest need.

Understanding how to use a scope to debug hardware problems is also
good, but not as important. It's easier to learn that on the job.

If you want to expand into "real hardware", there are data acquisition
and control devices that plug into PCI slots, and come with Windows
drivers. I don't use them, but I think they are fairly inexpensive.
Anything for Windows is going to be the cheapest solution, because of
economies of scale. And they are "real-time" enough to get your feet wet. 

Another area to explore is FPGA programming. We use small FPGAs on an
IP bus carrier
(http://www.acromag.com/functions.cfm?Category_ID=24&Group_ID=1) to
interface to our hardware. There is a free "Web" version of Altera
Quartus
(https://www.altera.com/support/software/download/sof-download_center.html),
or the open-source ghdl VHDL compiler/simulator
(http://ghdl.free.fr/). FPGA development relies heavily on simulation,
which does not require real hardware.

If you are ambitious, you can try to tie the ghdl simulator to your
Ada code, to allow testing the Ada interface to the FPGA in
simulation. I haven't done that yet, but I wish I could.

Someone who can do both Ada and VHDL would be a very valuable person!

-- 
-- Stephe




* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-24 10:45     ` Stephen Leake
@ 2007-02-24 12:27       ` Jeffrey Creem
  2007-02-24 22:10         ` Dr. Adrian Wrigley
  2007-02-24 19:11       ` Mike Silva
  1 sibling, 1 reply; 79+ messages in thread
From: Jeffrey Creem @ 2007-02-24 12:27 UTC (permalink / raw)


Stephen Leake wrote:

> 
> Someone who can do both Ada and VHDL would be a very valuable person!
> 

I'm always surprised that VHDL engineers are not more open to Ada given 
how close the syntax is. The standard joke where I work is that VHDL is 
just like Ada except the capslock is always stuck on and comments are 
apparently forbidden ;)




* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-23 14:10   ` Mike Silva
  2007-02-24 10:45     ` Stephen Leake
@ 2007-02-24 13:59     ` Jacob Sparre Andersen
  2007-03-01 19:32       ` Jacob Sparre Andersen
  1 sibling, 1 reply; 79+ messages in thread
From: Jacob Sparre Andersen @ 2007-02-24 13:59 UTC (permalink / raw)


Mike Silva <snarflemike@yahoo.com> wrote:
> Stephen Leake <stephen_le...@stephe-leake.org> wrote:
>> Mike Silva <snarflem...@yahoo.com> wrote:

>> > I'm assuming I'll want an OS on the board for the runtime stuff.
>> > Would one of the *BSDs or Linux be the way to go?  If so and
>> > given my intentions, would there be a reason to choose one over
>> > the other?  My contrary side wants to try a *BSD, but I have no
>> > experience in any of them _or_ Linux.
>>
>> > And what about processor family?  I was thinking ARM or Coldfire
>> > or PPC (something in the MPC5xx family maybe).  Again, would
>> > there be an Ada- or OS-related reason to choose one over the
>> > others?
[...]
>> They all cost money, several thousands of dollars. You don't say
>> how much money you are willing to spend; is $10k too much?
>
> Yes, you did respond and I was grateful for your comments.  As this
> is just a hobby/learning thing at the moment, $10k is way, way too
> much.  I'd like to keep the cost including SBC under, say, $1000.

One of my acquaintances from the local Linux user group works with a
Linux/IA32-based embedded kit.  If I remember correctly, the price is
less than 200 USD for a full system (and the size of the embedded unit
is less than 10 cm³).  I'll ask him about the source of the kit, and
let you know when I have an answer.

Greetings,

Jacob
-- 
There only exist 10 kinds of people: Those who know binary
numbers and those who don't know binary numbers.




* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-24 10:45     ` Stephen Leake
  2007-02-24 12:27       ` Jeffrey Creem
@ 2007-02-24 19:11       ` Mike Silva
  1 sibling, 0 replies; 79+ messages in thread
From: Mike Silva @ 2007-02-24 19:11 UTC (permalink / raw)


On Feb 24, 5:45 am, Stephen Leake <stephen_le...@stephe-leake.org>
wrote:
> "Mike Silva" <snarflem...@yahoo.com> writes:
> > As this is just a hobby/learning thing at the moment, $10k is way,
> > way too much. I'd like to keep the cost including SBC under, say,
> > $1000. Do I dream the impossible dream? I hope not, because I'd
> > really like to give this a try and perhaps learn enough to use
> > embedded Ada commercially down the line (at which time somebody else
> > could fork up the $10k).
>
> My main job at work is building a satellite simulator (GDS;
> http://fsw.gsfc.nasa.gov/gds/). It's a hard real-time system. Some
> people would say it's not "embedded" because it has an ethernet
> connection to a sophisticated user interface, but that's another
> discussion.
>
> I develop all of the software for GDS on Windows. I've written
> emulation packages for some of the hardware. I do this because it's
> easier to debug top level code without the hardware getting in the
> way, and the development tools (Emacs, GNAT, gdb) work better on
> Windows than on the target OS (Lynx). Once it's working on the
> emulator, then I run it on the real hardware. Sometimes it Just Works,
> sometimes I have to get out the scope and see what's going on. In that
> case, I try to fix the emulator so I won't have to use the scope again
> :). Using the scope can be fun, but it's always way slower than using
> gdb or higher-level tests.

Yes, I have done something similar, writing emulators on a Windows box
for both the master and the slave components of semiconductor
fabrication equipment while the new hardware was being developed.

> So I suggest you take a similar approach. Make up some hardware that
> you'd like to play with, and write an emulator for it. Then write some
> code to make that hardware dance.

> You can do all of that on free software and cheap hardware.

It's that "make the hardware dance" part that seems much more
complicated with Ada than with C-plus-an-OS (but the benefits seem
much greater as well).  That is to say, choosing an underlying runtime
environment and getting it not only set up on the hardware, but
integrated with the gcc Ada compiler.  So, ignoring the question of
preferred processor families (fewest unnecessary gotchas),
I'm still wondering about which OS is the best choice to get something
up and running.  Can anybody comment on the relative merits and
troubles of running Ada on Linux, one of the *BSDs, and RTEMS?

>...
> Another area to explore is FPGA programming. We use small FPGAs on an
> IP bus carrier
> (http://www.acromag.com/functions.cfm?Category_ID=24&Group_ID=1) to
> interface to our hardware. There is a free "Web" version of Altera
> Quartus
> (https://www.altera.com/support/software/download/sof-download_center.html),
> or the open-source ghdl VHDL compiler/simulator
> (http://ghdl.free.fr/). FPGA development relies heavily on simulation,
> which does not require real hardware.
>
> If you are ambitious, you can try to tie the ghdl simulator to your
> Ada code, to allow testing the Ada interface to the FPGA in
> simulation. I haven't done that yet, but I wish I could.
>
> Someone who can do both Ada and VHDL would be a very valuable person!

Well, I did pick up a VHDL book a while back.  Maybe it's a sign. :)
But first I want to get Ada running on a SBC.






* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-24 12:27       ` Jeffrey Creem
@ 2007-02-24 22:10         ` Dr. Adrian Wrigley
  2007-02-25 13:10           ` roderick.chapman
                             ` (2 more replies)
  0 siblings, 3 replies; 79+ messages in thread
From: Dr. Adrian Wrigley @ 2007-02-24 22:10 UTC (permalink / raw)


On Sat, 24 Feb 2007 07:27:01 -0500, Jeffrey Creem wrote:

> Stephen Leake wrote:
> 
>> Someone who can do both Ada and VHDL would be a very valuable person!
>> 
> I'm always surprised that VHDL engineers are not more open to Ada given 
> how close the syntax is. The standard joke where I work is that VHDL is 
> just like Ada except the capslock is always stuck on and comments are 
> apparently forbidden ;)

I came to Ada from VHDL.  When I first encountered VHDL, my first thought
was "Wow!  You can say what you mean clearly".  Features like user
defined types (ranges, enumerations, modular types, multi-dimensional
arrays) gave a feeling of clarity and integrity absent from software
development languages.

So when I found that you could get the same benefits of integrity
in software development from a freely available compiler, it didn't
take long to realize what I'd been missing!  Ada is without doubt
the language at the pinnacle of software engineering, and infinitely
preferable to Pascal, C++ or Modula 3 as a first language in teaching.

But I have ever since wondered why the VHDL and Ada communities
are so far apart.  It seems like such a natural partnership for
hardware/software codevelopment.  And there is significant scope
for convergence of language features - fixing the niggling and
unnecessary differences too.  Physical types, reverse ranges,
configurations, architectures, deferred constants and ultra-light
concurrency come to mind from VHDL.  And general generics, private types,
tagged types, controlled types from Ada (does the latest VHDL have these?)

Perhaps a common denominator language can be devised which has the
key features of both, with none of the obsolescent features, and
can be translated into either automatically?  Something like this
might allow a "rebranding" of Ada (i.e. a new name, with full buzzword
compliance), and would be ideal to address the "new" paradigm of
multicore/multithreaded processor software, using the lightweight
threading and parallelism absent from Ada as we know it. For those who
know Occam, something like the 'PAR' and "SEQ" constructs are missing in
Ada.

While the obscenities of C-like languages thrive with new additions
seemingly every month, the Pascal family has withered.  Where is
Wirth when you need him?

(don't take it that I dislike C.  Or assembler.  Both have their
legitimate place as low-level languages to get the machine code
you want.  Great for hardware hacking.  Lousy for big teams, complex code)

One can dream...





* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-24 22:10         ` Dr. Adrian Wrigley
@ 2007-02-25 13:10           ` roderick.chapman
  2007-02-25 17:53             ` Jeffrey R. Carter
  2007-02-25 15:08           ` Stephen Leake
  2007-02-26 16:34           ` Preferred OS, processor family for running embedded Ada? Jean-Pierre Rosen
  2 siblings, 1 reply; 79+ messages in thread
From: roderick.chapman @ 2007-02-25 13:10 UTC (permalink / raw)


>Where is Wirth when you need him?

In retirement.  He did give the after-dinner speech at the VSTTE
conference in Zurich in 2005, and he was brilliant.  I wish I could
remember exactly what he said about C++ - I think the word
"abomination" was in there somewhere... :-)

I met him afterwards and had a brief chance to chat and thank
him for his influence on SPARK.

 - Rod, SPARK Team





* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-24 22:10         ` Dr. Adrian Wrigley
  2007-02-25 13:10           ` roderick.chapman
@ 2007-02-25 15:08           ` Stephen Leake
  2007-02-28 17:20             ` Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?") Colin Paul Gloster
  2007-02-26 16:34           ` Preferred OS, processor family for running embedded Ada? Jean-Pierre Rosen
  2 siblings, 1 reply; 79+ messages in thread
From: Stephen Leake @ 2007-02-25 15:08 UTC (permalink / raw)


"Dr. Adrian Wrigley" <amtw@linuxchip.demon.co.uk.uk.uk> writes:

> I came to Ada from VHDL.  When I first encountered VHDL, my first thought
> was "Wow!  You can say what you mean clearly".  Features like user
> defined types (ranges, enumerations, modular types, multi-dimensional
> arrays) gave a feeling of clarity and integrity absent from software
> development languages.

I had the same feeling when I first met Pascal, after learning APL and
Basic. Then Ada was just more of the same :).

> But I have ever since wondered why the VHDL and Ada communities are
> so far apart. It seems like such a natural partnership for
> hardware/software codevelopment. And there is significant scope for
> convergence of language features - fixing the niggling and
> unnecessary differences too. Physical types, reverse ranges,
> configurations, architectures, deferred constants and ultra-light
> concurrency come to mind from VHDL. And general generics, private
> types, tagged types, controlled types from Ada (does the latest VHDL
> have these?)

I haven't actually studied the additions in VHDL 2003, but I don't
think most of these Ada features make sense for VHDL. At least, if you
are using VHDL to program FPGAs.

And reverse ranges make things ambiguous, especially for slices of
unconstrained arrays. So I don't want to see those in Ada.

One big problem with VHDL is that it was not actually designed for
programming FPGAs; it was designed as a hardware modeling language.
People discovered that you can sort of use it for FPGA programming,
and it was the only standard language available for that purpose.
There are many things that you can say in VHDL that make no sense in
an FPGA, so each compiler vendor picks a slightly different subset of
VHDL to support for FPGAs, and gives things different meanings. 

> Perhaps a common denominator language can be devised which has the
> key features of both, with none of the obsolescent features, and
> can be translated into either automatically?  

Why would you want to translate them into each other? The semantics of
VHDL are _significantly_ different from Ada. A VHDL process is _not_
an Ada task.

Although I suppose if you decided to use VHDL to write code for a CPU
instead of an FPGA, you could decide that they were the same.

-- 
-- Stephe




* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-25 13:10           ` roderick.chapman
@ 2007-02-25 17:53             ` Jeffrey R. Carter
  0 siblings, 0 replies; 79+ messages in thread
From: Jeffrey R. Carter @ 2007-02-25 17:53 UTC (permalink / raw)


roderick.chapman@googlemail.com wrote:
> In retirement.  He did give the after-dinner speech at the VSTTE
> conference in Zurich in 2005, and he was brilliant.  I wish I could
> remember exactly what he said about C++ - I think the word
> "abomination" was in there somewhere... :-)

Sounds like a good candidate for a signature. Would a transcript be 
available anywhere?

-- 
Jeff Carter
"When Roman engineers built a bridge, they had to stand under it
while the first legion marched across. If programmers today
worked under similar ground rules, they might well find
themselves getting much more interested in Ada!"
Robert Dewar
62




* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-24 22:10         ` Dr. Adrian Wrigley
  2007-02-25 13:10           ` roderick.chapman
  2007-02-25 15:08           ` Stephen Leake
@ 2007-02-26 16:34           ` Jean-Pierre Rosen
  2007-02-26 21:18             ` Dr. Adrian Wrigley
  2 siblings, 1 reply; 79+ messages in thread
From: Jean-Pierre Rosen @ 2007-02-26 16:34 UTC (permalink / raw)


Dr. Adrian Wrigley wrote:
[Ada and VHDL]
> Perhaps a common denominator language can be devised 
Have you looked at AADL?

-- 
---------------------------------------------------------
            J-P. Rosen (rosen@adalog.fr)
Visit Adalog's web site at http://www.adalog.fr




* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-26 16:34           ` Preferred OS, processor family for running embedded Ada? Jean-Pierre Rosen
@ 2007-02-26 21:18             ` Dr. Adrian Wrigley
  2007-02-27 15:39               ` Jean-Pierre Rosen
  0 siblings, 1 reply; 79+ messages in thread
From: Dr. Adrian Wrigley @ 2007-02-26 21:18 UTC (permalink / raw)


On Mon, 26 Feb 2007 17:34:09 +0100, Jean-Pierre Rosen wrote:

> Dr. Adrian Wrigley wrote:
> [Ada and VHDL]
>> Perhaps a common denominator language can be devised 
> Have you looked at AADL?

I hadn't seen this.  Interesting.

It looks quite similar in some respects to what I was thinking of.
Particularly the emphasis on multiple representations of the
underlying program (graphical, XML, plain text etc).

It looks like it draws together aspects of VHDL and Ada without
really being based on either.  Is it going to be the next Big Thing?
--
Adrian





* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-26 21:18             ` Dr. Adrian Wrigley
@ 2007-02-27 15:39               ` Jean-Pierre Rosen
  2007-02-28 12:25                 ` Jerome Hugues
  0 siblings, 1 reply; 79+ messages in thread
From: Jean-Pierre Rosen @ 2007-02-27 15:39 UTC (permalink / raw)


Dr. Adrian Wrigley wrote:
> On Mon, 26 Feb 2007 17:34:09 +0100, Jean-Pierre Rosen wrote:
> 
>> Dr. Adrian Wrigley wrote:
>> [Ada and VHDL]
>>> Perhaps a common denominator language can be devised 
>> Have you looked at AADL?
> 
> I hadn't seen this.  Interesting.
> 
> It looks quite similar in some respects to what I was thinking of.
> Particularly the emphasis on multiple representations of the
> underlying program (graphical, XML, plain text etc).
> 
> It looks like it draws together aspects of VHDL and Ada without
> really being based on either.  Is it going to be the next Big Thing?

A lot of people are trying to make this happen :-). In a nutshell, AADL 
is a design language at system level; many concepts are inherited from 
Ada, and you'll find many Ada people involved (Joyce Tokar did the Ada 
binding), as well as AADL presentations at Ada conferences.

-- 
---------------------------------------------------------
            J-P. Rosen (rosen@adalog.fr)
Visit Adalog's web site at http://www.adalog.fr




* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-27 15:39               ` Jean-Pierre Rosen
@ 2007-02-28 12:25                 ` Jerome Hugues
  0 siblings, 0 replies; 79+ messages in thread
From: Jerome Hugues @ 2007-02-28 12:25 UTC (permalink / raw)


In article <naj1se.3ir.ln@hunter.axlog.fr>, Jean-Pierre Rosen wrote:
> Dr. Adrian Wrigley wrote:
>> On Mon, 26 Feb 2007 17:34:09 +0100, Jean-Pierre Rosen wrote:
>> 
>>> Dr. Adrian Wrigley wrote:
>>> [Ada and VHDL]
>>>> Perhaps a common denominator language can be devised 
>>> Have you looked at AADL?
>> 
>> I hadn't seen this.  Interesting.
>> 
>> It looks quite similar in some respects to what I was thinking of.
>> Particularly the emphasis on multiple representations of the
>> underlying program (graphical, XML, plain text etc).
>> 
>> It looks like it draws together aspects of VHDL and Ada without
>> really being based on either.  Is it going to be the next Big Thing?
> 
> A lot of people are trying to make this happen :-). In a nutshell, AADL 
> is a design language at system level; many concepts are inherited from 
> Ada, and you'll find many Ada people involved (Joyce Tokar did the Ada 
> binding), as well as AADL presentations at Ada conferences.
> 

AADL is not just a design language, it also allows you to perform a
wide range of checks and code generation on high level models, or some
refinements of them.

<some ad> 
We, at ENST, are developing Ocarina, which includes an
AADL-to-Ada code generator. We got some interesting results in
generating Ada code that matches many restrictions from the HIS annex
from AADL models.

See http://ocarina.enst.fr/ for more details
</>

Also, Cheddar, the scheduling toolsuite, has some support for AADL,
same goes for STOOD from Ellidiss.

Which means, as stated by Jean-Pierre, that the Ada community is also
involved in this language, and that links between the two are strong.

-- 
Jerome




* Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-02-25 15:08           ` Stephen Leake
@ 2007-02-28 17:20             ` Colin Paul Gloster
  2007-03-01  9:18               ` Jean-Pierre Rosen
                                 ` (2 more replies)
  0 siblings, 3 replies; 79+ messages in thread
From: Colin Paul Gloster @ 2007-02-28 17:20 UTC (permalink / raw)


I post from news:comp.lang.ada to news:comp.lang.vhdl .

Stephen A. Leake wrote in news:news:u649rx29a.fsf@stephe-leake.org on
news:comp.lang.ada for the benefit of Mike Silva:

"[..]

[..] FPGA development relies heavily on simulation,
which does not require real hardware."


A warning to Mike Silva: supposed simulators for languages chiefly
used for FPGAs often behave differently to how source code will
actually behave when synthesized (implemented) on a field programmable
gate array. This is true of cheap and expensive tools promoted as
simulators.


Stephen A. Leake wrote in news:news:u649rx29a.fsf@stephe-leake.org on
news:comp.lang.ada :

"[..]

Someone who can do both Ada and VHDL would be a very valuable person!"


A very euphemistic approach to stating that someone who could not do
one of Ada and VHDL after a few days with literature and tools, is
probably not good at the other of VHDL and Ada.


In news:pan.2007.02.24.22.11.44.430179@linuxchip.demon.co.uk.uk.uk
timestamped Sat, 24 Feb 2007 22:10:22 GMT, "Dr. Adrian Wrigley"
<amtw@linuxchip.demon.co.uk.uk.uk> posted:
"On Sat, 24 Feb 2007 07:27:01 -0500, Jeffrey Creem wrote:

> Stephen Leake wrote:
>
>> Someone who can do both Ada and VHDL would be a very valuable person!
>>
> I'm always surprised that VHDL engineers are not more open to Ada given
> how close the syntax is. The standard joke where I work is that VHDL is
> just like Ada except the capslock is always stuck on and comments are
> apparently forbidden ;)"


Just as one can find expert C++ programmers who lament what C++ code
from typical C++ programmers and typical C++ education are like, one
can find expert VHDL interlocutors who are not fond of typical VHDL
users. Such people who are so incompetent with VHDL are incompetent
with it because they are forced to use VHDL and they can get away with
not being good at it, so they are not likely to really want to use Ada either.


"I came to Ada from VHDL.  When I first encountered VHDL, my first though
was "Wow!  You can say what you mean clearly".  Features like user
defined types (ranges, enumerations, modular types,"


VHDL does not have Ada 2005's and Ada 95's mod types (
WWW.AdaIC.com/standards/05rm/html/RM-3-5-4.html
), unless I am mistaken.
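
For the comp.lang.vhdl readers: an Ada modular type is an integer
type with wraparound arithmetic, not an array of bits.  A minimal
sketch:

procedure Mod_Demo is
   type Byte is mod 2**8;  -- values 0 .. 255
   B : Byte := 255;
begin
   B := B + 1;             -- wraps to 0; no exception is raised
end Mod_Demo;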


" multi-dimensional
arrays) gave a feeling of clarity and integrity absent from software
development languages."


Apparently for many years, VHDL subset tools (the only kind of VHDL
tools which exist, so even if the language was excellent one would
need to restrict one's code to what was supported) used to not support
multi-dimensional arrays.


"So when I found that you could get the same benefits of integrity
in software development from a freely available compiler, it didn't
take long to realize what I'd been missing!  Ada is without doubt
the language at the pinnacle of software engineering, and infinitely
preferable to Pascal, C++ or Modula 3 as a first language in teaching."


Ada is good in the relative sense that it is less bad than something
which is worse than Ada, which is in no way similar to an absolute
statement that Ada is not bad. Unfortunately Ada (including VHDL)
allows

procedure Ada_Is_As_Unsafe_As_VHDL_And_GNAT_Will_Not_Even_Warn is
   value, Y : Integer;
begin
   if False then
      Value := 0;
   end if;
   Y := Value;
end Ada_Is_As_Unsafe_As_VHDL_And_GNAT_Will_Not_Even_Warn;
   --GNAT is an abbreviation of "the GNU Ada Translator".

I do not know Modula-3 and Oberon. Do they allow such a trivially
detectable accident?


Dr. Adrian Wrigley wrote:

"But I have ever since wondered why the VHDL and Ada communities
are so far apart."


Sometimes they are not as ignorant of each other as they might
typically be. E.g. Ada 95's and the latest VHDL's permissiveness of
reading things of mode out may be just a coincidence, unlike the
syntax for protected types in IEEE Std 1076-2002 which is copied from
ISO/IEC 8652:1995 (however the semantics are not fully copied,
e.g. mutual exclusion is ridiculously stipulated in VHDL for pure
functions on a common protected object). It is true that the
communities are not always close, e.g. shared variables were
introduced in IEEE Std 1076-1993 and were very bad (but at least even
IEEE Std 1076-1993 contained an explicit warning against them). In my
experience, a person who has specialized in VHDL has heard of Ada but
has not been interested to read anything about Ada except in a brief
history of how VHDL had been started.


"  It seems like such a natural partnership for
hardware/software codevelopment."


A thing which may be perceived to be obvious to one might not be perceived to
be such by another. For example, from Jamal Guennouni, "Using Ada as a
Language for a CAD Tool Development: Lessons and Experiences",
Proceedings of the fifth Washington Ada symposium on Ada WADAS 1988:

"[..]

[..] On the other hand, our approach for hardware
description is also different from the one taken by Shahdad.
Indeed, we have stuck to the Ada language whereas Shahdad
[20] has developed a new language based on Ada called
VHDL (a part of the VHSIC program) dedicated to
hardware description and simulation. Moreover, we have
used the same language (i.e., Ada) for hardware and
software simulations whereas the VHDL language is used
for hardware descriptions only. This has several drawbacks
since it sets up a border between a software designer and a
circuit designer. It cannot benefit from the advantages connected
with the use of a single language during the whole
design process (e.g., use of the existing development and
debugging support tools).

[..]"

Don't get excited, Jamal Guennouni was writing about Ada for hardware
much as VHDL had been originally intended for hardware - that is: not
for automatic generation of a netlist. However, work on subsets for
synthesizable Ada has been expended. Now you may become excited, if
you are willing to allow yourself the risk of becoming disappointed,
after all if work was published on this in the 1980s then why did
WWW-users.CS.York.ac.UK/~mward/compiler/
get a publication in 2001 without citing nor even mentioning any of
the earlier relevant works of synthesizable Ada projects?


Dr. Adrian Wrigley wrote:

"  And there is significant scope
for convergence of language features - fixing the niggling and
unnecessary differences too."


That would be nice. However complete uniformity shall not happen if
things which are currently compatible are to maintain
compatibility. Does it really make sense to replace one
incompatibility with compatibility and at the same time replace a
different compatibility with an incompatibility? E.g. VHDL differs
from Ada in not allowing to specify an array's dimensions as
follows: array (address_type'range) if address_type is an INTEGER
(sub)type (making array(address_type) legal) but if address_type is a
(sub)type of IEEE.numeric_std.(un)signed it is legal to have
array(address_type'range) but it means something different because
VHDL (un)signed is an array type (making array(address_type) illegal).

Should Ada's attribute 'First refer to the lower bound of the index of
a type based on IEEE.numeric_std.(un)signed or to the lower bound of
the numbers which can be interpreted as being represented by such an
array? It is impossible to choose one or the other without a resulting
incompatibility.

In Ada 2005 and Ada 95, a mod type which represents a whole
number is treated not as an array type when applying attributes,
but in VHDL, IEEE.numeric_std.unsigned and IEEE.numeric_std.signed
are treated as array types when applying attributes, even though they
may be treated as whole numbers in other contexts. Mod types and
IEEE.numeric_std.(un)signed both support logical (true-or-false)
operators.

Even in Ada, an array of Ada Booleans is not identical to an Ada mod
type, and even in VHDL, IEEE.numeric_std.unsigned and INTEGER are not
identical, so why should a huge amount of effort be spent to integrate
similar or other aspects of Ada and VHDL?
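
Concretely, the two Ada views already answer 'First differently; a
small sketch:

type Word is mod 2**16;
-- Word'First = 0 and Word'Last = 65_535: bounds of the numeric range

type Word_Bits is array (0 .. 15) of Boolean;
-- Word_Bits'First = 0 is an index bound; the type has no numeric value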


"  Physical types,"


Why bother?


" reverse ranges,"


Stephen A. Leake responded in news:u7iu68eba.fsf@stephe-leake.org :

"[..] reverse ranges make things ambiguous, especially for slices of
unconstrained arrays. So I don't want to see those in Ada."


Maybe. From the VHDL standard:
"[..]
[..] the following two block configurations are equivalent:
for Adders(31 downto 0) ... end for;
for Adders(0 to 31) ... end for;
[..]
Examples:
variable REAL_NUMBER : BIT_VECTOR (0 to 31);
[..]
alias MANTISSA : BIT_VECTOR (23 downto 0) is REAL_NUMBER (8 to 31);
-- MANTISSA is a 24b value whose range is 23 downto 0.
-- Note that the ranges of MANTISSA and REAL_NUMBER (8 to 31)
-- have opposite directions. A reference to MANTISSA (23 downto 18)
-- is equivalent to a reference to REAL_NUMBER (8 to 13).
[..]"

It is true that an illegal description can result by mixing up
directions, but could you please give a concrete example of how
directions can be ambiguous? The VHDL attribute 'RANGE returns
"The range A'LEFT(N) to A'RIGHT(N) if the Nth index range
of A is ascending, or the range A'LEFT(N) downto
A'RIGHT(N) if the Nth index range of A is descending", and similarly
'REVERSE_RANGE returns "The range A'RIGHT(N) downto A'LEFT(N) if the
Nth index range of A is ascending, or the range A'RIGHT(N) to
A'LEFT(N) if the Nth index range of A is descending." Similarly:
"NOTES
1 - The relationship between the values of the LEFT, RIGHT, LOW, and
HIGH attributes is expressed as follows:


                 Ascending range         Descending range
T'LEFT =          T'LOW                   T'HIGH
T'RIGHT =         T'HIGH                  T'LOW"


Dr. Adrian Wrigley wrote:

"configurations, architectures,"


It is true that we miss these in dialects of Ada not called VHDL, but
we can cope. An interesting recent post on how to not bother with the
binding of a component instance to a design entity of an architecture by
a configuration specification without sacrificing the intended benefit
of configurations and architectures is
news:54gnriF20p8onU1@mid.individual.net


Dr. Adrian Wrigley wrote:

" defered constants"


Ada 2005 does allow deferred constants, so I am unsure as to what
improvement Dr. Wrigley wants for this area. Perhaps he would like to
be able to assign the value with := in the package body like in VHDL,
which is not allowed in Ada 2005. Perhaps Ada vendors would be willing to
make Ada less like C++ by removing exposed implementation details from
a package_declaration, but you would have been better off proposing
this before Ada 2005 was finalized.


Dr. Adrian Wrigley wrote:

" and ultra-light
concurrency come to mind from VHDL."


In what way is copying concurrency from VHDL where it is not already
present in Ada desirable?


Dr. Adrian Wrigley wrote:

"  And general generics, private types,
tagged types, controlled types from Ada (does the latest VHDL have these?)"


No mainstream version of VHDL has these. Interfaces and tagged types might be
added in a future version:
WWW.SIGDA.org/Archives/NewsGroupArchives/comp.lang.vhdl/2006/Jun/comp.lang.vhdl.57450.txt


Stephen A. Leake responded:

"I haven't actually studied the additions in VHDL 2003, but I don't
think most of these Ada features make sense for VHDL. At least, if you
are using VHDL to program FPGAs.

[..]"

Why?

The latest IEEE VHDL standard does not have these, but the draft
standard IEEE P1076/D3.2 of December 10, 2006 adds subprogram and
package generics; VHDL never had (and even in the draft still does
not have) generic instantiations in which the
parameter is a type (Ada83 had all of these kinds of generics). These
could be nice for FPGAs.


Dr. Adrian Wrigley wrote:

"Perhaps a common denominator language can be devised which has the
key features of both, with none of the obsolescent features,"


Perhaps. But people could continue with what they are using. From
Dr. SY Wong, "Hardware/Software Co-Design Language
Compatible with VHDL", WESCON, 1998:

"Introduction.
   This Hardware/Software (hw/sw) Co-
Design Language (CDL) (ref.2) is a
small subset of ANSI/ISO Ada. It has
existed since 1980 when VHDL was
initiated and is contained in IEEE 1076
VHDL-1993 with only minor differences.
[..]"


Dr. Adrian Wrigley wrote:

" and
can be translated into either automatically?"

Stephe Leake responded:

"Why would you want to translate them into each other? The semantics
of
VHDL are _significantly_ different from Ada. A VHDL process is _not_
an Ada task.

[..]"


Perhaps for the same reasons people generate Verilog files from VHDL
files, and vice versa.


Dr. Adrian Wrigley wrote:

"  Something like this
might allow a "rebranding" of Ada (i.e. a new name, with full buzzword
compliance), and would be ideal to address the "new" paradigm of
multicore/multithreaded processor software, using the lightweight
threading and parallelism absent from Ada as we know it. For those who
know Occam, something like the 'PAR' and "SEQ" constructs are missing in
Ada."


I really fail to see the relevance of multiple processors to
lightweight threading.

Apparently Verilog is used more than VHDL. Verilog apparently has very
little thought given to safe parallelism. (E.g. Jonathan Bromley on
2005 May 20th on news:comp.lang.vhdl :
"[..]

[..] Verilog's cavalier attitude to 
process synchronisation (in summary: who cares?!) is a 
major problem for anyone who has ever stopped to think about 
concurrent programming for more than about five minutes.

[..]")
Papers on multicore topics in the near term are more likely to contain
SystemC(R) or SystemVerilog boasts. Some people do not reason. I was
recently involved in one of the European Commission's major multicore
research projects in which SystemC(R) development was supposedly
going to provide great temporal improvements, but it did not do so
(somehow, I was not allowed to highlight this). Is this a surprise?
From 4.2.1, "The scheduling algorithm", of "IEEE Standard SystemC(R)
Language Reference Manual", IEEE Std 1666(TM)-2005, "Approved 28 March
2006 American National Standards Institute", "Approved 6 December 2005
IEEE-SA Standards Board", supposedly "Published 31 March 2006" even
though the Adobe timestamp indicates 2006 March 29th, ISBN
0-7381-4870-9 SS95505:

"The semantics of the scheduling algorithm are defined in the
following subclauses.
[..]
An implementation may substitute an alternative scheme, provided the
scheduling
semantics given here are retained.
[..]
4.2.1.2 Evaluation phase
From the set of runnable processes, select a process instance and
trigger or resume
its execution. Run the process instance immediately and without
interruption up to
the point where it either returns or calls the function wait.
Since process instances execute without interruption, only a single
process instance
can be running at any one time, and no other process instance can
execute until the
currently executing process instance has yielded control to the
kernel. A process shall
not pre-empt or interrupt the execution of another process. This is
known as co-routine
semantics or co-operative multitasking.
[..]
A process may call the member function request update of a primitive
channel,
which will cause the member function update of that same primitive
channel to be
called back during the very next update phase.
Repeat this step until the set of runnable processes is empty, then go
on to the
update phase.
NOTE 1.The scheduler is not pre-emptive. An application can assume
that a
method process will execute in its entirety without interruption, and
a thread or clocked
thread process will execute the code between two consecutive calls to
function wait
without interruption.
[..]
NOTE 3.An implementation running on a machine that provides hardware
support
for concurrent processes may permit two or more processes to run
concurrently,
provided that the behavior appears identical to the co-routine
semantics defined in
this subclause. In other words, the implementation would be obliged to
analyze any
dependencies between processes and constrain their execution to match
the co-routine
semantics."

Anyone stupid enough to choose C++ deserves all the inevitable woes,
especially as many of the involved parties did not even know C++: the
allowable compilers had to be restricted to a set of compilers which
are not conformant to the C++ standard, as much of the code was not
written in genuine C++. This is not a surprise, as the Open SystemC
Initiative's SystemC(R) reference implementation of the time was
written in an illegal distortion of C++.


Dr. Adrian Wrigley wrote:

"While the obscenities of C-like languages thrive with new additions
seemingly every month, the Pascal family has withered.  Where is
Wirth when you need him?

(don't take it that I dislike C."


I dislike C.

"  Or assembler.  Both have their
legitimate place as low-level languages to get the machine code
you want.  Great for hardware hacking.  Lousy for big teams, complex code)

One can dream..."

C is not great for hardware hacking.




* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-02-28 17:20             ` Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?") Colin Paul Gloster
@ 2007-03-01  9:18               ` Jean-Pierre Rosen
  2007-03-01 11:22               ` Dr. Adrian Wrigley
  2007-03-01 13:23               ` Martin Thompson
  2 siblings, 0 replies; 79+ messages in thread
From: Jean-Pierre Rosen @ 2007-03-01  9:18 UTC (permalink / raw)


Colin Paul Gloster wrote:
> procedure Ada_Is_As_Unsafe_As_VHDL_And_GNAT_Will_Not_Even_Warn is
>    value, Y : Integer;
> begin
>    if False then
>       Value := 0;
>    end if;
>    Y := Value;
> end Ada_Is_As_Unsafe_As_VHDL_And_GNAT_Will_Not_Even_Warn;
>    --GNAT is an abbreviation of "the GNU Ada Translator".
> 
> I do not know Modula-3 and Oberon. Do they allow such a trivially
> detectable accident?
> 

Here is what AdaControl says about it:
Error: IMPROPER_INITIALIZATION: use of uninitialized variable: Value
Error: IMPROPER_INITIALIZATION: variable "value" used before
initialisation

NB: I think this kind of analysis is for external tools, not for 
compilers. Nice when compilers can provide extra warnings, but that's 
not their primary job.
-- 
---------------------------------------------------------
            J-P. Rosen (rosen@adalog.fr)
Visit Adalog's web site at http://www.adalog.fr




* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-02-28 17:20             ` Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?") Colin Paul Gloster
  2007-03-01  9:18               ` Jean-Pierre Rosen
@ 2007-03-01 11:22               ` Dr. Adrian Wrigley
  2007-03-01 11:47                 ` claude.simon
                                   ` (2 more replies)
  2007-03-01 13:23               ` Martin Thompson
  2 siblings, 3 replies; 79+ messages in thread
From: Dr. Adrian Wrigley @ 2007-03-01 11:22 UTC (permalink / raw)


On Wed, 28 Feb 2007 17:20:37 +0000, Colin Paul Gloster wrote:
...
> Dr. Adrian Wrigley wrote:
> 
> "  Something like this
> might allow a "rebranding" of Ada (i.e. a new name, with full buzzword
> compliance), and would be ideal to address the "new" paradigm of
> multicore/multithreaded processor software, using the lightweight
> threading and parallelism absent from Ada as we know it. For those who
> know Occam, something like the 'PAR' and "SEQ" constructs are missing in
> Ada."
> 
> I really fail to see the relevance of multiple processors to
> lightweight threading.

????

If you don't have multiple processors, lightweight threading is
less attractive than if you do?  Inmos/Occam/Transputer was founded
on the basis that lightweight threading was highly relevant to multiple
processors.

Ada has no means of saying "Do these bits concurrently, if you like,
because I don't care what the order of execution is".  And a compiler
can't work it out from the source.  If your CPU has loads of threads,
compiling code with "PAR" style language concurrency is rather useful
and easy.
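
The nearest Ada gets is spelling it out with full tasks.  A sketch,
with Do_One and Do_Two standing for any two independent statements:

   declare
      task A;
      task B;
      task body A is begin Do_One; end A;
      task body B is begin Do_Two; end B;
   begin
      null;  -- the block waits here until both tasks have finished
   end;

That is a lot of ceremony, and a heavyweight thread each, for what
Occam writes as PAR.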
--
Adrian





* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-01 11:22               ` Dr. Adrian Wrigley
@ 2007-03-01 11:47                 ` claude.simon
  2007-03-01 13:57                 ` Dmitry A. Kazakov
  2007-03-01 16:09                 ` Colin Paul Gloster
  2 siblings, 0 replies; 79+ messages in thread
From: claude.simon @ 2007-03-01 11:47 UTC (permalink / raw)


On 1 mar, 12:22, "Dr. Adrian Wrigley"
<a...@linuxchip.demon.co.uk.uk.uk> wrote:
> On Wed, 28 Feb 2007 17:20:37 +0000, Colin Paul Gloster wrote:
>
> ...
>
> > Dr. Adrian Wrigley wrote:
>
> > "  Something like this
> > might allow a "rebranding" of Ada (i.e. a new name, with full buzzword
> > compliance), and would be ideal to address the "new" paradigm of
> > multicore/multithreaded processor software, using the lightweight
> > threading and parallelism absent from Ada as we know it. For those who
> > know Occam, something like the 'PAR' and "SEQ" constructs are missing in
> > Ada."
>
> > I really fail to see the relevance of multiple processors to
> > lightweight threading.
>
> ????
>
> If you don't have multiple processors, lightweight threading is
> less attractive than if you do?  Inmos/Occam/Transputer was founded
> on the basis that lightweight threading was highly relevant to multiple
> processors.
>
> Ada has no means of saying "Do these bits concurrently, if you like,
> because I don't care what the order of execution is".  And a compiler
> can't work it out from the source.  If your CPU has loads of threads,
> compiling code with "PAR" style language concurrency is rather useful
> and easy.
> --
> Adrian

If my memory is OK, Jean-Pierre Rosen had a proposal:

for I in all 1 .. n loop
...
end loop;
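
Presumably meaning that the iterations are declared independent, so
the compiler is free to run them in any order, or in parallel.  A
hypothetical use (A, B and F invented for illustration):

for I in all 1 .. n loop
   A (I) := F (B (I));  -- no iteration reads what another writes
end loop;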





* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-02-28 17:20             ` Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?") Colin Paul Gloster
  2007-03-01  9:18               ` Jean-Pierre Rosen
  2007-03-01 11:22               ` Dr. Adrian Wrigley
@ 2007-03-01 13:23               ` Martin Thompson
  2 siblings, 0 replies; 79+ messages in thread
From: Martin Thompson @ 2007-03-01 13:23 UTC (permalink / raw)


Colin Paul Gloster <Colin_Paul_Gloster@ACM.org> writes:

> I post from news:comp.lang.ada to news:comp.lang.vhdl .
>

I'll leap in then :-)

> Stephen A. Leake wrote in news:news:u649rx29a.fsf@stephe-leake.org on
> news:comp.lang.ada for the benefit of Mike Silva:
>
> "[..]
>
> [..] FPGA development relies heavily on simulation,
> which does not require real hardware."
>
>
> A warning to Mike Silva: supposed simulators for languages chiefly
> used for FPGAs often behave differently to how source code will
> actually behave when synthesized (implemented) on a field programmable
> gate array. This is true of cheap and expensive tools promoted as
> simulators.
>

What do you mean by this?  The VHDL I simulate behaves the same as the
FPGA, unless I do something bad like doing asynchronous design, or
miss a timing constraint.  These are both design problems, not
simulation or language problems.

<snip>
> " multi-dimensional
> arrays) gave a feeling of clarity and integrity absent from software
> development languages."
>
>
> Apparently for many years, VHDL subset tools (the only kind of VHDL
> tools which exist, so even if the language was excellent one would
> need to restrict one's code to what was supported) used to not support
> multi-dimensional arrays.
>

What's this about "only VHDL subset" tools existing?  Modelsim supports
all of VHDL...  It is true that synthesis tools only support a subset of
VHDL, but a lot of that is down to the fact that turning (say) an
access type into hardware is a bit tricky.

Multi-dimensional arrays have worked (even in synthesis) for years in
my experience.

<snip>

(Followup-To trimmed to comp.lang.vhdl)

Cheers,
Martin

-- 
martin.j.thompson@trw.com 
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.conekt.net/electronics.html



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-01 11:22               ` Dr. Adrian Wrigley
  2007-03-01 11:47                 ` claude.simon
@ 2007-03-01 13:57                 ` Dmitry A. Kazakov
  2007-03-01 18:09                   ` Ray Blaak
                                     ` (2 more replies)
  2007-03-01 16:09                 ` Colin Paul Gloster
  2 siblings, 3 replies; 79+ messages in thread
From: Dmitry A. Kazakov @ 2007-03-01 13:57 UTC (permalink / raw)


On Thu, 01 Mar 2007 11:22:32 GMT, Dr. Adrian Wrigley wrote:

> If you don't have multiple processors, lightweight threading is
> less attractive than if you do?  Inmos/Occam/Transputer was founded
> on the basis that lightweight threading was highly relevant to multiple
> processors.
> 
> Ada has no means of saying "Do these bits concurrently, if you like,
> because I don't care what the order of execution is".  And a compiler
> can't work it out from the source.  If your CPU has loads of threads,
> compiling code with "PAR" style language concurrency is rather useful
> and easy.

But par is quite low-level. What would be the semantics of:

   declare
      Thing : X;
   begin
      par
         Foo (Thing);
      and
         Bar (Thing);
      and
         Baz (Thing);
      end par;
   end;

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-01 11:22               ` Dr. Adrian Wrigley
  2007-03-01 11:47                 ` claude.simon
  2007-03-01 13:57                 ` Dmitry A. Kazakov
@ 2007-03-01 16:09                 ` Colin Paul Gloster
  2 siblings, 0 replies; 79+ messages in thread
From: Colin Paul Gloster @ 2007-03-01 16:09 UTC (permalink / raw)


In news:pan.2007.03.01.11.23.01.229462@linuxchip.demon.co.uk.uk.uk
timestamped Thu, 01 Mar 2007 11:22:32 GMT, "Dr. Adrian Wrigley"
<amtw@linuxchip.demon.co.uk.uk.uk> posted:
"On Wed, 28 Feb 2007 17:20:37 +0000, Colin Paul Gloster wrote:
...
> Dr. Adrian Wrigley wrote:
>
> "  Something like this
> might allow a "rebranding" of Ada (i.e. a new name, with full buzzword
> compliance), and would be ideal to address the "new" paradigm of
> multicore/multithreaded processor software, using the lightweight
> threading and parallelism absent from Ada as we know it. For those who
> know Occam, something like the 'PAR' and "SEQ" constructs are missing in
> Ada."
>
> I really fail to see the relevance of multiple processors to
> lightweight threading.

????

If you don't have multiple processors, lightweight threading is
less attractive than if you do?"

I was thinking that heavyweight processes -- whatever that term might mean,
maybe involving many processes, each of which is working on processing
intensive work without threads' unrestricted access to shared memory --
would be suitable for multiple processors.

"  Inmos/Occam/Transputer was founded
on the basis that lightweight threading was highly relevant to multiple
processors."

I reread a little about occam2 and transputers for this post, but I do
not know much about them.

"Ada has no means of saying "Do these bits concurrently, if you like,
because I don't care what the order of execution is"."

How do you interpret part 11 of Section 9: Tasks and Synchronization
of Ada 2005? (On
WWW.ADAIC.com/standards/05rm/html/RM-9.html#I3506
: "[..]

NOTES
11 1  Concurrent task execution may be implemented on multicomputers,
multiprocessors, or with interleaved execution on a single physical
processor. On the other hand, whenever an implementation can determine
that the required semantic effects can be achieved when parts of the
execution of a given task are performed by different physical
processors acting in parallel, it may choose to perform them in this way.

[..]")

Dr. Adrian Wrigley wrote:

"  And a compiler
can't work it out from the source.  If your CPU has loads of threads,
compiling code with "PAR" style language concurrency is rather useful
and easy.
--
Adrian"

Maybe I will read more about this some time.



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-01 13:57                 ` Dmitry A. Kazakov
@ 2007-03-01 18:09                   ` Ray Blaak
  2007-03-02 11:36                   ` Dr. Adrian Wrigley
  2007-03-05 15:23                   ` Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?") Colin Paul Gloster
  2 siblings, 0 replies; 79+ messages in thread
From: Ray Blaak @ 2007-03-01 18:09 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> But par is quite low-level. What would be the semantics of:
> 
>    declare
>       Thing : X;
>    begin
>       par
>          Foo (Thing);
>       and
>          Bar (Thing);
>       and
>          Baz (Thing);
>       end par;
>    end;

Well, that depends on the definitions of Foo, Bar, and Baz, of course :-).

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul.



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Preferred OS, processor family for running embedded Ada?
  2007-02-24 13:59     ` Jacob Sparre Andersen
@ 2007-03-01 19:32       ` Jacob Sparre Andersen
  2007-03-01 20:22         ` Mike Silva
  0 siblings, 1 reply; 79+ messages in thread
From: Jacob Sparre Andersen @ 2007-03-01 19:32 UTC (permalink / raw)


Jacob Sparre Andersen <sparre@nbi.dk> wrote:

> One of my acquaintances from the local Linux user group works with a
> Linux/IA32-based embedded kit.  If I remember correctly, the price
> is less than 200 USD for a full system (and the size of the embedded
> unit is less than 10 cm³).  I'll ask him about the source of the
> kit, and let you know when I have an answer.

The system I remembered is called UNC20, but Poul Erik says that the
newer UNC90 is preferable, since it has proper hardware memory
management.

Greetings,

Jacob
-- 
I'm giving a short talk at Game Developers Conference (Mobile
Game Innovation Hunt) Monday afternoon:
          http://www.gdconf.com/conference/gdcmobile_hunt.htm



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Preferred OS, processor family for running embedded Ada?
  2007-03-01 19:32       ` Jacob Sparre Andersen
@ 2007-03-01 20:22         ` Mike Silva
  0 siblings, 0 replies; 79+ messages in thread
From: Mike Silva @ 2007-03-01 20:22 UTC (permalink / raw)


On Mar 1, 2:32 pm, Jacob Sparre Andersen <spa...@nbi.dk> wrote:
> Jacob Sparre Andersen <spa...@nbi.dk> wrote:
>
> > One of my acquaintances from the local Linux user group works with a
> > Linux/IA32-based embedded kit.  If I remember correctly, the price
> > is less than 200 USD for a full system (and the size of the embedded
> > unit is less than 10 cm³).  I'll ask him about the source of the
> > kit, and let you know when I have an answer.
>
> The system I remembered is called UNC20, but Poul Erik says that the
> newer UNC90 is preferable, since it has proper hardware memory
> management.

Thanks for the update.  I've ended up going down a somewhat different
path, for now at least.  I've gotten this board http://www.olimex.com/dev/lpc-e2294rb.html
because it has just about the right mix of horsepower and features for
some ideas I have.  I know this board isn't big enough to run Linux or
FreeBSD, so I am going to look at Ada on RTEMS instead.  But again,
thanks for the followup, and I am going to look up the UNC90 as well.

Mike




^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-01 13:57                 ` Dmitry A. Kazakov
  2007-03-01 18:09                   ` Ray Blaak
@ 2007-03-02 11:36                   ` Dr. Adrian Wrigley
  2007-03-02 16:32                     ` Dmitry A. Kazakov
  2007-03-05 15:23                   ` Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?") Colin Paul Gloster
  2 siblings, 1 reply; 79+ messages in thread
From: Dr. Adrian Wrigley @ 2007-03-02 11:36 UTC (permalink / raw)


On Thu, 01 Mar 2007 14:57:01 +0100, Dmitry A. Kazakov wrote:

> On Thu, 01 Mar 2007 11:22:32 GMT, Dr. Adrian Wrigley wrote:
> 
>> If you don't have multiple processors, lightweight threading is
>> less attractive than if you do?  Inmos/Occam/Transputer was founded
>> on the basis that lightweight threading was highly relevant to multiple
>> processors.
>> 
>> Ada has no means of saying "Do these bits concurrently, if you like,
>> because I don't care what the order of execution is".  And a compiler
>> can't work it out from the source.  If your CPU has loads of threads,
>> compiling code with "PAR" style language concurrency is rather useful
>> and easy.
> 
> But par is quite low-level. What would be the semantics of:
> 
>    declare
>       Thing : X;
>    begin
>       par
>          Foo (Thing);
>       and
>          Bar (Thing);
>       and
>          Baz (Thing);
>       end par;
>    end;

Do Foo, Bar and Baz in any order or concurrently, all accessing Thing.

Roughly equivalent to doing the same operations in three separate
tasks.  Thing could be a protected object, if concurrent writes
are prohibited.  Seems simple enough!

I'm looking for something like Cilk, but even the concurrent loop
(JPR's for I in all 1 .. n loop?) would be a help.
--
Adrian




^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-02 11:36                   ` Dr. Adrian Wrigley
@ 2007-03-02 16:32                     ` Dmitry A. Kazakov
  2007-03-03  0:00                       ` Dr. Adrian Wrigley
  2007-03-03  1:58                       ` Ray Blaak
  0 siblings, 2 replies; 79+ messages in thread
From: Dmitry A. Kazakov @ 2007-03-02 16:32 UTC (permalink / raw)


On Fri, 02 Mar 2007 11:36:22 GMT, Dr. Adrian Wrigley wrote:

> On Thu, 01 Mar 2007 14:57:01 +0100, Dmitry A. Kazakov wrote:
> 
>> On Thu, 01 Mar 2007 11:22:32 GMT, Dr. Adrian Wrigley wrote:
>> 
>>> If you don't have multiple processors, lightweight threading is
>>> less attractive than if you do?  Inmos/Occam/Transputer was founded
>>> on the basis that lightweight threading was highly relevant to multiple
>>> processors.
>>> 
>>> Ada has no means of saying "Do these bits concurrently, if you like,
>>> because I don't care what the order of execution is".  And a compiler
>>> can't work it out from the source.  If your CPU has loads of threads,
>>> compiling code with "PAR" style language concurrency is rather useful
>>> and easy.
>> 
>> But par is quite low-level. What would be the semantics of:
>> 
>>    declare
>>       Thing : X;
>>    begin
>>       par
>>          Foo (Thing);
>>       and
>>          Bar (Thing);
>>       and
>>          Baz (Thing);
>>       end par;
>>    end;
> 
> Do Foo, Bar and Baz in any order or concurrently, all accessing Thing.

That's the question. If they merely execute in an arbitrary order, being
mutually exclusive, then the above is a kind of select with anonymous
accepts invoking Foo, Bar and Baz. The semantics is clean.

> Roughly equivalent to doing the same operations in three separate
> tasks.  Thing could be a protected object, if concurrent writes
> are prohibited.  Seems simple enough!

This is a very different variant:

   declare
      Thing : X;
   begin
      declare -- par
         task Alt_1; task Alt_2; task Alt_3;
         task body Alt_1 is
         begin
             Foo (Thing);
         end Alt_1;
         task body Alt_2 is
         begin
             Bar (Thing);
         end Alt_2;
         task body Alt_3 is
         begin
             Baz (Thing);
         end Alt_3;
      begin
         null;
      end; -- par

If par is a sugar for this, then Thing might easily get corrupted. The
problem with such par is that the rules of nesting and visibility for the
statements, which are otherwise safe, become very dangerous in the case of
par.

Another problem is that Thing cannot be a protected object. Clearly Foo,
Bar and Baz resynchronize themselves on Thing after updating its parts. But
the compiler cannot know this. It also does not know that the updates do
not influence each other. It does not know that the state of Thing is
invalid until resynchronization. So it will serialize alternatives on write
access to Thing. (I cannot imagine a use case where Foo, Bar and Baz would
be pure. There seems to always be a shared outcome which would block them.)
Further Thing should be locked for the outer world while Foo, Bar, Baz are
running. So the standard functionality of protected objects looks totally
wrong here.

> I'm looking for something like Cilk, but even the concurrent loop
> (JPR's for I in all 1 .. n loop?) would be a help.

Maybe, just a guess, the functional decomposition rather than statements
could be more appropriate here. The alternatives would access their
arguments by copy-in and resynchronize by copy-out.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-02 16:32                     ` Dmitry A. Kazakov
@ 2007-03-03  0:00                       ` Dr. Adrian Wrigley
  2007-03-03 11:00                         ` Dmitry A. Kazakov
  2007-03-03  1:58                       ` Ray Blaak
  1 sibling, 1 reply; 79+ messages in thread
From: Dr. Adrian Wrigley @ 2007-03-03  0:00 UTC (permalink / raw)


On Fri, 02 Mar 2007 17:32:26 +0100, Dmitry A. Kazakov wrote:

> On Fri, 02 Mar 2007 11:36:22 GMT, Dr. Adrian Wrigley wrote:
> 
>> On Thu, 01 Mar 2007 14:57:01 +0100, Dmitry A. Kazakov wrote:
>> 
>>> On Thu, 01 Mar 2007 11:22:32 GMT, Dr. Adrian Wrigley wrote:
>>> 
>>>> If you don't have multiple processors, lightweight threading is
>>>> less attractive than if you do?  Inmos/Occam/Transputer was founded
>>>> on the basis that lightweight threading was highly relevant to multiple
>>>> processors.
>>>> 
>>>> Ada has no means of saying "Do these bits concurrently, if you like,
>>>> because I don't care what the order of execution is".  And a compiler
>>>> can't work it out from the source.  If your CPU has loads of threads,
>>>> compiling code with "PAR" style language concurrency is rather useful
>>>> and easy.
>>> 
>>> But par is quite low-level. What would be the semantics of:
>>> 
>>>    declare
>>>       Thing : X;
>>>    begin
>>>       par
>>>          Foo (Thing);
>>>       and
>>>          Bar (Thing);
>>>       and
>>>          Baz (Thing);
>>>       end par;
>>>    end;
>> 
>> Do Foo, Bar and Baz in any order or concurrently, all accessing Thing.
> 
> That's the question. If they just have an arbitrary execution order being
> mutually exclusive then the above is a kind of select with anonymous
> accepts invoking Foo, Bar, Baz. The semantics is clean.
> 
>> Roughly equivalent to doing the same operations in three separate
>> tasks.  Thing could be a protected object, if concurrent writes
>> are prohibited.  Seems simple enough!
> 
> This is a very different variant:
> 
>    declare
>       Thing : X;
>    begin
>       declare -- par
>          task Alt_1; task Alt_2; task Alt_3;
>          task body Alt_1 is
>          begin
>              Foo (Thing);
>          end Alt_1;
>          task body Alt_2 is
>          begin
>              Bar (Thing);
>          end Alt_2;
>          task body Alt_3 is
>          begin
>              Baz (Thing);
>          end Alt_3;
>       begin
>          null;
>       end; -- par
> 
> If par is a sugar for this, then Thing might easily get corrupted. The
> problem with such par is that the rules of nesting and visibility for the
> statements, which are otherwise safe, become very dangerous in the case of
> par.

This is what I was thinking.

Syntax might be even simpler:
  declare
     Thing : X;
  begin par
     Foo (Thing);
     Bar (Thing);
     Baz (Thing);
  end par;

Thing won't get corrupted if the programmer knows what they're doing!
In the case of pure functions, there is "obviously" no problem:

  declare
     Thing : X := InitThing;
  begin par
     A1 := Foo (Thing);
     A2 := Bar (Thing);
     A3 := Baz (Thing);
  end par;
  return A1+A2+A3;
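
For reference, this particular pure-function case can already be
expressed in today's Ada, at the cost of some boilerplate.  A minimal
sketch, assuming Foo, Bar and Baz return Integer (the names are taken
from the example above; the types are illustrative):

   function Combined return Integer is
      Thing      : constant X := InitThing;
      A1, A2, A3 : Integer := 0;
   begin
      declare
         task T1;  task T2;  task T3;
         task body T1 is begin A1 := Foo (Thing); end T1;
         task body T2 is begin A2 := Bar (Thing); end T2;
         task body T3 is begin A3 := Baz (Thing); end T3;
      begin
         null;
      end;  -- the block completes only after T1, T2 and T3 terminate
      return A1 + A2 + A3;  -- safe: each task wrote a distinct variable
   end Combined;

The boilerplate is exactly what a par statement would remove.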

In the case of procedures, there are numerous reasonable uses.
Perhaps the three procedures read Thing, and output three separate files.
Or maybe they write different parts of Thing.  Maybe they validate
different properties of Thing, and raise an exception if a fault is found.
Perhaps they update statistics stored in a protected object, not shown.

The most obvious case is if the procedures are called on different
objects.  Next most likely is if they are pure functions

> Another problem is that Thing cannot be a protected object. Clearly Foo,
> Bar and Baz resynchronize themselves on Thing after updating its parts. But
> the compiler cannot know this. It also does not know that the updates do
> not influence each other. It does not know that the state of Thing is
> invalid until resynchronization. So it will serialize alternatives on write
> access to Thing. (I cannot imagine a use case where Foo, Bar and Baz would
> be pure. There seems to always be a shared outcome which would block them.)
> Further Thing should be locked for the outer world while Foo, Bar, Baz are
> running. So the standard functionality of protected objects looks totally
> wrong here.

Could Thing be composed of protected objects?  That way updates
would be serialised but wouldn't necessarily block the other procedures.

Maybe the procedures are very slow, but only touch Thing at the end?
Couldn't they run concurrently, and be serialised in an arbitrary order
at the end?

Nothing in this problem is different from the issues of doing it with
separate tasks.  So why is this any more problematic?

The semantics I want permit serial execution in any order.  And permit
operation even with a very large number of parallel statements in
effect.  Imagine a recursive call with each level having many parallel
statements.  Creating a task for each directly would probably break.
Something like an FFT, for example.  FFT the upper and lower halves
of Thing in parallel.  Combine serially.
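
A hedged sketch of that recursive pattern in today's Ada, bounding the
task count with an explicit depth limit (Vector, Process and Combine
are placeholders for the real data type and work):

   type Vector is array (Integer range <>) of Float;

   procedure Transform (Data : in out Vector; Depth : Natural) is
   begin
      if Depth = 0 or else Data'Length < 2 then
         Process (Data);                    -- serial base case
      else
         declare
            Mid : constant Integer := (Data'First + Data'Last) / 2;
            task Lower;
            task body Lower is
            begin
               Transform (Data (Data'First .. Mid), Depth - 1);
            end Lower;
         begin
            -- the upper half runs in the current task, the lower half
            -- in task Lower; the halves are disjoint slices
            Transform (Data (Mid + 1 .. Data'Last), Depth - 1);
         end;                               -- waits for Lower to finish
         Combine (Data);                    -- serial combine step
      end if;
   end Transform;

With Depth = 3 this creates at most seven extra tasks, however deep the
recursion goes below that.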

Exception semantics would probably differ.  Any statement raising an
exception would stop all other par statements(?)

The compiler should be able to generate code which generates a
reasonable number of threads, depending on the hardware being used.

>> I'm looking for something like Cilk, but even the concurrent loop
>> (JPR's for I in all 1 .. n loop?) would be a help.
> 
> Maybe, just a guess, the functional decomposition rather than statements
> could be more appropriate here. The alternatives would access their
> arguments by copy-in and resynchronize by copy-out.

Maybe you're right.  But I can't see how to glue this in with
Ada (or VHDL) semantics.
--
Adrian





^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-02 16:32                     ` Dmitry A. Kazakov
  2007-03-03  0:00                       ` Dr. Adrian Wrigley
@ 2007-03-03  1:58                       ` Ray Blaak
  2007-03-03  8:14                         ` Pascal Obry
  2007-03-03 11:00                         ` Dmitry A. Kazakov
  1 sibling, 2 replies; 79+ messages in thread
From: Ray Blaak @ 2007-03-03  1:58 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> If par is a sugar for this, then Thing might easily get corrupted. The
> problem with such par is that the rules of nesting and visibility for the
> statements, which are otherwise safe, become very dangerous in the case of
> par.
> 
> Another problem is that Thing cannot be a protected object. 

I am somewhat rusty on my Ada tasking knowledge, but why can't Thing be a
protected object?

It seems to me that is precisely the kind of synchronization control mechanism
you want to be able to have here.
-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul.



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-03  1:58                       ` Ray Blaak
@ 2007-03-03  8:14                         ` Pascal Obry
  2007-03-03 11:00                         ` Dmitry A. Kazakov
  1 sibling, 0 replies; 79+ messages in thread
From: Pascal Obry @ 2007-03-03  8:14 UTC (permalink / raw)
  To: Ray Blaak

Ray Blaak wrote:
> I am somewhat rusty on my Ada tasking knowledge, but why can't Thing be a
> protected object?

I don't think this is true. Thing can be a protected object and be passed
to some procedures. No problem here, I would say, and probably the right
approach.

> It seems to me that is precisely the kind of synchronization control mechanism
> you want to be able to have here.

Agreed.

Pascal.

-- 

--|------------------------------------------------------
--| Pascal Obry                           Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--|              http://www.obry.net
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-03  0:00                       ` Dr. Adrian Wrigley
@ 2007-03-03 11:00                         ` Dmitry A. Kazakov
  2007-03-03 11:27                           ` Jonathan Bromley
  0 siblings, 1 reply; 79+ messages in thread
From: Dmitry A. Kazakov @ 2007-03-03 11:00 UTC (permalink / raw)


On Sat, 03 Mar 2007 00:00:52 GMT, Dr. Adrian Wrigley wrote:

> On Fri, 02 Mar 2007 17:32:26 +0100, Dmitry A. Kazakov wrote:
> 
>> On Fri, 02 Mar 2007 11:36:22 GMT, Dr. Adrian Wrigley wrote:
>> 
>>> On Thu, 01 Mar 2007 14:57:01 +0100, Dmitry A. Kazakov wrote:
>>> 
>>>> On Thu, 01 Mar 2007 11:22:32 GMT, Dr. Adrian Wrigley wrote:
>>>> 
>>> Roughly equivalent to doing the same operations in three separate
>>> tasks.  Thing could be a protected object, if concurrent writes
>>> are prohibited.  Seems simple enough!
>> 
>> This is a very different variant:
>> 
>>    declare
>>       Thing : X;
>>    begin
>>       declare -- par
>>          task Alt_1; task Alt_2; task Alt_3;
>>          task body Alt_1 is
>>          begin
>>              Foo (Thing);
>>          end Alt_1;
>>          task body Alt_2 is
>>          begin
>>              Bar (Thing);
>>          end Alt_2;
>>          task body Alt_3 is
>>          begin
>>              Baz (Thing);
>>          end Alt_3;
>>       begin
>>          null;
>>       end; -- par
>> 
>> If par is a sugar for this, then Thing might easily get corrupted. The
>> problem with such par is that the rules of nesting and visibility for the
>> statements, which are otherwise safe, become very dangerous in the case of
>> par.
> 
> This is what I was thinking.
> 
> Syntax might be even simpler:
>   declare
>      Thing : X;
>   begin par
>      Foo (Thing);
>      Bar (Thing);
>      Baz (Thing);
>   end par;
> 
> Thing won't get corrupted if the programmer knows what they're doing!

Surely, but it becomes a pitfall for those who don't. The construct is
inherently unsafe, because it makes no sense without some mutable Thing
or equivalent. This mutable thing is either accessed unsafely, or else
the concurrency gets killed.

> In the case of pure functions, there is "obviously" no problem:
> 
>   declare
>      Thing : X := InitThing;
>   begin par
>      A1 := Foo (Thing);
>      A2 := Bar (Thing);
>      A3 := Baz (Thing);
>   end par;
>   return A1+A2+A3;
> 
> In the case of procedures, there are numerous reasonable uses.
> Perhaps the three procedures read Thing, and output three separate files.
> Or maybe they write different parts of Thing.  Maybe they validate
> different properties of Thing, and raise an exception if a fault is found.
> Perhaps they update statistics stored in a protected object, not shown.
> 
> The most obvious case is if the procedures are called on different
> objects.  Next most likely is if they are pure functions.

The problem is that there always exists the final "A1+A2+A3", whose
semantics is in question. The alternatives resynchronize on "A1+A2+A3", and
I see no obvious way to express this. A PAR statement would not really help
to decompose it.

(What you have done is replace the mutable Thing with the mutable set
{A1,A2,A3}. Rename {A1,A2,A3} to Thing, and the problem is still there.)

>> Another problem is that Thing cannot be a protected object. Clearly Foo,
>> Bar and Baz resynchronize themselves on Thing after updating its parts. But
>> the compiler cannot know this. It also does not know that the updates do
>> not influence each other. It does not know that the state of Thing is
>> invalid until resynchronization. So it will serialize alternatives on write
>> access to Thing. (I cannot imagine a use case where Foo, Bar and Baz would
>> be pure. There seems to always be a shared outcome which would block them.)
>> Further Thing should be locked for the outer world while Foo, Bar, Baz are
>> running. So the standard functionality of protected objects looks totally
>> wrong here.
> 
> Could Thing be composed of protected objects?  That way updates
> would be serialised but wouldn't necessarily block the other procedures.

That could be a "hierarchical" mutex. But mutexes are themselves very
low-level. The unsafety would still be there; it would just show itself
as deadlocks rather than as corrupted data.

> Maybe the procedures are very slow, but only touch Thing at the end?
> Couldn't they run concurrently, and be serialised in an arbitrary order
> at the end?

That is the key issue, IMO. The ability to chop large chunks, in which
the procedures run independently most of the time, into independent and
serialized parts is what the decomposition is all about...

> Nothing in this problem is different from the issues of doing it with
> separate tasks.  So why is this any more problematic?

Because tasks additionally have safe synchronization and data exchange
mechanisms, while PAR should rely on inherently unsafe memory sharing.

> The semantics I want permit serial execution in any order.  And permit
> operation even with a very large number of parallel statements in
> effect.  Imagine a recursive call with each level having many parallel
> statements.  Creating a task for each directly would probably break.
> Something like an FFT, for example.  FFT the upper and lower halves
> of Thing in parallel.  Combine serially.

Yes, and the run-time could assign the worker tasks from some pool,
fully transparently to the program. That would be very cool.

> Exception semantics would probably differ.  Any statement raising an
> exception would stop all other par statements(?)

But not by abort; rather, it should wait for the next synchronization point
and propagate an exception out of there, so that the alternatives might
clean up the temporary objects they create. (The synchronization points
could be explicit, for example when an alternative calls an entry
or a procedure of a shared thing.)

> The compiler should be able to generate code which generates a
> reasonable number of threads, depending on the hardware being used.

Yes

>>> I'm looking for something like Cilk, but even the concurrent loop
>>> (JPR's for I in all 1 .. n loop?) would be a help.
>> 
>> Maybe, just a guess, the functional decomposition rather than statements
>> could be more appropriate here. The alternatives would access their
>> arguments by copy-in and resynchronize by copy-out.
> 
> Maybe you're right.  But I can't see how to glue this in with
> Ada (or VHDL) semantics.

That is the most difficult part! (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-03  1:58                       ` Ray Blaak
  2007-03-03  8:14                         ` Pascal Obry
@ 2007-03-03 11:00                         ` Dmitry A. Kazakov
  2007-03-03 21:13                           ` Ray Blaak
  1 sibling, 1 reply; 79+ messages in thread
From: Dmitry A. Kazakov @ 2007-03-03 11:00 UTC (permalink / raw)


On Sat, 03 Mar 2007 01:58:35 GMT, Ray Blaak wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
>> If par is a sugar for this, then Thing might easily get corrupted. The
>> problem with such par is that the rules of nesting and visibility for the
>> statements, which are otherwise safe, become very dangerous in the case of
>> par.
>> 
>> Another problem is that Thing cannot be a protected object. 
> 
> I am somewhat rusty on my Ada tasking knowledge, but why can't Thing be a
> protected object?

I tried to explain it in my previous post.

When Thing is a protected object, its procedures and entries, called from
the concurrent alternatives, are all mutually exclusive. This is
not the semantics expected from PAR. Probably it would be better to rewrite
it as:

   declare
      Thing : X;
   begin
      par -- Though this appears concurrent, it is not
         Thing.Foo;
      and
         Thing.Bar;
      and
         Thing.Baz;
      end par;
   end;

> It seems to me that is precisely the kind of synchronization control mechanism
> you want to be able to have here.

No. The implied semantics of PAR is such that Thing should be accessed from
alternatives without interlocking because one *suggests* that the updates
are mutually independent. When Thing is visible from outside it should be
blocked by PAR for everyone else. This is not the behaviour of a protected
object. It is rather a "hierarchical" mutex.
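
To make the serialization concrete, a minimal sketch of Thing as a
protected object (State and the operation bodies are only illustrative):

   protected type Shared_X is
      procedure Foo;
      procedure Bar;
      procedure Baz;
   private
      State : Integer := 0;
   end Shared_X;

   protected body Shared_X is
      -- each procedure runs with exclusive access to State, so calls
      -- made from three concurrent tasks execute one at a time
      procedure Foo is begin State := State + 1; end Foo;
      procedure Bar is begin State := State * 2; end Bar;
      procedure Baz is begin State := State - 1; end Baz;
   end Shared_X;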

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-03 11:00                         ` Dmitry A. Kazakov
@ 2007-03-03 11:27                           ` Jonathan Bromley
  2007-03-03 12:12                             ` Simon Farnsworth
  2007-03-03 13:40                             ` Dr. Adrian Wrigley
  0 siblings, 2 replies; 79+ messages in thread
From: Jonathan Bromley @ 2007-03-03 11:27 UTC (permalink / raw)


On Sat, 3 Mar 2007 12:00:08 +0100, "Dmitry A. Kazakov"
<mailbox@dmitry-kazakov.de> wrote:

>Because tasks additionally have safe synchronization and data exchange
>mechanisms, while PAR should rely on inherently unsafe memory sharing.

The PAR that I'm familiar with (from CSP/occam) most certainly does
*not* have "inherently unsafe memory sharing".  There seems to be
an absurd amount of wheel-reinvention going on in this thread.

>> The semantics I want permit serial execution in any order.  And permit
>> operation even with a very large number of parallel statements in
>> effect.  Imagine a recursive call with each level having many parallel
>> statements.  Creating a task for each directly would probably break.
>> Something like an FFT, for example.  FFT the upper and lower halves
>> of Thing in parallel.  Combine serially.
>
>Yes, and the run-time could assign the worker tasks from some pool,
>fully transparently to the program. That would be very cool.

And easy to do, and done many times before.

>> The compiler should be able to generate code which generates a
>> reasonable number of threads, depending on the hardware being used.
>
>Yes

For heaven's sake...   You have a statically-determinable number of
processors.  It's your (or your compiler's) choice whether each of
those processors runs a single thread, or somehow runs multiple
threads.   If each processor is entitled to run multiple threads, then
there's no reason why the number and structure of cooperating
threads should not be dynamically variable.  If you choose to run
one thread on each processor, your thread structure is similarly
static.  Hardware people have been obliged to think about this
kind of thing for decades.  Software people seem to have a 
pretty good grip on it too, if the textbooks and papers I've read
are anything to go by.  Why is it suddenly such a big deal?

>> Maybe you're right.  But I can't see how to glue this in with
>> Ada (or VHDL) semantics.

In VHDL, a process represents a single statically-constructed
thread.  It talks to its peers in an inherently safe way through
signals.  With this mechanism, together with dynamic memory
allocation, you can easily fake-up whatever threading regime
takes your fancy.  You probably wouldn't bother because
there are more convenient tools to do such things in software
land, but it can be done.  In hardware you can do exactly the
same thing, but one (or more) of your processes must then 
take responsibility for emulating the dynamic memory allocation,
carving up some real static physical memory according to 
whatever strategy you choose to implement.

>That is the most difficult part! (:-))

Maybe.  But then again, maybe organising the structure of
the actual application is the most difficult part, and this
vapid rambling about things that are already well-understood
is actually rather straightforward.
-- 
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which 
are not the views of Doulos Ltd., unless specifically stated.



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-03 11:27                           ` Jonathan Bromley
@ 2007-03-03 12:12                             ` Simon Farnsworth
  2007-03-03 14:07                               ` Dr. Adrian Wrigley
  2007-03-03 13:40                             ` Dr. Adrian Wrigley
  1 sibling, 1 reply; 79+ messages in thread
From: Simon Farnsworth @ 2007-03-03 12:12 UTC (permalink / raw)


Jonathan Bromley wrote:

> On Sat, 3 Mar 2007 12:00:08 +0100, "Dmitry A. Kazakov"
> <mailbox@dmitry-kazakov.de> wrote:
> 
>>> The compiler should be able to generate code which generates a
>>> reasonable number of threads, depending on the hardware being used.
>>
>>Yes
> 
> For heaven's sake...   You have a statically-determinable number of
> processors.  It's your (or your compiler's) choice whether each of
> those processors runs a single thread, or somehow runs multiple
> threads.   If each processor is entitled to run multiple threads, then
> there's no reason why the number and structure of cooperating
> threads should not be dynamically variable.  If you choose to run
> one thread on each processor, your thread structure is similarly
> static.  Hardware people have been obliged to think about this
> kind of thing for decades.  Software people seem to have a
> pretty good grip on it too, if the textbooks and papers I've read
> are anything to go by.  Why is it suddenly such a big deal?
> 
Not disagreeing with most of what you're saying, but I do feel the need to
point out the existence of systems with hotpluggable CPUs. Sun and IBM have
both sold systems for some years where CPUs can be added and removed at
runtime; software is expected to just cope with this.

Also, in the software domain, there is a cost to switching between different
threads. Thus, in software, the aim is to limit the number of runnable
threads to the number of active CPUs. If there are more threads runnable
than CPUs available, some CPU time is wasted switching between threads,
which is normally undesirable behaviour.
-- 
Simon Farnsworth



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-03 11:27                           ` Jonathan Bromley
  2007-03-03 12:12                             ` Simon Farnsworth
@ 2007-03-03 13:40                             ` Dr. Adrian Wrigley
  2007-03-03 15:26                               ` Jonathan Bromley
  2007-03-05 15:36                               ` Colin Paul Gloster
  1 sibling, 2 replies; 79+ messages in thread
From: Dr. Adrian Wrigley @ 2007-03-03 13:40 UTC (permalink / raw)


On Sat, 03 Mar 2007 11:27:50 +0000, Jonathan Bromley wrote:

> On Sat, 3 Mar 2007 12:00:08 +0100, "Dmitry A. Kazakov"
> <mailbox@dmitry-kazakov.de> wrote:
> 
>>Because tasks additionally have safe synchronization and data exchange
>>mechanisms, while PAR should rely on inherently unsafe memory sharing.
> 
> The PAR that I'm familiar with (from CSP/occam) most certainly does
> *not* have "inherently unsafe memory sharing".  There seems to be
> an absurd amount of wheel-reinvention going on in this thread.

I think reinvention is necessary.  Whatever "par" semantics Occam had
is not available in Ada (or C, C++, Perl or whatever).
It was considered useful then - bring it back!

>>> The semantics I want permit serial execution in any order.  And permit
>>> operation even with a very large number of parallel statements in
>>> effect.  Imagine a recursive call with each level having many parallel
>>> statements.  Creating a task for each directly would probably break.
>>> Something like an FFT, for example.  FFT the upper and lower halves
>>> of Thing in parallel.  Combine serially.
>>
>>Yes, and the run-time could assign the worker tasks from some pool,
>>fully transparently to the program. That would be very cool.
> 
> And easy to do, and done many times before.

How do you do this in Ada?  Or VHDL?  It's been done many times
before, yes, but not delivered in any currently usable form for
the general programmer :(  It's not in any mainstream language I know.

>>> The compiler should be able to generate code which generates a
>>> reasonable number of threads, depending on the hardware being used.
>>
>>Yes
> 
> For heaven's sake...   You have a statically-determinable number of
> processors.  It's your (or your compiler's) choice whether each of
> those processors runs a single thread, or somehow runs multiple
> threads.   If each processor is entitled to run multiple threads, then
> there's no reason why the number and structure of cooperating
> threads should not be dynamically variable.  If you choose to run
> one thread on each processor, your thread structure is similarly

Of course.  But how do I make this choice with the OSs and languages
of today?  "nice" doesn't seem to be able to control
this when code is written in Ada or VHDL.  Nor is it defined
anywhere in the source code.

> static.  Hardware people have been obliged to think about this
> kind of thing for decades.  Software people seem to have a 
> pretty good grip on it too, if the textbooks and papers I've read
> are anything to go by.  Why is it suddenly such a big deal?

It's been a big deal for a long time as far as I'm concerned.
It's not a matter of "invention" mostly, but one of availability
and standards.  There is no means in Ada to say "run this in
a separate task, if appropriate".  Only a few academic
and experimental tools offer the flexibility.  Papers /= practice.

>>> Maybe you're right.  But I can't see how to glue this in with
>>> Ada (or VHDL) semantics.
> 
> In VHDL, a process represents a single statically-constructed
> thread.  It talks to its peers in an inherently safe way through
> signals.  With this mechanism, together with dynamic memory
> allocation, you can easily fake-up whatever threading regime
> takes your fancy.  You probably wouldn't bother because
> there are more convenient tools to do such things in software
> land, but it can be done.

I'm not sure what you're talking about here.  Do you mean like any/all of
Split-C, Cilk, C*, ZPL, HPF, F, data-parallel C, MPI-1, MPI-2, OpenMP,
ViVA, MOSIX, PVM, SVM, Paderborn BSP, Oxford BSP toolset and IBM's TSpaces?

Specifying and using fine-grain parallelism requires language,
compiler and hardware support, I think.

Consider:
   begin par
      x := sin(theta);
      y := cos(theta);
   end par;

you probably *do* want to create a new thread, if thread creation
and destruction is much faster than the function calls.  You don't
know this at compile-time, because this depends on the library in use,
and the actual parameters.  Maybe X, Y are of dynamically allocated
length (multi-precision).
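
For scale, here is roughly what those two lines cost to express with
today's tasking (a sketch; I assume Theta, X and Y are Float and that
Sin and Cos come from Ada.Numerics.Elementary_Functions):

   declare
      task Compute_X;
      task body Compute_X is
      begin
         X := Sin (Theta);  -- runs concurrently with the Cos call below
      end Compute_X;
   begin
      Y := Cos (Theta);
   end;  -- exits only after Compute_X terminates, so X and Y are ready

Ten lines of bookkeeping, plus a task creation whose cost may well
exceed that of the two calls, for what "par" says in two lines.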

You can't justify designing hardware with very short thread
creation/destruction times, unless the software can be written
to take advantage.  But none of the mainstream languages
allow fine grain reordering and concurrency to be specified.
That's the Catch-22 that Inmos/Occam solved.  Technically.

The need is emerging again, now that more threads on a chip
are easier to provide than a higher sequential instruction rate.

>  In hardware you can do exactly the
> same thing, but one (or more) of your processes must then 
> take responsibility for emulating the dynamic memory allocation,
> carving up some real static physical memory according to 
> whatever strategy you choose to implement.
> 
>>That is the most difficult part! (:-))
> 
> Maybe.  But then again, maybe organising the structure of
> the actual application is the most difficult part, and this

This is sometimes true.

> vapid rambling about things that are already well-understood
> is actually rather straightforward.

Somewhere our models don't mesh.  What is "straightforward" to
you is "impossible" for me.  What syntax do I use, and which
compiler, OS and processor do I need to specify and exploit
fine-grain concurrency?

In 1987, the answers were "par", Occam, Transputer. Twenty
years later, Ada (or VHDL, C++, C#), Linux (or Windows), Niagara
(or Tukwila, XinC, ClearSpeed, Cell) do not offer us anything
remotely similar.  In fact, in twenty years, things have
got worse :(
--
Adrian





^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-03 12:12                             ` Simon Farnsworth
@ 2007-03-03 14:07                               ` Dr. Adrian Wrigley
  2007-03-03 17:28                                 ` Pascal Obry
  0 siblings, 1 reply; 79+ messages in thread
From: Dr. Adrian Wrigley @ 2007-03-03 14:07 UTC (permalink / raw)


On Sat, 03 Mar 2007 12:12:10 +0000, Simon Farnsworth wrote:

> Jonathan Bromley wrote:
> 
>> On Sat, 3 Mar 2007 12:00:08 +0100, "Dmitry A. Kazakov"
>> <mailbox@dmitry-kazakov.de> wrote:
>> 
>>>> The compiler should be able to generate code which generates a
>>>> reasonable number of threads, depending on the hardware being used.
>>>
>>>Yes
>> 
>> For heaven's sake...   You have a statically-determinable number of
>> processors.  It's your (or your compiler's) choice whether each of
>> those processors runs a single thread, or somehow runs multiple
>> threads.   If each processor is entitled to run multiple threads, then
>> there's no reason why the number and structure of cooperating
>> threads should not be dynamically variable.  If you choose to run
>> one thread on each processor, your thread structure is similarly
>> static.  Hardware people have been obliged to think about this
>> kind of thing for decades.  Software people seem to have a
>> pretty good grip on it too, if the textbooks and papers I've read
>> are anything to go by.  Why is it suddenly such a big deal?
>> 
> Not disagreeing with most of what you're saying, but I do feel the need to
> point out the existence of systems with hotpluggable CPUs. Sun and IBM have
> both sold systems for some years where CPUs can be added and removed at
> runtime; software is expected to just cope with this.
> 
> Also, in the software domain, there is a cost to switching between different
> threads. Thus, in software, the aim is to limit the number of runnable
> threads to the number of active CPUs. If there are more threads runnable
> than CPUs available, some CPU time is wasted switching between threads,
> which is normally undesirable behaviour.

This is part of the problem.  Parallelism has to be *inhibited* by
explicit serialisation, to limit the number of threads created.

So a construct like (in Ada):

for I in Truth'Range loop
   Z := Z xor Truth (I);
end loop;

deliberately forces a serial execution order, even though we
know that the order does not matter at all in this case.

There is no effective construct to permit, but not require, concurrency.

The software compiler can't sensibly parallelise this because:

 - The semantics of xor may be unknown (overloaded), and unsuitable
 - The execution time of each iteration is much smaller than thread
   start/stop time
 - Too many parallel threads would be created

So we're left with source code which implies a non-existent serialisation
constraint.

If the "for I in all..." construct were in the language, we'd be
able to say "I don't care about the order", and permitting concurrency,
even if the result weren't identical (eg when using floating point)
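
As an illustration of how much ceremony today's workaround costs, here
is a hedged sketch that splits the xor reduction across a fixed set of
worker tasks (the worker count is arbitrary, and I assume Truth is an
array of Boolean indexed by Integer):

   declare
      N_Workers : constant := 4;
      subtype Worker_Id is Positive range 1 .. N_Workers;
      Partials  : array (Worker_Id) of Boolean := (others => False);
      Chunk     : constant Natural :=
         (Truth'Length + N_Workers - 1) / N_Workers;
   begin
      declare
         task type Worker is
            entry Start (Id : in Worker_Id);
         end Worker;

         task body Worker is
            Me : Worker_Id := Worker_Id'First;
         begin
            accept Start (Id : in Worker_Id) do
               Me := Id;
            end Start;
            declare
               First : constant Integer := Truth'First + (Me - 1) * Chunk;
               Last  : constant Integer :=
                  Integer'Min (First + Chunk - 1, Truth'Last);
            begin
               for I in First .. Last loop
                  Partials (Me) := Partials (Me) xor Truth (I);
               end loop;                    -- each worker writes only
            end;                            -- its own Partials element
         end Worker;

         Workers : array (Worker_Id) of Worker;
      begin
         for Id in Worker_Id loop
            Workers (Id).Start (Id);        -- hand each worker its chunk
         end loop;
      end;  -- the block exits only after every worker has terminated
      for Id in Worker_Id loop
         Z := Z xor Partials (Id);          -- serial combine
      end loop;
   end;

Thirty-odd lines to express what "for I in all Truth'Range" would say
in one.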

Numerous algorithms in simulation are "embarrassingly parallel",
but this fact is completely and deliberately obscured from compilers.
Compilers can't normally generate fine-scale threaded code because
the applications don't specify it, the languages don't support it,
and the processors don't need it.  But the technical opportunity
is real.  It won't happen until the deadlock between compilers, software
and processors is broken.
--
Adrian




^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-03 13:40                             ` Dr. Adrian Wrigley
@ 2007-03-03 15:26                               ` Jonathan Bromley
  2007-03-03 16:59                                 ` Dr. Adrian Wrigley
  2007-03-05 15:36                               ` Colin Paul Gloster
  1 sibling, 1 reply; 79+ messages in thread
From: Jonathan Bromley @ 2007-03-03 15:26 UTC (permalink / raw)


On Sat, 03 Mar 2007 13:40:16 GMT, "Dr. Adrian Wrigley"
<amtw@linuxchip.demon.co.uk.uk.uk> wrote:

>  What syntax do I use, and which
>compiler, OS and processor do I need to specify and exploit
>fine-grain concurrency?
>
>In 1987, the answers were "par", Occam, Transputer. Twenty
>years later, Ada (or VHDL, C++, C#), Linux (or Windows), Niagara
>(or Tukwila, XinC, ClearSpeed, Cell) do not offer us anything
>remotely similar.  In fact, in twenty years, things have
>got worse :(

Absolutely right.  And whose fault is that?  Not the academics,
who have understood this for decades.  Not the hardware people
like me, who of necessity must understand and exploit massive 
fine-grained parallelism (albeit with a static structure).  No, 
it's the programmer weenies with their silly nonsense about 
threads being inefficient.

Glad to have got that off my chest :-)  But it's pretty frustrating
to be told that parallel programming's time has come, when
I spent a decade and a half trying to persuade people that it
was worth even thinking about and being told that it was
irrelevant.

For the numerical-algorithms people, I suspect the problem of
inferring opportunities for parallelism is nearer to being solved
than some might imagine.  There are tools around that
can convert DSP-type algorithms (such as the FFT that's 
already been mentioned) into hardware that's inherently
parallel; there are behavioural synthesis tools that allow
you to explore the various possible parallel vs. serial 
possibilities for scheduling a computation on heterogeneous
hardware.  It's surely a small step from that to distributing 
such a computation across multiple threads or CPUs.  All 
that's needed is the will.
-- 
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which 
are not the views of Doulos Ltd., unless specifically stated.



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-03 15:26                               ` Jonathan Bromley
@ 2007-03-03 16:59                                 ` Dr. Adrian Wrigley
  0 siblings, 0 replies; 79+ messages in thread
From: Dr. Adrian Wrigley @ 2007-03-03 16:59 UTC (permalink / raw)


On Sat, 03 Mar 2007 15:26:35 +0000, Jonathan Bromley wrote:

> On Sat, 03 Mar 2007 13:40:16 GMT, "Dr. Adrian Wrigley"
> <amtw@linuxchip.demon.co.uk.uk.uk> wrote:
> 
>>  What syntax do I use, and which
>>compiler, OS and processor do I need to specify and exploit
>>fine-grain concurrency?
>>
>>In 1987, the answers were "par", Occam, Transputer. Twenty
>>years later, Ada (or VHDL, C++, C#), Linux (or Windows), Niagara
>>(or Tukwila, XinC, ClearSpeed, Cell) do not offer us anything
>>remotely similar.  In fact, in twenty years, things have
>>got worse :(
> 
> Absolutely right.  And whose fault is that?  Not the academics,
> who have understood this for decades.  Not the hardware people
> like me, who of necessity must understand and exploit massive 
> fine-grained parallelism (albeit with a static structure).  No, 
> it's the programmer weenies with their silly nonsense about 
> threads being inefficient.

By the way... I am a satisfied customer of yours (from 1994).

If there is any blame to share, I place it upon the language
designers who don't include the basics of concurrency (and
I include Ada, which has no parallel loops, statements or function
calls.  Nor decent pure functions).

I do hardware, processor and software design.  But I'm not
keen on trying to fix-up programming languages, compilers
and processors so they mesh better. (Unless someone pays me!)

> Glad to have got that off my chest :-)  But it's pretty frustrating
> to be told that parallel programming's time has come, when

(I'm not saying this - so don't be frustrated!  What I'm saying
is that multithreading has become "buzzword compliant" again,
so maybe there's an opportunity to exploit, to address longstanding
technical deficiencies and rebrand Ada and/or VHDL)

> I spent a decade and a half trying to persuade people that it
> was worth even thinking about and being told that it was
> irrelevant.

Parallel programming's time hasn't quite arrived :(
But it's only 3-5 years away!  Still. (like flying cars,
fusion power and flat screens, which never seem to get
nearer. {Oh. tick off flat screens!})

> For the numerical-algorithms people, I suspect the problem of
> inferring opportunities for parallelism is nearer to being solved
> than some might imagine.  There are tools around that
> can convert DSP-type algorithms (such as the FFT that's 
> already been mentioned) into hardware that's inherently

Again, this is ages old now.  But it can't convert
C-type programs reliably and efficiently.

> parallel; there are behavioural synthesis tools that allow
> you to explore the various possible parallel vs. serial 
> possibilities for scheduling a computation on heterogeneous
> hardware.  It's surely a small step from that to distributing 
> such a computation across multiple threads or CPUs.  All 
> that's needed is the will.

A small step. Like from Apollo 11.

Once the language/software/compiler/processor deadlock is broken,
things will move rapidly.  Give it another 15 years, and we might
be half way there.

Glad to see that we're not so far apart as I thought!
--
Adrian





^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-03 14:07                               ` Dr. Adrian Wrigley
@ 2007-03-03 17:28                                 ` Pascal Obry
  2007-03-03 18:11                                   ` Dmitry A. Kazakov
  2007-03-03 21:28                                   ` Dr. Adrian Wrigley
  0 siblings, 2 replies; 79+ messages in thread
From: Pascal Obry @ 2007-03-03 17:28 UTC (permalink / raw)
  To: Dr. Adrian Wrigley

Dr. Adrian Wrigley wrote:
> Numerous algorithms in simulation are "embarrassingly parallel",
> but this fact is completely and deliberately obscured from compilers.

Not a big problem. If the algorithms are "embarrassingly parallel" then
the jobs are fully independent. In this case it is quite simple:
create as many tasks as you have processors. No big deal. Each task
will compute a specific job. Ada has no problem with "embarrassingly
parallel" jobs.

What I have not yet understood is why people are trying to solve, in
all cases, the parallelism at the lowest level. Trying to parallelize an
algorithm in an "embarrassingly parallel" context is losing precious
time. Many real-case simulations have billions of those algorithms to
compute on multiple data; just create a set of tasks to compute several
of those algorithms in parallel. Easier and as effective.

In other words, what I'm saying is that in some cases ("embarrassingly
parallel" computation is one of them) it is easier to do n computations
in n tasks than n x (1 parallel computation in n tasks), and the overall
performance is better.

Pascal.

-- 

--|------------------------------------------------------
--| Pascal Obry                           Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--|              http://www.obry.net
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-03 17:28                                 ` Pascal Obry
@ 2007-03-03 18:11                                   ` Dmitry A. Kazakov
  2007-03-03 18:31                                     ` Pascal Obry
  2007-03-03 21:28                                   ` Dr. Adrian Wrigley
  1 sibling, 1 reply; 79+ messages in thread
From: Dmitry A. Kazakov @ 2007-03-03 18:11 UTC (permalink / raw)


On Sat, 03 Mar 2007 18:28:18 +0100, Pascal Obry wrote:

> Dr. Adrian Wrigley wrote:
>> Numerous algorithms in simulation are "embarrassingly parallel",
>> but this fact is completely and deliberately obscured from compilers.
> 
> Not a big problem. If the algorithms are "embarrassingly parallel" then
> the jobs are fully independent. In this case it is quite simple:
> create as many tasks as you have processors. No big deal. Each task
> will compute a specific job. Ada has no problem with "embarrassingly
> parallel" jobs.
>
> What I have not yet understood is why people are trying to solve, in
> all cases, the parallelism at the lowest level. Trying to parallelize an
> algorithm in an "embarrassingly parallel" context is losing precious
> time. Many real-case simulations have billions of those algorithms to
> compute on multiple data; just create a set of tasks to compute multiple
> of those algorithms in parallel. Easier and as effective.
>
> In other words, what I'm saying is that in some cases ("embarrassingly
> parallel" computation is one of them) it is easier to do n computations
> in n tasks than n x (1 parallel computation in n tasks), and the overall
> performance is better.

The idea (of PAR etc.) is IMO quite the opposite. It is about treating
parallelism as a compiler optimization problem rather than as a part of
the domain. In the simplest possible form it can be illustrated with the
example of Ada's "or" and "or else." While the former is potentially
parallel, it has zero overhead compared to the sequential "or else" (I
don't count the time required to evaluate the operands). If we compare
that with the overhead of creating tasks, we will see a huge difference
both in terms of CPU cycles and mental effort.
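
To make this concrete, a minimal sketch (Divisor, Total, Limit and
Report_Out_Of_Range are made-up names):

   if Divisor = 0 or else Total / Divisor > Limit then
      Report_Out_Of_Range;  -- right operand evaluated only when needed
   end if;

With plain "or" both operands are evaluated, in an unspecified order
and potentially in parallel; here that could raise Constraint_Error
when Divisor = 0, so the short-circuit form is the truly sequential one.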

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-03 18:11                                   ` Dmitry A. Kazakov
@ 2007-03-03 18:31                                     ` Pascal Obry
  2007-03-03 20:26                                       ` Dmitry A. Kazakov
  0 siblings, 1 reply; 79+ messages in thread
From: Pascal Obry @ 2007-03-03 18:31 UTC (permalink / raw)
  To: mailbox

Dmitry A. Kazakov wrote:

> The idea (of PAR etc.) is IMO quite the opposite. It is about treating
> parallelism as a compiler optimization problem rather than as a part of
> the domain. In the simplest possible form it can be illustrated with the
> example of Ada's "or" and "or else." While the former is potentially
> parallel, it has zero overhead compared to the sequential "or else" (I
> don't count the time required to evaluate the operands). If we compare
> that with the overhead of creating tasks, we will see a huge difference
> both in terms of CPU cycles and mental effort.

I don't buy this :) You don't have to create tasks for every
computation. You put in place a writer/consumer model: a task prepares
the data and puts them into a list (a protected object) and you have a set
of tasks to consume those jobs. This works in many cases and requires
creating the tasks only once (not as bad as OpenMP, which creates threads
for parallel computations).

Pascal.

-- 

--|------------------------------------------------------
--| Pascal Obry                           Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--|              http://www.obry.net
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-03 18:31                                     ` Pascal Obry
@ 2007-03-03 20:26                                       ` Dmitry A. Kazakov
  0 siblings, 0 replies; 79+ messages in thread
From: Dmitry A. Kazakov @ 2007-03-03 20:26 UTC (permalink / raw)


On Sat, 03 Mar 2007 19:31:24 +0100, Pascal Obry wrote:

> Dmitry A. Kazakov wrote:
> 
>> The idea (of PAR etc.) is IMO quite the opposite. It is about treating
>> parallelism as a compiler optimization problem rather than as a part of
>> the domain. In the simplest possible form it can be illustrated with the
>> example of Ada's "or" and "or else." While the former is potentially
>> parallel, it has zero overhead compared to the sequential "or else" (I
>> don't count the time required to evaluate the operands). If we compare
>> that with the overhead of creating tasks, we will see a huge difference
>> both in terms of CPU cycles and mental effort.
> 
> I don't buy this :)

Well, maybe I don't buy it too... (:-)) Nevertheless, it is a very
challenging and intriguing idea.

> You don't have to create tasks for every computation.

(On some futuristic hardware tasks could become cheaper than memory and
arithmetic computations.)

> You put in place a writer/consumer model: a task prepares
> the data and puts them into a list (a protected object) and you have a set
> of tasks to consume those jobs. This works in many cases and requires
> creating the tasks only once (not as bad as OpenMP, which creates threads
> for parallel computations).

Ah, but a publisher/subscriber framework is itself a solution to a
problem which is not a domain problem. If you had a distributed middleware
you would not care about publishers and subscribers. You would simply
assign/read a variable controlled by the middleware. Interlocking,
marshaling and so on would happen transparently.
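
(Ada's Annex E already gestures in that direction. A sketch, with a
made-up package name:

   package Shared_State is
      pragma Shared_Passive;
      Current_Level : Float := 0.0;  -- one logical copy for all partitions
   end Shared_State;

Every partition that withs Shared_State assigns and reads the same
variable, however the implementation arranges that; wrapping the data
in a protected object inside such a package adds the interlocking.)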

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-03 11:00                         ` Dmitry A. Kazakov
@ 2007-03-03 21:13                           ` Ray Blaak
  2007-03-05 19:01                             ` PAR (Was: Embedded languages based on early Ada) Jacob Sparre Andersen
  0 siblings, 1 reply; 79+ messages in thread
From: Ray Blaak @ 2007-03-03 21:13 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> On Sat, 03 Mar 2007 01:58:35 GMT, Ray Blaak wrote:
> > I am somewhat rusty on my Ada tasking knowledge, but why can't Thing be a
> > protected object?
> 
> I tried to explain it in my previous post.
> 
> When Thing is a protected object, then the procedures and entries of it,
> called from the concurrent alternatives are all mutually exclusive. This is
> not the semantics expected from PAR. Probably it would be better to rewrite
> as:

PAR only says that all of its statements run in parallel, nothing more, nothing
less (i.e. equivalent to the task bodies you had around each statement before).

Those statements can themselves access synchronization and blocking controls
that affect their execution patterns.

> No. The implied semantics of PAR is such that Thing should be accessed from
> alternatives without interlocking because one *suggests* that the updates
> are mutually independent. 

The updates are independent only if their behaviour truly is independent. If
they access a shared synchronization control then by definition they are
mutually dependent.

It is not PAR that dictates this, but rather the statements themselves.

PAR would only be convenience shorthand for writing task bodies around each
statement.
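
For concreteness, a sketch of that shorthand, with A and B standing
for arbitrary statements; a two-statement PAR corresponds roughly to:

   declare
      task T1;
      task T2;
      task body T1 is
      begin
         A;
      end T1;
      task body T2 is
      begin
         B;
      end T2;
   begin
      null;  -- T1 and T2 start here; the block completes only
   end;      -- after both have terminated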

> When Thing is visible from outside it should be
> blocked by PAR for everyone else. This is not the behaviour of a protected
> object. It is rather a "hierarchical" mutex.

The behaviour of a protected object is defined by its entries and how it is
used.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul.



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-03 17:28                                 ` Pascal Obry
  2007-03-03 18:11                                   ` Dmitry A. Kazakov
@ 2007-03-03 21:28                                   ` Dr. Adrian Wrigley
  2007-03-03 22:00                                     ` Pascal Obry
  1 sibling, 1 reply; 79+ messages in thread
From: Dr. Adrian Wrigley @ 2007-03-03 21:28 UTC (permalink / raw)


On Sat, 03 Mar 2007 18:28:18 +0100, Pascal Obry wrote:

> Dr. Adrian Wrigley wrote:
>> Numerous algorithms in simulation are "embarrassingly parallel",
>> but this fact is completely and deliberately obscured from compilers.
> 
> Not a big problem. If the algorithms are "embarrassingly parallel" then
> the jobs are fully independent. In this case it is quite simple:

They aren't independent in terms of cache use! They may also have
common subexpressions, which independent treatments re-evaluate.

> create as many tasks as you have processors. No big deal. Each task
> will compute a specific job. Ada has no problem with "embarrassingly
> parallel" jobs.

A problem is that it breaks the memory bandwidth budget.  This
approach is tricky with large numbers of processors.  And even more
challenging with hardware synthesis.

> What I have not yet understood is why people are trying to solve, in
> all cases, the parallelism at the lowest level. Trying to parallelize an
> algorithm in an "embarrassingly parallel" context is losing precious
> time.

You need to parallelise at the lowest level to take advantage of
hardware synthesis.  For normal threads a somewhat higher level
is desirable.  For multiple systems on a network, a high level
is needed.

What I want in a language is the ability to specify when things
must be evaluated sequentially, and when it doesn't matter
(even if the result of changing the order may differ).

> Many real-case simulations have billions of those algorithms to
> compute on multiple data; just create a set of tasks to compute multiple
> of those algorithms in parallel. Easier and as effective.

Reasonable for compilers and processors as they are designed now.
Even so it can be challenging to take advantage of shared
calculations and memory capacity and bandwidth limitations.

But useless for hardware synthesis.  Or automated partitioning
software.  Or generating system diagrams from code. 

Manual partitioning into tasks and sequential code segments is
something which is not part of the problem domain, but part
of the solution domain.  It implies a multiplicity of sequentially
executing process threads.

Using concurrent statements in the source code is not the same thing
as "trying to parallelise an algorithm".  It doesn't lose any
precious execution time.  It simply informs the reader and the
compiler that the order of certain actions isn't considered relevant.
The compiler can take some parts of the source and convert them to
a netlist for an ASIC or FPGA.  Other parts could be broken
down into threads.  Or maybe parts could be passed to separate
computer systems on a network.  Much of it could be ignored.
It is the compiler which tries to parallelise the execution.
Unlike tasks, where the programmer does try to parallelise.

Whose job is it to parallelise operations?  Traditionally,
programmers try to specify exactly what sequence of operations is
to take place.  And then the compiler does its best to shuffle
things around (limited).  And the CPU tries to overlap data
fetch, calculation, address calculation by watching the
instruction sequence for concurrency opportunities.
Why do the work to force sequential operation if the
compiler and hardware are desperately trying to infer
concurrency?

> In other words, what I'm saying is that in some cases ("embarrassingly
> parallel" computation is one of them) it is easier to do n computations
> in n tasks than n x (1 parallel computation in n tasks), and the overall
> performance is better.

This is definitely the case.  And it helps explain why parallelisation
is not a job for the programmer or the hardware designer, but for
the synthesis tool, OS, processor, compiler or run-time.  Forcing
the programmer or hardware designer to hard-code a specific parallelism type
(threads), and a particular partitioning, while denying the expressiveness
of a concurrent language will result in inferior flexibility and
inability to map the problem onto certain types of solution.

If all the parallelism your hardware has is a few threads then all you
need to code for is tasks.  If you want to be able to target FPGAs,
million-thread CPUs, ASICs and loosely coupled processor networks,
the Ada task model alone serves very poorly.

Perhaps mapping execution of a program onto threads or other
concurrent structure is like mapping execution onto memory.
It *is* possible to manage a processor with a small, fast memory,
mapped at a fixed address range.  You use special calls to move
data to and from your main store, based on your own analysis of
how the memory access patterns will operate.  But this approach
has given way to automated caches with dynamic mapping of
memory cells to addresses.  And virtual memory.  Trying to
manage tasks "manually", based on your hunches about task
coherence and work load will surely give way to automatic
thread inference, creation and management based on the interaction
of thread management hardware and OS support.  Building in
hunches about tasking to achieve parallelism can only be
a short-term solution.
--
Adrian








^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-03 21:28                                   ` Dr. Adrian Wrigley
@ 2007-03-03 22:00                                     ` Pascal Obry
  0 siblings, 0 replies; 79+ messages in thread
From: Pascal Obry @ 2007-03-03 22:00 UTC (permalink / raw)
  To: Dr. Adrian Wrigley

Dr. Adrian Wrigley wrote:

> If all the parallelism your hardware has is a few threads then all you
> need to code for is tasks.  If you want to be able to target FPGAs,
> million-thread CPUs, ASICs and loosely coupled processor networks,
> the Ada task model alone serves very poorly.

Granted. I was talking about traditional hardware where OpenMP is used
and I do not find this solution convincing in this context. It is true
that for massively parallel hardware things are different. But AFAIK
massively parallel machines (like IBM Blue Gene) all come with a
different flavor of parallelism; I don't know if it is possible to have
a model to fit them all... I'm no expert on those anyway.

Pascal.

-- 

--|------------------------------------------------------
--| Pascal Obry                           Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--|              http://www.obry.net
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-01 13:57                 ` Dmitry A. Kazakov
  2007-03-01 18:09                   ` Ray Blaak
  2007-03-02 11:36                   ` Dr. Adrian Wrigley
@ 2007-03-05 15:23                   ` Colin Paul Gloster
  2007-03-06  0:31                     ` Dr. Adrian Wrigley
  2 siblings, 1 reply; 79+ messages in thread
From: Colin Paul Gloster @ 2007-03-05 15:23 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> posted on Fri, 2 Mar
   2007 17:32:26 +0100 :
"[..]

> I'm looking for something like Cilk, but even the concurrent loop
> (JPR's for I in all 1 .. n loop?) would be a help.

Maybe, just a guess, the functional decomposition rather than statements
could be more appropriate here. The alternatives would access their
arguments by copy-in and resynchronize by copy-out."


From William J. Dally in 1999 on
HTTP://CVA.Stanford.edu/people/dally/ARVLSI99.ppt#299,37,Parallel%20Software:%20Design%20Strategy
:"[..]
- many for loops (over data,not time) can be forall
[..]"
Without reading that presentation thoroughly now, I remark that Dally
seemed to be supportive of Wrigley's finely grained parallelism.



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-03 13:40                             ` Dr. Adrian Wrigley
  2007-03-03 15:26                               ` Jonathan Bromley
@ 2007-03-05 15:36                               ` Colin Paul Gloster
  1 sibling, 0 replies; 79+ messages in thread
From: Colin Paul Gloster @ 2007-03-05 15:36 UTC (permalink / raw)


In news:pan.2007.03.03.17.00.07.159450@linuxchip.demon.co.uk.uk.uk
timestamped Sat, 03 Mar 2007 16:59:52 GMT, "Dr. Adrian Wrigley"
<amtw@linuxchip.demon.co.uk.uk.uk> posted:
"[..]
On Sat, 03 Mar 2007 15:26:35 +0000, Jonathan Bromley wrote:

[..]

> For the numerical-algorithms people, I suspect the problem of
> inferring opportunities for parallelism is nearer to being solved
> than some might imagine.  There are tools around that
> can convert DSP-type algorithms (such as the FFT that's
> already been mentioned) into hardware that's inherently

Again, this is ages old now.  But it can't convert
C-type programs reliably and efficiently.

> parallel; there are behavioural synthesis tools that allow
> you to explore the various possible parallel vs. serial
> possibilities for scheduling a computation on heterogeneous
> hardware.  It's surely a small step from that to distributing
> such a computation across multiple threads or CPUs.  All
> that's needed is the will.

[..]"


I am not aware of tools which automatically generate such parallel
implementations, though they may exist. Many algorithms require a precise
implementation; but for the many numerical applications in which absolute
adherence is not required, are such tools so impressive that they will
replace Jacobi's method with the Gauss-Seidel method (or something even
better) without guidance?



^ permalink raw reply	[flat|nested] 79+ messages in thread

* PAR (Was: Embedded languages based on early Ada)
  2007-03-03 21:13                           ` Ray Blaak
@ 2007-03-05 19:01                             ` Jacob Sparre Andersen
  2007-03-06  2:01                               ` Dr. Adrian Wrigley
  0 siblings, 1 reply; 79+ messages in thread
From: Jacob Sparre Andersen @ 2007-03-05 19:01 UTC (permalink / raw)


Ray Blaak wrote:

> PAR would only be convenience shorthand for writing task bodies
> around each statement.

Wouldn't "pragma Parallelize (Statement_Identifier);" be a more
reasonable way to do this?  As I understand it, the wish is to tell the
compiler that these statements are a likely target for parallel
execution.
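
(Usage might look something like this -- entirely hypothetical syntax,
with the pragma naming the loop:

   Transform_All :
   for I in Data'Range loop
      Transform (Data (I));
   end loop Transform_All;
   pragma Parallelize (Transform_All);

Where exactly such a pragma may appear would itself need defining.)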

The compilers are of course already allowed to parallelise the
execution of statements, but hinting where it is worthwhile to try
might be more efficient.  Such a pragma will of course introduce a
discussion of whether the result of the parallel execution should be
exactly the same as the result of the sequential execution, or if it
should just be approximately the same.  The effect on a loop will also
need some consideration.

Greetings,

Jacob
-- 
I'm giving a short talk at Game Developers Conference (Mobile
Game Innovation Hunt) Monday afternoon:
          http://www.gdconf.com/conference/gdcmobile_hunt.htm



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?")
  2007-03-05 15:23                   ` Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?") Colin Paul Gloster
@ 2007-03-06  0:31                     ` Dr. Adrian Wrigley
  0 siblings, 0 replies; 79+ messages in thread
From: Dr. Adrian Wrigley @ 2007-03-06  0:31 UTC (permalink / raw)


On Mon, 05 Mar 2007 15:23:54 +0000, Colin Paul Gloster wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> posted on Fri, 2 Mar
>    2007 17:32:26 +0100 :
> "[..]
> 
>> I'm looking for something like Cilk, but even the concurrent loop
>> (JPR's for I in all 1 .. n loop?) would be a help.
> 
> Maybe, just a guess, the functional decomposition rather than statements
> could be more appropriate here. The alternatives would access their
> arguments by copy-in and resynchronize by copy-out."
> 
> From William J. Dally in 1999 on
> HTTP://CVA.Stanford.edu/people/dally/ARVLSI99.ppt#299,37,Parallel%20Software:%20Design%20Strategy
> :"[..]
> - many for loops (over data,not time) can be forall
> [..]"
> Without reading that presentation thoroughly now, I remark that Dally
> seemed to be supportive of Wrigley's finely grained parallelism.

I hadn't seen that presentation, but a number of other key points
are made by Dally:
-------------------------------------
# Writing parallel software is easy
    * with good mechanisms

# Almost all demanding problems have ample parallelism

# Need to focus on fundamental problems
    * extracting parallelism
    * load balance
    * locality
          o load balance and locality can be covered by excess parallelism

Conclusion: We are on the threshold of the explicitly parallel era 
    *  Diminishing returns from sequential processors (ILP)
          o no alternative to explicit parallelism
    * Enabling technologies have been proven
          o interconnection networks, mechanisms, cache coherence
    * Fine-grain machines are more efficient than sequential machines

# Fine-grain machines will be constructed from multi-processor/DRAM chips
# Incremental migration to parallel software
-----------------------------------

Good to find *somebody* agrees with me!

Shame Ada isn't leading the pack :(
--
Adrian




^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-05 19:01                             ` PAR (Was: Embedded languages based on early Ada) Jacob Sparre Andersen
@ 2007-03-06  2:01                               ` Dr. Adrian Wrigley
  2007-03-06  3:30                                 ` Randy Brukardt
                                                   ` (5 more replies)
  0 siblings, 6 replies; 79+ messages in thread
From: Dr. Adrian Wrigley @ 2007-03-06  2:01 UTC (permalink / raw)


On Mon, 05 Mar 2007 11:01:59 -0800, Jacob Sparre Andersen wrote:

> Ray Blaak wrote:
> 
>> PAR would only be convenience shorthand for writing task bodies
>> around each statement.

Yes.  The semantics are very similar.

> Wouldn't "pragma Parallelize (Statement_Identifier);" be a more
> reasonable way to do this?  As I understand the wish is to tell the
> compiler that these statements are a likely target for parallel
> execution.

I don't think we share this understanding.

PAR is about indicating in the source code which statement sequences
are unordered.  The idea is to make the syntax for concurrency
as simple as the syntax for sequentiality. Perhaps using "par"
instead of "begin" is the way to go.  Adding "all" to the for
loop also makes a lot of sense.

If you can persuade programmers to use "par" unless they explicitly
*need* sequential execution, a great many statements will be marked
as concurrent.  Numeric codes will use "par" in the computational
core, as well as outer layers of execution.

So "par" is really about program *semantics*, not about *mechanism*
of execution.  Simply being in a "par" block is not a hint or
a recommendation that the compiler should target the code for parallel
execution.  The "par" block grants the authority to reorder
(even when results vary, whatever the reason).  And it informs the
*reader* that there is no implication of sequentiality.

Ultimately, the hardware should be choosing dynamically whether
to create execution threads according to execution statistics
and resource allocation.  The instruction stream should have
potential thread creation and joining points marked. Perhaps
this can be via a concurrent call instruction ("ccall"?), which
is identical to a normal "call" when threads are scarce,
and creates a thread and continues execution at the same time
when threads are readily available.  The objective is to
have zero overhead threading at a fine grain of concurrency.

So I think the pragma Parallelize () is the equivalent
of the "register" directive in C.  The programmer is trying to
say "I use this variable a lot, so store it in a register for me".
This approach is considered obsolete and counterproductive.
Automatic register allocation, automatic data caching
and automatic thread allocation should all be handled by the
compiler and hardware, whether or not the programmer
recommends it.  Registerization and caching work well with
existing serial code.  Automatic thread allocation is almost
impossible with existing code simply because code must
always be executed in the order given unless concurrency
is provable.  Coding with "par" is no big challenge.
Most programs will use par a lot. Only very short or exceptional
programs can't use "par".

> The compilers are of course already allowed to parallelise the
> execution of statements, but hinting where it is worthwhile to try
> might be more efficient.  Such a pragma will of course introduce a
> discussion of whether the result of the parallel execution should be
> exactly the same as the result of the sequential execution, or if it
> should just be approximately the same.

Often parallel execution will give very different results.

for I in all 2 .. 15 loop
   PutIfPrime (I);
end loop;

Produces
 2 3 5 7 11 13
with sequential operation, but
 11 13 2 3 5 7
with parallel execution.  Or indeed any other order.

The "par" says "I don't care what the order is".
One benefit is being able to continue executing the program while
multiple page faults are being serviced from disk or RAM.  Such
page faults show why parallelisation is often outside the scope
of the compiler - how does it know when the faults occur?
Program execution becomes a hierarchy of concurrent threads,
perhaps with over 1.0e10 threads available during a program's execution.

Mandating sequential execution except where the "pragma" is used
puts parallel statements at an immediate disadvantage - it makes
them seem to be second-class citizens, added on afterwards
in an attempt to speed things up. Par should be "everywhere"; it's
a *basic* component of program semantics - like a loop or a function call.
It's absent from programming languages because processors can't really
take advantage of it at present, and text is inherently linear, needing no
special "seq" directive.  Graphical languages on the other hand often
imply concurrency automatically, and so have the opposite property. "par"
is not a hint.

Enough ranting for now...
--
Adrian




^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-06  2:01                               ` Dr. Adrian Wrigley
@ 2007-03-06  3:30                                 ` Randy Brukardt
  2007-03-06  7:10                                   ` Ray Blaak
  2007-03-06  6:04                                 ` tmoran
                                                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 79+ messages in thread
From: Randy Brukardt @ 2007-03-06  3:30 UTC (permalink / raw)


"Dr. Adrian Wrigley" <amtw@linuxchip.demon.co.uk.uk.uk> wrote in message
news:pan.2007.03.06.02.02.27.892793@linuxchip.demon.co.uk.uk.uk...
...
> PAR is about indicating in the source code which statement sequences
> are unordered.  The idea is to make the syntax for concurrency
> as simple as the syntax for sequentality. Perhaps using "par"
> instead of "begin" is the way to go.  Adding "all" to the for
> loop also makes a lot of sense.

My $0.02 worth:

One thing I'm absolutely certain of is that "par" would never, ever appear
in Ada. That's because Ada keywords are always complete English words
(that's true in the packages, too; abbreviations are frowned upon). I admit
that "par" *is* an English word, but it doesn't have anything to do with
parallel. So, I think the syntax would be more likely something like "begin
in parallel" or the like. (Similarly, "all" in a for loop is just too small
a keyword for such a major semantic change. I think I'd prefer "parallel" to
be used somewhere in the loop syntax; but I could be proved wrong in this
instance.)

> If you can persuade programmers to use "par" unless they explicitly
> *need* sequential execution, a great many statements will be marked
> as concurrent.  Numeric codes will use "par" in the computational
> core, as well as outer layers of execution.

Yes, and a great number of programs will have become unstable. That's
because, by default, Ada objects are *not* safe to access in parallel.
(That's a problem for Ada tasks, too.) Only objects with pragma Atomic
and objects wrapped in protected objects can be assumed safe.
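
For instance -- Count being a made-up name:

   Count : Integer := 0;
   pragma Atomic (Count);

Each individual read or write of Count is then indivisible, but a
read-modify-write such as "Count := Count + 1" is still two atomic
actions and can race; for that you need a protected object.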

I know you've said that you would expect programmers to worry about such
things. But that's the C way of thinking about things, and that is the root
of much of the unreliably of modern software. Programmers are likely to test
their programs on some compiler and then assume that they are correct. But
when dealing with parallel execution, nothing could be further from the
truth. The next compiler or OS will make the program execute completely
differently. The only way to "test" these things is to have tools that
enforce proper behavior (no access to global objects without
synchronization, for instance).

Of course, there is no problem with purely independent subprograms. The
problem is that hardly anything is purely independent: even most independent
algorithms depend on shared data (like a database) to guide their execution,
and if something else is changing that data, there is a lot of potential for
trouble.

...
> Mandating sequential execution except where the "pragma" is used
> puts parallel statements at an immediate disadvantage - it makes
> them seem to be second-class citizens, added on afterwards
> in an attempt to speed things up. Par should be "everywhere"; it's
> a *basic* component of program semantics - like a loop or a function call.
> It's absent from programming languages because processors can't really
> take advantage of it at present, and text is inherently linear, needing no
> special "seq" directive.  Graphical languages on the other hand often
> imply concurrency automatically, and so have the opposite property. "par"
> is not a hint.
>
> Enough ranting for now...

Darn, you were just getting interesting. ;-)

$0.02 cents. Ada has the needed building blocks for parallel execution,
given that it has defined what is and is not accessible in parallel. Most
other programming languages have never thought of that, or found it too
hard, or just don't care. But you also need enforcement of safe access to
global objects (global here means anything outside of the subprograms that
were called in parallel). I don't think that that would be very practical in
Ada; the result would be pretty incompatible. (Maybe you could have
procedures defined to allow parallel execution, sort of like a pure
function, but it sounds messy. And we've never had the will to properly
define Pure functions, either; that's because we couldn't decide between the
"declared Pure and user beware" and "defined and checked Pure" approaches).

What really would make sense would be a meta-compiler, that took source code
in an Ada-like language with "begin in parallel" and other needed constructs
and converted that to regular Ada code. (Parallel calls would turn into
tasks, appropriate checking would occur, etc.). But the majority of the code
would simply get rewritten into Ada - and then an Ada compiler could compile
it. Such a system would be free of Ada compatibility concerns, but wouldn't
necessarily have to give up much of Ada's power. (And, if it caught on, the
Ada step could be dropped completely.) Clearly, the meta-compiler would have
to know something about the target (how many threads are reasonable, for
instance), but not necessarily a lot. Such a system could be a lot safer
than Ada is (well, at least until you have to make a call to C...),
especially for parallel execution.

                             Randy Brukardt.





^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-06  2:01                               ` Dr. Adrian Wrigley
  2007-03-06  3:30                                 ` Randy Brukardt
@ 2007-03-06  6:04                                 ` tmoran
  2007-03-06  6:59                                 ` Ray Blaak
                                                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 79+ messages in thread
From: tmoran @ 2007-03-06  6:04 UTC (permalink / raw)


> If you can persuade programmers to use "par" unless they explicitly
> *need* sequential execution, a great many statements will be marked
> as concurrent.  Numeric codes will use "par" in the computational
> core, as well as outer layers of execution.
   When "virtual memory" was new some programmers thought about their
program's access patterns and found that VM was a great simplification.
Other programmers simply said "oh, good, I can write my program to
randomly access many megabytes, though the physical memory is only
a few hundred kilobytes."  Those programs were a disaster.   Nowadays,
of course, very few people write programs that don't in fact fit inside
physical memory.  I suspect we'll see a similar thing with multi-core:
some will carefully consider what they are doing and it will be good;
others will just say "oh, good, I can multiply an m x n and an n x p
matrix using m x p threads", and it will be bad.  Meanwhile the
hardware will develop to automatically do more and more hidden
concurrency.  If programmers think of "par" as "I've carefully analyzed
this algorithm and it may be run in parallel" that's good.  If they
think of "par" as "I want you to run this in parallel", I think that
would be a bad mindset.  Explicit Ada tasks are more like the latter.
My $.02



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-06  2:01                               ` Dr. Adrian Wrigley
  2007-03-06  3:30                                 ` Randy Brukardt
  2007-03-06  6:04                                 ` tmoran
@ 2007-03-06  6:59                                 ` Ray Blaak
  2007-03-06  7:07                                 ` Ray Blaak
                                                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 79+ messages in thread
From: Ray Blaak @ 2007-03-06  6:59 UTC (permalink / raw)


"Dr. Adrian Wrigley" <amtw@linuxchip.demon.co.uk.uk.uk> writes:
> The "par" says "I don't care what the order is".

I would nitpick a tiny bit. "Any possible order" is not precisely the same as
true concurrent execution. This can be measured by the execution times,
since an unordered yet sequential execution takes the sum of the statement
times, whereas true concurrent execution takes the maximum of the statement times.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul.



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-06  2:01                               ` Dr. Adrian Wrigley
                                                   ` (2 preceding siblings ...)
  2007-03-06  6:59                                 ` Ray Blaak
@ 2007-03-06  7:07                                 ` Ray Blaak
  2007-03-06  7:22                                 ` Martin Krischik
  2007-03-06 13:18                                 ` Dr. Adrian Wrigley
  5 siblings, 0 replies; 79+ messages in thread
From: Ray Blaak @ 2007-03-06  7:07 UTC (permalink / raw)


"Dr. Adrian Wrigley" <amtw@linuxchip.demon.co.uk.uk.uk> writes:
> If you can persuade programmers to use "par" unless they explicitly
> *need* sequential execution, a great many statements will be marked
> as concurrent.  Numeric codes will use "par" in the computational
> core, as well as outer layers of execution.

I would strongly discourage this practice. Getting sequential programs correct
is hard enough. Parallel programs are vastly more difficult and subtle.

All concurrency constructs should be very explicit, clear, and robust. Ada has
the fundamentals right, if only a little verbose to specify.

This is also why I dislike the notion of special concurrency pragmas that
suggest concurrency as only a possibility. It should instead be made clearly
obvious in the source, so as to aid the programmer to maximum effect in
their reasoning about the control flow.

> Mandating sequential execution except where the "pragma" is used
> puts parallel statements at an immediate disadvantage - it makes
> them seem to be second-class citizens, added on afterwards
> in an attempt to speed things up. 

It is only 2nd class because our brains are wired that way, at least in terms
of how we know how to program.

The point is to choose one way or the other, and to have the source be clear
about that decision.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul.



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-06  3:30                                 ` Randy Brukardt
@ 2007-03-06  7:10                                   ` Ray Blaak
  2007-03-06 18:05                                     ` Ray Blaak
  0 siblings, 1 reply; 79+ messages in thread
From: Ray Blaak @ 2007-03-06  7:10 UTC (permalink / raw)


"Randy Brukardt" <randy@rrsoftware.com> writes:
> One thing I'm absolutely certain of is that "par" would never, ever appear
> in Ada. That's because Ada keywords are always complete English words

This is a trivial issue. The difficulty is agreeing on the semantics, on
whether the construct itself, as a syntactic convenience, is worth the
trouble. Myself I think it is, or it would encourage the exploration of
parallel program development. 

As for the keyword, "parallel" or "concurrent" would work for me.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul.



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-06  2:01                               ` Dr. Adrian Wrigley
                                                   ` (3 preceding siblings ...)
  2007-03-06  7:07                                 ` Ray Blaak
@ 2007-03-06  7:22                                 ` Martin Krischik
  2007-03-06 13:18                                 ` Dr. Adrian Wrigley
  5 siblings, 0 replies; 79+ messages in thread
From: Martin Krischik @ 2007-03-06  7:22 UTC (permalink / raw)


Dr. Adrian Wrigley wrote:

> PAR is about indicating in the source code which statement sequences
> are unordered.  The idea is to make the syntax for concurrency
> as simple as the syntax for sequentality. Perhaps using "par"
> instead of "begin" is the way to go.  Adding "all" to the for
> loop also makes a lot of sense.

Should it not be one or the other?

begin all
end;

for all ....
end loop;

*OR*

par
end par;

for par .... loop
end loop;



Martin



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-06  2:01                               ` Dr. Adrian Wrigley
                                                   ` (4 preceding siblings ...)
  2007-03-06  7:22                                 ` Martin Krischik
@ 2007-03-06 13:18                                 ` Dr. Adrian Wrigley
  2007-03-06 18:16                                   ` Ray Blaak
  2007-03-06 23:49                                   ` Randy Brukardt
  5 siblings, 2 replies; 79+ messages in thread
From: Dr. Adrian Wrigley @ 2007-03-06 13:18 UTC (permalink / raw)


On Tue, 06 Mar 2007 02:01:33 +0000, Dr. Adrian Wrigley wrote:
<snip>

Thank you for your intelligent input on this.

Martin, Randy:

I chose "par" and "all" simply because that reflects Occam and the
previously suggested "for all" loop.

"parallel" would make a lot of sense:

parallel
   CookDinner;
   DoTaxReturn;
   BookHoliday;
end;

would allow execution in separate tasks, or sequential execution in
any order.

for I in Data'range parallel loop
   Transform (Data (I));
end loop;

would allow reordered or concurrent loop execution.
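
Today that loop can only be approximated with tasks.  A sketch, one
task per iteration, assuming Data is a Positive-indexed array and
Transform is safe to call concurrently (a real implementation would
chunk the range):

declare
   task type Transformer is
      entry Start (I : Positive);
   end Transformer;
   task body Transformer is
      My_I : Positive;
   begin
      accept Start (I : Positive) do
         My_I := I;
      end Start;
      Transform (Data (My_I));  -- runs concurrently with the others
   end Transformer;
   Pool : array (Data'Range) of Transformer;
begin
   for I in Data'Range loop
      Pool (I).Start (I);
   end loop;
end;  -- completes only once every Transformer has terminated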

Ray:
> I would strongly discourage this practice. Getting sequential programs
> correct is hard enough. Parallel programs are vastly more difficult and
> subtle.

Sequential and parallel programs can easily exceed a programmer's
analytical limits and comprehension.  Hardware designers are used to
thinking about concurrent operation.  Concurrency involving independent
operating units without communication is often straightforward.
There is no communication to screw up - data are read in parallel and
combined serially.  This is a very common and useful type.

In my opinion, "par" and "seq" were technically successful in Occam,
and many of the difficult and subtle problems are absent from the
above code because the concurrent units *cannot* address each other.
These parallel constructs are simpler and more restricted than Ada
tasks.  The point of "par" is that the restrictions eliminate concerns
over control flow.  The issue is checking for erroneous data flow
(e.g. accidental concurrent updates).  The compiler should warn when
this appears to be a hazard.

> It is only 2nd class because our brains our wired that way, at least in
> terms of how we know how to program.

But breaking down programs into sequential steps is a skill which
has to be learned.  So many other things have concurrency - mathematics,
physics, hardware.

> The point is to choose one way or the other, and to have the source be
> clear about that decision.

That is *almost* what I am saying.  Mark the obviously concurrent bits
as concurrent.  Mark the rest as sequential.  Assume the compiler will
choose true concurrent execution sensibly.

tmoran:
> If programmers think of "par" as "I've carefully analyzed
> this algorithm and it may be run in parallel" that's good.  If they
> think of "par" as "I want you to run this in parallel", I think that
> would be a bad mindset.  Explicit Ada tasks are more like the latter.

precisely my point.  Writing code with "par" is a fairly simple
transition from sequential coding.  Once you've had a bit of practice,
writing robust parallel code is pretty easy, and most blocks are
clearly parallel or clearly not parallel.  You need to pay a bit more
attention to what is pure and what isn't. (much easier than tasks!)

Randy:
I'm a big fan of declaring functions as pure, and it is disappointing
to me that Ada still can't do this.  A function declared pure should
permit reordering of invocation and memoization etc.  It doesn't have
to be pure, technically (I support the "user beware" version), but
compilers should warn of impure behaviour - perhaps a pragma to
reject impurity would help.  Things like invocation counts or
debugging statements usually break the purity of a function.
I don't know how procedures should be addressed.

The meta-compiler approach is interesting, and rather like what I
suggested earlier about a new language, permiting fine-grain
concurrency, suitable for hardware and software, similar to
a rebranded/renamed Ada.

The problem with an Ada intermediate is that you can't easily translate
programs with "par" into a language without.  There is no
realistic work-around or simulation of the execution behaviour of par. 
Par says "create a thread if it looks appropriate at this moment".
The expectation is that this generates the appropriate concurrent
call instruction.  Maybe some major compiler hacking and special
directives could patch something together, like Cilk does for C.

One aspect of this discussion is the inaccessibility of fine-grain
concurrency to either a hardware developer or a software developer.
As a processor designer, I can see how to implement concurrent
calls and concurrent memory access - but doing this does not help
run any existing code.  As a programmer, I can see where my code
is logically concurrent, but the language denies me this expressiveness,
and my processor doesn't support it.  Some leadership is needed!

Thanks.
--
Adrian






^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-06  7:10                                   ` Ray Blaak
@ 2007-03-06 18:05                                     ` Ray Blaak
  0 siblings, 0 replies; 79+ messages in thread
From: Ray Blaak @ 2007-03-06 18:05 UTC (permalink / raw)


Ray Blaak <rAYblaaK@STRIPCAPStelus.net> writes:

> "Randy Brukardt" <randy@rrsoftware.com> writes:
> > One thing I'm absolutely certain of is that "par" would never, ever appear
> > in Ada. That's because Ada keywords are always complete English words
> 
> This is a trivial issue. The difficulty is agreeing on the semantics, on
> whether the construct itself, as a syntactic convenience, is worth the
> trouble. Myself I think it is, or it would encourage the exploration of
> parallel program development. 

*for* it would encourage...

I like the "par" construct idea.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul.



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-06 13:18                                 ` Dr. Adrian Wrigley
@ 2007-03-06 18:16                                   ` Ray Blaak
  2007-03-06 23:49                                   ` Randy Brukardt
  1 sibling, 0 replies; 79+ messages in thread
From: Ray Blaak @ 2007-03-06 18:16 UTC (permalink / raw)


"Dr. Adrian Wrigley" <amtw@linuxchip.demon.co.uk.uk.uk> writes:
> Concurrency involving independent operating units without communication is
> often straightforward.  There is no communication to screw up - data are read
> in parallel and combined serially.  This is a very common and useful type.

Well, sure. It really depends on the nature of what is being computed.

My position of treating parallel programming as something that is painful
comes about from dealing with concurrent items that do interact with each
other. It can get quite difficult to reason that things are correct, that
deadlocks and livelocks are avoided, etc.

> In my opinion, "par" and "seq" were technically successful in Occam,
> and many of the difficult and subtle problems are absent from the
> above code because the concurrent units *cannot* address each other.
> These parallel constructs are simpler and more restricted than Ada
> tasks.  The point of "par" is that the restrictions eliminate concerns
> over control flow.  The issue is checking for erroneous data flow
> (e.g. accidental concurrent updates).  The compiler should warn when
> this appears to be a hazard.

Occam looked fun. I don't think its simplicity would work in Ada, however.

But that is not the real point. Adding constructs like "parallel" that make
concurrency easier to express in Ada is a good idea. Programmers are free to
try and learn how to use them properly.

It then becomes an algorithm design and code review issue, rather than a
language issue, as to whether the choice of sequential vs parallel was
appropriate in any given situation.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul.



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-06 13:18                                 ` Dr. Adrian Wrigley
  2007-03-06 18:16                                   ` Ray Blaak
@ 2007-03-06 23:49                                   ` Randy Brukardt
  2007-03-07  8:59                                     ` Dmitry A. Kazakov
  1 sibling, 1 reply; 79+ messages in thread
From: Randy Brukardt @ 2007-03-06 23:49 UTC (permalink / raw)


"Dr. Adrian Wrigley" <amtw@linuxchip.demon.co.uk.uk.uk> wrote in message
news:pan.2007.03.06.13.19.20.336128@linuxchip.demon.co.uk.uk.uk...
...
> The problem with an Ada intermediate is that you can't easily translate
> programs with "par" into a language without.  There is no
> realistic work-around or simulation of the execution behaviour of par.
> Par says "create a thread if it looks appropriate at this moment".
> The expectation is that this generates the appropriate concurrent
> call instruction.  Maybe some major compiler hacking and special
> directives could patch something together, like Cilk does for C.

Yes, of course. But how, precisely, this is mapped to Ada (or even if it is
mapped to Ada) isn't particularly relevant. The reason for the suggestion
is: (1) It's relatively easy to get something running; (2) there isn't any
existing hardware or OS that can take advantage of fine-grained parallelism.
Your point is about breaking the software end of that problem, but that will
have to be executed in a conventional way for now. Maybe at some point in
the future the programs can execute with more parallelism, and at least then
the programmers would not have to change habits to do so; (3) by assuming
that a full Ada implementation is underlying, the program will be able to
use all of Ada, without having to define the semantics of that. You can
concentrate solely on the new stuff, with a fair presumption that the rest
will "just work".

> One aspect of this discussion is the inaccessibility of fine-grain
> concurrency to either a hardware developer or a software developer.
> As a processor designer, I can see how to implement concurrent
> calls and concurrent memory access - but doing this does not help
> run any existing code.  As a programmer, I can see where my code
> is logically concurrent, but the language denies me this expressiveness,
> and my processor doesn't support it.  Some leadership is needed!

Exactly my point; by allowing this to be written and then mapped
conventionally, programmers can get used to the idea of specifying
fine-grained parallelism (preferably with checking!), without having to
invest in all new (and thus buggy) systems. One hopes that would provide a
way to break the Gordian knot. Trying to convert all at once is never going
to work.

                                       Randy.





^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-06 23:49                                   ` Randy Brukardt
@ 2007-03-07  8:59                                     ` Dmitry A. Kazakov
  2007-03-07 18:26                                       ` Ray Blaak
  0 siblings, 1 reply; 79+ messages in thread
From: Dmitry A. Kazakov @ 2007-03-07  8:59 UTC (permalink / raw)


On Tue, 6 Mar 2007 17:49:32 -0600, Randy Brukardt wrote:

> "Dr. Adrian Wrigley" <amtw@linuxchip.demon.co.uk.uk.uk> wrote in message
> news:pan.2007.03.06.13.19.20.336128@linuxchip.demon.co.uk.uk.uk...
> ...
>> One aspect of this discussion is the inaccessibility of fine-grain
>> concurrency to either a hardware developer or a software developer.
>> As a processor designer, I can see how to implement concurrent
>> calls and concurrent memory access - but doing this does not help
>> run any existing code.  As a programmer, I can see where my code
>> is logically concurrent, but the language denies me this expressiveness,
>> and my processor doesn't support it.  Some leadership is needed!
> 
> Exactly my point; by allowing this to be written and then mapped
> conventionally, programmers can get used to the idea of specifying
> fine-grained parallelism (preferably with checking!), without having to
> invest in all new (and thus buggy) systems.

Hmm, but checking is really the key issue here. I fail to see it in bare
PAR. It is just absent there. What would be the semantics of:

declare
   I : Integer := 0;
begin in parallel
   I := I + 1;
   I := I - 1;
end;  -- What is the postcondition here?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-07  8:59                                     ` Dmitry A. Kazakov
@ 2007-03-07 18:26                                       ` Ray Blaak
  2007-03-07 19:03                                         ` Dr. Adrian Wrigley
  2007-03-07 19:55                                         ` Dmitry A. Kazakov
  0 siblings, 2 replies; 79+ messages in thread
From: Ray Blaak @ 2007-03-07 18:26 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> Hmm, but checking is really the key issue here. I fail to see it in bare
> PAR. It is just absent there. What would be the semantics of:
> 
> declare
>    I : Integer := 0;
> begin in parallel
>    I := I + 1;
>    I := I - 1;
> end;  -- What is the postcondition here?

That would be bad programming, just as if you expressed it equivalently with
explicit task bodies.

Ada already has synchronization constructs that need to be used to allow
dependent concurrent items to coordinate properly.
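
For Dmitry's fragment that coordination could be a protected object --
a sketch, with P a made-up name:

   protected P is
      procedure Add (D : in Integer);
      function Value return Integer;
   private
      I : Integer := 0;
   end P;

   protected body P is
      procedure Add (D : in Integer) is
      begin
         I := I + D;  -- mutually exclusive with all other calls on P
      end Add;
      function Value return Integer is
      begin
         return I;
      end Value;
   end P;

The parallel alternatives then become P.Add (1) and P.Add (-1), and
the postcondition P.Value = 0 holds whatever the interleaving.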

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul.



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-07 18:26                                       ` Ray Blaak
@ 2007-03-07 19:03                                         ` Dr. Adrian Wrigley
  2007-03-07 19:55                                         ` Dmitry A. Kazakov
  1 sibling, 0 replies; 79+ messages in thread
From: Dr. Adrian Wrigley @ 2007-03-07 19:03 UTC (permalink / raw)


On Wed, 07 Mar 2007 18:26:08 +0000, Ray Blaak wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
>> Hmm, but checking is really the key issue here. I fail to see it in bare
>> PAR. It is just absent there. What would be the semantics of:
>> 
>> declare
>>    I : Integer := 0;
>> begin in parallel
>>    I := I + 1;
>>    I := I - 1;
>> end;  -- What is the postcondition here?
> 
> That would be bad programming, just as if you expressed it equivalently with
> explicit task bodies.

I think the issue is whether this is permitted, and whether compilers
can generally give a warning in such cases.  I think it's like
uninitialised variables - compilers often spot these and warn.
There are two problems in this case - reading and writing the same
variable in different statements, and writing to the same variable
twice. 

VHDL concurrent assignments can handle this multiple assignment very
nicely using signal resolution functions.  It would be nice if
software parallel constructs could do parallel combination,
such as "and"ing many boolean variables together, or computing
integer sums efficiently.  This would probably best be done by
an optimiser spotting specific patterns of use in serial code.
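
For illustration, the partial-sum form such an optimiser might substitute for
a serial summation (a sketch; Data, an array of 4*N Integers, and N are
assumed from context):

declare
   P       : constant := 4;  -- number of chunks, e.g. one per processor
   Partial : array (1 .. P) of Integer := (others => 0);
   Total   : Integer := 0;
begin
   -- The chunk sums are independent of one another, so these outer
   -- iterations could run concurrently:
   for K in 1 .. P loop
      for J in (K - 1) * N + 1 .. K * N loop
         Partial (K) := Partial (K) + Data (J);
      end loop;
   end loop;
   -- Serial combining step; associativity of "+" makes the split valid.
   for K in Partial'Range loop
      Total := Total + Partial (K);
   end loop;
end;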

Another problem not mentioned is memory allocation.  Once you
fire off hundreds of threads, memory use may escalate unexpectedly.
Holding back some memory intensive threads may help, provided you
don't create new threads using up the last available store.
--
Adrian



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-07 18:26                                       ` Ray Blaak
  2007-03-07 19:03                                         ` Dr. Adrian Wrigley
@ 2007-03-07 19:55                                         ` Dmitry A. Kazakov
  2007-03-07 20:17                                           ` Ray Blaak
  2007-03-07 20:18                                           ` Pascal Obry
  1 sibling, 2 replies; 79+ messages in thread
From: Dmitry A. Kazakov @ 2007-03-07 19:55 UTC (permalink / raw)


On Wed, 07 Mar 2007 18:26:08 GMT, Ray Blaak wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
>> Hmm, but checking is really the key issue here. I fail to see it in bare
>> PAR. It is just absent there. What would be the semantics of:
>> 
>> declare
>>    I : Integer := 0;
>> begin in parallel
>>    I := I + 1;
>>    I := I - 1;
>> end;  -- What is the postcondition here?
> 
> That would be bad programming, just as if you expressed it equivalently with
> explicit task bodies.

Why is it bad programming? Consider this:

declare
   Sum : Numeric := 0.0;
begin in parallel
   Sum := Sum + Integrate (Series (1..N));
   Sum := Sum + Integrate (Series (N+1..2*N));
   Sum := Sum + Integrate (Series (2*N+1..3*N));
   ...
end;

> Ada already has synchronization constructs that need to be used to allow
> dependent concurrent items to coordinate properly.

If so, then what would be the contribution of PAR? One important
proposition is that there is no use in PAR running absolutely
independent code. So the question arises: what *exactly* does PAR do with:

1. ":=" x 2 times
2. "+"
3. "-"
4. "I" x 4 times

?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-07 19:55                                         ` Dmitry A. Kazakov
@ 2007-03-07 20:17                                           ` Ray Blaak
  2007-03-08 10:06                                             ` Dmitry A. Kazakov
  2007-03-07 20:18                                           ` Pascal Obry
  1 sibling, 1 reply; 79+ messages in thread
From: Ray Blaak @ 2007-03-07 20:17 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> Why is it bad programming? Consider this:
> 
> declare
>    Sum : Numeric := 0.0;
> begin in parallel
>    Sum := Sum + Integrate (Series (1..N));
>    Sum := Sum + Integrate (Series (N+1..2*N));
>    Sum := Sum + Integrate (Series (2*N+1..3*N));
>    ...
> end;

This is also bad programming. The assignments are in parallel, overwriting
each other, and it is not clear in a given statement what initial value of Sum
is being worked with.

As shown, this example looks like it is after a sequential summation, and that
contradicts what the concurrent execution will do.

A better program would be:

 declare
    Partial1 : Numeric;
    Partial2 : Numeric;
    Partial3 : Numeric;
    Sum : Numeric;
 begin
   begin in parallel
     Partial1 := Integrate (Series (1..N));
     Partial2 := Integrate (Series (N+1..2*N));
     Partial3 := Integrate (Series (2*N+1..3*N));
   end;
   Sum := Partial1 + Partial2 + Partial3;
   ...
 end;

> > Ada already has synchronization constructs that need to be used to allow
> > dependent concurrent items to coordinate properly.
> 
> If so, then what would be the contribution of PAR? 

A syntactic convenience compared to tediously writing task bodies explicitly.
Consider the example immediately above written out explicitly (sketched below).
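
A sketch of that (the task names and the inner block, which supplies the
implicit wait, are mine):

declare
   Partial1, Partial2, Partial3 : Numeric;
   Sum : Numeric;
begin
   declare
      task T1;
      task T2;
      task T3;
      task body T1 is
      begin
         Partial1 := Integrate (Series (1..N));
      end T1;
      task body T2 is
      begin
         Partial2 := Integrate (Series (N+1..2*N));
      end T2;
      task body T3 is
      begin
         Partial3 := Integrate (Series (2*N+1..3*N));
      end T3;
   begin
      null;  -- the inner block completes only after T1, T2 and T3 finish
   end;
   Sum := Partial1 + Partial2 + Partial3;  -- safe: all writers have completed
end;

Each Partial is written by exactly one task and read only after the inner
block exits, so there is no race to sort out.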

> One important
> proposition is that there is no use in PAR running absolutely
> independent code. So the question arises: what *exactly* does PAR do with:
> 
> 1. ":=" x 2 times
> 2. "+"
> 3. "-"
> 4. "I" x 4 times

The short answer is nothing. PAR spawns some tasks and waits for them to
complete, nothing more or less.

Parallel programming is hard. Relying on implicit optimization is what I
recommend against, since in general it is a fiendishly complex problem to
sort out the interdependencies.

Keep PAR simple, and leave it to the programmer to figure out the
synchronization.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul.



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-07 19:55                                         ` Dmitry A. Kazakov
  2007-03-07 20:17                                           ` Ray Blaak
@ 2007-03-07 20:18                                           ` Pascal Obry
  2007-03-07 20:41                                             ` Dr. Adrian Wrigley
  1 sibling, 1 reply; 79+ messages in thread
From: Pascal Obry @ 2007-03-07 20:18 UTC (permalink / raw)
  To: mailbox

Dmitry A. Kazakov wrote:

> Why is it bad programming? Consider this:
> 
> declare
>    Sum : Numeric := 0.0;
> begin in parallel
>    Sum := Sum + Integrate (Series (1..N));
>    Sum := Sum + Integrate (Series (N+1..2*N));
>    Sum := Sum + Integrate (Series (2*N+1..3*N));
>    ...
> end;

   task type Integrate is
      entry Run (Series : in Series_Type);
      entry Result (Value : out Numeric);
   end Integrate;

   Max : constant := 30;
   T   : array (1 .. Max) of Integrate;

   declare
      Sum : Numeric := 0.0;
      S   : Numeric;
   begin
      for K in T'Range loop
         T (K).Run (Series ((K-1) * N + 1 .. K * N));
      end loop;

      for K in T'Range loop
         T (K).Result (S);
         Sum := Sum + S;
      end loop;
   end;

Typed directly from my mail client, sorry if it is not compiling :)

I don't see the point of PAR, which creates threads dynamically all the
time. A waste of time! The T tasks above can be reused for the whole
application lifetime if necessary.

Pascal.

-- 

--|------------------------------------------------------
--| Pascal Obry                           Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--|              http://www.obry.net
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-07 20:18                                           ` Pascal Obry
@ 2007-03-07 20:41                                             ` Dr. Adrian Wrigley
  2007-03-08  5:45                                               ` Randy Brukardt
  0 siblings, 1 reply; 79+ messages in thread
From: Dr. Adrian Wrigley @ 2007-03-07 20:41 UTC (permalink / raw)


On Wed, 07 Mar 2007 21:18:51 +0100, Pascal Obry wrote:

> I don't see the point of PAR which create dynamically threads all the
> time. A waste of time! The T tasks above can be reused during all the
> application lifetime if necessary.

???

PAR says "these statements can be done in any order, or concurrently".
It does not require threads to be created - that's entirely outside
the scope of the language.

One implementation might run the code sequentially on a CPU.
The compiler might spot code rearrangement, or common subexpressions
not guaranteed in sequential code.

Another implementation might convert simple statements into
combinatorial netlists for dynamic instantiation in an FPGA.
Complex statements would be handled another way.

Another might run code sequentially if the number of instructions
were below a threshold.  Above that threshold, the statements are
run concurrently if the average execution time exceeds a threshold
or there are currently fewer threads than processors.

Yet another might assign a thread statically to each concurrent statement
and build a software pipeline of processes to move data.  This way,
thread creation and destruction is eliminated, and data locality
may be improved.

There are other distinct alternatives.

It's not the language construct that would waste the time, but the
inappropriate implementation.

Dmitry wrote:
> If so, then what would be the contribution of PAR? One important
> proposition is that there is no use in PAR running absolutely
> independent code. So the question arises: what *exactly* does PAR do with:

The contribution is the very lightweight syntax and semantics.
One extra keyword, to permit a wide choice of implementations,
parallel, serial, hardware, pipeline, threaded etc. 

PAR is ideal for the common case of running absolutely independent code!
--
Adrian




^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-07 20:41                                             ` Dr. Adrian Wrigley
@ 2007-03-08  5:45                                               ` Randy Brukardt
  2007-03-08 10:06                                                 ` Dmitry A. Kazakov
  2007-03-08 18:08                                                 ` Ray Blaak
  0 siblings, 2 replies; 79+ messages in thread
From: Randy Brukardt @ 2007-03-08  5:45 UTC (permalink / raw)


"Dr. Adrian Wrigley" <amtw@linuxchip.demon.co.uk.uk.uk> wrote in message
news:pan.2007.03.07.20.42.03.883636@linuxchip.demon.co.uk.uk.uk...
...
> The contribution is the very lightweight syntax and semantics.
> One extra keyword, to permit a wide choice of implementations,
> parallel, serial, hardware, pipeline, threaded etc.
>
> PAR is ideal for the common case of running absolutely independent code!

But only if the code is guaranteed to be independent! Otherwise, all of
those different implementation techniques will give different results. And
relying on programmers to understand the many ways that this could go wrong
is not going to help any (or give this technique a good reputation, for that
matter).

If I were designing this sort of language feature, I would start with a very
strong set of restrictions known to prevent most trouble, and then would
look for additional things that could be allowed.

The starting point would be something like a parallel block:
      [declare in parallel
           declarative_part]
      begin in parallel
            sequence_of_statements;
      end in parallel;

where the declarative part could only include object declarations
initialized by parallel functions (new, see below), and the
sequence_of_statements could only be parallel procedure calls.
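
To make the shape concrete, a use might look like this (hypothetical syntax,
directly following the proposal; Integrate is assumed to be a parallel
function and Store a parallel procedure):

      declare in parallel
         A : Numeric := Integrate (Series (1..N));       -- may run concurrently
         B : Numeric := Integrate (Series (N+1..2*N));   -- with A
      begin in parallel
         Store (A);   -- parallel procedure calls only
         Store (B);
      end in parallel;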

A parallel subprogram would be defined by the keyword parallel. They would
be like a normal Ada subprogram, except:
    * Access to global variables is prohibited, other than protected
objects, atomic objects, and objects of a declared parallel type. Note that
this also includes global storage pools!
    * Parameters and results can only be by-copy types, protected objects,
and objects of a declared parallel type.
These restrictions would be somewhat like those of a pure package, but would
be aimed at ensuring that only objects that are safe to be accessed
concurrently could be accessed. (It would still be possible to get in
trouble by having objects accessed in different orders in different
subprograms - which could matter in parallel execution - but I don't think
it is practical to prevent that. It's likely that the operations will need
to access some common data store, and rules that did not allow that could
not go anywhere.)

A "declared parallel type" would be a private type that had a pragma that
declared that all of it's operations were task-safe. (That's needed to
provide containers that could be used in parallel subprograms, for
instance.) Precisely what that would mean, I'll leave for some other time.
(It would be possible to survive without this, as you could try to use
protected objects and interfaces for everything. That sounds like a pain to
me...)
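
A sketch of how such a type might be declared (the pragma and its name are
hypothetical; the completion wraps a protected object):

package Parallel_Counting is
   type Counter is limited private;
   pragma Parallel (Counter);  -- hypothetical: asserts all operations are task-safe

   procedure Increment (C : in out Counter);
   function  Value     (C : Counter) return Natural;
private
   protected type Guarded is
      procedure Increment;
      function Value return Natural;
   private
      Count : Natural := 0;
   end Guarded;

   type Counter is limited record
      Impl : Guarded;
   end record;
end Parallel_Counting;

The visible operations would simply forward to Impl, so every use is one
protected action.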

Humm, I think you'd actually need two separate blocks: "declare in parallel"
(and execute statements sequentially) and "begin in parallel" (with
sequential declarations). There seems to be a sequential (combining) stage
that follows the parallel part in most of these algorithms.

Anyway, food for thought...

                      Randy.







^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-07 20:17                                           ` Ray Blaak
@ 2007-03-08 10:06                                             ` Dmitry A. Kazakov
  2007-03-08 18:03                                               ` Ray Blaak
  0 siblings, 1 reply; 79+ messages in thread
From: Dmitry A. Kazakov @ 2007-03-08 10:06 UTC (permalink / raw)


On Wed, 07 Mar 2007 20:17:40 GMT, Ray Blaak wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
>> Why is it bad programming? Consider this:
>> 
>> declare
>>    Sum : Numeric := 0.0;
>> begin in parallel
>>    Sum := Sum + Integrate (Series (1..N));
>>    Sum := Sum + Integrate (Series (N+1..2*N));
>>    Sum := Sum + Integrate (Series (2*N+1..3*N));
>>    ...
>> end;
> 
> This is also bad programming. The assignments are in parallel, overwriting
> each other, and it is not clear in a given statement what initial value of Sum
> is being worked with.

That is what PAR would have to define.

> As shown, this example looks like it is after a sequential summation, and that
> contradicts what the concurrent execution will do.

No, it is concurrent incrementing of an atomic object Sum. An equivalent
would be:

protected type Accumulator is
   procedure Increment (By : Numeric);
private
   Value : Numeric := 0.0;
end Accumulator;
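
A body for this sketch (my addition; the Value component is assumed as above):

protected body Accumulator is
   procedure Increment (By : Numeric) is
   begin
      Value := Value + By;  -- each increment is one indivisible protected action
   end Increment;
end Accumulator;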
 
>> If so, then what would be the contribution of PAR? 
> 
> A syntactic convenience to tediously writing task bodies explicitly. Consider
> the immediate example above written out explicitly.

In this form it would hardly be convenient.

> The short answer is nothing. PAR spawns some tasks and waits for them to
> complete, nothing more or less.
> 
> Parallel programing is hard. Relying on implicit optimization is what I
> recommend against, since in general it is a fiendishly complex problem as to
> how to sort out the interdependencies.
> 
> Keep PAR simple, and leave it to the programmer to figure out the
> synchronization.

If the programmer has to figure out the synchronization, then we are back
where we started, before fine-grained parallelism. See Pascal's example.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-08  5:45                                               ` Randy Brukardt
@ 2007-03-08 10:06                                                 ` Dmitry A. Kazakov
  2007-03-10  1:58                                                   ` Randy Brukardt
  2007-03-08 18:08                                                 ` Ray Blaak
  1 sibling, 1 reply; 79+ messages in thread
From: Dmitry A. Kazakov @ 2007-03-08 10:06 UTC (permalink / raw)


On Wed, 7 Mar 2007 23:45:23 -0600, Randy Brukardt wrote:

> "Dr. Adrian Wrigley" <amtw@linuxchip.demon.co.uk.uk.uk> wrote in message
> news:pan.2007.03.07.20.42.03.883636@linuxchip.demon.co.uk.uk.uk...
> ...
>> The contribution is the very lightweight syntax and semantics.
>> One extra keyword, to permit a wide choice of implementations,
>> parallel, serial, hardware, pipeline, threaded etc.
>>
>> PAR is ideal for the common case of running absolutely independent code!
> 
> But only if the code is guaranteed to be independent! Otherwise, all of
> those different implementation techniques will give different results. And
> relying on programmers to understand the many ways that this could go wrong
> is not going to help any (or give this technique a good reputation, for that
> matter).
> 
> If I were designing this sort of language feature, I would start with a very
> strong set of restrictions known to prevent most trouble, and then would
> look for additional things that could be allowed.

Absolutely
 
> The starting point would be something like a parallel block:
>       [declare in parallel
>            declarative_part]
>       begin in parallel
>             sequence_of_statements;
>       end in parallel;
> 
> where the declarative part could only include object declarations
> initialized by parallel functions (new, see below), and the
> sequence_of_statements could only be parallel procedure calls.
> 
> A parallel subprogram would be defined by the keyword parallel. They would
> be like a normal Ada subprogram, except:
>     * Access to global variables is prohibited, other than protected
> objects, atomic objects, and objects of a declared parallel type. Note that
> this also includes global storage pools!
>     * Parameters and results can only be by-copy types, protected objects,
> and objects of a declared parallel type.
> These restrictions would be somewhat like those of a pure package, but would
> be aimed at ensuring that only objects that are safe to be accessed
> concurrently could be accessed. (It would still be possible to get in
> trouble by having objects accessed in different orders in different
> subprograms - which could matter in parallel execution - but I don't think
> it is practical to prevent that. It's likely that the operations will need
> to access some common data store, and rules that did not allow that could
> not go anywhere.)
> 
> A "declared parallel type" would be a private type that had a pragma that
> declared that all of its operations were task-safe. (That's needed to
> provide containers that could be used in parallel subprograms, for
> instance.) Precisely what that would mean, I'll leave for some other time.
> (It would be possible to survive without this, as you could try to use
> protected objects and interfaces for everything. That sounds like a pain to
> me...)

That's the big question. Is task-safety a property of the type or of the
object? The fine-grained approach presumes that task-safety could be
hung on objects later on, i.e. sort of:

declare parallel in parallel -- (:-))
   I : Integer; -- Same as if it were declared atomic

Since Ada 83 the language has moved in the opposite direction, IMO. I suppose
because there was no obvious way of making an unsafe ADT safe.

The problem with container types (and all other more or less elaborate
types) is that fine-grained concurrency does not work for them. That
was the reason for my original claim that PAR would be worth nothing without
a feasible concept of synchronization somehow different from the protected
actions and rendezvous we have now.

> Humm, I think you'd actually need two separate blocks: "declare in parallel"
> (and execute statements sequentially) and "begin in parallel" (with
> sequential declarations). There seems to be a sequential (combining) stage
> that follows the parallel part in most of these algorithms.

> Anyway, food for thought...

Yes, indeed. 

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-08 10:06                                             ` Dmitry A. Kazakov
@ 2007-03-08 18:03                                               ` Ray Blaak
  0 siblings, 0 replies; 79+ messages in thread
From: Ray Blaak @ 2007-03-08 18:03 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
> On Wed, 07 Mar 2007 20:17:40 GMT, Ray Blaak wrote:
> > This is also bad programming. The assignments are in parallel, overwriting
> > each other, and it is not clear in a given statement what initial value of Sum
> > is being worked with.
> 
> That is what PAR would have to define.

You are welcome to give it a shot :-). 

I am seriously interested in reading and commenting on any attempts.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul.



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-08  5:45                                               ` Randy Brukardt
  2007-03-08 10:06                                                 ` Dmitry A. Kazakov
@ 2007-03-08 18:08                                                 ` Ray Blaak
  2007-03-10  1:50                                                   ` Randy Brukardt
  1 sibling, 1 reply; 79+ messages in thread
From: Ray Blaak @ 2007-03-08 18:08 UTC (permalink / raw)


"Randy Brukardt" <randy@rrsoftware.com> writes:
> A parallel subprogram would be defined by the keyword parallel. They would
> be like a normal Ada subprogram, except:
>     * Access to global variables is prohibited, other than protected
> objects, atomic objects, and objects of a declared parallel type. Note that
> this also includes global storage pools!

I understand the reason for this restriction, but fear that it is not useful
in practice. This would prevent the use of regular library calls, unless those
libraries are pure. Is that reasonable?

Hmm, maybe those libraries can be accessed via calls from protected objects
only?

I am wondering if this restriction makes things too onerous for the programmer
to use the library environment they have access to, especially as compared to
normal sequential programming.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul.



^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-08 18:08                                                 ` Ray Blaak
@ 2007-03-10  1:50                                                   ` Randy Brukardt
  0 siblings, 0 replies; 79+ messages in thread
From: Randy Brukardt @ 2007-03-10  1:50 UTC (permalink / raw)


"Ray Blaak" <rAYblaaK@STRIPCAPStelus.net> wrote in message
news:uvehbtxpn.fsf@STRIPCAPStelus.net...
> "Randy Brukardt" <randy@rrsoftware.com> writes:
> > A parallel subprogram would be defined by the keyword parallel. They would
> > be like a normal Ada subprogram, except:
> >     * Access to global variables is prohibited, other than protected
> > objects, atomic objects, and objects of a declared parallel type. Note that
> > this also includes global storage pools!
>
> I understand the reason for this restriction, but fear that it is not useful
> in practice. This would prevent the use of regular library calls, unless those
> libraries are pure. Is that reasonable?

I think so (see below).

> Hmm, maybe those libraries can be accessed via calls from protected objects
> only?

Right. And I'd expect "parallel" versions of many things to be constructed.
(Surely containers.) Intrinsics like "+" and "**" would be defined to be
"parallel". Everything in a pure package could be considered "parallel" as
well (and if pure functions were given a first-class definition, they could
be too -- but only if they are checked).

I neglected the obvious restriction that a parallel routine isn't allowed to
call a non-parallel routine (just like a pure package can't depend on an
impure package). Else the restrictions could be trivially circumvented.

> I am wondering if this restriction makes things too onerous for the programmer
> to use the library environment they have access to, especially as compared to
> normal sequential programming.

Well, that's the big question, isn't it? But I don't think it is necessarily
impossible; in this sense it is much like SPARK, which adds a lot of
restrictions to Ada in order to get the ability to formally prove the
programs (and in a reasonable amount of time). People are willing to use
that, why not parallel restrictions? It just depends on the possible gain.

                         Randy.







^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-08 10:06                                                 ` Dmitry A. Kazakov
@ 2007-03-10  1:58                                                   ` Randy Brukardt
  2007-03-10  9:11                                                     ` Dmitry A. Kazakov
  0 siblings, 1 reply; 79+ messages in thread
From: Randy Brukardt @ 2007-03-10  1:58 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
news:1jlhu0mtzerde.1m8xbk8idroav$.dlg@40tude.net...
> On Wed, 7 Mar 2007 23:45:23 -0600, Randy Brukardt wrote:
...
> > A "declared parallel type" would be a private type that had a pragma
that
> > declared that all of it's operations were task-safe. (That's needed to
> > provide containers that could be used in parallel subprograms, for
> > instance.) Precisely what that would mean, I'll leave for some other
time.
> > (It would be possible to survive without this, as you could try to use
> > protected objects and interfaces for everything. That sounds like a pain
to
> > me...)
>
> That's the big question. Is task-safety a property of the type or of the
> object?

For Ada, it's clearly a property of the type, except for elementary types
(for which you can declare objects atomic).

> The fine-grained approach presumes that task-safety could be
> hung on objects later on, i.e. sort of:
>
> declare parallel in parallel -- (:-))
>    I : Integer; -- Same as if it were declared atomic

I don't believe this statement; I don't see any reason that you couldn't use
the Ada approach with fine-grained parallelism.

> Since Ada 83 the language has moved in the opposite direction, IMO. I suppose
> because there was no obvious way of making an unsafe ADT safe.

Right.

> The problem with container types (and all other more or less elaborate
> types) is that fine-grained concurrency does not work for them. That
> was the reason for my original claim that PAR would be worth nothing without
> a feasible concept of synchronization somehow different from the protected
> actions and rendezvous we have now.

But it surely is possible to design ADTs that *do* work with fine-grained
(and heavy-grained!) parallelism. That requires some locking (I usually use
explicit locks, because I want to control the behavior in ways that
protected objects aren't good at. The locks are safe in that I always use
controlled locks that lock when initialized and unlock when finalized, so
that they don't get held forever if an exception or abort happens). I
certainly agree you can't add that sort of stuff later: it has to be baked
in right away - it's a critical part of the design.
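
A minimal sketch of that controlled-lock idiom (the names are mine; Mutex is
the usual seize/release protected object):

with Ada.Finalization;
package Locking is
   protected type Mutex is
      entry Seize;          -- blocks until the lock is free
      procedure Release;
   private
      Locked : Boolean := False;
   end Mutex;

   -- Declaring a Holder seizes the lock; Finalize releases it even when
   -- the scope is left by exception or abort, so the lock cannot leak.
   type Holder (Lock : access Mutex) is
      new Ada.Finalization.Limited_Controlled with null record;
   procedure Initialize (H : in out Holder);
   procedure Finalize   (H : in out Holder);
end Locking;

package body Locking is
   protected body Mutex is
      entry Seize when not Locked is
      begin
         Locked := True;
      end Seize;
      procedure Release is
      begin
         Locked := False;
      end Release;
   end Mutex;

   procedure Initialize (H : in out Holder) is
   begin
      H.Lock.Seize;
   end Initialize;

   procedure Finalize (H : in out Holder) is
   begin
      H.Lock.Release;
   end Finalize;
end Locking;

Usage: declare H : Locking.Holder (Some_Mutex'Access); the critical section
is then the rest of the enclosing scope.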

And if you have parallel-safe ADTs, then fine-grained parallelism should
work, and it ought to be easier to program than traditional Ada tasks, as
there wouldn't be any explicit control flow interactions. So it seems
interesting to pursue...

                             Randy.





^ permalink raw reply	[flat|nested] 79+ messages in thread

* Re: PAR (Was: Embedded languages based on early Ada)
  2007-03-10  1:58                                                   ` Randy Brukardt
@ 2007-03-10  9:11                                                     ` Dmitry A. Kazakov
  0 siblings, 0 replies; 79+ messages in thread
From: Dmitry A. Kazakov @ 2007-03-10  9:11 UTC (permalink / raw)


On Fri, 9 Mar 2007 19:58:53 -0600, Randy Brukardt wrote:

> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
> news:1jlhu0mtzerde.1m8xbk8idroav$.dlg@40tude.net...

>> The fine-grained approach presumes that task-safety could be
>> hung on objects later on, i.e. sort of:
>>
>> declare parallel in parallel -- (:-))
>>    I : Integer; -- Same as if it were declared atomic
> 
> I don't believe this statement; I don't see any reason that you couldn't use
> the Ada approach with fine-grained parallelism.

Because it would not be magical! (:-)) In my understanding the idea of
fine-grained parallelism is similar to instruction pipelining. I don't need
to care about it when I write an Ada program. I define the constraints within
which the compiler/CPU may move (the language semantics) and let them
figure out the path they take. If I had to care about it, it would
not be "fine" anymore.

>> The problem with container types (and all other more or less elaborate
>> types) is that fine-grained concurrency does not work for them. That
>> was the reason for my original claim that PAR would be worth nothing without
>> a feasible concept of synchronization somehow different from the protected
>> actions and rendezvous we have now.
> 
> But it surely is possible to design ADTs that *do* work with fine-grained
> (and heavy-grained!) parallelism.

Sure, the language must be self-consistent, so any forms of parallelism
must be mutually compatible.

> That requires some locking (I usually use
> explicit locks, because I want to control the behavior in ways that
> protected objects aren't good at. The locks are safe in that I always use
> controlled locks that lock when initialized and unlock when finalized, so
> that they don't get held forever if an exception or abort happens). I
> certainly agree you can't add that sort of stuff later: it has to be baked
> in right away - it's a critical part of the design.

The problems with locks are:

1. Semaphores are low-level and exposed to the danger of deadlocks

2. Priority inversion issues

3. They rely on computationally heavy OS calls and resources

4. Controlled objects used to access locks are slow (OK, that's an Ada
problem; it need not be so, and can be fixed)

I think that for expressing "magical" parallelism one would inevitably need
some "magical" synchronization tools as well...

> And if you have parallel-safe ADTs, then fine-grained parallelism should
> work, and it ought to be easier to program than traditional Ada tasks, as
> there wouldn't be any explicit control flow interactions. So it seems
> interesting to pursue...

Yes

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 79+ messages in thread

end of thread, other threads:[~2007-03-10  9:11 UTC | newest]

Thread overview: 79+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-02-23  0:59 Preferred OS, processor family for running embedded Ada? Mike Silva
2007-02-23  4:41 ` Steve
2007-02-23 16:00   ` Mike Silva
2007-02-23  4:49 ` Jeffrey R. Carter
2007-02-23 13:13   ` Mike Silva
2007-02-23 13:56 ` Stephen Leake
2007-02-23 14:10   ` Mike Silva
2007-02-24 10:45     ` Stephen Leake
2007-02-24 12:27       ` Jeffrey Creem
2007-02-24 22:10         ` Dr. Adrian Wrigley
2007-02-25 13:10           ` roderick.chapman
2007-02-25 17:53             ` Jeffrey R. Carter
2007-02-25 15:08           ` Stephen Leake
2007-02-28 17:20             ` Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?") Colin Paul Gloster
2007-03-01  9:18               ` Jean-Pierre Rosen
2007-03-01 11:22               ` Dr. Adrian Wrigley
2007-03-01 11:47                 ` claude.simon
2007-03-01 13:57                 ` Dmitry A. Kazakov
2007-03-01 18:09                   ` Ray Blaak
2007-03-02 11:36                   ` Dr. Adrian Wrigley
2007-03-02 16:32                     ` Dmitry A. Kazakov
2007-03-03  0:00                       ` Dr. Adrian Wrigley
2007-03-03 11:00                         ` Dmitry A. Kazakov
2007-03-03 11:27                           ` Jonathan Bromley
2007-03-03 12:12                             ` Simon Farnsworth
2007-03-03 14:07                               ` Dr. Adrian Wrigley
2007-03-03 17:28                                 ` Pascal Obry
2007-03-03 18:11                                   ` Dmitry A. Kazakov
2007-03-03 18:31                                     ` Pascal Obry
2007-03-03 20:26                                       ` Dmitry A. Kazakov
2007-03-03 21:28                                   ` Dr. Adrian Wrigley
2007-03-03 22:00                                     ` Pascal Obry
2007-03-03 13:40                             ` Dr. Adrian Wrigley
2007-03-03 15:26                               ` Jonathan Bromley
2007-03-03 16:59                                 ` Dr. Adrian Wrigley
2007-03-05 15:36                               ` Colin Paul Gloster
2007-03-03  1:58                       ` Ray Blaak
2007-03-03  8:14                         ` Pascal Obry
2007-03-03 11:00                         ` Dmitry A. Kazakov
2007-03-03 21:13                           ` Ray Blaak
2007-03-05 19:01                             ` PAR (Was: Embedded languages based on early Ada) Jacob Sparre Andersen
2007-03-06  2:01                               ` Dr. Adrian Wrigley
2007-03-06  3:30                                 ` Randy Brukardt
2007-03-06  7:10                                   ` Ray Blaak
2007-03-06 18:05                                     ` Ray Blaak
2007-03-06  6:04                                 ` tmoran
2007-03-06  6:59                                 ` Ray Blaak
2007-03-06  7:07                                 ` Ray Blaak
2007-03-06  7:22                                 ` Martin Krischik
2007-03-06 13:18                                 ` Dr. Adrian Wrigley
2007-03-06 18:16                                   ` Ray Blaak
2007-03-06 23:49                                   ` Randy Brukardt
2007-03-07  8:59                                     ` Dmitry A. Kazakov
2007-03-07 18:26                                       ` Ray Blaak
2007-03-07 19:03                                         ` Dr. Adrian Wrigley
2007-03-07 19:55                                         ` Dmitry A. Kazakov
2007-03-07 20:17                                           ` Ray Blaak
2007-03-08 10:06                                             ` Dmitry A. Kazakov
2007-03-08 18:03                                               ` Ray Blaak
2007-03-07 20:18                                           ` Pascal Obry
2007-03-07 20:41                                             ` Dr. Adrian Wrigley
2007-03-08  5:45                                               ` Randy Brukardt
2007-03-08 10:06                                                 ` Dmitry A. Kazakov
2007-03-10  1:58                                                   ` Randy Brukardt
2007-03-10  9:11                                                     ` Dmitry A. Kazakov
2007-03-08 18:08                                                 ` Ray Blaak
2007-03-10  1:50                                                   ` Randy Brukardt
2007-03-05 15:23                   ` Embedded languages based on early Ada (from "Re: Preferred OS, processor family for running embedded Ada?") Colin Paul Gloster
2007-03-06  0:31                     ` Dr. Adrian Wrigley
2007-03-01 16:09                 ` Colin Paul Gloster
2007-03-01 13:23               ` Martin Thompson
2007-02-26 16:34           ` Preferred OS, processor family for running embedded Ada? Jean-Pierre Rosen
2007-02-26 21:18             ` Dr. Adrian Wrigley
2007-02-27 15:39               ` Jean-Pierre Rosen
2007-02-28 12:25                 ` Jerome Hugues
2007-02-24 19:11       ` Mike Silva
2007-02-24 13:59     ` Jacob Sparre Andersen
2007-03-01 19:32       ` Jacob Sparre Andersen
2007-03-01 20:22         ` Mike Silva

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox