comp.lang.ada
* Ada lacks lighterweight-than-task parallelism
@ 2018-06-19 22:14 Dan'l Miller
  2018-06-19 22:23 ` Dan'l Miller
                   ` (4 more replies)
  0 siblings, 5 replies; 39+ messages in thread
From: Dan'l Miller @ 2018-06-19 22:14 UTC (permalink / raw)


http://www.theregister.co.uk/2018/06/18/microsoft_e2_edge_windows_10

As discussed in the article above, Microsoft is starting to unveil its formerly-secret development of what could be described as “Itanium done right”.

Whereas Itanium force-fit VLIW into an otherwise traditional ISA, with 8 concurrent sub-instructions per VLIW-instruction (and whereas most simultaneous multithreading [SMT] processor cores on Intel, AMD, and IBM POWER processors can execute only 2 threads per core), Microsoft's Explicit Data Graph Execution (EDGE) processor slices a program into up to 32 independent slices per thread/task.

Whereas Itanium insisted on the •compiler• at engineering-time finding 8 independent slices within a program, EDGE performs the analogue of such analysis in hardware at runtime—hence, overcoming one of Itanium's fatal flaws.  (The other fatal flaw on which Itanium was misfounded was that the compiler could not predict at engineering-time the wildly varying latencies of L1 cache versus L2 cache versus L3 cache versus local DRAM versus ccNUMA-network DRAM; EDGE appears to mitigate that as well by apparently picking slices with the same timing characteristics or compatibly-interleavable timing characteristics.)

And the EDGE processor is currently being sampled on a 10 nm manufacturing process through Qualcomm.

And Windows 10 has been successfully ported to the EDGE processor (which is starting to look like a future product announcement for some forthcoming new Microsoft device).

And voilà: neither Ada nor POSIX threads has, on the front end, any representation of slices.  Nor does any extant Ada compiler have a representation of slices on the back end—which is a true shame, because slices would have been one of the enabling technologies for Itanium 18 (!) years ago.  18 years ago.  I think Rumpelstiltskin slept for less time than that.

And this initial era of EDGE appears to be single-core.  Imagine 2-core and 4-core and 8-core editions in the future for 64-, 96-, and 128-slice parallelism.

Ada for the Cold War in 1983 would focus on tasks as the only vehicle for parallelism.  Ada for the 21st century would also embrace/facilitate slices somehow, via some sort of locality of reference or via some sort of demarcation of independence.



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-19 22:14 Ada lacks lighterweight-than-task parallelism Dan'l Miller
@ 2018-06-19 22:23 ` Dan'l Miller
  2018-06-20  0:03 ` Dan'l Miller
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 39+ messages in thread
From: Dan'l Miller @ 2018-06-19 22:23 UTC (permalink / raw)


On Tuesday, June 19, 2018 at 5:14:17 PM UTC-5, Dan'l Miller wrote:
> Imagine 2-core and 4-core and 8-core editions in the future for 64-, 96-, and 128-slice parallelism.

Or better yet, imagine an EDGE processor that gets the arithmetic right on the fly better than I do:

Imagine 2-core, 4-core, 6-core, and 8-core editions in the future for 64-, 128-, 192-, and 256-slice parallelism.



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-19 22:14 Ada lacks lighterweight-than-task parallelism Dan'l Miller
  2018-06-19 22:23 ` Dan'l Miller
@ 2018-06-20  0:03 ` Dan'l Miller
  2018-06-20  0:41 ` Lucretia
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 39+ messages in thread
From: Dan'l Miller @ 2018-06-20  0:03 UTC (permalink / raw)


On Tuesday, June 19, 2018 at 5:14:17 PM UTC-5, Dan'l Miller wrote:
>  which is a true shame because slices would have been one of the enabling technologies for
> Itanium 18 (!) years ago.  18 years ago.  I think Rumpelstiltskin slept for less time duration than that.

On 2nd thought:  Rip van Winkle



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-19 22:14 Ada lacks lighterweight-than-task parallelism Dan'l Miller
  2018-06-19 22:23 ` Dan'l Miller
  2018-06-20  0:03 ` Dan'l Miller
@ 2018-06-20  0:41 ` Lucretia
  2018-06-20  1:36   ` Dan'l Miller
  2018-06-20  1:12 ` Shark8
  2018-06-20 12:28 ` Brian Drummond
  4 siblings, 1 reply; 39+ messages in thread
From: Lucretia @ 2018-06-20  0:41 UTC (permalink / raw)


On Tuesday, 19 June 2018 23:14:17 UTC+1, Dan'l Miller  wrote:
> http://www.theregister.co.uk/2018/06/18/microsoft_e2_edge_windows_10
> 
> As discussed in the article above, Microsoft is starting to unveil its formerly-secret development of what could be described as “Itanium done right“.

One of the casualties of the Titanic was PA-RISC; HP should resurrect and redesign PA-RISC for the 21st century.



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-19 22:14 Ada lacks lighterweight-than-task parallelism Dan'l Miller
                   ` (2 preceding siblings ...)
  2018-06-20  0:41 ` Lucretia
@ 2018-06-20  1:12 ` Shark8
  2018-06-20  1:41   ` Dan'l Miller
  2018-06-20 12:28 ` Brian Drummond
  4 siblings, 1 reply; 39+ messages in thread
From: Shark8 @ 2018-06-20  1:12 UTC (permalink / raw)


On Tuesday, June 19, 2018 at 4:14:17 PM UTC-6, Dan'l Miller wrote:
> 
> Ada for the Cold War in 1983 would focus on tasks as the only vehicle for parallelism.  Ada for the 21st century would also embrace/facilitate slices somehow, via some sort of locality of reference or via some sort of demarcation of independence.

Don't hate on TASK!
TASK is a great construct, and particularly good for:
1) Isolating and/or interfacing both subsystems and jobs, with the possibility of taking advantage of multiprocessor capability;
2) Implementing protocols, via the ACCEPT construct; and
3) Expressing parallelism at a high level, rather than the "annotated GPU assembly" we get with (e.g.) CUDA/C.

As for something lightweight, we're working on that in the ARG right now:
* PARALLEL DO blocks,
* Parallel LOOPs [IIRC, it might be just FOR], and
* And some other things like operators, blocking-detection, etc.
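
A rough sketch of what that might look like, going from memory of the current proposals (the exact keywords and legality rules are still being worked out in the ARG, so treat this purely as illustration):

   declare
      type Vector is array (Positive range <>) of Float;
      Data        : Vector (1 .. 1_000_000) := (others => 1.0);
      Left, Right : Float := 0.0;
   begin
      --  Proposed parallel loop: the iterations may be spread across
      --  lightweight "tasklets" by the implementation.
      parallel
      for I in Data'Range loop
         Data (I) := Data (I) * 2.0;
      end loop;

      --  Proposed parallel block: the two arms may run concurrently.
      parallel do
         Left := Data (Data'First);
      and
         Right := Data (Data'Last);
      end do;
   end;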


* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-20  0:41 ` Lucretia
@ 2018-06-20  1:36   ` Dan'l Miller
  2018-06-20 13:39     ` Luke A. Guest
  0 siblings, 1 reply; 39+ messages in thread
From: Dan'l Miller @ 2018-06-20  1:36 UTC (permalink / raw)


On Tuesday, June 19, 2018 at 7:41:21 PM UTC-5, Lucretia wrote:
> On Tuesday, 19 June 2018 23:14:17 UTC+1, Dan'l Miller  wrote:
> > http://www.theregister.co.uk/2018/06/18/microsoft_e2_edge_windows_10
> > 
> > As discussed in the article above, Microsoft is starting to unveil its formerly-secret development of what could be described as “Itanium done right“.
> 
> One of the casualties of the Titanic was PA-RISC, HP should resurrect and redesign PA-RISC for the 21st
> century.

HP and HPE are 2 of Microsoft's closest partners-of-allied-vision (Intel not as much nowadays), both on servers and on mobile consumer products.  I think that (despite being right on the heels of winning the multi-billion-dollar court judgement against Oracle) HPE's disinterest in continuing to board the sinking good ship Itanic occurred at precisely the same time as the rise of the secret EDGE-processor project at Microsoft.  (I need to investigate who funded the university work from which EDGE is derived.)  I think that EDGE might have HPE's post-Itanic fingerprints all over it.  We will know for certain of an HPE–Microsoft joint effort if a server version of EDGE supports technology extracted from HPE's The Machine, e.g., resistive associative RAM.

In other words, your tongue-in-cheek comment is likely what this EDGE-processor is all about for HPE, minus the PA-RISC ISA itself: a non-Intel, non-ARM, non-SPARC, non-POWER replacement to the PA-RISC/Itanium vision for servers.



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-20  1:12 ` Shark8
@ 2018-06-20  1:41   ` Dan'l Miller
  2018-06-20  7:13     ` Dmitry A. Kazakov
  0 siblings, 1 reply; 39+ messages in thread
From: Dan'l Miller @ 2018-06-20  1:41 UTC (permalink / raw)


On Tuesday, June 19, 2018 at 8:12:02 PM UTC-5, Shark8 wrote:
> On Tuesday, June 19, 2018 at 4:14:17 PM UTC-6, Dan'l Miller wrote:
> > 
> > Ada for the Cold War in 1983 would focus on tasks as the only vehicle for parallelism.  Ada for the 21st century would also embrace/facilitate slices somehow, via some sort of locality of reference or via some sort of demarcation of independence.
> 
> Don't hate on TASK!
> TASK is a great construct, and particularly good for:
> 1) Isolating and/or interfacing both subsystems and jobs, with the possibility of taking advantage of multiprocessor capability;
> 2) Implementing protocols, via the ACCEPT construct; and
> 3) at a high-level rather than the "annotated GPU assembly" we get with (eg) CUDA/C.

I'm not saying anything negative about tasks.  I am just saying that there should be more games to play in the casino than merely one and only one.

> As for something lightweight, we're working on that in the ARG right now:
> * PARALLEL DO blocks,
> * Parallel LOOPs [IIRC, it might be just FOR], and
> * And some other things like operators, blocking-detection, etc.

Excellent!  These should produce slices for numeric computations.  I am not as sure that they alone(!) will necessarily produce usable slices regarding non-numeric arbitrary processing via deep call-trees of subprograms.



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-20  1:41   ` Dan'l Miller
@ 2018-06-20  7:13     ` Dmitry A. Kazakov
  2018-06-20 12:03       ` Dan'l Miller
  0 siblings, 1 reply; 39+ messages in thread
From: Dmitry A. Kazakov @ 2018-06-20  7:13 UTC (permalink / raw)


On 2018-06-20 03:41, Dan'l Miller wrote:
> On Tuesday, June 19, 2018 at 8:12:02 PM UTC-5, Shark8 wrote:

> I'm not saying anything negative about tasks.  I am just saying that there should be more games to play in the casino than merely one and only one.
> 
>> As for something lightweight, we're working on that in the ARG right now:
>> * PARALLEL DO blocks,
>> * Parallel LOOPs [IIRC, it might be just FOR], and
>> * And some other things like operators, blocking-detection, etc.
> 
> Excellent!  These should produce slices for numeric computations.  I am not as sure that they alone(!) will necessarily produce usable slices regarding non-numeric arbitrary processing via deep call-trees of subprograms.

So it is not "lightweight", it is "scalar" parallelism.

The latter is quite a niche and IMO has no place in the language, being 
too low-level. It would better be handled in the form of hints for the 
compiler to deploy a certain form of optimization.
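
Something in this spirit, say (the pragma name and the surrounding names 
are invented here, purely to illustrate what such a hint could look like):

   --  Hint to the optimizer: the loop body has no cross-iteration
   --  dependencies, so it may vectorize or parallelize if profitable.
   pragma Assume_Independent_Iterations;   --  invented, not real Ada
   for I in Samples'Range loop
      Output (I) := Filter (Samples (I));
   end loop;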

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-20  7:13     ` Dmitry A. Kazakov
@ 2018-06-20 12:03       ` Dan'l Miller
  2018-06-20 12:29         ` Dmitry A. Kazakov
  2018-06-21  0:17         ` Shark8
  0 siblings, 2 replies; 39+ messages in thread
From: Dan'l Miller @ 2018-06-20 12:03 UTC (permalink / raw)


On Wednesday, June 20, 2018 at 2:13:24 AM UTC-5, Dmitry A. Kazakov wrote:
> On 2018-06-20 03:41, Dan'l Miller wrote:
> It better be handled in the form of hints for the compiler 
> to deploy a certain form of optimization.

So, does Ada have those hints now, or will it in Ada 2020?



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-19 22:14 Ada lacks lighterweight-than-task parallelism Dan'l Miller
                   ` (3 preceding siblings ...)
  2018-06-20  1:12 ` Shark8
@ 2018-06-20 12:28 ` Brian Drummond
  2018-06-21  1:51   ` Dan'l Miller
  4 siblings, 1 reply; 39+ messages in thread
From: Brian Drummond @ 2018-06-20 12:28 UTC (permalink / raw)


On Tue, 19 Jun 2018 15:14:16 -0700, Dan'l Miller wrote:

> http://www.theregister.co.uk/2018/06/18/microsoft_e2_edge_windows_10
> 
> As discussed in the article above, Microsoft is starting to unveil its
> formerly-secret development of what could be described as “Itanium done
> right“.

wait what? ... JAN GRAY? 

(in the Further Reading section) breadcrumbs to
https://arxiv.org/abs/1803.06617

"Design productivity is still a challenge for reconfigurable
computing. It is expensive to port workloads into gates and
to endure 10**2 to 10**4 second bitstream rebuild design itera-
tions." 

( ... no kidding, but tolerable where it improves program execution times 
below 10**6 or 10**7 seconds)

so this is primarily work that emerged from the RC shadows, where for the 
past quarter century, people like JG have exploited parallelism not at 
the task level or even the "slice" level but at the gate level where that 
helps...

and where one of the chief difficulties has been the interface between 
that (unconstrained) level and the tightly constrained (single operation 
stream from the compiler, reverse engineered into out-of-order 
superscalar within the CPU) level

and where some other efforts to smooth the way between parallelism 
domains are still ongoing...
https://www.extremetech.com/computing/269461-intel-shows-off-xeon-scalable-gold-6138p-with-an-integrated-fpga
https://www.nextplatform.com/2018/05/24/a-peek-inside-that-intel-xeon-fpga-hybrid-chip/
(I'm imagining on-chip PCIE links working like the old Transputer 
channels here, but streaming data to/from the bespoke hardware engine 
directly, much less overhead than I used to have, doing RC with external 
FPGA boards)

... if there are Ada dimensions here, one might be compiling Ada directly 
to hardware...

https://www.cs.york.ac.uk/ftpdir/papers/rtspapers/R:Ward:2001.ps

...which paper is only slightly weakened by the fact that his published 
example procedure is also synthesisable VHDL! in fact Xilinx XST 
synthesises that example to run in a single clock cycle ...

 ... however, at an appallingly slow clock ...

vs 732 cycles for the paper's result and an estimated 44,000 for an 80486.

Thus the Ward paper's true merit is, ironically, that it allows automatic 
extraction of a degree of sequentialism from an inherently parallel 
example; opening the way to automatic generation of faster (and maybe 
smaller) pipelined dataflow hardware. 

Which is actually quite difficult, and historically an extensively manual 
process in past RC processes - another bottleneck in addition to the 
"bitstream rebuild" times JG complains about.

so why stop at the "slice" level as EDGE does?

it makes sense if there is automatic translation (compilation at usably 
fast rates) from source to that level, AND if a large and sufficiently 
generally useful structure of slices can be implemented in ASIC without 
the time and area penalty of FPGA routing. 

One way of looking at it is to see a "slice" as a higher-level or larger 
grained FPGA LUT (which is a generalisation of one or several gates).

FPGAs have been becoming coarser grained anyway, as well as adding RAM 
Blocks and (first multiplier, then DSP primitive) blocks by the hundreds 
- because fewer more powerful primitive blocks reduce that routing (area 
+ speed) penalty.

A few dozen BlockRams for example, configured the right way, open the 
doors to allowing stack architectures to go superscalar (eliminating huge 
problems addressing registers), though I don't know if this has ever been 
exploited.

interesting development... perhaps a logical growth from JG's involvement 
with the ultra fine grained XC6200 FPGA, where RC pretty much started.
-- Brian


* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-20 12:03       ` Dan'l Miller
@ 2018-06-20 12:29         ` Dmitry A. Kazakov
  2018-06-20 13:14           ` Mehdi Saada
  2018-06-21  0:17         ` Shark8
  1 sibling, 1 reply; 39+ messages in thread
From: Dmitry A. Kazakov @ 2018-06-20 12:29 UTC (permalink / raw)


On 2018-06-20 14:03, Dan'l Miller wrote:
> On Wednesday, June 20, 2018 at 2:13:24 AM UTC-5, Dmitry A. Kazakov wrote:
>> On 2018-06-20 03:41, Dan'l Miller wrote:
>> It better be handled in the form of hints for the compiler
>> to deploy a certain form of optimization.
> 
> So, does Ada now or in Ada2020 have those hints?

No.

But I think that if Ada integrated SPARK, then with things like loop 
invariants and provable purity of certain pieces of code, the compiler 
should be able to decide whether the loop body can be run concurrently. 
That would be a better approach than burdening the programmer with 
low-level stuff, IMO.
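
Roughly what I have in mind (the Global aspect below is real SPARK 2014; 
the rest of the names are made up for the example):

   type Vector is array (Positive range <>) of Float;

   function Scale (X : Float) return Float is (2.0 * X)
     with Global => null;   --  provably touches no global state

   procedure Scale_All (V : in out Vector) is
   begin
      for I in V'Range loop
         V (I) := Scale (V (I));   --  each iteration writes only V (I)
      end loop;
   end Scale_All;

   --  With the purity of Scale proved and no data flow between the
   --  iterations, a prover-aware compiler could run the loop body
   --  concurrently without any extra syntax from the programmer.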

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-20 12:29         ` Dmitry A. Kazakov
@ 2018-06-20 13:14           ` Mehdi Saada
  2018-06-20 13:38             ` Dmitry A. Kazakov
  0 siblings, 1 reply; 39+ messages in thread
From: Mehdi Saada @ 2018-06-20 13:14 UTC (permalink / raw)


So I guess you are in favor of deeper (and de facto) integration of compilers with static analysis tools?



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-20 13:14           ` Mehdi Saada
@ 2018-06-20 13:38             ` Dmitry A. Kazakov
  2018-06-20 14:01               ` Mehdi Saada
  2018-06-21  0:19               ` Shark8
  0 siblings, 2 replies; 39+ messages in thread
From: Dmitry A. Kazakov @ 2018-06-20 13:38 UTC (permalink / raw)


On 2018-06-20 15:14, Mehdi Saada wrote:
> So I guess you are in favor with deeper (and de facto) integration of compilers with static analysis tools ?

Absolutely. Not as tools. SPARK must become an integral part of Ada. 
Static analysis and defined, provable semantics are a major strength of Ada.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-20  1:36   ` Dan'l Miller
@ 2018-06-20 13:39     ` Luke A. Guest
  0 siblings, 0 replies; 39+ messages in thread
From: Luke A. Guest @ 2018-06-20 13:39 UTC (permalink / raw)


Dan'l Miller <t> wrote:

> In other words, your tongue-in-cheek comment is likely what this
> EDGE-processor is all about for HPE, minus the PA-RISC ISA itself: a
> non-Intel, non-ARM, non-SPARC, non-POWER replacement to the
> PA-RISC/Itanium vision for servers.
> 

I was being serious. HP should never have been involved with the Titanic. If 
C= (Commodore) hadn't been fucked over by Mehdi Ali, we would've had PA-RISC 
Amigas back in the day.



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-20 13:38             ` Dmitry A. Kazakov
@ 2018-06-20 14:01               ` Mehdi Saada
  2018-06-20 14:32                 ` Dmitry A. Kazakov
                                   ` (2 more replies)
  2018-06-21  0:19               ` Shark8
  1 sibling, 3 replies; 39+ messages in thread
From: Mehdi Saada @ 2018-06-20 14:01 UTC (permalink / raw)


Considering how complicated writing Ada compilers already is, and seeing that there is no standard interface between compilers and tools - as far as I understand it - this would likely make the hypothetical compilers huge; and considering that GNAT is the only one implementing (most of) the 2012 version, fusing tools and compilers might close the Ada compiler market for good.
Haven't you been promoting the free market?
Stop me if I made a mistake.


* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-20 14:01               ` Mehdi Saada
@ 2018-06-20 14:32                 ` Dmitry A. Kazakov
  2018-06-29 22:01                   ` Randy Brukardt
  2018-06-20 15:58                 ` Niklas Holsti
  2018-06-29 21:58                 ` Randy Brukardt
  2 siblings, 1 reply; 39+ messages in thread
From: Dmitry A. Kazakov @ 2018-06-20 14:32 UTC (permalink / raw)


On 2018-06-20 16:01, Mehdi Saada wrote:
> Considering how complicate writing Ada compilers already is, and seeing that there is no standard interface between those and tools - as far as I get it - it would likely make the hypothetic compilers huge and considering GNAT is the only one implementing (most) of the 2012 version, fusing tools and compilers might close for good the Ada compilers market.
> Haven't you been promoting free market ?

Yes.

> Stop me if I made a mistake.

How does adding features like low-level parallelism make the compiler smaller?

I actually want a reduced Ada. The existing type system must be moved to 
the library level, expressed in more general terms. Generics can be 
removed. Representation clauses can be hugely reduced to the essentials. 
Containers need not be in the standard. The I/O library can be hugely 
reduced once the type system is fixed. All dynamic checks can be removed, 
replaced by static contracts: do this when that holds, or raise 
Constraint_Error otherwise. Etc.
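
For example, in today's aspect notation (Stack, Item and Depth are 
made-up names, just to show the shape of the idea):

   --  Today: the check is dynamic, buried in the body, raising
   --  Constraint_Error at run time if N > Depth (S).
   function Element (S : Stack; N : Positive) return Item;

   --  What I mean: the obligation becomes a contract that is
   --  discharged statically at every call site.
   function Element (S : Stack; N : Positive) return Item
     with Pre => N <= Depth (S);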

Essential are only the type system, compilation units, tasking, and 
static-analysis support.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-20 14:01               ` Mehdi Saada
  2018-06-20 14:32                 ` Dmitry A. Kazakov
@ 2018-06-20 15:58                 ` Niklas Holsti
  2018-06-29 21:58                 ` Randy Brukardt
  2 siblings, 0 replies; 39+ messages in thread
From: Niklas Holsti @ 2018-06-20 15:58 UTC (permalink / raw)


On 18-06-20 17:01 , Mehdi Saada wrote:
> Considering how complicate writing Ada compilers already is,

A formal definition of the language, as discussed in the "Ada successor" 
thread, should make it a little easier.

> and seeing that there is no standard interface between those
> and tools

There is ASIS:

https://en.wikipedia.org/wiki/Ada_Semantic_Interface_Specification

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .


* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-20 12:03       ` Dan'l Miller
  2018-06-20 12:29         ` Dmitry A. Kazakov
@ 2018-06-21  0:17         ` Shark8
  1 sibling, 0 replies; 39+ messages in thread
From: Shark8 @ 2018-06-21  0:17 UTC (permalink / raw)


On Wednesday, June 20, 2018 at 6:03:31 AM UTC-6, Dan'l Miller wrote:
> On Wednesday, June 20, 2018 at 2:13:24 AM UTC-5, Dmitry A. Kazakov wrote:
> > On 2018-06-20 03:41, Dan'l Miller wrote:
> > It better be handled in the form of hints for the compiler 
> > to deploy a certain form of optimization.
> 
> So, does Ada now or in Ada2020 have those hints?

That's a hard question to answer.
Ada as it is now could certainly have them -- the Standard allows implementation-defined pragmas and aspects -- indeed, the advent of CUDA was [IMO] a very big lost opportunity. Had nVidia used Ada + pragmas & aspects instead of C + [essentially] embedded-GPU-assembly directives, it would have done a *LOT* to boost Ada's reputation in parallel computing (it used to have a good one) as well as spur on development of making GPU-enabled tasks [fairly] automatic. (Randy disagrees w/ me on this.)
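
Purely hypothetical, but something along these lines (the Offload aspect and all the names here are invented; no compiler has any of this):

   type Vector is array (Positive range <>) of Float;

   --  The kernel stays ordinary Ada; an aspect tells the compiler to
   --  generate the device code and manage host/device data movement.
   procedure Saxpy (A : Float; X : Vector; Y : in out Vector)
     with Offload => GPU;   --  invented aspect, illustration only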

The "tasklets" (parallel block/for/etc) that are in Ada2020 do a lot of that, and will probably be useful for compilers that are GPU-code enabled.



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-20 13:38             ` Dmitry A. Kazakov
  2018-06-20 14:01               ` Mehdi Saada
@ 2018-06-21  0:19               ` Shark8
  2018-06-21  9:09                 ` Dmitry A. Kazakov
  1 sibling, 1 reply; 39+ messages in thread
From: Shark8 @ 2018-06-21  0:19 UTC (permalink / raw)


On Wednesday, June 20, 2018 at 7:38:11 AM UTC-6, Dmitry A. Kazakov wrote:
> On 2018-06-20 15:14, Mehdi Saada wrote:
> > So I guess you are in favor with deeper (and de facto) integration of compilers with static analysis tools ?
> 
> Absolutely. Not as tools. SPARK must become an integral part of Ada. 
> Static analysis and defined provable semantics is major strength of Ada.
> 
Agreed; this is why I'm pushing for a heavily integrated IDE with my IDE-proposal.
I'd be quite interested to hear your thoughts on the subject.



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-20 12:28 ` Brian Drummond
@ 2018-06-21  1:51   ` Dan'l Miller
  2018-06-21 10:22     ` Brian Drummond
  0 siblings, 1 reply; 39+ messages in thread
From: Dan'l Miller @ 2018-06-21  1:51 UTC (permalink / raw)


On Wednesday, June 20, 2018 at 7:28:24 AM UTC-5, Brian Drummond wrote:
> On Tue, 19 Jun 2018 15:14:16 -0700, Dan'l Miller wrote:
> 
> > http://www.theregister.co.uk/2018/06/18/microsoft_e2_edge_windows_10
> > 
> > As discussed in the article above, Microsoft is starting to unveil its
> > formerly-secret development of what could be described as “Itanium done
> > right“.
> 
> wait what? ... JAN GRAY? 
> 
> (in the Further Reading section) breadcrumbs to
> https://arxiv.org/abs/1803.06617

Interesting insight that I will investigate further, as what you point out might also be useful on Altera FPGAs embedded into Intel processors, which might end up being a competitor to EDGE as the coming years unfold.  (Indeed, Intel's purchase of Altera might have been the stimulus at Microsoft to pursue the apparent forthcoming commercialization of EDGE in the first place; the years would align about right.)



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-21  0:19               ` Shark8
@ 2018-06-21  9:09                 ` Dmitry A. Kazakov
  2018-06-21 14:42                   ` Shark8
  0 siblings, 1 reply; 39+ messages in thread
From: Dmitry A. Kazakov @ 2018-06-21  9:09 UTC (permalink / raw)


On 2018-06-21 02:19, Shark8 wrote:
> On Wednesday, June 20, 2018 at 7:38:11 AM UTC-6, Dmitry A. Kazakov wrote:
>> On 2018-06-20 15:14, Mehdi Saada wrote:
>>> So I guess you are in favor with deeper (and de facto) integration of compilers with static analysis tools ?
>>
>> Absolutely. Not as tools. SPARK must become an integral part of Ada.
>> Static analysis and defined provable semantics is major strength of Ada.
>>
> Agreed; this is why I'm pushing for a heavily integrated IDE with my IDE-proposal.
> I'd be quite interested to hear your thoughts on the subject.

Why do you think that an IDE is important here? (Not that I will use 
emacs, ever! (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-21  1:51   ` Dan'l Miller
@ 2018-06-21 10:22     ` Brian Drummond
  0 siblings, 0 replies; 39+ messages in thread
From: Brian Drummond @ 2018-06-21 10:22 UTC (permalink / raw)


On Wed, 20 Jun 2018 18:51:44 -0700, Dan'l Miller wrote:

> On Wednesday, June 20, 2018 at 7:28:24 AM UTC-5, Brian Drummond wrote:
>> On Tue, 19 Jun 2018 15:14:16 -0700, Dan'l Miller wrote:
>> 
>> > http://www.theregister.co.uk/2018/06/18/microsoft_e2_edge_windows_10
>> > 
>> > As discussed in the article above, Microsoft is starting to unveil
>> > its formerly-secret development of what could be described as
>> > “Itanium done right“.
>> 
>> wait what? ... JAN GRAY?
>> 
>> (in the Further Reading section) breadcrumbs to
>> https://arxiv.org/abs/1803.06617
> 
> Interesting insight that I will investigate further, as what you point
> out might also be useful on Altera FPGAs embedded into Intel processors,
> which might end up being a competitor to EDGE as coming years unfold. 
> (Indeed, Intel's purchase of Altera might have been the stimulus at
> Microsoft to pursue the apparent forthcoming commercialization of EDGE
> in the first place; the years would align about right).

Won't be a direct (similar tech) competitor, unless Altera are keeping 
quiet about some coarse grained replacement for the FPGA slice, like 
those little "Edge" cores.  But interesting to watch...

(Intel and Altera go way back; I think Intel was Altera's original fab 
back in the EPROM FPGA days, as well as being their "second 
source" (quite how that worked, I'm not sure :-)

-- Brian



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-21  9:09                 ` Dmitry A. Kazakov
@ 2018-06-21 14:42                   ` Shark8
  2018-06-21 15:55                     ` Dan'l Miller
  2018-06-21 16:06                     ` Dmitry A. Kazakov
  0 siblings, 2 replies; 39+ messages in thread
From: Shark8 @ 2018-06-21 14:42 UTC (permalink / raw)


On Thursday, June 21, 2018 at 3:09:26 AM UTC-6, Dmitry A. Kazakov wrote:
> On 2018-06-21 02:19, Shark8 wrote:
> > On Wednesday, June 20, 2018 at 7:38:11 AM UTC-6, Dmitry A. Kazakov wrote:
> >> On 2018-06-20 15:14, Mehdi Saada wrote:
> >>> So I guess you are in favor with deeper (and de facto) integration of compilers with static analysis tools ?
> >>
> >> Absolutely. Not as tools. SPARK must become an integral part of Ada.
> >> Static analysis and defined provable semantics is major strength of Ada.
> >>
> > Agreed; this is why I'm pushing for a heavily integrated IDE with my IDE-proposal.
> > I'd be quite interested to hear your thoughts on the subject.
> 
> Why do you think that an IDE is important here? (Not that I will use 
> emacs, ever! (:-))
> 
Integration.
I don't mean "IDE" like the industry has come to think of it (editor + button-to-call-external-tool). / Think more like R-1000 on steroids, not GPS.

The integration that you're proposing isn't going to be happening on this revision of the language, BUT we can integrate it as closely as possible in the form of an IDE, acting /as if/ there were a requirement for such integration. (And indeed, the Ada spec requires a degree of this WRT static analysis currently, so it certainly is possible to integrate these things.)


* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-21 14:42                   ` Shark8
@ 2018-06-21 15:55                     ` Dan'l Miller
  2018-06-27 11:49                       ` Marius Amado-Alves
  2018-06-21 16:06                     ` Dmitry A. Kazakov
  1 sibling, 1 reply; 39+ messages in thread
From: Dan'l Miller @ 2018-06-21 15:55 UTC (permalink / raw)


On Thursday, June 21, 2018 at 9:42:44 AM UTC-5, Shark8 wrote:
> On Thursday, June 21, 2018 at 3:09:26 AM UTC-6, Dmitry A. Kazakov wrote:
> > On 2018-06-21 02:19, Shark8 wrote:
> > > On Wednesday, June 20, 2018 at 7:38:11 AM UTC-6, Dmitry A. Kazakov wrote:
> > >> On 2018-06-20 15:14, Mehdi Saada wrote:
> > >>> So I guess you are in favor with deeper (and de facto) integration of compilers with static
> > >>> analysis tools ?
> > >>
> > >> Absolutely. Not as tools. SPARK must become an integral part of Ada.
> > >> Static analysis and defined provable semantics is major strength of Ada.
> > >>
> > > Agreed; this is why I'm pushing for a heavily integrated IDE with my IDE-proposal.
> > > I'd be quite interested to hear your thoughts on the subject.
> > 
> > Why do you think that an IDE is important here? (Not that I will use 
> > emacs, ever! (:-))
> > 
> Integration.
> I don't mean "IDE" like the industry has come to think of it (editor + button-to-call-external-tool). /
> Think more like R-1000 on steroids, not GPS.
> 
> The integration that you're proposing isn't going to be happening on this revision of the language, BUT
> we can integrate it as closely as possible in the form of an IDE, acting /as if/ there were a requirement for
> such integration. (And indeed, the Ada spec requires a degree of this WRT static analysis currently, so it
> certainly is possible to integrate these things.)

In technology, if you name it & define it & then reify it then you own* it, even if the name existed before with a different definition and/or the definition existed before with a different name.  There are numerous examples strewn throughout computer history, of which here are 3:

1) RAII in the C++ world is attributed to the Boost community, but as much as 7 years prior it was commonly described as “allocation only in constructors; deallocation only in destructors”; Marshall Cline et al. did not reduce the mantra to a catchy acronym or initialism.

2) The term microcomputer was commonplace prior to the release of the IBM Personal Computer in 1981.  Nowadays we hardly hear the word microcomputer to refer to a PC, other than noting the Micro- of Microsoft refers to microcomputers.

3) The way of storing compiled Ada objects in the R1000 also existed in Multics for PL/I a decade prior to the R1000.  It was then reprised a half-decade after the R1000 for C++ via ObjectStore.

* not in the patent or trademark sense, but in the mindshare & fame sense.  Well, on 2nd thought, mindshare and marketshare can actually lend themselves well to establishing trademark in the legal world.

Shark8, you should pull open the thesaurus and name/define a next-generation replacement term for IDE.  Something like holistic development environment (HDE).  Indeed, there exists an archaic spelling of holistic that de-emphasises any misinterpretation of holistic as having anything to do with hole or holy**:  wholistic, emphasizing wholeness.  Plus wholistic would have very few collisions on Bing/Google searches.  Wholistic development environment (WDE).

A wise technology leader once told me:  if you need to explain yourself in embattled debate (e.g., no, •this• IDE, not •that• IDE), then you have already lost the argument.

** New-age people misinterpret the root-word of holistic as holy, not as whole from before the time that English's spelling was fully settled.



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-21 14:42                   ` Shark8
  2018-06-21 15:55                     ` Dan'l Miller
@ 2018-06-21 16:06                     ` Dmitry A. Kazakov
  2018-06-22 17:06                       ` Shark8
  1 sibling, 1 reply; 39+ messages in thread
From: Dmitry A. Kazakov @ 2018-06-21 16:06 UTC (permalink / raw)


On 2018-06-21 16:42, Shark8 wrote:
> On Thursday, June 21, 2018 at 3:09:26 AM UTC-6, Dmitry A. Kazakov wrote:
>> On 2018-06-21 02:19, Shark8 wrote:
>>> On Wednesday, June 20, 2018 at 7:38:11 AM UTC-6, Dmitry A. Kazakov wrote:
>>>> On 2018-06-20 15:14, Mehdi Saada wrote:
>>>>> So I guess you are in favor with deeper (and de facto) integration of compilers with static analysis tools ?
>>>>
>>>> Absolutely. Not as tools. SPARK must become an integral part of Ada.
>>>> Static analysis and defined provable semantics is major strength of Ada.
>>>>
>>> Agreed; this is why I'm pushing for a heavily integrated IDE with my IDE-proposal.
>>> I'd be quite interested to hear your thoughts on the subject.
>>
>> Why do you think that an IDE is important here? (Not that I will use
>> emacs, ever! (:-))
>>
> Integration.
> I don't mean "IDE" like the industry has come to think of it (editor + button-to-call-external-tool). / Think more like R-1000 on steroids, not GPS.
> 
> The integration that you're proposing isn't going to be happening on this revision of the language, BUT we can integrate it as closely as possible in the form of an IDE, acting /as if/ there were a requirement for such integration. (And indeed, the Ada spec requires a degree of this WRT static analysis currently, so it certainly is possible to integrate these things.)

I see what you mean. I think it would be difficult to do without proper 
integration. We need contracts properly inherited, e.g. in the case of 
exceptions, as well as all sorts of conditional contracts, this if that, 
I am OK if he is OK etc. Doing this outside the compiler would require 
lots of helper files gathering information from the parser, compiler, 
prover and redistributing it in all possible directions. My fear is that 
it will be impossible to handle. GPS, gprconfig and gprbuild already 
generate a dozen such files, just waiting to be stumbled over. Recently I 
had a persistent compiler crash. Project cleanup didn't help. Only when I 
manually deleted all files except the sources did it work again.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-21 16:06                     ` Dmitry A. Kazakov
@ 2018-06-22 17:06                       ` Shark8
  2018-06-22 18:53                         ` Dmitry A. Kazakov
  0 siblings, 1 reply; 39+ messages in thread
From: Shark8 @ 2018-06-22 17:06 UTC (permalink / raw)


On Thursday, June 21, 2018 at 10:07:01 AM UTC-6, Dmitry A. Kazakov wrote:
> On 2018-06-21 16:42, Shark8 wrote:
> > On Thursday, June 21, 2018 at 3:09:26 AM UTC-6, Dmitry A. Kazakov wrote:
> >> On 2018-06-21 02:19, Shark8 wrote:
> > Integration.
> > I don't mean "IDE" like the industry has come to think of it (editor + button-to-call-external-tool). / Think more like R-1000 on steroids, not GPS.
> > 
> > The integration that you're proposing isn't going to be happening on this revision of the language, BUT we can integrate it as closely as possible in the form of an IDE, acting /as if/ there were a requirement for such integration. (And indeed, the Ada spec requires a degree of this WRT static analysis currently, so it certainly is possible to integrate these things.)
> 
> I see what you mean. I think it would be difficult to do without proper 
> integration. We need contracts properly inherited, e.g. in the case of 
> exceptions, as well as all sorts of conditional contracts, this if that, 
> I am OK if he is OK etc. Doing this outside the compiler would require 
> lots of helper files gathering information from the parser, compiler, 
> prover and redistributing it in all possible directions. My fear is that 
> it will be impossible to handle. GPS, gprconfig, gprbuild already 
> generate a dozen of such files, promptly to stumble upon. Recently I had 
> a persistent compiler crash. Project cleanup didn't help. Only when I 
> manually deleted all files except the sources it worked again.

GPS, gprconfig, gprbuild, etc. are irrelevant; I'm not saying we start with extant tools and methodologies. Besides, the "dozens of such files" are only indicative of the lack of integration between these tools. (Arguably you could make a highly integrated system that did use files intermediately between its tools; however, even if it started as a set of integrated tools, unless there were a central library handling these file types, they would accumulate ad hoc changes/improvements and thus have unbalanced development: much like dialects in real-world natural language.) -- Plus, as you saw, these dozens of files require management.



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-22 17:06                       ` Shark8
@ 2018-06-22 18:53                         ` Dmitry A. Kazakov
  0 siblings, 0 replies; 39+ messages in thread
From: Dmitry A. Kazakov @ 2018-06-22 18:53 UTC (permalink / raw)


On 2018-06-22 19:06, Shark8 wrote:
> On Thursday, June 21, 2018 at 10:07:01 AM UTC-6, Dmitry A. Kazakov wrote:
>> On 2018-06-21 16:42, Shark8 wrote:
>>> On Thursday, June 21, 2018 at 3:09:26 AM UTC-6, Dmitry A. Kazakov wrote:
>>>> On 2018-06-21 02:19, Shark8 wrote:
>>> Integration.
>>> I don't mean "IDE" like the industry has come to think of it (editor + button-to-call-external-tool). / Think more like R-1000 on steroids, not GPS.
>>>
>>> The integration that you're proposing isn't going to be happening on this revision of the language, BUT we can integrate it as closely as possible in the form of an IDE, acting /as if/ there were a requirement for such integration. (And indeed, the Ada spec requires a degree of this WRT static analysis currently, so it certainly is possible to integrate these things.)
>>
>> I see what you mean. I think it would be difficult to do without proper
>> integration. We need contracts properly inherited, e.g. in the case of
>> exceptions, as well as all sorts of conditional contracts, this if that,
>> I am OK if he is OK etc. Doing this outside the compiler would require
>> lots of helper files gathering information from the parser, compiler,
>> prover and redistributing it in all possible directions. My fear is that
>> it will be impossible to handle. GPS, gprconfig, gprbuild already
>> generate a dozen of such files, promptly to stumble upon. Recently I had
>> a persistent compiler crash. Project cleanup didn't help. Only when I
>> manually deleted all files except the sources it worked again.
> 
> GPS, gprconfig, gprbuild, etc are irrelevant, I'm not saying we start with extant tools and methodologies. Besides the "dozens of such files" are only indicative of the lack of integration between these tools. (Arguably you could make a highly-integrated system that did use files intermediately between its tools; however, even if they were initially as a set of integrated tools, unless there was a central library handling these file-types, they would accumulate ad hock changes/improvements and thus have unbalanced development: much like dialects in real-world/natural-language.) -- Plus, as you saw these dozens of files require management.

OK, it is possible to do, theoretically, but it would be a huge project 
and GCC is still relevant.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-21 15:55                     ` Dan'l Miller
@ 2018-06-27 11:49                       ` Marius Amado-Alves
  0 siblings, 0 replies; 39+ messages in thread
From: Marius Amado-Alves @ 2018-06-27 11:49 UTC (permalink / raw)


Just call it ADE (Awesome Development Environment).



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-20 14:01               ` Mehdi Saada
  2018-06-20 14:32                 ` Dmitry A. Kazakov
  2018-06-20 15:58                 ` Niklas Holsti
@ 2018-06-29 21:58                 ` Randy Brukardt
  2 siblings, 0 replies; 39+ messages in thread
From: Randy Brukardt @ 2018-06-29 21:58 UTC (permalink / raw)


"Mehdi Saada" <00120260a@gmail.com> wrote in message 
news:64a526cb-e6d5-44a6-b446-5b652ebe60ca@googlegroups.com...
>Considering how complicate writing Ada compilers already is, and
>seeing that there is no standard interface between those and tools - as
>far as I get it - it would likely make the hypothetic compilers huge and
>considering GNAT is the only one implementing (most) of the 2012
>version, fusing tools and compilers might close for good the Ada
>compilers market.
>Haven't you been promoting free market ?
>Stop me if I made a mistake.

For what it's worth, I agree (somewhat) with Dmitry. Not so much about SPARK 
(the reliance on outside logic provers seems unnecessary for most work), but 
more like CodePeer. CodePeer is one of a variety of static analysis tools 
that are based on compiler optimization technology. It seems to me that 
such technology should be an integral part of an Ada compiler's optimizer 
(a lot of it exists anyway). In part because anything it can prove will also 
allow making smaller/faster code (for instance, proving that a check cannot 
fail also means that the compiler need not generate code for that check).
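
A trivial example of the kind of thing I mean:

   procedure Demo is
      subtype Small is Integer range 1 .. 10;
      X : Small := 7;
      Y : Integer range 1 .. 20;
   begin
      Y := X + 5;   --  provably in 6 .. 15, so the range check on this
                    --  assignment can be eliminated entirely
   end Demo;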

This does imply a need for Ada-specific optimization (which might be an 
issue in a GCC/LLVM environment - although I think it can be dealt with).

Anyone have a few extra million $$$ laying around so I can build this?? ;-)

                                     Randy.

P.S. The next Janus/Ada release, scheduled for late July, will have a small 
corner of that capability implemented. There'll be a blog entry on 
rrsoftware.com once the compiler details are settled. 



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-20 14:32                 ` Dmitry A. Kazakov
@ 2018-06-29 22:01                   ` Randy Brukardt
  2018-06-29 22:15                     ` Dmitry A. Kazakov
  0 siblings, 1 reply; 39+ messages in thread
From: Randy Brukardt @ 2018-06-29 22:01 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:pgdoip$1ntl$1@gioia.aioe.org...
...
> How adding features like low-level parallelism makes compiler smaller?

Nothing low-level about parallel loops. They're in fact a higher-level 
construct than an Ada task, since they're checked (by default) for safety 
problems and any tasking used is implicit. "Parallel" is mainly a hint to 
the compiler that parallelism can be used (there's almost no case today 
where a compiler could automatically use parallel code, as there would be a 
substantial risk of that code being slower than sequential code would be).

                                         Randy.



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-29 22:01                   ` Randy Brukardt
@ 2018-06-29 22:15                     ` Dmitry A. Kazakov
  2018-06-29 22:47                       ` Randy Brukardt
  0 siblings, 1 reply; 39+ messages in thread
From: Dmitry A. Kazakov @ 2018-06-29 22:15 UTC (permalink / raw)


On 2018-06-30 00:01, Randy Brukardt wrote:
> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
> news:pgdoip$1ntl$1@gioia.aioe.org...
> ...
>> How adding features like low-level parallelism makes compiler smaller?
> 
> Nothing low-level about parallel loops. They're in fact a higher-level
> construct than an Ada task, since they're checked (by default) for safety
> problems and any tasking used is implicit.

That is not a definition of higher-level [abstraction] to me. A parallel 
loop is a loop.

> "Parallel" is mainly a hint to
> the compiler that parallelism can be used (there's almost no case today
> where a compiler could automatically use parallel code, as there would be a
> substantial risk of that code being slower than sequential code would be).

Why does it need such hints?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-29 22:15                     ` Dmitry A. Kazakov
@ 2018-06-29 22:47                       ` Randy Brukardt
  2018-06-30  8:41                         ` Dmitry A. Kazakov
  0 siblings, 1 reply; 39+ messages in thread
From: Randy Brukardt @ 2018-06-29 22:47 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message 
news:ph6b2l$1cta$1@gioia.aioe.org...
> On 2018-06-30 00:01, Randy Brukardt wrote:
...
>> "Parallel" is mainly a hint to
>> the compiler that parallelism can be used (there's almost no case today
>> where a compiler could automatically use parallel code, as there would be 
>> a
>> substantial risk of that code being slower than sequential code would 
>> be).
>
> Why does it need such hints?

One reason is that a loop is defined to iterate operations in a specific 
order. For instance, in

   for I in 1 .. 5 loop

I takes on the values of 1, 2, 3, 4, and 5, in that order. The ability of a 
compiler to parallelize such a case then requires determining all of the 
following:
(1) That there are enough iterations in order to make parallelizing 
worthwhile;
(2) That there is enough code in each iteration in order to make 
parallelizing worthwhile;
(3) That there is no interaction between iterations;
(4) That there is no use of global variables.

With "parallel", the compiler knows that the programmer has requested 
unordered, parallel iteration, so it only needs to make safety checks and 
create the appropriate implementation.

Otherwise, it can't make the optimization without knowing all of the above 
(having simple loops run very slowly because they were parallelized 
unnecessarily is not going to be accepted by many). Indeed, you pretty much 
can only do it when the number of iterations is static and the body of the 
iteration doesn't contain any external calls -- and the number of iterations 
is relatively large.
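
A pair of examples to make that concrete:

   declare
      type Vector is array (Positive range <>) of Float;
      Data  : Vector (1 .. 100_000) := (others => 1.0);
      Norm  : constant Float := 10.0;
      Total : Float := 0.0;
   begin
      --  Not parallelizable automatically: every iteration depends on
      --  the previous one through Total (condition 3 fails).
      for I in Data'Range loop
         Total := Total + Data (I);
      end loop;

      --  These iterations are independent, but the compiler would still
      --  have to establish (1) through (4) before transforming the loop;
      --  writing "parallel" states the intent and leaves only the
      --  safety checks to the compiler.
      for I in Data'Range loop
         Data (I) := Data (I) / Norm;
      end loop;
   end;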

It's clearly possible that sometime in the future, we'll have new hardware 
and operating systems that would make the overhead small enough that such 
optimizations could be done automatically in enough cases to make it worth 
it. But that's fine; the situation then would be similar to that for 
"inline" -- not 100% necessary but still a useful hint to the compiler.

                                        Randy.



* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-29 22:47                       ` Randy Brukardt
@ 2018-06-30  8:41                         ` Dmitry A. Kazakov
  2018-06-30 15:43                           ` Brad Moore
  0 siblings, 1 reply; 39+ messages in thread
From: Dmitry A. Kazakov @ 2018-06-30  8:41 UTC (permalink / raw)


On 2018-06-30 00:47, Randy Brukardt wrote:
> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
> news:ph6b2l$1cta$1@gioia.aioe.org...
>> On 2018-06-30 00:01, Randy Brukardt wrote:
> ...
>>> "Parallel" is mainly a hint to
>>> the compiler that parallelism can be used (there's almost no case today
>>> where a compiler could automatically use parallel code, as there would be
>>> a
>>> substantial risk of that code being slower than sequential code would
>>> be).
>>
>> Why does it need such hints?
> 
> One reason is that a loop is defined to iterate operations in a specific
> order. For instance, in
> for I in 1..5 loop -- I takes on the values of 1, 2, 3, 4, and 5, in that
> order. The ability of a compiler to parallelize in such a case then requires
> determining all of the following:
> (1) That there are enough iterations in order to make parallelizing
> worthwhile;
> (2) That there is enough code in each iteration in order to make
> parallelizing worthwhile;
> (3) That there is no interaction between iterations;
> (4) That there is no use of global variables.
> 
> With "parallel", the compiler knows that the programmer has requested
> unordered, parallel iteration, so it only needs make safety checks and
> create the appropriate implementation.

There should be unordered types, in the sense that the compiler is 
allowed to choose any order (like an enumeration type with no specific 
order). It must be a property of the type you iterate over, not a 
property of the loop.
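
A sketch of what I mean (the Unordered aspect and Run are invented here):

   type Worker_Id is range 1 .. 64
     with Unordered;   --  invented: the type has no defined order

   for W in Worker_Id loop   --  the compiler is then free to pick any
      Run (W);               --  order, including running the iterations
   end loop;                 --  in parallel, with no hint on the loop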

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-30  8:41                         ` Dmitry A. Kazakov
@ 2018-06-30 15:43                           ` Brad Moore
  2018-07-01  9:46                             ` Dmitry A. Kazakov
  0 siblings, 1 reply; 39+ messages in thread
From: Brad Moore @ 2018-06-30 15:43 UTC (permalink / raw)


I don't think that would fit into the language very well, and I don't think it is particularly appealing to me; though at one time I was suggesting that we could apply aspects such as parallel and other controls to a locally declared subtype, we moved away from that idea. We have "reverse" syntax for loops already, which specifies an order. It would be inconsistent to sometimes specify the order on the loop and sometimes somewhere else.

It also seems like it would be messy for ADTs such as containers. Sometimes you want to iterate through a type sequentially, for example when the work to be performed per iteration is very small and not worth doing in parallel, and sometimes you want to specify parallel execution, if you know that there is significant processing required for each iteration. You might have a container with many elements. It sounds like you'd have to declare two container objects (one parallel and one sequential), and then copy all the elements from one to the other whenever you wanted to switch from sequential to parallel iteration or vice versa, when really you only want or need a single container object.

Similarly, you might want to iterate through an enumeration in a particular order for one loop, and a different order for another loop. It seems better to apply the hint to the loop construct rather than to the iterator type, which is also consistent with other approaches such as OpenMP and Cilk.
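
To make that concrete, sketching with the proposed parallel-loop syntax (My_Map, Count and Process are placeholders; the details of parallel container iteration are still being worked out):

   --  Cheap body: parallelism would only add overhead, keep it serial.
   for C in My_Map.Iterate loop
      Count := Count + 1;
   end loop;

   --  Heavy body over the very same container object: here spreading
   --  the iterations across cores pays off, so the hint goes on this loop.
   parallel
   for C in My_Map.Iterate loop
      Process (My_Map (C));
   end loop;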

Brad Moore


* Re: Ada lacks lighterweight-than-task parallelism
  2018-06-30 15:43                           ` Brad Moore
@ 2018-07-01  9:46                             ` Dmitry A. Kazakov
  2018-07-02 13:13                               ` Marius Amado-Alves
  0 siblings, 1 reply; 39+ messages in thread
From: Dmitry A. Kazakov @ 2018-07-01  9:46 UTC (permalink / raw)


On 2018-06-30 17:43, Brad Moore wrote:
> I don't think that would fit into the language very well, and I dont think this is particularly appealing to me, though at one time I was suggesting that we could apply aspects such as parallel and other controls to a locally declared subtype, we moved away from that idea. We have "reverse" syntax for loops already which specifies an order. It would be inconsistent to sometimes specify the order on the loop, and sometimes somewhere else.

I would say that semantically "reverse" applies to a set (range is a 
set) rather than to the loop. If sets were first-class:

    for I in reverse (Ordered_Set) loop

> It also seems like it would be messy for ADT's such as containers. Sometimes you want to iterate through a type sequentially, for example when the work to be performed per iteration is very small and not worth doing in parallel, and sometimes you want to specify parallel execution, if you know that there is significant processing required for each iteration. You might have a container with many elements. It sounds like you'd have to declare two container objects, (one parallel, and one sequential), and then copy all the elements from one to the other whenever you wanted to switch from sequential to parallel iteration or vice versa, when really you only want or need a single container object.

I would declare different enumerators for the container. It is a good 
practice to be able to enumerate a container by a key or by position or 
by something else (e.g. by code points for strings).

> Similarly, you might want to iterate through an enumeration with a particular order for one loop, and a different order for another loop. It seems to be better to apply the hint to the loop construct rather than the iterator type, which is also consistent with other approaches such as in OpenMP and Cilk.

To me that is still a property of the enumerated object rather than of 
the loop. Otherwise you must hard-wire the implementation into the loop 
instead of the set/index/mapping type. It is the container's business, IMO.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



* Re: Ada lacks lighterweight-than-task parallelism
  2018-07-01  9:46                             ` Dmitry A. Kazakov
@ 2018-07-02 13:13                               ` Marius Amado-Alves
  2018-07-02 15:05                                 ` Dmitry A. Kazakov
  0 siblings, 1 reply; 39+ messages in thread
From: Marius Amado-Alves @ 2018-07-02 13:13 UTC (permalink / raw)


"To me that is still a property of the enumerated object rather than of the loop." (Kasakov)

Hmmm... are there not cases where the *same* "unordered" object sustains two (separate) loops, loop1 being truly "unordered" but loop2 having more complex processing that requires *order*?

(Hence loop1 is parallelizable but loop2 is not.)



* Re: Ada lacks lighterweight-than-task parallelism
  2018-07-02 13:13                               ` Marius Amado-Alves
@ 2018-07-02 15:05                                 ` Dmitry A. Kazakov
  2018-07-02 16:01                                   ` Marius Amado-Alves
  0 siblings, 1 reply; 39+ messages in thread
From: Dmitry A. Kazakov @ 2018-07-02 15:05 UTC (permalink / raw)


On 2018-07-02 15:13, Marius Amado-Alves wrote:
> "To me that is still a property of the enumerated object rather than of the loop." (Kasakov)
> 
> Hmmm... are there not cases where the *same* "unordered" object sustains two (separated) loops, loop1 truly "unordered" but loop2 with more complex processing requiring *order*?

An ordered set, or a set with a stochastic/independent order, is iterated 
in the only way it offers. Iteration is an interface to a set or 
container; no magic, just a set of operations like 'First, 'Succ, etc.

Answering the question: if the same object must be iterated differently 
then, obviously to me, it must offer two different interfaces for doing 
so. Interfaces can be views, as in the case of "in reverse", and so could 
an interface for ad-hoc parallel loops.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


* Re: Ada lacks lighterweight-than-task parallelism
  2018-07-02 15:05                                 ` Dmitry A. Kazakov
@ 2018-07-02 16:01                                   ` Marius Amado-Alves
  2018-07-02 16:48                                     ` Dmitry A. Kazakov
  0 siblings, 1 reply; 39+ messages in thread
From: Marius Amado-Alves @ 2018-07-02 16:01 UTC (permalink / raw)


" ... if the same object must be iterated differently it must offer two different interfaces for doing so" (K.)

Absolutely, and I've done that multiple times. Then 'Parallelizable would be a property of the iterator--not of the object. A simple Boolean attribute indicating to the compiler "parallelize if you can" or "don't". (Defaulting to False, I guess.)


* Re: Ada lacks lighterweight-than-task parallelism
  2018-07-02 16:01                                   ` Marius Amado-Alves
@ 2018-07-02 16:48                                     ` Dmitry A. Kazakov
  0 siblings, 0 replies; 39+ messages in thread
From: Dmitry A. Kazakov @ 2018-07-02 16:48 UTC (permalink / raw)


On 2018-07-02 18:01, Marius Amado-Alves wrote:
> " ... if the same object must be iterated differently it must offer two different interfaces for doing so" (K.)
> 
> Absolutely, and I've done that multiple times. Then 'Parallelizeable would be a property of the iterator--not of the object. A simple Boolean attribute indicating to the compiler "parallelize if you can" or "dont." (Defaulting to False I guess.)

No, there should be no iterators in the first place. In a properly 
designed language you need no helper types.

And it is certainly not a Boolean flag. A property is something which 
can be proved to be true. One property is the existence of random, 
independent evaluation of the access. It is a property of the given set 
type (or mapping/container type).

Another property is the absence of cross-effects between evaluations of 
the loop body, which depends on the properties of the operations and 
objects that appear in the body.

None of these are properties of the loop itself, so it is strange to tie 
this to the loop statement. That would be a promise the language cannot 
keep unless it can prove that it holds. If it can, and the only way to 
know is via the contracts of the objects involved, then it would need no 
further hints.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de


