Newsgroups: comp.lang.ada
Date: Sun, 26 Jun 2016 02:09:08 -0700 (PDT)
Subject: Re: RFC: Prototype for a user threading library in Ada
From: Hadrien Grasland
Message-ID: <1e32c714-34cf-4828-81fc-6b7fd77e4532@googlegroups.com>
References: <58b78af5-28d8-4029-8804-598b2b63013c@googlegroups.com>

On Sunday, June 26, 2016 at 05:09:25 UTC+2, Randy Brukardt wrote:
> "Hadrien Grasland" wrote in message...
> ...
> >> I'd like to understand better the motivations for these features, so if
> >> you (or anyone else) wants to try to explain them to me, feel free. (But
> >> keep in mind that I tend to be hard to convince of anything these days,
> >> so don't bother if you're going to give up easily. ;-)
> >
> >See above. Being able to easily write highly concurrent code is of limited
> >use if said code ends up running with terrible performance because modern
> >OSs are not at all optimized for this kind of workload. We shouldn't need
> >to worry about how our users' OS kernels are set up, and user threading
> >and coroutines are a solution to this problem.
>
> Only if you want to make the user work even harder than ever.

Not necessarily so: good higher-level abstractions can help here. However, I definitely agree with your following point:

> It seems to me that the problem is with the "typical" Ada implementation
> more than with the expressiveness of features, when it comes to highly
> parallel implementations. Mapping tasks directly to OS threads only works
> if the number of tasks is small. So if it hurts when you do that, then
> DON'T DO THAT!! :-)

Yes, the problem could be solved at the Ada implementation level. That would also help greatly with the abstraction side of things, as "natural" Ada abstractions could be made to work as expected (see below).

However, any code which relies on this implementation characteristic would then become unportable, unless the standard also imposes that all implementations follow this path. Would that really be a reasonable request to make?

> There's no reason for any particular mapping of Ada tasks to OS threads. I
> agree with you that the best plan is most likely having a number of
> threads roughly the same as the number of cores (although that could vary
> for a highly I/O intensive task). Ada already exposes ways to map tasks to
> cores, and that clearly could be extended slightly to manage the tasking
> system's mapping of threads to tasks.
>
> I use Ada because I want it to prevent or detect all of my programming
> mistakes before I have to debug something. (I HATE debugging!) I'd like to
> extend that to parallel programming to the extent possible. I don't want
> to be adding major new features that will make that harder.

An Ada implementation which wanted to make the life of concurrent programmers easier could do the following things:

1/ Keep the number of OS threads low (about the number of CPU cores, a bit more for I/O), and map tasks to threads in a 1:N fashion.

2/ Make sure that any Ada feature which blocks tasks does the right thing by switching to another task and taking care of waking up the blocked task later, instead of just blocking the underlying OS thread.

3/ Make sure that the Ada standard library implementation behaves in a similarly sensible way, by replacing blocking system calls with nonblocking alternatives.

That is essentially how the Go programming language designers built their tasking model, so it is possible to do it in a newly created programming language/implementation. But how hard would it be to retrofit it into an existing Ada implementation? That I could not tell.
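To make the 1/2/3 scheme above concrete, here is a minimal Go sketch (my own illustration, not taken from anyone's post) of the same idea: far more lightweight tasks than OS threads, with blocking operations parking only the task, never the thread. GOMAXPROCS caps the number of OS threads doing user work, roughly matching the "threads ~= cores" advice:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	// Point 1/: keep the number of OS threads low and fixed,
	// independent of how many tasks (goroutines) we spawn.
	runtime.GOMAXPROCS(2)

	const tasks = 10000 // far more "tasks" than OS threads
	var wg sync.WaitGroup
	results := make(chan int, tasks)

	for i := 0; i < tasks; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Point 2/: a blocking channel operation parks only
			// this goroutine; the runtime reuses the OS thread
			// to run other goroutines in the meantime.
			results <- id
		}(i)
	}

	wg.Wait()
	close(results)

	// All 10000 tasks completed despite only 2 OS-level workers.
	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println(sum) // prints 49995000 (sum of 0..9999)
}
```

Point 3/ is what Go's standard library adds on top of this: its I/O calls are implemented over nonblocking system calls and a poller, so a goroutine waiting on a socket parks exactly like the channel wait above. Retrofitting that into an Ada runtime would mean doing the same to every potentially blocking construct (entry calls, delays, standard-library I/O).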