Newsgroups: comp.lang.ada
From: vincent.diemunsch@gmail.com
Date: Wed, 17 Dec 2014 05:06:25 -0800 (PST)
Subject: Re: GNAT and Tasklets

Hello Brad,

> I don't think this is accurate, as creating tasks in Ada generally serves
> a different purpose than adding improved parallelism. Tasks are useful
> constructs for creating independent concurrent activities. It is a way
> of breaking an application into separate independent logical executions
> that separate concerns, improving the logic and understanding of a
> program. Parallelism on the other hand is only about making the program
> execute faster. If the parallelism does not do that, it fails to serve
> its purpose.

I am rather surprised that you made a distinction between creating tasks
and parallelism. I agree that the goal of parallelism is to increase CPU
usage and therefore make the program run faster. For me, creating tasks is
the Ada way of implementing parallelism. And it is a sound way of doing it,
since compilers, as far as I know, are not really able to find parallelism
in a program automatically. Moreover, using things like state machines to
create parallelism is too complex for a programmer and needs the use of a
dedicated language. So tasks are fine.

> So the availability of a parallelism library shouldn't really affect the
> way one structures their program into a collection of tasks.
> I find such a library is useful when one wants to improve the execution
> time of one/some of the tasks in the application where performance is
> not adequate. Tasks and parallelism libraries can complement each other
> to achieve the best of both worlds.

I am sorry to disagree: the very existence of a parallelism library shows
the inability of the current Ada technology to deal directly with
parallelism inside the Ada language.
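To make this concrete, here is the kind of plain Ada I have in mind (just a
toy sketch of mine, with made-up names, not code taken from any library):
two ordinary local tasks sum the halves of an array, and whether they
actually run on two cores is entirely up to the compiler and run time.

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Sum_Demo is

      --  Toy example: split an array into two chunks and let plain local
      --  tasks sum the chunks concurrently.  No parallelism library, just
      --  ordinary Ada tasking.

      Data     : array (1 .. 100_000) of Long_Float := (others => 1.0);
      Partials : array (1 .. 2) of Long_Float := (others => 0.0);

      task type Worker (Slot : Positive; First, Last : Natural);

      task body Worker is
         Sum : Long_Float := 0.0;
      begin
         for I in First .. Last loop
            Sum := Sum + Data (I);
         end loop;
         Partials (Slot) := Sum;   --  each worker writes its own slot
      end Worker;

   begin
      declare
         W1 : Worker (Slot => 1, First => 1,      Last => 50_000);
         W2 : Worker (Slot => 2, First => 50_001, Last => 100_000);
      begin
         null;  --  the block waits here until both workers have terminated
      end;
      Put_Line (Long_Float'Image (Partials (1) + Partials (2)));
   end Sum_Demo;

Nothing in this sketch mentions threads, cores or a library; expressing the
parallelism is the easy part. Making it actually run in parallel, cheaply,
is the run time's job.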
I really think this is due to the weakness of current compilers, but if
there are also problems inside the language they should be addressed (like
the Ravenscar restrictions that allowed predictable tasking, or special
constructs to express parallelism, or "aspects" to indicate that a task
should be run on a GPU...). These should be only a few features of Ada 202X.

> My understanding is that GNAT generally maps tasks to OS threads on a
> one to one basis, but as others have pointed out, there may be
> configurations where other mappings are also available.

I could understand that a library-level task (i.e. a task declared
immediately in a package that is at library level) be mapped to an OS
thread, but a simple local task definitely should not be. And even that is
a simplification since, as you pointed out, there is often no point in
creating more kernel threads than the number of available CPUs.

> My understanding also is that at one time, GNAT had an implementation
> built on top of FSU threads developed at Florida State University, by
> Ted Baker. This implementation ran all tasks under one OS thread.
> [...] The FSU
> thread implementation gives you concurrency by allowing tasks to execute
> independently from each other, using some preemptive scheduling model to
> shift the processor between the multiple tasks of an application.

The solution of running all tasks under one kernel thread is good for
monoprocessors, and since user-level threads are lightweight compared to
kernel threads, it was acceptable to map each task to a thread. But with
multiple cores, we need all tasks running on a pool of kernel threads, one
thread per core. And I suppose that when multicores came, it was considered
easier to drop the FSU implementation and simply map one task to one kernel
thread. But doing this is an oversimplification that gives poor performance
for pure parallel computing, and it gave rise to the need for a parallelism
library! (Not to mention GPUs, which are commonly used for highly demanding
computations and are not supported by GNAT...)

What we need now is a new implementation of tasking in GNAT, able to treat
local tasks as jobs.

Regards,

Vincent
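P.S. A minimal sketch (toy names and empty bodies, nothing from a real
project) of the distinction I am making between a library-level task and
local tasks:

   --  A library-level task: it exists for the whole life of the program,
   --  so dedicating a kernel thread to it is perfectly reasonable.

   package Monitoring is
      task Poller;
   end Monitoring;

   package body Monitoring is
      task body Poller is
      begin
         loop
            delay 1.0;   --  poll some device, then sleep (toy body)
         end loop;
      end Poller;
   end Monitoring;

   --  A subprogram creating many short-lived local tasks: these are the
   --  ones that should be scheduled as jobs on a small pool of kernel
   --  threads (one per core), not given one kernel thread each.

   procedure Crunch is
      task type Worker;
      task body Worker is
      begin
         null;   --  one piece of the computation (toy body)
      end Worker;
      Workers : array (1 .. 1_000) of Worker;
   begin
      null;   --  Crunch returns only after all 1_000 workers terminate
   end Crunch;

The Poller can reasonably own a kernel thread for the life of the program;
the thousand Workers are exactly the kind of short-lived local tasks that
should be run as jobs on a pool of one kernel thread per core.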