From mboxrd@z Thu Jan 1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level:
X-Spam-Status: No, score=-0.9 required=5.0 tests=BAYES_00,FORGED_GMAIL_RCVD,
	FREEMAIL_FROM autolearn=no autolearn_force=no version=3.4.4
X-Google-Thread: 103376,c08aa0f01f894da6
X-Google-NewGroupId: yes
X-Google-Attributes: gida07f3367d7,domainid0,public,usenet
X-Google-Language: ENGLISH,ASCII
Path: g2news1.google.com!postnews.google.com!z39g2000yqz.googlegroups.com!not-for-mail
From: Shark8
Newsgroups: comp.lang.ada
Subject: Re: GPUs and CUDA
Date: Wed, 13 Jul 2011 09:44:25 -0700 (PDT)
Organization: http://groups.google.com
Message-ID: <4196e026-18d3-435f-a72a-8872dc5367d6@z39g2000yqz.googlegroups.com>
References: <09ad3bbb-b0a3-438b-9263-b2cb49098e5c@glegroupsg2000goo.googlegroups.com>
	<4e1b4333$0$6583$9b4e6d93@newsspool3.arcor-online.net>
NNTP-Posting-Host: 24.230.151.194
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
X-Trace: posting.google.com 1310577873 7241 127.0.0.1 (13 Jul 2011 17:24:33 GMT)
X-Complaints-To: groups-abuse@google.com
NNTP-Posting-Date: Wed, 13 Jul 2011 17:24:33 +0000 (UTC)
Complaints-To: groups-abuse@google.com
Injection-Info: z39g2000yqz.googlegroups.com; posting-host=24.230.151.194; posting-account=lJ3JNwoAAAAQfH3VV9vttJLkThaxtTfC
User-Agent: G2/1.0
X-Google-Web-Client: true
X-Google-Header-Order: HUALESNKRC
X-HTTP-UserAgent: Mozilla/5.0 (Windows NT 6.1; rv:5.0) Gecko/20100101 Firefox/5.0,gzip(gfe)
Xref: g2news1.google.com comp.lang.ada:20196
Date: 2011-07-13T09:44:25-07:00
List-Id:

On Jul 12, 12:59 am, Anatoly Chernyshev wrote:
> On Jul 11, 10:38 pm, Georg Bauhaus wrote:
>
> > Wouldn't a multicore PC with additional Graphics processors be
> > a candidate for an Ada 2012 Ravenscar multicore runtime?
>
> I'm not sure about that, taking into account that Ada's original
> purpose is embedded systems, which often lack GPUs altogether.
> Yet, I believe, GPU cores are not quite the same as CPU cores, even in
> terms of synchronization and communication.
>
> >Err... I think "binding" is perhaps a horrid idea. (Raise your hand if
> >you just want a light wrapper around C-headers.)
> >The way to go, IMO, is to allow a specialized pragma to indicate to
> >the compiler that such and such subprogram is intended for
> >distribution to the GPUs for parallel work.
> >This would allow something like
>
> Pragma CUDA( Ralph );
>
> I guess, in this particular case, the binding or port is better. The
> suggested pragma makes the language (or at least the compiler) hardware-
> dependent, which I, personally, would hate to see.

Yet that is exactly what implementation-defined pragmas are supposed to
allow. And it needn't force the program to be implementation-dependent;
consider:

   Task Type Some_Task [...];

   Procedure X is
      Tasks : Array (1..20) of Some_Task;
      Pragma Multiprocessor(Construct => Tasks, Processors => 4, Cores => 4);
   begin
      Null; -- We're finished with the procedure when all tasks are finished.
   end X;

A compiler written for a single-processor computer is free to ignore the
pragma Multiprocessor... and yet this will still work. (Though maybe not
as typed; I didn't try to compile it, and it might need a level of
indirection.)

In the particular case of CUDA, I've always thought it would have been
better for them to specialize an Ada compiler rather than a C compiler,
simply because the TASK construct is very well suited to such
parallelism; they could have had some error/complexity checking which
would cause the compiler to error out should the TASKs tagged with the
CUDA pragma hold things which would be impossible or onerous to
implement.

> Yet, the IT hardware sector is much more agile than Ada's standards,
> so that today we have CUDA, and tomorrow it will be replaced by
> who-knows-what...

True.
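
For what it's worth, here's a sketch of the same idea written against the
facilities that actually made it into Ada 2012 (System.Multiprocessors and
the CPU aspect) rather than my hypothetical pragma Multiprocessor. The
access type supplies the "level of indirection" I mentioned; the names
Some_Task/X are just placeholders, and I haven't run this through a
finished Ada 2012 compiler, so take it as a sketch:

   with System.Multiprocessors;  --  Ada 2012: CPU subtype, Number_Of_CPUs

   procedure X is
      use System.Multiprocessors;

      --  Each task is pinned to the CPU given by its discriminant.
      task type Some_Task (Home : CPU) with CPU => Home;

      task body Some_Task is
      begin
         null;  --  The parallel work would go here.
      end Some_Task;

      type Task_Access is access Some_Task;
      Tasks : array (1 .. 20) of Task_Access;
   begin
      --  Spread the tasks round-robin over however many CPUs exist;
      --  on a single-CPU machine they all land on CPU 1 and the
      --  program still works, just serially.
      for I in Tasks'Range loop
         Tasks (I) :=
           new Some_Task (CPU ((I mod Natural (Number_Of_CPUs)) + 1));
      end loop;
      --  X does not return until every allocated task has terminated.
   end X;

The point stands either way: the portable program text says *what* runs in
parallel, and the partitioning onto processors is the implementation's
business.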