comp.lang.ada
From: Shark8 <onewingedshark@gmail.com>
Subject: Re: GPUs and CUDA
Date: Wed, 13 Jul 2011 09:44:25 -0700 (PDT)
Message-ID: <4196e026-18d3-435f-a72a-8872dc5367d6@z39g2000yqz.googlegroups.com> (raw)
In-Reply-To: c09e6d8e-662e-4670-ae2e-0ad2fea99081@h38g2000pro.googlegroups.com

On Jul 12, 12:59 am, Anatoly Chernyshev <achernys...@gmail.com> wrote:
> On Jul 11, 10:38 pm, Georg Bauhaus <rm.dash-bauh...@futureapps.de>
> wrote:
>
> > Wouldn't a multicore PC with additional Graphics processors be
> > a candidate for an Ada 2012 Ravenscar multicore runtime?
>
> I'm not sure about that, taking into account that Ada's original
> purpose is embedded systems, which often lack GPUs altogether.
> Yet, I believe, GPU cores are not quite the same as CPU cores, even
> in terms of synchronization and communication.
>
> >Err... I think "binding" is perhaps a horrid idea. (Raise your hand if
> >you just want a light wrapper around C-headers.)
> >The way to go, IMO, is to allow a specialized pragma to indicate to
> >the compiler that such and such subprogram is intended for
> >distribution to the GPUs for parallel work.
> >This would allow something like
> > Pragma CUDA( Ralph );
>
> I guess, in this particular case, the binding or port is better. The
> suggested pragma makes the language (or at least the compiler)
> hardware-dependent, which I, personally, would hate to see.

Yet that is exactly what implementation-defined pragmas are supposed
to allow. And it needn't force the program to be implementation-
dependent; consider:

Task Type Some_Task [...];

Procedure X is
   Tasks : Array (1..20) of Some_Task;
   Pragma Multiprocessor(Construct => Tasks, Processors => 4, Cores => 4);
begin
   Null; -- We're finished with the procedure when all tasks are finished.
end X;

A compiler which is written for a single-processor computer is free to
ignore the pragma Multiprocessor, and yet this will still work.
(Though maybe not as-typed; I didn't try to compile it, and it might
need a level of indirection.)

In the particular case of CUDA, I've always thought it would have been
better for them to specialize an Ada compiler rather than a C compiler,
simply because the TASK construct is very well-suited to this kind of
parallelism; they could have had some error/complexity checking which
would cause the compiler to error out should a TASK tagged with the
CUDA pragma contain constructs that would be impossible or onerous to
implement on the GPU.
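To make the idea concrete, here is a minimal sketch of what such a
specialized compiler might accept. Pragma CUDA, the task name, and the
restriction checking described in the comments are all assumptions for
illustration -- no existing compiler implements this:

```ada
Task Type Kernel_Task is
   Entry Start (First, Last : in Positive);
End Kernel_Task;
Pragma CUDA (Kernel_Task); -- hypothetical: ask the compiler to map
                           -- instances of this task type onto the GPU

Task Body Kernel_Task is
   Lo, Hi : Positive;
Begin
   Accept Start (First, Last : in Positive) do
      Lo := First;
      Hi := Last;
   End Start;
   For I in Lo .. Hi loop
      -- Per-element work goes here. The imagined compiler would
      -- error out if this body held constructs that are onerous on
      -- a GPU (further rendezvous, dynamic allocation, etc.).
      Null;
   End loop;
End Kernel_Task;
```

An implementation that doesn't recognize the pragma would simply run
the instances as ordinary tasks, which is exactly the graceful
degradation implementation-defined pragmas are meant to give you.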

> Yet, the IT hardware sector is much more agile than Ada's standards,
> so that today we have CUDA, and tomorrow it will be replaced by
> who-knows-what...

True.



Thread overview: 15+ messages
2011-07-08 21:20 GPUs and CUDA Rego, P.
2011-07-08 23:18 ` anon
2011-07-11 17:45   ` Anatoly Chernyshev
2011-07-11 18:38     ` Georg Bauhaus
2011-07-12  5:59       ` Anatoly Chernyshev
2011-07-13 16:44         ` Shark8 [this message]
2011-07-13 14:52       ` anon
2011-07-11 23:28     ` Shark8
2011-07-12  2:02       ` Rego, P.
2011-07-12 10:26         ` ldries46
2011-07-12 18:53           ` Gene
2011-07-12 23:49           ` Shark8
2011-07-13 15:15             ` ldries46
2011-07-12  2:01     ` Rego, P.
2011-07-11 17:51   ` Nicholas Collin Paul de Glouceſter