* GPUs and CUDA @ 2011-07-08 21:20 Rego, P. 2011-07-08 23:18 ` anon 0 siblings, 1 reply; 15+ messages in thread From: Rego, P. @ 2011-07-08 21:20 UTC (permalink / raw) Does Ada have any compiler with support for development on GPUs? CUDA looks to be very promising, and development in Ada on it could be useful. ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: GPUs and CUDA 2011-07-08 21:20 GPUs and CUDA Rego, P. @ 2011-07-08 23:18 ` anon 2011-07-11 17:45 ` Anatoly Chernyshev 2011-07-11 17:51 ` Nicholas Collin Paul de Glouceſter 0 siblings, 2 replies; 15+ messages in thread From: anon @ 2011-07-08 23:18 UTC (permalink / raw) It seems that graphics is not a major subsystem for the Ada maintainers. Maybe that will change, but for now the answers are No and No! You might find some graphics links in the GNAT Ada for Java. But for the most part Ada is text only, with a few third-party binding packages for the SDL and OpenGL graphics engines. Plus, at this time Ada's RTS is still "concurrent", so, no to CUDA as well. In <09ad3bbb-b0a3-438b-9263-b2cb49098e5c@glegroupsg2000goo.googlegroups.com>, "Rego, P." <pvrego@gmail.com> writes: >Does Ada have any compiler with support for development on GPUs? CUDA looks to be very promising and development in Ada on it could be useful. ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: GPUs and CUDA 2011-07-08 23:18 ` anon @ 2011-07-11 17:45 ` Anatoly Chernyshev 2011-07-11 18:38 ` Georg Bauhaus ` (2 more replies) 2011-07-11 17:51 ` Nicholas Collin Paul de Glouceſter 1 sibling, 3 replies; 15+ messages in thread From: Anatoly Chernyshev @ 2011-07-11 17:45 UTC (permalink / raw) On Jul 9, 3:18 am, a...@att.net wrote: > It seems that graphics is not a major subsystem for the Ada maintainers. > Maybe that will change, but for now the answers are No and No! > > You might find some graphics links in the GNAT Ada for Java. But for > the most part Ada is text only, with a few third-party binding > packages for the SDL and OpenGL graphics engines. > Well, the most attractive application of CUDA is not graphics, but parallel computing using modern graphics chips. The performance gain for parallelizable programs that can use CUDA is pretty significant on a PC. I was looking into using it myself, and am eagerly waiting for somebody to come up with an Ada binding. ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: GPUs and CUDA 2011-07-11 17:45 ` Anatoly Chernyshev @ 2011-07-11 18:38 ` Georg Bauhaus 2011-07-12 5:59 ` Anatoly Chernyshev 2011-07-13 14:52 ` anon 2011-07-11 23:28 ` Shark8 2011-07-12 2:01 ` Rego, P. 2 siblings, 2 replies; 15+ messages in thread From: Georg Bauhaus @ 2011-07-11 18:38 UTC (permalink / raw) On 7/11/11 7:45 PM, Anatoly Chernyshev wrote: > Well, the most attractive application of CUDA is not graphics, but > parallel computing using modern graphics chips. The performance gain > for parallelizable programs that can use CUDA is pretty > significant on a PC. I was looking into using it myself, and am eagerly > waiting for somebody to come up with an Ada binding. Wouldn't a multicore PC with additional graphics processors be a candidate for an Ada 2012 Ravenscar multicore runtime? ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: GPUs and CUDA 2011-07-11 18:38 ` Georg Bauhaus @ 2011-07-12 5:59 ` Anatoly Chernyshev 2011-07-13 16:44 ` Shark8 2011-07-13 14:52 ` anon 1 sibling, 1 reply; 15+ messages in thread From: Anatoly Chernyshev @ 2011-07-12 5:59 UTC (permalink / raw) On Jul 11, 10:38 pm, Georg Bauhaus <rm.dash-bauh...@futureapps.de> wrote: > Wouldn't a multicore PC with additional graphics processors be > a candidate for an Ada 2012 Ravenscar multicore runtime? I'm not sure about that, taking into account that Ada's original purpose is embedded systems, which often lack GPUs altogether. Also, I believe GPU cores are not quite the same as CPU cores, even in terms of synchronization and communication. >Err... I think "binding" is perhaps a horrid idea. (Raise your hand if >you just want a light wrapper around C headers.) >The way to go, IMO, is to allow a specialized pragma to indicate to >the compiler that such and such subprogram is intended for >distribution to the GPUs for parallel work. >This would allow something like > Pragma CUDA( Ralph ); I guess that in this particular case a binding or port is better. The suggested pragma makes the language (or at least the compiler) hardware-dependent, which I, personally, would hate to see. Yet the IT hardware sector is much more agile than Ada's standards, so that today we have CUDA, and tomorrow it will be replaced by who-knows-what... ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: GPUs and CUDA 2011-07-12 5:59 ` Anatoly Chernyshev @ 2011-07-13 16:44 ` Shark8 0 siblings, 0 replies; 15+ messages in thread From: Shark8 @ 2011-07-13 16:44 UTC (permalink / raw) On Jul 12, 12:59 am, Anatoly Chernyshev <achernys...@gmail.com> wrote: > On Jul 11, 10:38 pm, Georg Bauhaus <rm.dash-bauh...@futureapps.de> > wrote: > > > Wouldn't a multicore PC with additional graphics processors be > > a candidate for an Ada 2012 Ravenscar multicore runtime? > > I'm not sure about that, taking into account that Ada's original > purpose is embedded systems, which often lack GPUs altogether. > Also, I believe GPU cores are not quite the same as CPU cores, even in > terms of synchronization and communication. > > >Err... I think "binding" is perhaps a horrid idea. (Raise your hand if > >you just want a light wrapper around C headers.) > >The way to go, IMO, is to allow a specialized pragma to indicate to > >the compiler that such and such subprogram is intended for > >distribution to the GPUs for parallel work. > >This would allow something like > > Pragma CUDA( Ralph ); > > I guess that in this particular case a binding or port is better. The > suggested pragma makes the language (or at least the compiler) > hardware-dependent, which I, personally, would hate to see. Yet that is exactly what implementation-defined pragmas are supposed to allow. And it needn't force the program to be implementation-dependent; consider: Task Type Some_Task [...]; Procedure X is Tasks : Array (1..20) of Some_Task; Pragma Multiprocessor(Construct => Tasks, Processors => 4, Cores => 4); begin Null; -- We're finished with the procedure when all tasks are finished. end X; A compiler which is written for a single-processor computer is free to ignore the pragma Multiprocessor... and yet this will still work. (Though maybe not as-typed; I didn't try to compile it, and it might need a level of indirection.)
In the particular case of CUDA, I've always thought it would have been better for them to specialize an Ada compiler rather than a C compiler, simply because the TASK construct is very well-suited to such parallelism; they could have had some error/complexity checking which would cause the compiler to error out should the TASKs tagged for CUDA hold things which would be impossible or onerous to implement. > Yet the IT > hardware sector is much more agile than Ada's standards, so that today > we have CUDA, and tomorrow it will be replaced by who-knows-what... True. ^ permalink raw reply [flat|nested] 15+ messages in thread
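Shark8's fragment, filled out as a minimal sketch that compiles as plain Ada. Pragma Multiprocessor is hypothetical — an assumption for illustration only; a compiler that does not recognize an implementation-defined pragma is required to ignore it, so the same source remains portable:

```ada
procedure X is
   task type Some_Task;

   task body Some_Task is
   begin
      null;  --  per-task parallel work would go here
   end Some_Task;

   Tasks : array (1 .. 20) of Some_Task;

   --  Hypothetical implementation-defined pragma: a compiler that
   --  recognizes it could spread the tasks over 4 processors with
   --  4 cores each; any other compiler simply ignores it.
   pragma Multiprocessor (Construct => Tasks, Processors => 4, Cores => 4);
begin
   null;  --  the procedure completes only when all 20 tasks have finished
end X;
```

The tasks are activated when the procedure's begin is reached, and task termination semantics give the "wait for all of them" behavior for free.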
* Re: GPUs and CUDA 2011-07-11 18:38 ` Georg Bauhaus 2011-07-12 5:59 ` Anatoly Chernyshev @ 2011-07-13 14:52 ` anon 1 sibling, 0 replies; 15+ messages in thread From: anon @ 2011-07-13 14:52 UTC (permalink / raw) Under the CUDA system, the parallel code is written in C and can be called from C, Java, Ruby, etc. It can also be called from Ada with the use of a "pragma Import" statement. But why use another hosting language? The host is only required to load and start execution of the parallel code, and to handle the "Windows" or "X-Windows" environment for the program. Ada has third-party packages for some windowing environments, but most Ada programs do not use them. For learning parallel programming, in my opinion I would use the CUDA C compiler to create both the parallel code and the hosting code. For the hosting code, just take a sample and alter it to fit your needs. But the main problem in using CUDA is that the memory is too small for many parallel designs. It is mostly set up for graphics. In <4e1b4333$0$6583$9b4e6d93@newsspool3.arcor-online.net>, Georg Bauhaus <rm.dash-bauhaus@futureapps.de> writes: >On 7/11/11 7:45 PM, Anatoly Chernyshev wrote: > >> Well, the most attractive application of CUDA is not graphics, but >> parallel computing using modern graphics chips. The performance gain >> for parallelizable programs that can use CUDA is pretty >> significant on a PC. I was looking into using it myself, and am eagerly >> waiting for somebody to come up with an Ada binding. > >Wouldn't a multicore PC with additional graphics processors be >a candidate for an Ada 2012 Ravenscar multicore runtime? > ^ permalink raw reply [flat|nested] 15+ messages in thread
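The "pragma Import" route anon mentions can be sketched as follows: Ada as the host language, calling a C-compiled launcher that wraps the CUDA kernel. The C-side function launch_saxpy and its signature are assumptions for illustration — they are not part of any real CUDA binding:

```ada
--  Hypothetical sketch: the C side is assumed to expose
--    void launch_saxpy(float a, float *x, float *y, int n);
--  which copies the arrays to the GPU, runs the kernel, and copies
--  the result back into y.
with Interfaces.C;

procedure Host_Example is
   use Interfaces.C;

   type Float_Array is array (Positive range <>) of aliased C_float;

   procedure Launch_Saxpy
     (A : C_float;
      X : access C_float;
      Y : access C_float;
      N : int);
   pragma Import (C, Launch_Saxpy, "launch_saxpy");

   X : Float_Array (1 .. 1024) := (others => 1.0);
   Y : Float_Array (1 .. 1024) := (others => 2.0);
begin
   --  Y := 2.0 * X + Y, computed on the GPU by the imported C code.
   Launch_Saxpy (2.0, X (1)'Access, Y (1)'Access, X'Length);
end Host_Example;
```

The Ada side stays ordinary sequential code; all CUDA specifics (device memory, grid/block sizes) are hidden behind the single imported entry point, which is exactly anon's "hosting language" division of labor.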
* Re: GPUs and CUDA 2011-07-11 17:45 ` Anatoly Chernyshev 2011-07-11 18:38 ` Georg Bauhaus @ 2011-07-11 23:28 ` Shark8 2011-07-12 2:02 ` Rego, P. 2011-07-12 2:01 ` Rego, P. 2 siblings, 1 reply; 15+ messages in thread From: Shark8 @ 2011-07-11 23:28 UTC (permalink / raw) On Jul 11, 12:45 pm, Anatoly Chernyshev <achernys...@gmail.com> wrote: > On Jul 9, 3:18 am, a...@att.net wrote: > > > It seems that graphics is not a major subsystem for the Ada maintainers. > > Maybe that will change, but for now the answers are No and No! > > > You might find some graphics links in the GNAT Ada for Java. But for > > the most part Ada is text only, with a few third-party binding > > packages for the SDL and OpenGL graphics engines. > > Well, the most attractive application of CUDA is not graphics, but > parallel computing using modern graphics chips. The performance gain > for parallelizable programs that can use CUDA is pretty > significant on a PC. I was looking into using it myself, and am eagerly > waiting for somebody to come up with an Ada binding. Err... I think "binding" is perhaps a horrid idea. (Raise your hand if you just want a light wrapper around C headers.) The way to go, IMO, is to allow a specialized pragma to indicate to the compiler that such and such subprogram is intended for distribution to the GPUs for parallel work. As per http://archive.adaic.com/standards/83lrm/html/lrm-02-08.html#2.8 Named associations are, however, only possible if the argument identifiers are defined. A name given in an argument must be either a name visible at the place of the pragma or an identifier specific to the pragma. [...] A pragma that is not language-defined has no effect if its identifier is not recognized by the (current) implementation. Furthermore, a pragma (whether language-defined or implementation-defined) has no effect if its placement or its arguments do not correspond to what is allowed for the pragma.
The region of text over which a pragma has an effect depends on the pragma. This would allow something like Pragma CUDA( Ralph ); to indicate that the subprogram "Ralph" is to be run on the GPUs; and it would also allow non-CUDA enabled compilers to compile the source regardless of whether or not their compiler actually knows about CUDA. ^ permalink raw reply [flat|nested] 15+ messages in thread
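To make that graceful degradation concrete, here is a sketch under the assumption that some compiler defines an implementation-defined pragma CUDA. Every identifier below is illustrative; a compiler without the pragma compiles the same unit unchanged and Ralph simply runs on the CPU:

```ada
package Kernels is
   type Vector is array (Positive range <>) of Float;

   procedure Ralph (Data : in out Vector);
   --  Hypothetical implementation-defined pragma: a CUDA-aware
   --  compiler would offload Ralph to the GPU; any other compiler
   --  must ignore the unrecognized pragma, so this spec still
   --  compiles everywhere.
   pragma CUDA (Ralph);
end Kernels;

package body Kernels is
   procedure Ralph (Data : in out Vector) is
   begin
      for I in Data'Range loop
         Data (I) := Data (I) * 2.0;  --  embarrassingly parallel work
      end loop;
   end Ralph;
end Kernels;
```

The body is deliberately a plain loop: that is the portable CPU fallback, and it is also the kind of independent-iteration code a GPU-targeting compiler could plausibly vectorize.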
* Re: GPUs and CUDA 2011-07-11 23:28 ` Shark8 @ 2011-07-12 2:02 ` Rego, P. 2011-07-12 10:26 ` ldries46 0 siblings, 1 reply; 15+ messages in thread From: Rego, P. @ 2011-07-12 2:02 UTC (permalink / raw) >This would allow something like > Pragma CUDA( Ralph ); >to indicate that the subprogram "Ralph" is to be run on the GPUs; and >it would also allow non-CUDA enabled compilers to compile the source >regardless of whether or not their compiler actually knows about CUDA. That's the idea. It would be very good. ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: GPUs and CUDA 2011-07-12 2:02 ` Rego, P. @ 2011-07-12 10:26 ` ldries46 2011-07-12 18:53 ` Gene 2011-07-12 23:49 ` Shark8 0 siblings, 2 replies; 15+ messages in thread From: ldries46 @ 2011-07-12 10:26 UTC (permalink / raw) |"Rego, P." wrote in message news:783efc18-a581-4ae2-bde4-04afa3568ff8@glegroupsg2000goo.googlegroups.com... | |>This would allow something like |> Pragma CUDA( Ralph ); |>to indicate that the subprogram "Ralph" is to be run on the GPUs; and |>it would also allow non-CUDA enabled compilers to compile the source |>regardless of whether or not their compiler actually knows about CUDA. | |That's the idea. It would be very good. At this moment I don't know if this is a good idea, because CUDA is a language developed only for NVidia GPUs. One of the purposes of Ada, I think, is that a program can always be used on another computer after only a simple recompilation. I would like a Pragma GPU( Ralph ) that can check which GPU is used and whether that GPU has a CUDA-like environment available. L. Dries ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: GPUs and CUDA 2011-07-12 10:26 ` ldries46 @ 2011-07-12 18:53 ` Gene 2011-07-12 23:49 ` Shark8 1 sibling, 0 replies; 15+ messages in thread From: Gene @ 2011-07-12 18:53 UTC (permalink / raw) On Jul 12, 6:26 am, "ldries46" <bertus.dr...@planet.nl> wrote: > |"Rego, P." wrote in message news:783efc18-a581-4ae2-bde4-04afa3568ff8@glegroupsg2000goo.googlegroups.com... > | > |>This would allow something like > |> Pragma CUDA( Ralph ); > |>to indicate that the subprogram "Ralph" is to be run on the GPUs; and > |>it would also allow non-CUDA enabled compilers to compile the source > |>regardless of whether or not their compiler actually knows about CUDA. > | > |That's the idea. It would be very good. > > At this moment I don't know if this is a good idea, because CUDA is a > language developed only for NVidia GPUs. One of the purposes of Ada, I > think, is that a program can always be used on another computer after only > a simple recompilation. I would like a Pragma GPU( Ralph ) that can check > which GPU is used and whether that GPU has a CUDA-like environment available. > > L. Dries Indeed, GPUs are highly specialized processors that excel mostly at certain kinds of floating-point computations with limited branching behavior. Compiler tricks to find good mappings from general code onto GPUs are very much a current topic of research. Nvidia's CUDA implementation relies on language extensions to C and Fortran built into GCC. I don't think it would be useful or even possible to map GPU-style threads onto Ada tasks. So getting access to CUDA primitives from Ada looks like a tough problem, certainly more than adding a pragma and tweaking the GNAT code generator. On the other hand, there are many numerical libraries already rewritten and tuned to exploit CUDA. Ada bindings for these would be very useful and straightforward to build. ^ permalink raw reply [flat|nested] 15+ messages in thread
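A thin binding of the kind Gene suggests might look like the sketch below, written against the legacy cuBLAS cublasSgemm entry point. The Ada-side names are invented, and the C signature is reproduced from memory — verify it against the cuBLAS headers before relying on it:

```ada
with Interfaces.C;

package CUBLAS_Thin is
   use Interfaces.C;

   --  Legacy cuBLAS single-precision GEMM: C := alpha*op(A)*op(B) + beta*C.
   --  Assumed signature (check cublas.h):
   --    void cublasSgemm(char transa, char transb, int m, int n, int k,
   --                     float alpha, const float *A, int lda,
   --                     const float *B, int ldb,
   --                     float beta, float *C, int ldc);
   procedure Sgemm
     (Trans_A, Trans_B : char;
      M, N, K          : int;
      Alpha            : C_float;
      A                : access constant C_float;
      LDA              : int;
      B                : access constant C_float;
      LDB              : int;
      Beta             : C_float;
      C                : access C_float;
      LDC              : int);
   pragma Import (C, Sgemm, "cublasSgemm");
end CUBLAS_Thin;
```

The design point is Gene's: the GPU expertise stays inside the already-tuned library, and the Ada side is reduced to declaring the interface and marshalling arguments.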
* Re: GPUs and CUDA 2011-07-12 10:26 ` ldries46 2011-07-12 18:53 ` Gene @ 2011-07-12 23:49 ` Shark8 2011-07-13 15:15 ` ldries46 1 sibling, 1 reply; 15+ messages in thread From: Shark8 @ 2011-07-12 23:49 UTC (permalink / raw) On Jul 12, 5:26 am, "ldries46" <bertus.dr...@planet.nl> wrote: > |"Rego, P." wrote in message news:783efc18-a581-4ae2-bde4-04afa3568ff8@glegroupsg2000goo.googlegroups.com... > | > |>This would allow something like > |> Pragma CUDA( Ralph ); > |>to indicate that the subprogram "Ralph" is to be run on the GPUs; and > |>it would also allow non-CUDA enabled compilers to compile the source > |>regardless of whether or not their compiler actually knows about CUDA. > | > |That's the idea. It would be very good. > > At this moment I don't know if this is a good idea, because CUDA is a > language developed only for NVidia GPUs. One of the purposes of Ada, I > think, is that a program can always be used on another computer after only > a simple recompilation. I would like a Pragma GPU( Ralph ) that can check > which GPU is used and whether that GPU has a CUDA-like environment available. > > L. Dries True; I was thinking more on the side of "what NVidia should have done" in making their compiler. The CUDA compiler is, after all, a C compiler with a bunch of [nasty] add-ons. ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: GPUs and CUDA 2011-07-12 23:49 ` Shark8 @ 2011-07-13 15:15 ` ldries46 0 siblings, 0 replies; 15+ messages in thread From: ldries46 @ 2011-07-13 15:15 UTC (permalink / raw) "Shark8" wrote in message news:98b32073-e1ef-4e97-80f8-ce068fba2738@h12g2000vbx.googlegroups.com... On Jul 12, 5:26 am, "ldries46" <bertus.dr...@planet.nl> wrote: > |"Rego, P." wrote in > message news:783efc18-a581-4ae2-bde4-04afa3568ff8@glegroupsg2000goo.googlegroups.com... > | > |>This would allow something like > |> Pragma CUDA( Ralph ); > |>to indicate that the subprogram "Ralph" is to be run on the GPUs; and > |>it would also allow non-CUDA enabled compilers to compile the source > |>regardless of whether or not their compiler actually knows about CUDA. > | > |That's the idea. It would be very good. > > At this moment I don't know if this is a good idea, because CUDA is a > language developed only for NVidia GPUs. One of the purposes of Ada, I > think, is that a program can always be used on another computer after > only > a simple recompilation. I would like a Pragma GPU( Ralph ) that can check > which GPU is used and whether that GPU has a CUDA-like environment available. > > L. Dries |True; I was thinking more on the side of "what NVidia should have |done" in making their compiler. |The CUDA compiler is, after all, a C compiler with a bunch of |[nasty] add-ons. Perhaps this discussion should be continued on another forum. The best way to bring GPU calculations into a regular program would be, as with graphics, an OpenGPU library to which you can link from any language. L. Dries ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: GPUs and CUDA 2011-07-11 17:45 ` Anatoly Chernyshev 2011-07-11 18:38 ` Georg Bauhaus 2011-07-11 23:28 ` Shark8 @ 2011-07-12 2:01 ` Rego, P. 2 siblings, 0 replies; 15+ messages in thread From: Rego, P. @ 2011-07-12 2:01 UTC (permalink / raw) Yes, I was looking for packages to implement parallel computation for finite-element calculations on very big sparse matrices (something like 300x300x300). C has several implementations for CUDA, but it surely lacks real-time Ada facilities, mainly in access to and control of multi-process GPU constraints. I guess it could be very useful to have this implemented in Ada (and a native implementation would be even better). ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: GPUs and CUDA 2011-07-08 23:18 ` anon 2011-07-11 17:45 ` Anatoly Chernyshev @ 2011-07-11 17:51 ` Nicholas Collin Paul de Glouceſter 1 sibling, 0 replies; 15+ messages in thread From: Nicholas Collin Paul de Glouceſter @ 2011-07-11 17:51 UTC (permalink / raw) On July 8th, 2011, anon@ATT.net sent: |---------------------------------------------------------------------------| |"It seems that graphics is not a major subsystem for the Ada maintainers. | |Maybe that will change, but for now the answers are No and No! | | | |You might find some graphics links in the GNAT Ada for Java. But for | |the most part Ada is text only, with a few third-party binding | |packages for the SDL and OpenGL graphics engines. | | | |Plus, at this time Ada's RTS is still "concurrent", so, no to CUDA as well.| | | | | |In | |<09ad3bbb-b0a3-438b-9263-b2cb49098e5c@glegroupsg2000goo.googlegroups.com>, | |"Rego, P." <pvrego@gmail.com> writes: | |>Does Ada have any compiler with support for development on GPUs? | |CUDA looks to be very promising and development in Ada on it could be | |useful." | |---------------------------------------------------------------------------| Concurrent Computer Corporation did not have a binding for its Ada95 compiler to its CUDA hardware when I asked in July 2010 (almost a year ago). ^ permalink raw reply [flat|nested] 15+ messages in thread
end of thread, other threads:[~2011-07-13 16:44 UTC | newest] Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2011-07-08 21:20 GPUs and CUDA Rego, P. 2011-07-08 23:18 ` anon 2011-07-11 17:45 ` Anatoly Chernyshev 2011-07-11 18:38 ` Georg Bauhaus 2011-07-12 5:59 ` Anatoly Chernyshev 2011-07-13 16:44 ` Shark8 2011-07-13 14:52 ` anon 2011-07-11 23:28 ` Shark8 2011-07-12 2:02 ` Rego, P. 2011-07-12 10:26 ` ldries46 2011-07-12 18:53 ` Gene 2011-07-12 23:49 ` Shark8 2011-07-13 15:15 ` ldries46 2011-07-12 2:01 ` Rego, P. 2011-07-11 17:51 ` Nicholas Collin Paul de Glouceſter
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox