From: "Peter C. Chapin"
Newsgroups: comp.lang.ada
Subject: Re: Ada and OpenMP
Date: Fri, 08 Mar 2013 07:07:37 -0500

On 03/07/2013 10:31 PM, Randy Brukardt wrote:

> But (based on the rest of your note) isn't "fine-grained parallelism". You
> called a bunch of expensive library functions in the loop, and thus your
> actual computations are large enough that the mechanism would work well. But
> so would have an arrangement like Paraffin (with a bit more code
> rearrangement).

Yes, "fine-grained" was probably a poor choice of words. Still, splitting
loops, at any level, into parallel tasks is tedious to do by hand. I think
that is where you're coming from when you say it's something the compiler
should be doing, and I don't disagree.

My post was in response to an earlier question asking whether OpenMP-style
parallelism is really necessary in a language with explicit tasking. My
answer is "yes," but if it's implemented by having the compiler parallelize
things automatically, so much the better. Either way the programmer doesn't
have to roll out the machinery of tasking to parallelize loops manually.

From your post, you seem optimistic about the capabilities of current and
upcoming compilers, and you would know more about that than I do. Indeed, I
look forward to using such compilers and would be happy to provide whatever
hints they need via aspects, annotations, pragmas, or whatever else might be
necessary. Alas, when I wrote the program I mentioned earlier, the compiler
technology needed for automatic loop parallelization was not available to
me. Hence OpenMP.

Maybe OpenMP is obsolete and maybe it's not. After all, the pragmas it uses
(in the C world) are a form of annotation. They may not be the best way to
do things, especially in Ada, but ultimately it seems the programmer would
have to be involved in some way.

Peter
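
P.S. To make the "tedious" point concrete, here is a rough sketch, in plain
C with POSIX threads, of what splitting one loop across workers by hand
entails. The array, the worker count, and the loop body are placeholders
for illustration, not code from my actual program:

   #include <pthread.h>
   #include <stdio.h>

   #define N       1000000
   #define WORKERS 4

   static double data[N];

   /* Half-open index range [first, last) handled by one worker. */
   struct chunk { int first; int last; };

   static void *worker(void *arg)
   {
      struct chunk *c = arg;
      for (int i = c->first; i < c->last; ++i)
         data[i] = data[i] * 2.0 + 1.0;   /* stand-in computation */
      return NULL;
   }

   int main(void)
   {
      pthread_t    threads[WORKERS];
      struct chunk chunks[WORKERS];
      int step = N / WORKERS;

      /* Carve the index space into chunks and start one thread per chunk;
         the last chunk absorbs any remainder when N % WORKERS /= 0. */
      for (int w = 0; w < WORKERS; ++w) {
         chunks[w].first = w * step;
         chunks[w].last  = (w == WORKERS - 1) ? N : (w + 1) * step;
         pthread_create(&threads[w], NULL, worker, &chunks[w]);
      }
      for (int w = 0; w < WORKERS; ++w)
         pthread_join(threads[w], NULL);

      printf("%f\n", data[0]);
      return 0;
   }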
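
The OpenMP version of the same loop pushes all of that machinery onto the
compiler; the annotation is the single pragma line:

   #include <stdio.h>

   #define N 1000000

   static double data[N];

   int main(void)
   {
      /* The compiler generates the fork/join and chunking code. */
      #pragma omp parallel for
      for (int i = 0; i < N; ++i)
         data[i] = data[i] * 2.0 + 1.0;

      printf("%f\n", data[0]);
      return 0;
   }

With GCC this builds with "gcc -fopenmp"; without the flag the pragma is
ignored and the loop simply runs sequentially, which is part of the appeal.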