From: Niklas Holsti
Newsgroups: comp.lang.ada
Subject: Re: Ada 2012 automatic concurrency?
Date: Thu, 27 Dec 2012 21:06:29 +0200
Organization: Tidorum Ltd

On 12-12-27 07:57, charleshixsn@earthlink.net wrote:
> On Wednesday, December 26, 2012 2:55:11 PM UTC-8, Niklas Holsti wrote:
>> On 12-12-25 19:26, charleshixsn@earthlink.net wrote:
>>
>>> In the RM I read (page 209): NOTES 1 Concurrent task ex...
>>
>> I believe that the second sentence in your quote has a more general
>> meaning: that an Ada compiler is allowed to generate code that uses
>> several processors or cores in parallel just to implement a logically
>> sequential (single-task) computation. For example, if your program
>> contains the two statements:
>>
>>    x := a + b;
>>    y := a - b;
>>
>> and the target computer has two processors or cores capable of
>> addition and subtraction, the Ada compiler may generate code that
>> uses these two processors in parallel, one to compute a+b and the
>> other to compute a-b. This is allowed, since the semantic effects
>> are the same as when using one processor for both computations
>> (assuming that none of these variables is declared "volatile").
>>
>> So I don't think that your use or non-use of protected types (or
>> tasks) is relevant to this RM sentence.
>>
> The use of protected types would be to ensure that the sections were
> independent.

I don't see how protected operations would help; in fact, I think they
hinder automatic parallelization. A protected operation is allowed to
access and modify global variables, as well as its parameters and the
components of the protected object. For example, consider two
operations, A and B, and a task that calls them in sequence, like this:

   A;
   B;

Whether A and B are protected operations or not, the compiler would
have to look into their internals to discover whether they access
and/or modify the same variables. If A and B are ordinary operations,
not protected ones, the compiler can (at least conceptually) in-line
the calls into the calling task, and could then analyze the data
dependencies and possibly arrange for parallel execution of
independent parts of the operations, or even of the whole calls.
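For concreteness, here is a minimal compilable sketch of the two
variants being compared. The bodies of A and B are invented just to
give the two operations something independent to do:

   with Ada.Text_IO;

   procedure Demo is

      X, Y : Integer := 0;

      --  Ordinary operations: a compiler could (conceptually) in-line
      --  the calls, see that A touches only X and B touches only Y,
      --  and execute them in parallel.
      procedure A is
      begin
         X := X + 1;
      end A;

      procedure B is
      begin
         Y := Y - 1;
      end B;

      --  The same operations as protected operations: each in-lined
      --  call now also contains the lock/unlock code of PO, whose
      --  accesses the compiler must treat much like volatile ones.
      protected PO is
         procedure A;
         procedure B;
      end PO;

      protected body PO is

         procedure A is
         begin
            X := X + 1;
         end A;

         procedure B is
         begin
            Y := Y - 1;
         end B;

      end PO;

   begin
      A;      --  unprotected form: candidates for parallel execution
      B;
      PO.A;   --  protected form: ordered by the lock accesses
      PO.B;
      Ada.Text_IO.Put_Line (Integer'Image (X) & Integer'Image (Y));
   end Demo;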
However, if A and B are protected operations, they can (in principle)
be in-lined in the same way, but the in-lined code will include the
protected-object locking and unlocking code. That code is very likely
to access variables that the compiler must consider "volatile", and
since the compiler must preserve the order of volatile accesses, it
would be difficult for it to interleave code in-lined from A with code
in-lined from B. The compiler would then have fewer opportunities for
parallelization than when the operations are not protected.

>> For large-grain parallelization, you should use tasks. I don't know
>> of any Ada compiler, including GNAT, that does automatic large-grain
>> parallelization.
>
> Thank you. That was the answer I was expecting, if not the one
> I was hoping for. Tasks it is.
>
> IIUC, there shouldn't be more than about twice as many tasks as
> cores. Is this about right?

That depends...

If the tasks are logically different, I would use whatever number of
tasks makes the design clearest. This would be using concurrency or
parallelism for logical purposes, not (mainly) for performance.

If the tasks are all the same, as for example when you have a number
of identical "worker" tasks that grab and execute "jobs" from some
pool of "jobs to do" (a sketch of this pattern appears at the end of
this message), I would use a total number of tasks that makes the
average number of "ready" tasks about equal to the number of cores
(or a bit more, for margin). For example, if each worker task
typically spends half its time working and half its time waiting for
something, the number of workers should be about twice the number of
cores. (You should not, of course, count as "waiting" the time that
worker tasks spend waiting for new jobs to work on.)

Usually, the amount of waiting in tasks is variable and dynamic, which
means that the number of "ready" tasks fluctuates and will, with the
above rule of thumb, sometimes be less than the number of cores, so
some cores will then be idle. If you are very anxious to avoid
occasional idle cores, you can of course increase the number of tasks
to skew the distribution of the number of "ready" tasks towards
higher numbers, making idle cores less likely. But this comes at the
cost of more overhead in task scheduling, task stack space
consumption, and perhaps in data-cache turnover.
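To make the worker-pool pattern concrete, here is a minimal compilable
sketch. All the names are mine, the pool is a toy bounded buffer, and
a zero-valued "job" serves as a shutdown signal; the standard function
System.Multiprocessors.Number_Of_CPUs is the Ada 2012 way to ask for
the number of cores:

   with Ada.Text_IO;
   with System.Multiprocessors;

   procedure Worker_Pool is

      --  Rule of thumb: if each worker is busy about half the time,
      --  use about twice as many workers as cores.
      Cores   : constant Positive :=
         Positive (System.Multiprocessors.Number_Of_CPUs);
      Workers : constant Positive := 2 * Cores;

      subtype Job is Natural;      --  stand-in for a real job type
      Stop : constant Job := 0;    --  sentinel meaning "shut down"

      Capacity : constant := 16;
      type Index is mod Capacity;
      type Job_Array is array (Index) of Job;

      --  The protected pool of "jobs to do": Get blocks while the
      --  pool is empty, Put blocks while it is full.
      protected Pool is
         entry Put (J : in Job);
         entry Get (J : out Job);
      private
         Buf         : Job_Array;
         First, Last : Index   := 0;
         Count       : Natural := 0;
      end Pool;

      protected body Pool is

         entry Put (J : in Job) when Count < Capacity is
         begin
            Buf (Last) := J;
            Last       := Last + 1;   --  wraps: Index is modular
            Count      := Count + 1;
         end Put;

         entry Get (J : out Job) when Count > 0 is
         begin
            J     := Buf (First);
            First := First + 1;
            Count := Count - 1;
         end Get;

      end Pool;

      task type Worker;

      task body Worker is
         J : Job;
      begin
         loop
            Pool.Get (J);      --  idle here does not count as "waiting"
            exit when J = Stop;
            --  ... execute job J; as a stand-in, just print it:
            Ada.Text_IO.Put_Line ("job" & Job'Image (J));
         end loop;
      end Worker;

      Crew : array (1 .. Workers) of Worker;

   begin
      for I in 1 .. 20 loop          --  enqueue some jobs
         Pool.Put (I);
      end loop;
      for W in 1 .. Workers loop     --  then one Stop per worker
         Pool.Put (Stop);
      end loop;
   end Worker_Pool;

With workers that are busy about half the time, Workers = 2 * Cores
keeps, on average, about one ready worker per core, per the rule of
thumb above.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi . @ .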