From: Stefan Bellon
Newsgroups: comp.lang.ada
Subject: Ada tasking question
Date: Wed, 18 Apr 2007 20:13:07 +0200
Organization: Comp.Center (RUS), U of Stuttgart, FRG
Message-ID: <20070418201307.18a85fd9@cube.tz.axivion.com>

Hi all!

Although I am quite familiar with Ada, tasking is quite new to me, so please bear with me on my question. ;-)

I have a set of buckets on which I have to do some processing. This processing can be done in parallel for each bucket. The results of the processing, however, are accumulated into another data structure.

At present I have a task type for the processing of the buckets, an access type to the task type, and basically just do:

   declare
      type Task_Access is access all Task_Type;
      My_Task : Task_Access;
   begin
      for I in Buckets'Range loop
         My_Task := new Task_Type'(Buckets (I));
      end loop;
   end;

The result data structure is a protected object with an entry Add that adds a processing result to the container and which is called by the task body. All is well so far.
However, the number of buckets may be quite large, and I have the feeling that the context switching needed for, say, 1000 or more tasks eats up the gains from the parallelism. At least in my test cases I do not gain anything at all on a Core Duo system.

Therefore I have the idea of starting only N tasks in parallel (where N can be specified by the user, e.g. according to the number of CPU cores of the machine) instead of tasks for all buckets at once. Starting N tasks, waiting for them all to finish, and only then starting the next N tasks is not difficult. But how would I do it so that there are always N tasks running (except, of course, at the end when everything has been processed), and so that a new task starts on the next bucket as soon as a task on a previous bucket has finished?

Any ideas are very welcome!

-- 
Stefan Bellon
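To make the question more concrete, here is the kind of thing I have in mind (untested sketch; the names Pool_Sketch, Next_Bucket, Worker and Bucket_Index are made up for illustration): instead of one task per bucket, a protected object hands out bucket indices one at a time, and N long-lived worker tasks each loop, fetching the next index until none are left. The actual bucket processing and the call to the result container's Add entry are elided as comments.

```ada
procedure Pool_Sketch is

   N : constant := 2;  --  number of worker tasks, e.g. one per CPU core

   type Bucket_Index is range 1 .. 1000;

   --  Hands out bucket indices one at a time; Get sets Found to
   --  False once every index has been given out.
   protected Next_Bucket is
      procedure Get (Index : out Bucket_Index; Found : out Boolean);
   private
      Next : Bucket_Index := Bucket_Index'First;
      Done : Boolean := False;
   end Next_Bucket;

   protected body Next_Bucket is
      procedure Get (Index : out Bucket_Index; Found : out Boolean) is
      begin
         if Done then
            Found := False;
            Index := Bucket_Index'First;  --  dummy value, not used
         else
            Found := True;
            Index := Next;
            if Next = Bucket_Index'Last then
               Done := True;
            else
               Next := Next + 1;
            end if;
         end if;
      end Get;
   end Next_Bucket;

   task type Worker;

   task body Worker is
      Index : Bucket_Index;
      Found : Boolean;
   begin
      loop
         Next_Bucket.Get (Index, Found);
         exit when not Found;
         --  Process Buckets (Index) here, then e.g.
         --  Results.Add (...) on the protected result container.
      end loop;
   end Worker;

   Workers : array (1 .. N) of Worker;  --  all N workers start at once

begin
   null;  --  Pool_Sketch, as master, waits here until all workers finish
end Pool_Sketch;
```

The point would be that a worker grabs the next bucket the moment it finishes its current one, so exactly N buckets are being processed at any time until the supply runs out; in the real program N would of course come from the user rather than being a constant. Is something along these lines the usual way to do it, or is there a better idiom?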