From: Stefan Bellon
Newsgroups: comp.lang.ada
Subject: Re: Ada tasking question
Date: Thu, 19 Apr 2007 00:57:38 +0200
Organization: Comp.Center (RUS), U of Stuttgart, FRG
Message-ID: <20070419005738.692eeeb6@cube.tz.axivion.com>
References: <20070418201307.18a85fd9@cube.tz.axivion.com> <462688E3.6050105@nowhere.com>

Leif Holmgren wrote:

> If my memory does not fail me, the advice to use an array here is
> doubly good.
>
> First of all, you don't need to bother with the nitty-gritty details
> of dynamic allocation yourself.
>
> Secondly, and perhaps most importantly, Ada will handle the
> synchronization of task termination for you automatically. It will
> not allow the array to go out of scope before all the tasks are
> completed.

Yes, I realized this after I read Randy's posting. This idea is just
excellent.

> Years ago I implemented such a system (using dynamically allocated
> tasks) and it worked very well.
> By doing as suggested you will gain
> maximum performance even if the buckets take different time to
> process.

Well, here I still seem to have problems. "In the beginning" it seems
to work very well: debug output shows that the tasks do work, fetch
new jobs, and so on. But as time goes by, some tasks get "lazy" and I
have no idea why. They do not terminate, but they also do not fetch
new jobs, although there are still open buckets to process. The task
body looks (very shortened) like this:

   type Buckets_Array is array (Natural range <>) of Bucket_Item;
   type Buckets_Access is access all Buckets_Array;

   task type Bucket_Worker (Buckets : Buckets_Access) is
      entry Set_Task_No (No : in Integer);  --  Just for debugging
   end Bucket_Worker;

   task body Bucket_Worker is
      Task_No : Integer;
      Current : Integer;
   begin
      accept Set_Task_No (No : in Integer) do
         Task_No := No;
      end Set_Task_No;
      Debug ("Task" & Task_No'Img & ": starting");
      Bucket_Jobs.Next (Current);
      while Current in Buckets.all'Range loop
         Debug ("Task" & Task_No'Img & ": bucket" & Current'Img);
         --  Do the work.
         Bucket_Jobs.Next (Current);
      end loop;
      Debug ("Task" & Task_No'Img & ": finished");
   end Bucket_Worker;

The initialization and starting of the tasks looks like this:

   Bucket_Jobs.Init (My_Buckets'First);
   declare
      Compare_Tasks : array (1 .. Num_CPUs) of
        Bucket_Worker (Buckets => My_Buckets'Unrestricted_Access);
   begin
      for I in Compare_Tasks'Range loop
         Compare_Tasks (I).Set_Task_No (I);
      end loop;
   end;

And Bucket_Jobs is the protected object as posted in the followup to
Jeffrey Carter.

The tasks getting lazy over time can be observed in two ways. First,
the debug output shows a lot of activity for _all_ four tasks in the
beginning, but towards the end only one task fetches new jobs,
although the others have not finished and there are buckets left to
process. The second way of observing it is by looking at the
processor usage with something like top. In the beginning it is well
over 100 % and all CPUs are used.
Later on it drops and only one CPU is used. I have no explanation for
this behaviour, but perhaps you can spot something which is
fundamentally wrong in my code?

-- 
Stefan Bellon
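[The Bucket_Jobs protected object referenced above was posted
elsewhere in the thread and is not shown here. As a minimal sketch of
how such a job dispenser could look -- the names, subtypes, and body
below are assumptions for illustration, not the code from the thread:]

   --  Hypothetical job dispenser: hands out bucket indices one at a
   --  time. Mutual exclusion between the worker tasks is provided by
   --  the protected object itself.
   protected Bucket_Jobs is
      procedure Init (First : in Natural);
      procedure Next (Job : out Natural);
   private
      Current : Natural := 0;
   end Bucket_Jobs;

   protected body Bucket_Jobs is
      procedure Init (First : in Natural) is
      begin
         Current := First;
      end Init;

      procedure Next (Job : out Natural) is
      begin
         --  Simply keeps counting past the last bucket; each caller
         --  stops when the returned index leaves Buckets.all'Range,
         --  as in the worker's while loop above.
         Job := Current;
         Current := Current + 1;
      end Next;
   end Bucket_Jobs;

With a dispenser of this shape, each Bucket_Worker keeps calling Next
until the index it receives falls outside the bucket range, which
matches the termination condition of the worker loop.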