From: "Dmitry A. Kazakov"
Reply-To: mailbox@dmitry-kazakov.de
Organization: cbb software GmbH
Newsgroups: comp.lang.ada
Subject: Re: question about tasks, multithreading and multi-cpu machines
References: <1rz7na7qggm7p.1jcvc91svb4pd.dlg@40tude.net>
Date: Fri, 17 Mar 2006 20:10:17 +0100

On Fri, 17 Mar 2006 15:23:37 +0100, Maciej Sobczak wrote:

> Dmitry A. Kazakov wrote:
>
>> If you don't or don't care, then it is an optimization issue.
>
> In which case someone somewhere needs to translate it into source code,
> unless you have a very smart parallelizing compiler. I'm addressing the
> source-code aspect of the whole.

I prefer to leave that to the compiler. After all, you don't manually
optimize programs for the pipelines of a concrete CPU. At some point you
say: OK, that's my abstraction level, I don't care what goes on beneath
it, because it is economically unreasonable.

>> I don't think that Occam's style of concurrency could be a viable
>> alternative. Mapping concurrency onto control flow statements is OK,
>> and Ada has this in its accept and select statements.
>
> Yes, but accept and select are statements that operate with the
> concurrency that is already there. The problem is how to introduce this
> concurrency in the first place, without polluting the final solution
> with irrelevant entities.

I'm not sure what you mean. But, fundamentally, concurrency cannot be
described in terms of a Turing machine. It is incomputable. You have to
postulate it.

>> Code like the above is very difficult to reuse. Suppose you want to
>> extend it. How can you add something to each of the two execution
>> paths?
>
> In the same way as you add something to each branch of the if statement.

Absolutely, because both are too low-level. You can extend a polymorphic
subprogram.

>> Without code modification?
>
> So what about the if statement? :)
> Can you extend the branches of an if without code modification? :|

One way is how constructors and destructors are extended.

>> Suppose I call some subprograms from each of the execution paths; how
>> could they communicate?
>
> Using objects that are dedicated to this, ehem, task.

OK, but then the concept of light-weight concurrency you have described
is incomplete. You need something else to make it usable. Ada now has
protected objects as well, though tasks were complete without them.
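Just to illustrate what I mean (a rough sketch only; the names Mailbox,
Producer and Consumer and the plain Integer payload are invented for the
example), two such execution paths could communicate through a protected
object along these lines:

   declare
      protected Mailbox is
         procedure Put (Value : Integer);
         entry Get (Value : out Integer);
      private
         Data : Integer;
         Full : Boolean := False;
      end Mailbox;

      task Producer;
      task Consumer;

      protected body Mailbox is
         procedure Put (Value : Integer) is
         begin
            Data := Value;
            Full := True;     -- open the barrier of Get
         end Put;

         entry Get (Value : out Integer) when Full is
         begin
            Value := Data;
            Full  := False;
         end Get;
      end Mailbox;

      task body Producer is
      begin
         Mailbox.Put (7);     -- one execution path produces a value
      end Producer;

      task body Consumer is
         X : Integer;
      begin
         Mailbox.Get (X);     -- the other path blocks until the value is there
      end Consumer;
   begin
      null;                   -- the block completes only after both tasks finish
   end;

The communication point is named, typed and encapsulated, so the two
bodies need not know anything about each other.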
>> What about safety?
>
> I don't see anything that would make it less safe than two separate
> task bodies. Take the code already presented by Jean-Pierre Rosen:
>
>    declare
>       task T1;
>       task body T1 is
>       begin
>          A := 7;
>       end T1;
>
>       task T2;
>       task body T2 is
>       begin
>          B := 8;
>       end T2;
>    begin
>       null;
>    end;
>
> My *equivalent* example would be:
>
>    begin
>       A := 7;
>    with
>       B := 8;
>    end;
>
> (where's the Ada readability where you need it? ;) )

Well, it is like claiming that *ptr++ is more readable. Yes, it is, if
the program is three lines long. Ada's variant turns out better when the
concurrent paths become more complicated. A further advantage is that
Ada's way is modular from the start. Your model will sooner or later end
up with:

   begin
      Do_This;
   with
      Do_That;
   end;

which is not much different from:

      X : Do_This;  -- Task
      Y : Do_That;  -- Another task
   begin
      null;
   end;

> From the safety point of view, what makes my example worse than the one
> above it? What makes it less safe?

Concurrent bodies are visually decoupled. They don't share a common
context. Nothing forces them to be in the same package. It feels safer.
(:-))

>> They would definitely access some common data?
>
> Probably yes, probably not - depends on the actual problem to be
> solved. Let's say that yes, there is some shared data. What makes the
> data sharing more difficult/unsafe than in the case of two separate
> task bodies?

Tasks have contracted interfaces, I mean rendezvous (see the sketch in
the P.S. below). This is the normal way for tasks to communicate with
each other. It can, to some extent, be analyzed statically, provided
programmers do not misuse it. For an Occam-like paradigm no contracts
are stated at all. Most likely programmers will use shared variables.

>> If they spin for a lock, then what was the gain of concurrency?
>
> What is the gain of concurrency in the presence of locks with two
> separate task bodies?
> This issue is completely orthogonal to the way the concurrency is
> expressed. You have the same problems and the same solutions no matter
> what syntax is used to introduce concurrency.

I meant whether the construct begin ... with ... end is atomic. I
vaguely remember that Occam had channels to communicate between
scheduling items.

>> If they don't, how does one maintain that sort of code in a large,
>> complex system?
>
> The difference between separate task bodies and support for concurrency
> at the level of control statements is that the former can *always* be
> built on top of the latter.

I don't think so. Consider: T1 starts T2 and then completes before T2.

> The other way round is not true.

If you allow functional decomposition, it will be.

>> How can I rewrite it for n processors (n is unknown in advance)?
>
> With the help of asynchronous blocks, for example (I've mentioned them
> in the article on my web page).
> As said above, more structured solutions can always be built on top of
> less structured ones. In particular, it would be quite straightforward
> to build a (generic?) package for this purpose that would internally be
> implemented with the help of concurrent control structures - this is
> always possible. What I want is the possibility to express concurrency
> at a level that does not require me to use any new entity, if this new
> entity does not emerge by itself in the problem analysis.

It is OK, but it is only one side of the truth. Also, for the case where
the application domain is orthogonal to concurrency, I'd prefer no
statements at all. I'd like to specify some constraints and let the
compiler derive the threads it wants.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
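P.S. To make "contracted interface" a bit more concrete, a rough sketch
only; the names Server, Client and the entry Put are invented for the
example:

   declare
      task Server is
         entry Put (Value : Integer);  -- the contract: a typed entry
      end Server;

      task Client;

      task body Server is
         Data : Integer;
      begin
         accept Put (Value : Integer) do
            Data := Value;             -- the exchange happens inside the rendezvous
         end Put;
      end Server;

      task body Client is
      begin
         Server.Put (8);               -- blocks until Server accepts the call
      end Client;
   begin
      null;                            -- the block completes only after both tasks finish
   end;

Whatever the two paths exchange has to pass through an entry visible in
the task specification, which is what makes it amenable to static
analysis.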