Newsgroups: comp.lang.ada
From: Robert A Duff <bobduff@world.std.com>
Subject: Re: Ada Annex E (Just curious :-)
Date: Wed, 7 Mar 2001 21:39:19 GMT
Organization: The World Public Access UNIX, Brookline, MA

Jeffrey Carter writes:

> One can argue that Annex E is completely superfluous. One could have a
> compiler that targeted a distributed system, and converted all intertask
> communications (rendezvous and protected operations) into network
> messages. A program for a distributed system could look exactly like a
> program for a multiprocessor, which would make porting from one to the
> other much simpler.

One can argue that, but I don't think it's true. ;-)

I implemented such a beast for Ada 83. You could put tasks on different
nodes of a distributed system. You could control where they ran, or let
the system choose (some sort of primitive load balancing). Rendezvous was
implemented in terms of message passing across the network.

But we had to "cheat" a little bit. Above, you mention "rendezvous and
protected operations" as the intertask communication mechanisms. But
there's another one: shared variables. In Ada (83 and 95) it is perfectly
legitimate for two tasks to refer to the same variable (so long as they
synchronize properly, perhaps using rendezvous). But the compiler can't
tell which variables are shared. This pretty much implies that all tasks
must share the same address space. (You *could* implement distributed
shared memory in software, but it would be intolerably slow. And you
would have to pay that price even for variables local to a single task
(or node), because the compiler can't tell which variables are shared.)

Our solution was to make it the programmer's problem: we allocated each
variable in the address space of the task that elaborated it. If some
other task on some other node referred to that variable, it just wouldn't
work (it would refer to a bogus address, and get junk data or a
segmentation fault or whatever). So you couldn't just take any random Ada
program and make it distributed. You had to design your program carefully
to avoid all use of shared variables. I don't remember what we did with
heap data.

We considered having a pragma that would indicate which variables are
shared. The obvious name would be "pragma Shared", but Ada 83 already
(annoyingly) defined pragma Shared to mean something different. Anyway,
we never got around to implementing that.

Another, more minor, problem had to do with timed and conditional entry
calls. Which node does the timing?
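In case they're not fresh in your mind, timed and conditional entry calls
look roughly like this (a minimal sketch; Timed_Call_Demo, Server, and
Request are made-up names):

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Timed_Call_Demo is

      task Server is
         entry Request (X : Integer);
      end Server;

      task body Server is
      begin
         loop
            select
               accept Request (X : Integer) do
                  Put_Line ("served" & Integer'Image (X));
               end Request;
            or
               terminate;
            end select;
         end loop;
      end Server;

   begin
      --  Timed entry call: abandon the call if the rendezvous can't
      --  start within half a second.  In a distributed implementation,
      --  which node's clock measures that half second?
      select
         Server.Request (1);
      or
         delay 0.5;
         Put_Line ("timed out");
      end select;

      --  Conditional entry call: take the call only if the server is
      --  "immediately" ready -- but across a network, just finding that
      --  out costs a message round trip.
      select
         Server.Request (2);
      else
         Put_Line ("server not ready");
      end select;
   end Timed_Call_Demo;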
And what does it mean for an entry to be "immediately" ready to go, when
it takes a non-trivial amount of time to find out (you have to send a
message and receive a reply)? These questions had fairly obvious answers,
but again, the programmer would have to design the program with
distribution in mind, because timing across a distributed system with
multiple clocks isn't quite as simple as in a single shared-memory,
single-clock situation.

So I think Annex E is really a better solution to the problem. You (and
the compiler) can control whether there is any shared data, and if so,
which variables. And there is no timed RPC (although you can wrap an RPC
in a select-then-abort, I suppose).

If I were designing a language from scratch, I think I would have three
kinds of object: constants (which can be shared by copying them to
everywhere they're needed), variables that are local to the task that
elaborated their declaration, and "shared" variables, which can be
referenced by more than one task. This would require marking procedures
syntactically if they are called by any "other" task.

In such a language, I think your idea makes sense: let tasks run in
shared memory, or in a distributed way, depending on what the programmer
wants.

- Bob
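P.S. For concreteness, the Annex E machinery above looks roughly like
this. It's only a sketch -- the unit names (Shared_Data, Remote_Services,
Do_Work) are made up, and each library unit would normally go in its own
compilation unit:

   --  A shared passive package: its variables can be placed where more
   --  than one partition can see them.  This is where the explicitly
   --  shared data lives; nothing else is shared.
   package Shared_Data is
      pragma Shared_Passive;
      Counter : Integer := 0;
   end Shared_Data;

   --  A remote call interface package: calls to its subprograms from
   --  other partitions are carried out as remote procedure calls.
   package Remote_Services is
      pragma Remote_Call_Interface;
      procedure Do_Work (N : Integer);
   end Remote_Services;

   with Shared_Data;
   package body Remote_Services is
      procedure Do_Work (N : Integer) is
      begin
         --  (a real program would guard Counter against concurrent
         --  calls, e.g. with a protected object in Shared_Data)
         Shared_Data.Counter := Shared_Data.Counter + N;
      end Do_Work;
   end Remote_Services;

   --  There is no timed RPC, but a remote call can be wrapped in an
   --  asynchronous select so the caller gives up after a while:
   with Remote_Services;
   procedure Call_With_Timeout is
   begin
      select
         delay 2.0;      --  again: measured on whose clock?
      then abort
         Remote_Services.Do_Work (42);
      end select;
   end Call_With_Timeout;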