From: bill@valiant (R.A.L Williams)
Newsgroups: comp.lang.ada
Subject: Multiple instances of run-time sys?
Date: 20 Jan 1995 11:08:52 GMT
Organization: GEC-Marconi Research Centre
Message-ID: <3fo5k4$su2@miranda.gmrc.gecm.com>

In 1993 the US Navy department issued a series of reports on 'Advanced Avionics Architecture and Technology' (sorry, there doesn't seem to be any reference number) in which, among other things, they seem to be pushing the following concepts:

1. Software reuse, presumably at library *and* application level.
2. Modular, as opposed to federated, architectures.

Studies in Europe are coming up with the same recommendations, and this is something we need to take seriously.

For those who are not familiar with the distinction between modular and federated architectures, here is a brief description of my view of them:

A Federated structure employs intelligent sensors which feed data to a (probably centralised) main processor. The main processor has relatively low I/O data rates because the intelligent sensors do sensible preprocessing. This seems like a sensible architecture until you start adding extra hardware for redundancy to improve the reliability.
The intelligent sensors need a disproportionately high amount of extra hardware, and the system cost goes up.

A Modular structure uses dumb(er) sensors which feed (nearly) raw data to a more powerful main processor. When redundancy is added to the complete system, this usually works out cheaper because the main processor has better opportunities for sharing resources.

Sorry, that was just a very quick thumbnail sketch. The key point is that to take advantage of the Modular architecture we need, in general, to be able to support both process migration and per-CPU multi-processing, to allow reconfiguration and optimal use of resources.

Now, finally, I can get to the point of this post!

Suppose we have a CPU (one of many in the system) which is required to run, say, two COTS applications, almost certainly from different vendors (e.g. radar processing and nav processing). Both of these are written in Ada (because we always obey the mandates, don't we?), but quite possibly they were developed using different compilers. So what happens with the run-time systems for these applications?

Most run-time systems for embedded applications make the following assumptions about their environment:

- Their application is the only one running on the CPU.
- They supply all hardware interfaces to the board, i.e. initialisation, interrupt dispatch, etc.

As I see it, we have a number of alternatives:

1. Write an operating system that provides a 'virtual machine' for each application and its run-time to run in.
2. Find a compiler that allows multiple applications and reconfiguration, and recompile all the applications with this compiler.
3. Write our own run-time system which allows the same as 2.

There may be more. I don't know of anywhere, at present, where the technology for any of these alternatives exists. Does anyone have any suggestions?

Bill Williams