From: bill@uk.co.gec-mrc (R.A.L Williams)
Newsgroups: comp.lang.ada
Subject: Ada and embedded system reconfiguration
Message-ID: <5697@valiant>
Date: 4 Nov 94 14:34:39 GMT
Organization: GEC-Marconi Research Centre, Great Baddow, UK

Recent rumblings in the procurement departments of both DoD and MoD
indicate that the long-term future for military procurement programmes
is in reconfigurable modular systems. (See, for example, the reports
from the A3P programme, the ASAAC programme, the US Navy Advanced
Avionics Architecture programme, etc. - sorry, I don't have good
references to these; can anyone else help?) A related move is towards
the greater adoption of software reuse and COTS code.

I've been thinking about the practicalities of implementing these
techniques, and I have a number of points that I'd like to start a
discussion on.

Now, my understanding of reconfiguration in modular systems goes
something like this: a platform has one main computer system which
runs ALL (well, nearly all - safety-critical is probably out, but the
coffee vending machine is definitely in) software applications.
Normally this approach would be regarded as madness, but here's where
the clever bit comes in. The main system is based on a distributed
architecture with sufficient redundancy built in that, in the event of
a hardware failure, a reallocation of tasks to processors can take
place and system availability is maintained *transparently* to the
operators.

The cost benefit of this approach is that instead of taking a platform
off-line to repair a single subsystem, the repair can wait until
scheduled maintenance takes place. The savings in overall life-cycle
costs are thought to be enormous. Naturally, sufficient power and I/O
bandwidth must be provided to make this a feasible design.

Note that this is *not* the same as putting the redundancy into each
application, the so-called 'federated' architecture approach. If you
do the MTBF sums, it generally turns out that the modular approach
uses less hardware/power/space/weight than the federated approach.

Let's look at this system architecture from the point of view of the
system integrator and see how Ada implementations may fit in. The
system integrator has been presented with a collection of software
packages, mostly from different companies, some of them legacy or COTS
code, and his task is to make all of these work together on a
reconfigurable system. Now, I can see a number of potential problems,
specifically with Ada code, but I see no reason why many of them
wouldn't also occur in other languages:

1. 'Implementation defined ...' problems. Given that the system
   integrator can specify a target processor to all the contractors,
   how compatible are the internal representations generated by
   different compilers (don't forget legacy, COTS and reused code)?
   If there are incompatibilities, then it seems that the system
   integrator may need to specify a compiler as well. What does this
   do to s/w reuse? (A sketch of one partial fix follows this list.)

2. The Ada run-time system structure can cause problems. My experience
   of Ada cross-compilers is that the code and the run-time are
   targeted at a SINGLE application running on a SINGLE processor. The
   run-time ties itself in to the interrupt vector table and a whole
   host of target-specific bits (see the second sketch after this
   list). Now, I suppose that we could write our 'distributed
   operating system', which will be necessary to make this
   reconfiguration work, to provide each application with a virtual
   machine. I'm not sure how efficient this would be or how
   far-ranging it would need to be. UNIX, for example, provides a
   virtual machine for each process, but that machine has no notion,
   in general, of 'real-time' or interrupts. The 80386 CPU, on the
   other hand, has a virtual machine mode, but that works by emulating
   a 'subset' machine. I'm not aware of any systems where the full
   architecture of the target is apparently available to every
   virtual machine. Does anyone have any idea of the pros and cons of
   this?

3. Another way of dealing with the Ada run-time might be simply to
   rewrite it. The snag here is that, unless we have mandated a
   specific compiler for the platform, we may have to rewrite several.
   Another snag, which is not a technical problem but may be a
   contractual one, is compiler validation. As I understand it,
   cross-compilers are validated to run on a specific hardware target
   with, by implication, a specific run-time system. Now, either of
   the two approaches to integrating multiple applications would, it
   seems to me, invalidate the compiler. That doesn't stop the system
   working--but it might stop the customer paying for it!!
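On point 1, representation clauses can at least pin down the data that
crosses application boundaries, provided every contractor's compiler
honours them the same way. A minimal sketch - the package, field names
and layout are all invented for illustration:

   package NAV_MESSAGES is

      type STATUS_TYPE is (OK, DEGRADED, FAILED);
      -- Force the external codes rather than trusting each
      -- compiler's default enumeration representation.
      for STATUS_TYPE use (OK => 0, DEGRADED => 1, FAILED => 2);

      type NAV_MESSAGE is
         record
            SOURCE_ID : INTEGER range 0 .. 255;
            STATUS    : STATUS_TYPE;
            ALTITUDE  : INTEGER range -100_000 .. 100_000;
         end record;

      -- Pin the layout down to the bit, so that two compilers
      -- (or two releases of one compiler) agree on it.
      for NAV_MESSAGE use
         record
            SOURCE_ID at 0 range 0 .. 7;
            STATUS    at 1 range 0 .. 7;
            ALTITUDE  at 4 range 0 .. 31;
         end record;
      for NAV_MESSAGE'SIZE use 64;

   end NAV_MESSAGES;

Of course, this only helps at the interfaces; it says nothing about
the representations each compiler uses internally, and an
implementation is free to reject a representation clause it cannot
support.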
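And to make the run-time entanglement in point 2 concrete, the classic
Ada 83 interrupt idiom looks like this (the vector address 16#40# is
invented - it's implementation-defined in real life):

   package TIMER is
      task TIMER_HANDLER is
         entry TICK;
         -- The address clause binds the entry to a hardware
         -- interrupt; the run-time, not the application, plants
         -- its own code in the vector table and turns each
         -- interrupt into a rendezvous with TICK.
         for TICK use at 16#40#;
      end TIMER_HANDLER;
   end TIMER;

   package body TIMER is
      task body TIMER_HANDLER is
      begin
         loop
            accept TICK do
               null;  -- urgent interrupt-level work goes here
            end TICK;
         end loop;
      end TIMER_HANDLER;
   end TIMER;

Two applications built against two different run-times on the same
processor would both expect to own that vector table - which is
precisely the clash a 'distributed operating system' or virtual
machine layer would have to arbitrate.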
I haven't had a chance (or time!) to play with it yet, but Ada 9X
certainly looks like a great improvement on Ada 83. In particular, the
Systems Programming, Real-Time Systems and Distributed Systems annexes
are a move in the right direction; but do they go far enough to
support reconfiguration, and what will the performance hit be?

Of course, there are no 'magic bullets'. Other languages - C, C++,
Eiffel, etc. - will all suffer to some degree from the problems
outlined above (except, of course, validation!).

So there we have it: are Ada, reconfiguration and software reuse
mutually exclusive? Discuss.

Bill Williams
-----------------------------------------------------------------------
My employer disclaims any responsibility for my diseased ramblings