From: Alan Brain
Subject: Re: Ariane 5 failure
Date: 1996/10/02
Message-ID: <32531A6F.6EDB@dynamite.com.au>
References: <96100112290401@psavax.pwfl.com>
Organization: @Home
Reply-To: aebrain@dynamite.com.au
Newsgroups: comp.lang.ada
X-Mailer: Mozilla 3.0 (Win16; I)

Marin David Condic, 407.796.8997, M/S 731-93 wrote:
>
> Ken Garlington writes:
> >I really need to change jobs. It sounds so much simpler to build
> >software for ground-based PCs, where you don't have to worry about
> >the weight, power requirements, heat dissipation, physical size,
> >vulnerability to EMI/radiation/salt fog/temperature/etc. of your
> >system.

The particular system I was talking about was for a submarine. Very
tight constraints indeed: on power (it was a diesel sub), physical
size (it had to fit through a torpedo hatch), heat dissipation (a
bit), and vulnerability to 100% humidity, salt, chlorine, etc. Been
there, done that, got the T-shirt.

I'm a Software Engineer who works mainly in Systems, or maybe a
Systems Engineer with a hardware bias. Either way, in the initial
Systems Engineering phase, when all the HWCIs and CSCIs get defined,
it is only good professional practice to build in plenty of slack.
If the requirement is to fit through a 21" hatch, you DON'T design
something that's 20.99999" wide. If you can, make it 16", 18" at
most; it will probably grow. Similarly, if you require a minimum of
25 MFLOPS, make sure there's a growth path to at least 100. It may
well be less expensive and less risky to build a chip factory to
make a faster CPU than to lose a rocket, or a sub, to a software
failure that could have been prevented. Usually such ridiculously
extreme measures are not necessary. The hardware guys bitch about
the cost-per-CPU going through the roof. Heck, it could cost $10
million; but if it saves two years of software effort, that's a net
saving of $90 million. (All numbers are representative, i.e. plucked
out of mid-air, and as you USAians say, Your Mileage May Vary.)

> I personally like the part about "performance is below theoretical
> expectations". Where I live, I have a 5 millisecond loop which
> *must* finish in 5 milliseconds. If it runs in 7 milliseconds, we
> will fail to close the loop in sufficient time to keep valves from
> "slamming into stops", causing them to break, rendering someone's
> billion dollar rocket and billion dollar payload "unserviceable".
> In this business, that's what *we* mean by "performance is below
> theoretical expectations" and why runtime checks which seem
> "trivial" to most folks can mean the difference between having a
> working system and having an interesting exercise in computer
> science which isn't going to go anywhere.

In this case, "theoretical expectations" for a really tight 5 msec
loop should be less than 1 msec. Yes, I'm dreaming. OK, 3 msec,
that's my final offer. For the vast majority of cases, if your
engineering is closer to the edge than that, it'll cost big bucks to
fix the overruns you always get.
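To make that concrete, here's a minimal sketch of what I mean by
budgeting the margin into the loop itself, in Ada. All the names and
numbers are invented for illustration, not from any real system: the
frame is a hard 5 msec, but anything over a self-imposed 3 msec
budget gets flagged as an overrun, so you find out the margin is
being eaten long before the deadline actually breaks.

with Calendar; use Calendar;
procedure Control_Loop is
   Period  : constant Duration := 0.005;  -- the 5 msec hard frame
   Budget  : constant Duration := 0.003;  -- self-imposed margin

   Start        : Time;
   Next_Release : Time;
   Elapsed      : Duration;

   procedure Serve_Valves is
   begin
      null;  -- stand-in for the real control work
   end Serve_Valves;

begin
   Next_Release := Clock;
   loop
      Start := Clock;
      Serve_Valves;
      Elapsed := Clock - Start;
      if Elapsed > Budget then
         null;  -- log the overrun: fix it now, while it's cheap,
                -- not after the rocket is lost
      end if;
      Next_Release := Next_Release + Period;
      delay Next_Release - Clock;  -- sleep out the rest of the frame
   end loop;
end Control_Loop;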
Typical example: I had a big bun-fight with project management about
the hefty data-transfer rate required for a broadband sonar. They
wanted to hand-code the lot in assembler, as the requirements were
really, really tight. No time for any of this range-check crap, the
data was always good. I eventually threw enough of a professional
tantrum to wear down even a group of German Herr Professor Doktors,
and we did it in Ada-83, if only as a first pass, to see what the
rate really would be. The spec called for 160 MB/sec. The first
attempt gave 192 MB/sec, and after some optimisation we got over
250. After the hardware flaws were fixed (the ones the
"un-necessary" range-bound checking detected; see the sketch after
my sig), this was above 300. Now that's still too close for my
druthers, but even 161 I could have lived with. It saved maybe 16
months on the project, at about 100 people at $15K a month. And
after the transfer the data really was trustworthy, which saved a
lot of debug time downstream on the applications.

Note that even with (minor) hardware flaws, the system still worked.
Note also that by paying big $ for more capable hardware than
strictly necessary, you can save bigger $ on the project. Many
projects spend many months and many $millions to fix, by hacking,
kludging, and sheer genius, what a few lousy $100K of extra hardware
cost would have made unnecessary. What's necessary is a good
software engineer on the Risk-management team, and in the Systems
Engineering early on: one with enough technical nous in hardware to
know what's feasible, enough courage to cost the firm millions in
initial costs, and enough power to make it stick. I've seen it; it
works. But in my experience it's been tried less than a dozen times
in 15 years. :(

----------------------                                  <> <>
How doth the little Crocodile | Alan & Carmel Brain |       xxxxx
Improve his shining tail?     | Canberra, Australia |  xxxxxHxHxxxxxx
                                                  _MMMMMMMMM_MMMMMMMMM
----------------------           o OO*O^^^^O*OO o oo    oo oo    oo
By pulling Maerklin Wagons, in 1/220 Scale
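P.S. For the "range checks are crap" school: a minimal sketch of how
the "un-necessary" checking catches a hardware flaw, in Ada-83
style. The types and values are invented for illustration, not from
the real sonar. The subtype says what the physics allows; a stuck
bit delivers something outside it, and Constraint_Error fingers the
bad channel instead of letting it quietly poison everything
downstream.

procedure Ingest_Sample is
   type Raw_Sample is range -32768 .. 32767;
   -- anything the ADC hardware can deliver
   subtype Good_Sample is Raw_Sample range -20_000 .. 20_000;
   -- anything the physics allows
   Sample : Good_Sample;

   function Read_Channel return Raw_Sample is
   begin
      return Raw_Sample'Last;  -- stand-in for a stuck bit upstream
   end Read_Channel;

begin
   Sample := Read_Channel;  -- the range check fires here, for free
exception
   when Constraint_Error =>
      null;  -- log the channel as suspect: the "crap" just paid
             -- for itself
end Ingest_Sample;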