From: "Marin David Condic" <marin.condic@pacemicro.com>
Newsgroups: comp.lang.ada,comp.lang.c,comp.lang.c++,comp.lang.functional
Subject: Re: How Ada could have prevented the Red Code distributed denial of service attack.
Date: Mon, 6 Aug 2001 10:59:20 -0400
Organization: Posted on a server owned by Pace Micro Technology plc
Message-ID: <9kmbc9$e64$1@nh.pace.co.uk>
References: <9k9if8$rn3$1@elf.eng.bsdi.com> <3B687EDF.9359F3FC@mediaone.net> <5267be60.0108021911.7d8fe4@posting.google.com>

"Tor Rustad" wrote in message news:cOHa7.3620$e%4.107328@news3.oke.nextra.no...
> Hmm...in fact for that reason, I have always assumed that in extremely
> critical systems, you simply use independent design teams (and programmers)
> to develop the second unit, and *not* just duplicate the first unit (which
> must have the same identical bugs or flaws).
>

That is *extremely* expensive to do, and it doesn't guarantee that you won't still have problems. Yes, identical software on both sides allows for a common-mode failure. But if you had different software on both sides, you could still have a common-mode failure in the duplicated processors. Use different processors? You can still have a common-mode failure arising from common requirements or common design decisions. Don't forget that by running duplicate design efforts you have also doubled the chances that someone introduces an error into a design, and depending on the failure mode, failure of one side can be just as disastrous as failure of both. Systems of this sort have in fact been built, but I don't know that they are significantly more reliable than dual-redundant duplicate designs.

> IMHO, using 16-bit integers, is also a design issue. If there were strict
> performance requirements to be met, well then why not use a faster
> programming language, where the programmers perhaps could afford to use
> 32-bit integers? Even in non-critical systems, I do think that many
> programmers try to take future demands into consideration and make sure
> their SW works not only according to current specs. Since they know that
> somebody else may later change other parts of the system, possibly without
> re-testing all the SW again.
>

Ada is just as fast as any other programming language; that's not the issue. If you've ever worked with this sort of embedded system, you might have an appreciation for the fact that the *real* limit was the speed of the processor, and that you sometimes can't change it for a whole variety of reasons.
You are often trying to squeeze a whole lot of functionality into very tight timing loops, and you bend every effort to get optimal performance out of the hardware; if your compiler were generating inefficient code, you'd dip into assembler. I wouldn't question what the designers did in terms of optimization or selection of numeric sizes, because there is no indication that they did anything wrong here. They knew they had a timing problem. They optimized their code to solve it. They did an analysis to make sure it was correct. They selected appropriate accommodations in the event of failures. It worked flawlessly in the anticipated environment.

Could they have come up with a better design? Almost certainly. Given enough cubic dollars and an eternity in which to do it, they probably could have come up with something more efficient, less likely to fail, etc. But in real-world engineering that is often not possible. They came up with something that was "good enough" for the job at hand. The problem arose when their device was used in an application for which it wasn't designed.

> > Not only wasn't it a programming bug, I wouldn't even call it a design
> > bug, since hardware failure would have been the correct presumption
> > based on the Ariane 4 trajectory data. It was an untested,
> > unjustified re-use bug.
>
> To re-use an old design or not, is also a design decision IMHO. Not testing
> it, is a really bad decision, perhaps the biggest one in this sad story.
>

Well, true enough. But remember that they were basically taking an off-the-shelf product and bolting it onto a new application. This was hardly the fault of the original design engineers. It was the fault of bad management decisions: deciding to "reuse an old design" that had a proven track record and *assuming* it would work correctly in the new application.
MDC
-- 
Marin David Condic
Senior Software Engineer
Pace Micro Technology Americas
www.pacemicro.com
Enabling the digital revolution
e-Mail: marin.condic@pacemicro.com
Web: http://www.mcondic.com/