From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sat, 16 Jan 1993 10:07:23 -0600 (CST)
From: PETCHER@OTTAWA.dseg.ti.com (What? Me Ada?)
Subject: Unit testing, etc.
Message-ID: <930116100723.204058bc@OTTAWA.DSEG.TI.COM>

> My personal feeling is that they are limiting their understanding
> of unit testing to the "white box" methods of checking branch and
> statement coverage (as opposed to "black box" methods to demonstrate
> that the unit accurately fulfills the requirements it was designed to
> perform).
>
> My personal feeling aside, there probably is some legitimacy to
> their insight. I would like to receive more thoughts, pro and con on
> this topic from others who are or have been involved in Ada
> development.

I go along with your personal feelings. Until we come up with a
language so good that we can feed it a design specification and it
spits out code (as well as design corrections), unit testing will
continue to be one of the most important steps in software
development. It's not much different in Ada than in any other
language, except that once the implementer has gotten the unit through
the compiler there is less chance that the bugs found will be coding
errors. This in no way implies there are no errors at all. Regardless
of the cause of the errors, they're always cheaper to find and correct
in unit test than during integration, and particularly cheaper than
finding them by having an aircraft slam into the side of a mountain.

>> A virus could be sent out to search and destroy Excel 3.0
>> and replace it with version 4.0. Work products could be
>> similarly managed. A virus could be sent out to search
>> and destroy all old versions of a document and replace them
>> with the latest. We will need a virus stop at the door
>> of the archive library of course.
>
> This can easily be accomplished with a centralized program which
> inspects each appropriate computer on the network...
>
> ...Using a virus exposes the user to risk (what if the virus
> accidentally deletes the wrong files on the wrong computers) beyond
> that of a centralized program, because virii by their nature
> self-replicate and usually don't carry enough information with them
> to identify which machines it should travel to and when it should
> die.

I think the main problem here is one of terminology. Back before the
days of destructive viruses, at least to the extent of their
prevalence today, certain Unix hackers came up with programs designed
to spread out over a network and divide up execution of an application
among multiple nodes. These things were called worms. Unlike a virus,
a worm is a stand-alone executable program; it does not need to attach
itself to another program so as to fool a user into executing it.
Nowadays, the term "worm" has gotten lumped in with "virus" as being
synonymous with "evil." However, it would be a worm in the original
sense that one might use as a vehicle for software updates and such.

Malcolm Petcher
Petcher@m2000.dseg.ti.com