From: "Robert I. Eachus"
Newsgroups: comp.lang.ada
Subject: Re: signature like constructions
Date: Sat, 09 Aug 2003 07:39:07 GMT

Alexander Kopilovitch wrote:

> Do you mean that the "Paradise may be regained"... if not for the average
> programmer then at least for an average star -:) ? That is, that by
> training yourself in that direction, and keeping your nerve, it is
> possible to follow even rapidly and rather randomly changing requirements
> with proper documentation and analysis in real time, inside a not very
> friendly environment?

No, I mean that I have worked directly on projects that could not have been
completed without rigorous control over the (evolving) requirements. You
have to have religion, or something.

One of my favorite examples was a project actually written in Ada, but that
is almost a detail. The original schedule called for a three-year
development cycle, in C and assembler. Two years in, that group admitted
total failure. I'll get back to why in a minute.

Another group volunteered to take over the project and do it in 11 months
to meet the original schedule--if they could use Ada and modern software
development methods. They got the go-ahead on a plan which called for six
months of requirements analysis and design, two months of detailed design,
two months of coding, and one month of testing. At the end of detailed
design, eight months in, they decided that a significant portion of the
original design needed to be scrapped and redone. They finished coding a
week behind schedule, but completed the formal testing in two days--the
test suite had been developed as part of the entire process, and coded by a
separate test group within the project.

The manager in charge called it management by heart attack. It is one thing
to accept intellectually that starting to code too early will be a
disaster, but to finally start coding in the ninth month of an eleven-month
project can stress a manager out. But it works. In fact, the rest of the
operating system that it was part of slipped three months, so all of the
features that had been postponed to version two were in there when it
released.

But they did formally keep those additions as a second version in the
configuration management system, and told marketing that they would get it
all when release two was ready, but nothing before then. That allowed the
first version to be kept bug-free under version control during beta test,
so the developers didn't get involved in fighting fires.

I said I would get back to why the first group failed. They actually had a
"working" system at one point. But it had some major bugs, and every bug
fix created more bugs than it fixed. The system as a whole had too much
coupling. In other words, this subsystem was too big for the project
management tools and methods they were using.

As for the Ada version? The interfaces between the various modules and
sub-components were all defined in less than one thousand lines of Ada
package specifications. All of the bodies, except one that got near 650
lines, were under five hundred lines long. Each package came with a test
program for that package, and was unit tested before being put under
version control. Is it any wonder that the system ran the first time with
only minor errors left to fix?

(There were 17 bugs found in system testing. Fourteen of them were
misspellings, poor punctuation, or missing accents in the foreign-language
messages. The program had a localization package that supported US,
British, French, German, Spanish, and Italian. Most of the complaints
during the beta testing were about Quebecois usage, from French testers.
Release 2 made French Canadian a separate localization file. ;-)
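For anyone who has not worked that way, here is a minimal sketch of the
package-plus-test discipline described above: a small specification, a body
hidden behind it, and a test program delivered with the package and run
before the package goes under version control. The names (Bounded_Queues
and so on) are invented for illustration; none of this is the actual
project code.

   package Bounded_Queues is
      type Queue is limited private;
      procedure Enqueue (Q : in out Queue; Item : in  Integer);
      procedure Dequeue (Q : in out Queue; Item : out Integer);
      function  Length  (Q : Queue) return Natural;
   private
      type Item_Array is array (1 .. 100) of Integer;
      type Queue is record
         Items : Item_Array;
         Count : Natural := 0;
      end record;
   end Bounded_Queues;

   package body Bounded_Queues is
      procedure Enqueue (Q : in out Queue; Item : in Integer) is
      begin
         --  Overflow simply raises Constraint_Error in this sketch.
         Q.Count := Q.Count + 1;
         Q.Items (Q.Count) := Item;
      end Enqueue;

      procedure Dequeue (Q : in out Queue; Item : out Integer) is
      begin
         Item := Q.Items (1);
         Q.Items (1 .. Q.Count - 1) := Q.Items (2 .. Q.Count);
         Q.Count := Q.Count - 1;
      end Dequeue;

      function Length (Q : Queue) return Natural is
      begin
         return Q.Count;
      end Length;
   end Bounded_Queues;

   --  The matching unit test, one per package.
   with Bounded_Queues; use Bounded_Queues;
   with Ada.Text_IO;    use Ada.Text_IO;
   procedure Test_Bounded_Queues is
      Q    : Queue;
      Item : Integer;
   begin
      Enqueue (Q, 1);
      Enqueue (Q, 2);
      Dequeue (Q, Item);
      if Item = 1 and then Length (Q) = 1 then
         Put_Line ("Bounded_Queues: PASS");
      else
         Put_Line ("Bounded_Queues: FAIL");
      end if;
   end Test_Bounded_Queues;

The point is not the queue; it is that the specification is less than a
page, clients depend only on it, and the test program is part of the
deliverable.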
> Do you mean that the common defensive strategy against probable but
> unpredictable changes of requirements -- that is, keeping yourself not
> too near to them, and relying more upon agreed tools (which seem less
> vulnerable to spontaneous customer's or management's dances) -- is not
> the best strategy, and that it is really possible to chase a
> moving/jumping problem, all the time at near distance, without
> significant delay?

No. If you don't understand the real requirements, you will get blindsided
again and again during development. But if you do the work to "develop" the
requirements, you simply won't have that problem.

Another project, actually for the same OS release as the project above,
provides an even better example. It was a menu subsystem to provide a
consistent look and feel to all of the office automation packages. (This
was for the Honeywell DPS6, Mod 400 release 3.) There was a two-day review
to decide if the product was ready for release. This was more than just a
did-it-meet-the-requirements review. At these reviews, marketing could, and
on occasion did, send the software "back to the drawing board"--not because
it did not meet the agreed requirements, but because marketing thought they
couldn't sell it in the current market environment.

During the first day, the marketing team realized that it would be nice if
there were a standard option just for backing out of transactions, in
addition to the key that took you to the top of the current menu hierarchy.
This wasn't considered a show stopper, but marketing wanted an estimate of
the cost of adding the feature. The review finished early on the second
day, and they were going over action items when the manager of the project
was able to tell them that the change had already been made; a new version
had been compiled overnight and was on its way to the Distribution and Test
group for their blessing. In the meantime, it had been installed on one of
the test machines if they wanted to check it out.

You might think that the developers had anticipated this change request and
had it all designed in. But that wasn't how it worked. Since the main
requirement for the subsystem was to present a consistent interface to the
user, the design of the system was a rule-based engine that generated the
screens. The change to the code was three lines, because the system did
model the problem space. Something like:

   if In_Transaction then
      Bind_Key (F6, Exit_Transaction);  --  F6 backs out of the transaction
   end if;

(It was only three lines because there was already a "cancel transaction"
entry in the localization files. But even if new messages had had to be
added to the localization files, it still would not have been a big deal.)

You can see how designing the system from the get-go to model the problem
space meant that there were not hundreds of different menu templates that
needed to be changed. Since the REAL requirement was a consistent user
interface, designing the system this way made consistency automatic.
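To make that concrete, here is a sketch, with invented names, of what such
a rule-based engine can look like: one table of key bindings that every
generated screen consults, so a new rule takes effect everywhere at once.
(This is illustration only, not the actual Honeywell code.)

   package Menu_Rules is
      type Key    is (F1, F2, F3, F4, F5, F6, F7, F8, F9, F10);
      type Action is (None, Help, Top_Of_Menu, Exit_Transaction);

      procedure Bind_Key (K : Key; A : Action);
      function  Binding  (K : Key) return Action;
   end Menu_Rules;

   package body Menu_Rules is
      Bindings : array (Key) of Action := (others => None);

      procedure Bind_Key (K : Key; A : Action) is
      begin
         Bindings (K) := A;
      end Bind_Key;

      function Binding (K : Key) return Action is
      begin
         return Bindings (K);
      end Binding;
   end Menu_Rules;

   with Menu_Rules; use Menu_Rules;
   procedure Generate_Screen (In_Transaction : Boolean) is
   begin
      --  Rules applied to every screen, not to individual templates.
      Bind_Key (F1, Help);
      Bind_Key (F2, Top_Of_Menu);
      --  The three-line change: one new rule, and every menu in every
      --  office automation package picks it up.
      if In_Transaction then
         Bind_Key (F6, Exit_Transaction);
      end if;
      --  ... emit the screen text from the current bindings ...
   end Generate_Screen;

Because the screens are generated from the rule table rather than stored as
templates, nothing per-screen has to change when a binding is added.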
> You have seen the miracles being used in more than one or two cases -- so
> you are lucky. I think that most of us feel that the miracles, if they
> happen, will most probably be betrayed. That's why "Adding an
> understanding of ISAs and machine architecture to that is very rare"
> indeed.

And I have also seen disasters where this sort of discipline was necessary,
given the scope of the project, but was not put in place. In fact, I may
finally get around to publishing that paper. (Basically, how to tell, from
analyzing the reports in the bug tracking system, when a project is dead
beyond all hope of recovery. The earlier you can determine that a project
will fail, the less money you waste.)

As for the miracle approach: on one very large project, I had to determine
whether the money should be spent at all. Relatively quickly I determined
that the success or failure of the radar upgrade would depend on whether a
particular problem could be solved. We called it the formation flight
problem, but it could be a lot of airliners on the same course between
beacons at the same speed, or a dozen other sorts of coincidences. The
proposed solution was to partition the target detections into clusters,
then resolve each cluster separately. Of course, this would have to be done
in real time, in a few milliseconds, before the next set of pulse-Doppler
data arrived. At the end of six months I couldn't prove that the
partitioning could be done in the time available, or that it couldn't. But
I did find a way to solve the problem without partitioning the data, so the
contract was awarded.

-- 
"As far as I'm concerned, war always means failure."
   -- Jacques Chirac, President of France
"As far as France is concerned, you're right."
   -- Rush Limbaugh