Date: 18 Nov 91 23:05:52 GMT
From: agate!spool.mu.edu!tulane!uno.edu!JNCS@ucbvax.Berkeley.EDU
Subject: Re: Software Engineering Education
Message-ID: <9886@cs.tulane.edu>

In article <20600125@inmet>, ryer@inmet.camb.inmet.com writes:
> 25% - Quality (correctness) of the executable program
> 25% - Quality of the written design document
> 25% - Quality of the written test plan and procedures
> 25% - Quality of the user documentation
>
> I thought this was the most intelligent approach I'd ever heard. Do any
> of you educators have a better idea? Is this done in other universities?
>
> Mike Ryer
> Intermetrics

When I teach software development courses at any level, I break the grade
down into the following aspects:

1. Documentation
2. Format of code
3. Design of algorithm
4. Implementation
5. Completeness of implementation
6. Correctness
7. Output

Their weights depend on the level of the course and the complexity of the
problem at hand. This grading supports the development of code that can be
read and maintained, versus code that merely works and produces a correct
answer. Usually, correctness and output together do not exceed 20-25%.
Thus there may be programs which are acceptable but do not produce correct
output; acceptability is determined by the other factors.

I feel I must mention that Implementation considers the proper (not just
correct) use of control structures and of the abstraction tools provided
by Ada. (Examples: use of a For loop where it should be a While, procedures
where functions are called for, use of constants, type definitions, etc.)

Jaime Nino
Computer Science Department
University of New Orleans
New Orleans, LA 70148
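
P.S. To make the "proper, not just correct" distinction concrete, here is a
small Ada sketch of my own (the names Style_Demo, Sum_Bad and Sum are purely
illustrative, not from any course handout). Both subprograms compute the same
sum and would earn full Correctness marks, but the first would lose
Implementation points: it uses a While loop where the bounds are known in
advance, and a procedure with an out parameter where the abstraction is
really a value-returning function.

    package Style_Demo is
       type Int_Array is array (Positive range <>) of Integer;

       -- Correct but improper: a procedure used where the point of the
       -- subprogram is to compute and return a single value.
       procedure Sum_Bad (A : Int_Array; Total : out Integer);

       -- Proper: a function expresses the same abstraction directly.
       function Sum (A : Int_Array) return Integer;
    end Style_Demo;

    package body Style_Demo is

       procedure Sum_Bad (A : Int_Array; Total : out Integer) is
          I : Integer := A'First;
       begin
          Total := 0;
          -- Correct but improper: a While loop with a hand-managed
          -- counter, even though the bounds are fixed before entry.
          while I <= A'Last loop
             Total := Total + A (I);
             I := I + 1;
          end loop;
       end Sum_Bad;

       function Sum (A : Int_Array) return Integer is
          Total : Integer := 0;
       begin
          -- Proper: a For loop, since the iteration range is known
          -- in advance; no loose counter variable to get wrong.
          for I in A'Range loop
             Total := Total + A (I);
          end loop;
          return Total;
       end Sum;

    end Style_Demo;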