From: dewar@merv.cs.nyu.edu (Robert Dewar)
Subject: Re: Testing teaching belief?
Date: 1996/11/20
Message-ID: #1/1
References: <32723F6A.54A3@dtek.chalmers.se> <56b275$6k4@felix.seas.gwu.edu> <56paj4$bu0$1@goanna.cs.rmit.edu.au> <56rbmm$kc8@felix.seas.gwu.edu> <56thaj$3v$1@goanna.cs.rmit.edu.au>
Organization: New York University
Newsgroups: comp.lang.ada

With regard to Richard's experimental design, it is missing a HUGE
element. Students learn from the teacher as well as from the textbook.
There is no way to normalize the teaching effect in a case like this
without doing hundreds of parallel experiments, and even then there is
an asymmetric quality that may fundamentally bias your results.

Those who think it is important to emphasize keywords with upper case
letters have a particular view of how to teach a language. I, for
example, find that emphasizing keywords as important is totally
unnecessary; I regard keyword casing as a minor syntactic detail (the
small fragment at the end of this message shows the two styles side by
side), and my teaching of any programming language does not focus on
the syntax. This means that the whole business of upper case keywords
may reflect a fundamental difference in approach and style, and I don't
see how to normalize that.

Yes, you could try to minimize the effects by having Feldman teach from
a lower case keyword book while still furiously emphasizing the
keywords, and having Dewar teach from an upper case keyword book while
underplaying syntactic emphasis. Or you could have Feldman teach from
both books, promising not to let the fact that he finds one book
preferable influence what he says. But in either case, the outcome
would more likely depend on the exact extent to which the teachers
managed to neutralize other factors, something you cannot measure
easily, if at all.

The call for objective evaluation of teaching methodologies is of
course a common one, but that does not make it easy to meet.

Rather than focus on such a small issue, why not address a more
interesting one: how can you show that teaching Ada is or is not better
than teaching C++ to first year students? That's also a very hard
question, but a more interesting one to answer!

Here again the trouble is that you get different inputs from the
teachers. Those teaching Ada tend to be more enthusiastic than those
teaching C++. That's because you generally won't see Ada adopted as a
first teaching language unless at least one faculty member is
enthusiastic about Ada, but you will see C++ adopted by default with
no one enthusiastic about it. Even if you tried to find two equally
enthusiastic teachers, you would find that you were more likely to be
measuring the skill and enthusiasm of the teachers than the inherent
pedagogical value of the language vehicle.
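
For concreteness, here is a small illustrative fragment (my own, not
taken from either textbook) showing the two styles under discussion.
The difference is simply whether a book prints

   procedure Swap (X, Y : in out Integer) is
      Temp : Integer;
   begin
      Temp := X;
      X    := Y;
      Y    := Temp;
   end Swap;

or the same unit with the reserved words capitalized:

   PROCEDURE Swap (X, Y : IN OUT Integer) IS
      Temp : Integer;
   BEGIN
      Temp := X;
      X    := Y;
      Y    := Temp;
   END Swap;

The compiler treats the two forms identically, since Ada reserved words
are case insensitive; the question is purely one of emphasis on the
printed page.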