From: ok@goanna.cs.rmit.edu.au (Richard A. O'Keefe)
Subject: Testing teaching belief?
Date: 1996/11/20
Newsgroups: comp.lang.ada
Organization: Comp Sci, RMIT, Melbourne, Australia

I wrote
>>I can't speak for anyone else, but you can silence me *completely* on
>>this topic and convert me to your style *by showing me the experimental
>>evidence*. It should be a fairly straightforward experiment to perform.
>>(Although double-blind is clearly out of the question...)

mfeldman@seas.gwu.edu (Michael Feldman) replied:
>Well, it would be pretty hard, actually, because you'd need two
>otherwise-identical versions of all the materials, including the book -
>one with my keyword style, the other with lowercase. In a controlled
>study (I've done some...) one needs to be careful to hold everything
>constant but the variable you're measuring. It's a nice idea, but
>in this case it's logistically intractable.

>Can we put this issue to bed, finally?

I have changed the subject line, because I now think we have reached
something we can agree on and that is worth discussing further.

What I think we can agree on is that

- it is better to have evidence that what we do in teaching works

- evidence about what some teachers of a different language have
  fashionably done (especially when books *not* following that fashion
  seem to be as common as ones that do) is evidence about what
  _teachers have done_, not about what _students have learned_ or
  _how well_ they learned it

- perhaps the things most urgently in need of experimental testing are
  the prejudices we _share_, rather than the prejudices we dispute,
  because at least the latter undergo rational/critical discussion

- if it were economically feasible to perform such a test on this or
  any other topic, it would be worth doing.

But there is something here that we disagree about. I believe that it
should be fairly straightforward to perform this experiment, and
Feldman believes it would be "pretty hard". I hope that the only reason
we disagree about this is that it never occurred to me that subjects in
such a test would be using the publisher-bound version of a book.

What I have in mind is taking the source form of all materials the
students see, which would be

    (La)TeX for the book, or possibly SGML
    .adb and .ads files for the code
    .html files for other on-line documentation

and automatically generating the two variants from the common sources.

With respect to the book, I would expect the markup to look something
like this:

    Ada word    (La)TeX          SGML? (suggest better use!)
    keyword     \kw{keyword}     <kw>keyword</kw>
    Attribute   \atb{Attribute}  <atb>Attribute</atb>
    Variable    \var{Variable}   <var>Variable</var>
    Type        \typ{Type}       <typ>Type</typ>
    Subprog     \fun{Subprog}    <fun>Subprog</fun>
    Package     \pkg{Package}    <pkg>Package</pkg>

which one would certainly want to do anyway, so that attributes, types,
and subprograms could be automatically indexed.
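To make that concrete, here is a rough sketch of what the LaTeX side
might look like. The \kw and \typ macro names are the ones from the
table; the two style file names are invented for illustration:

    % master source, shared by every variant:
    The \kw{loop} statement repeats its body until an
    \kw{exit} statement transfers control out of it.

    % kwlower.sty -- variant 1: lower case bold keywords
    \newcommand{\kw}[1]{\textbf{#1}}

    % kwupper.sty -- variant 2: upper case keywords
    \newcommand{\kw}[1]{\MakeUppercase{#1}}

    % the same trick gives the automatic indexing mentioned above:
    \newcommand{\typ}[1]{\textit{#1}\index{#1 (type)}}

Run the master through LaTeX with one style file or the other and you
get the two otherwise-identical versions Feldman asked for; nobody has
to retype anything.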
From a source with markup like this, it is easy to automatically
produce a variant with whatever capitalisation and looks you want. If
one of the variants happens to correspond to the published version of
a book, the students using that variant are still given a locally
printed and bound version of the material. (If we did that here, it
would probably be CHEAPER for the students than buying an overseas
book.)

With respect to the compilable sources, programs like aimap and my own
a2h already manage this translation in everything except the comments,
and putting markup in the comments in the master version of the
sources from which the student-visible sources are derived is tedious
rather than difficult. A sufficient markup would be \ada{text} or
<ada>text</ada> in many cases; there is a concrete sketch of this
below.

With respect to on-line documentation, one would use markup as
outlined above, and would automatically generate appropriate variants
in standard HTML. All of this applies to extra handouts and of course
examination scripts as well.

If you want to test N style variants, you need one properly marked up
master copy of everything, and N automatically derived variants, so
the disc space required is (N+1) times that required for a single
variant.

If students use a UNIX system, it is possible to give each student a
symbolic link to the appropriate directory, so that each student sees
~/CS100 as the place to look for CS100-related documents. Nothing else
in the computer administration needs to vary between students. If
students use a NOVELL server from DOS boxes, it is possible to set
things up so that each student has a drive, K: say, bound to the right
directory on the server.

If you already have enough students to need to run several streams,
there are your groups ready for your several treatments. (You may not
be able to randomly assign students to groups, but you _can_ randomly
assign _treatments_ to groups, which might be enough.)

A wild guess at the cost of performing such an experiment, over and
above the normal costs of running a CS1 course, might be on the order
of A$20 000. I already know that the cost of converting all the Ada
sources for such a course is on the order of two working days, say
A$300. Converting a book that is already in (La)TeX or SGML form would
be rather more expensive, but might be worth it. I note, for example,
that "Ada 95 Problem Solving and Program Design" does not have a
separate index of Packages, Subprograms, or Types, things that
definitely help when you're trying to find your way around an 814 page
book. Some of the printing costs might be recovered from the students,
which sounds unfair until you realise that it might actually save them
money.

So you see I _have_ thought about this and don't see anything
"logistically intractable" about it. If someone who has actually
_done_ some experimental studies of this sort can point out things
I've missed and suggest a more realistic cost, I think we may make
progress.

Personally, I think upper case keywords are ugly and almost
unreadable; they are one of the reasons I have never bothered
downloading and trying Modula-3. If the consensus were that we ought
to use upper case keywords in Ada, I would lose some of my motivation
to oppose the proposed switch to Java as a first year language. But I
ought not to let my personal tastes, or the unsupported tastes of
earlier textbook writers, stand against evidence about what is really
best for the students, and if it _is_ practically possible to get such
evidence one way or the other, I want to see it happen.
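Here is the comment sketch promised above. \ada{} is the markup I
suggested for Ada text in comments; I am borrowing \kw{} from the
table for keywords that happen to appear in comment text:

    -- master copy, as kept by the lecturer:
    --   Exits the \kw{loop} when \ada{Count} reaches zero.

    -- derived for the lower-case group:
    --   Exits the loop when Count reaches zero.

    -- derived for the upper-case group:
    --   Exits the LOOP when Count reaches zero.

Stripping or case-mapping that markup is a few lines of filter;
everything outside the comments is what aimap and a2h already handle.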
(If there were evidence that upper case keywords were better for first
year students than lower case, I would be able to use that to fight
off Java!)

So I would really appreciate it if we could have a thread on what
would be involved in doing a credible experiment to get evidence about
keywords. If we can come up with a plausible design, maybe it can be
taken to a funding agency and someone can actually try it.

(This is not a promise to do it myself, because I am not in a position
to make such promises. All I can do is nag. Count it as a promise to
nag.)

--
Mixed Member Proportional---a *great* way to vote!
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.