From: "Tim Behrendsen"
Subject: Re: What's the best language to start with? [was: Re: Should I learn C or Pascal?]
Date: 1996/09/13
Message-ID: <01bba125$4e8c4ca0$32ee6fcf@timhome2>
References: <01bb8df1$2e19d420$87ee6fce@timpent.airshields.com> <515o3b$d7h@goanna.cs.rmit.edu.au> <01bb9fe6$7299d800$87ee6fce@timpent.a-sis.com> <5182u7$17b@goanna.cs.rmit.edu.au>
Organization: A-SIS
Newsgroups: comp.lang.c,comp.lang.c++,comp.unix.programmer,comp.lang.ada

Richard A. O'Keefe wrote in article <5182u7$17b@goanna.cs.rmit.edu.au>...
> "Tim Behrendsen" writes:
> > [snip]
>
> As I said, the English dictionaries are talking about bureaucratic
> procedures, which are often written down in rule books.  Even when
> they are not, to follow a (bureaucratic) procedure means to engage in
> a sequence of behaviours regulated by rules, and those behaviours are
> conceptualised as discrete.  In computing, as I said above, I have
> been unable to find any definition of "procedure" that distinguishes
> it from a Fortran subroutine or Pascal procedure.  In fact, that is
> why the "procedural" languages are so called: a procedural program is
> organised as a possibly structured collection of procedures in the
> Fortran/Algol/Pascal sense.  *This* is the meaning that leaps to mind
> for everyone who has posted in this thread except you.
>
> I have been able to locate a definition of algorithm.  I wonder if
> you will recognise it:
>
>     The notion of an _algorithm_ is basic to all of computer
>     programming, so we should begin with a careful analysis of this
>     concept.
>     ...
>     So this is an algorithm.  The modern meaning for algorithm is
>     quite similar to that of _recipe_, _process_, _method_,
>     _technique_, _procedure_, _routine_, except that the word
>     "algorithm" connotes something just a little different.  Besides
>     merely being a finite set of rules which gives a sequence of
>     operations for solving a specific type of problem [%], an
>     algorithm has five important features:
>
>     1) Finiteness.  An algorithm must always terminate after a
>     finite number of steps. ...  (A procedure which has all of the
>     characteristics of an algorithm except that it possibly lacks
>     finiteness may be called a "computational method." ... [%])
>
>     2) Definiteness.  Each step of an algorithm must be precisely
>     defined; the actions to be carried out must be rigorously and
>     unambiguously specified for each case.
>
>     3) Input.  An algorithm has zero or more inputs ...
>
>     4) Output.  An algorithm has one or more outputs ...
>
>     5) Effectiveness.  An algorithm is also generally expected to be
>     _effective_.  This means that all of the operations to be
>     performed in the algorithm must be sufficiently basic that they
>     can in principle be done exactly and in a finite length of time
>     by a man using pencil and paper. ...
>
> [%] These statements make it clear that the distinguished computer
> scientist who wrote this important textbook, whose emeritus chair is
> in fact named after the book, considers "procedures" and
> "algorithms" to be pretty much the same kind of thing, except that
> "procedure" is a _slightly_ looser term.  This is actually the same
> book containing "procedure, see subroutine" in the index, and every
> other book in this office agrees with that.

This sounds reasonable to me.  I interpret (1) to mean that it should
not have an asymptotic solution; i.e., it should be able to be solved
in finite time.
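In fact, if I remember right, the example that passage follows in the
book is Euclid's algorithm for the greatest common divisor, which
shows off all five features at once.  A minimal C sketch (my own
rendering, not the book's):

    /* Euclid's algorithm: gcd of two positive integers.
       Finiteness:    n strictly decreases, so the loop terminates.
       Definiteness:  every step is an exact arithmetic operation.
       Input:         two positive integers m and n.
       Output:        their greatest common divisor.
       Effectiveness: each step could be done by hand on paper. */
    unsigned int gcd(unsigned int m, unsigned int n)
    {
        while (n != 0) {
            unsigned int r = m % n;   /* remainder of m / n */
            m = n;
            n = r;
        }
        return m;
    }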
> > Of course, your optical computer fits this definition, since it
> > has a discrete sequence of steps (mirrors, etc.), and you claim
> > that *it* doesn't.
>
> No, the mirrors are not steps.  It is in principle possible to build
> an optical computer as a single translucent block with varying
> refractive index.  You don't need a crisp metal/air interface to get
> reflection; a continuous change in refractive index (I did say my
> MSc was in underwater acoustics) will do the job nicely.  The
> holographic transformation of an optical signal _certainly_ takes
> place in a continuously varying medium; that's why I chose that
> example.
>
> Let's take a 2-dimensional optical signal propagating through a
> medium with a predetermined static spatial modulation of its
> refractive index, and check it against the definition:
>
> 1. Finiteness: not necessary for a procedure.
> 2. Definiteness: the transformation is precisely defined by
>    partial differential equations.
> 3. Input: the incoming stream of photons.
> 4. Output: the outgoing stream of photons.
> 5. Effectiveness: fails miserably.  The relevant number here is the
>    power of the continuum.  That was the point of selecting a model
>    where _continuously_ varying media are involved.
                    ^^^
Does it fail?  I suggest that your machine is equivalent to a CPU
with a single instruction, and that single instruction calculates a
particular mathematical formula.  I could simulate this using pencil
and paper.  I quote the definition: "This means that all of the
operations to be performed in the algorithm must be sufficiently
basic that they can in principle be done exactly and in a finite
length of time by a man using pencil and paper."

Note that it does *not* say that I have to do the operations using
exactly the same method, only the same operations.  Your light-based
algorithm is simply a mathematical operation, which I can do using
pencil and paper quite easily.

This is like saying that since most computers are based on the flow
of electrons through gates, I have to simulate the electron flow in
order to prove Quicksort is an algorithm.

> > No, the definition is very clear.
>
> You may _say_ that the definition is very clear, but you still
> haven't told us what your definition of a procedure _is_, and why
> such an extremely general redefinition of "procedure" is _relevant_
> to CS education.
> [giant snip of procedural debate]

OK, obviously I'm failing miserably to get my point across.  Let me
try different terms rather than the obviously loaded term
"procedural".  Try this on for size:

1. There are two ways an algorithm can be expressed:

   a. Using a methodology that expresses a path of execution
      (aka procedural languages).

   b. Using a methodology that does *not* specify a path of
      execution (aka non-procedural languages).

2. Algorithms can execute *only* using a time-based mechanism.
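To make 1(a) versus 1(b) concrete, here's a trivial illustration of
my own (nothing from the thread): the *what* of a computation stated
with no path of execution, and the *how* that a procedural language
spells out step by step.

    /* Non-procedural (what):  sum = a[0] + a[1] + ... + a[n-1].
       Procedural (how): an explicit path of execution through time. */
    int sum_array(const int a[], int n)
    {
        int i, sum = 0;             /* step 0: start with nothing    */
        for (i = 0; i < n; i++)     /* steps 1..n, strictly in order */
            sum += a[i];
        return sum;                 /* final step: hand back result  */
    }

Either way you *express* it, when the thing actually runs the machine
grinds through one addition after another, which is all that (2)
claims.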
Now, as for education, I submit that at the beginning it is more
important to focus on what execution means than on the abstract
expression of algorithms via languages.  Certainly you need some form
of expression, but it should be kept as simple as possible, to get
the student brain into the general concept of a flow of execution
without the distraction of methodologies.

In fact, the algorithm definition you posted above is a good one.  Do
you use that in your classes?  I would show it to the students the
first day and have them burn it into their brains.  It very
forcefully shows what algorithms are all about.

> > How many questions do we get in this newsgroup where a student
> > simply didn't follow the flow of the program to see what happens?
> > This is so obvious to you and me that we don't think about it,
> > but *they didn't*!  Because they have only a vague feeling of
> > flow, and are still looking at the program as a kind of weird
> > combination of a solid object and something with a flow of time.
>
> There is a fatal flaw in your argument.
> Students don't think of looking in the index of their textbook.
> Students are capable of staring at a GNAT error message:
>     "foobar.adb must be recompiled"
> and wondering what to do about it.
> Students are capable of handing in an Ada program starting like
> this: "procedure 8puzzle is".  (That happened yesterday.)
> Amongst the students I have to deal with, some of the _native_
> speakers of English have only the shakiest grasp of English
> grammar.  It used to be that if I saw "Chinglish" comments I knew I
> was dealing with an Asian student.  Not any more.
>
> There is a sort of "learned helplessness" around.  It's not that
> they don't think of following the flow; it's that a lot of them
> don't think of _anything_; they wait for someone to _tell_ them.

Well, certainly there are a large number of students in that frame of
mind; I cannot disagree.

> There is another flaw in the argument: the most powerful way of
> understanding a program that I know, and I am not a bad programmer,
> is to see it as a web of relations and constraints.  When _I_ have a
> problem with an Ada program, I don't start playing computer, I look
> for relations that I have not made explicit, for constraints that I
> can check.

Well, I have to admit that when I think about an algorithm, I
visualize data flowing through the algorithm, undergoing
transformations until it's what I want.  I start with the highest
level "black box" transformations, and then gradually break down each
black box in greater and greater detail until I have C code (or
whatever).

This is certainly not the only way to look at problems, but it is the
closest to the reality of how algorithms execute, and I submit that
it is the most straightforward to understand.  I admit, though, that
I have a natural bias toward this point of view.
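A toy example of that breakdown, made up on the spot (and
trim_whitespace() below is a hypothetical helper, not a real library
call): start with one black box, "clean up the records", refine it
into "trim each one, then drop the empty ones", and keep refining
until it's C.

    void trim_whitespace(char *s);       /* hypothetical helper */

    /* Second refinement of the "clean up the records" black box:
       one transformation box and one filter box, in sequence. */
    int clean_records(char *rec[], int n)
    {
        int i, kept = 0;
        for (i = 0; i < n; i++) {
            trim_whitespace(rec[i]);     /* box 1: transform     */
            if (rec[i][0] != '\0')       /* box 2: drop empties  */
                rec[kept++] = rec[i];
        }
        return kept;                     /* records remaining    */
    }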
> > Take recursion.  How can you not understand recursion if you
> > understand in your soul that computers execute a flow of
> > instructions?  You can't, and that's the point.  Understanding
> > the time axis is the key.
>
> There is spatial recursion as well as temporal recursion.
> I am completely comfortable with recursion in programming and have
> been since I met it.  But I _understand_ recursion the same way I
> understand mathematical induction, and the train of thought when I
> write, analyse, or debug recursive code is at best indirectly
> related to the "flow" in the program.  The core metaphors I use to
> grasp recursion are primarily spatial.

Hmm; I'm not sure if I view it the same way or not.  If I visualize
Quicksort, for example, I visualize a big block of data that I divvy
into two parts.  Each of those parts gets divided, and so on.  To map
this into the computer domain, I think about a black box that does
the divvying, and then each piece gets put back through the black
box.  I suppose my core metaphor is also spatial, but I'm not sure if
it's the same one.

The point is that I (at least) don't view Quicksort as a static "hall
of mirrors", everything reflecting back and forth with no sense of
execution.  I see a "movie" of the data breaking down section by
section, and I can see the pattern between each break and generalize
it.
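That movie, written down as a bare-bones sketch (mine, and nothing
like a production sort):

    /* Quicksort as a black box: divvy the block around a pivot,
       then put each piece back through the same box. */
    void quicksort(int a[], int lo, int hi)
    {
        int pivot, i, j, t;
        if (lo >= hi)
            return;                  /* block of 0 or 1: done */
        pivot = a[hi];
        i = lo;
        for (j = lo; j < hi; j++)    /* divvy: smalls to the left */
            if (a[j] < pivot) {
                t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        t = a[i]; a[i] = a[hi]; a[hi] = t;   /* pivot lands at i */
        quicksort(a, lo, i - 1);     /* left piece through the box  */
        quicksort(a, i + 1, hi);     /* right piece through the box */
    }

-- 
Tim Behrendsen (tim@a-sis.com)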