Newsgroups: comp.lang.ada
Subject: Re: How come Ada isn't more popular?
From: Georg Bauhaus
Date: Sat, 03 Feb 2007 17:54:53 +0100
Message-ID: <1170521693.6067.214.camel@localhost.localdomain>

On Fri, 2007-02-02 at 00:40 +0100, Markus E Leypold wrote:

> Georg Bauhaus writes:
>
> > I think that Ada *and* Haskell will make an interesting
> > combination on .NET.
>
> I wonder why one wouldn't just use Monads in most cases?
You wouldn't just use Haskell and monads, for at least two reasons:

- Used in a natural way, Haskell equations (incl. monads) still turn
  out to be comparatively slow. See Darcs (I am a Darcs user).
- Control over what is going to happen at what time is easier to
  achieve in a systems programming language.

> I've written and deployed programs in Ada, C, C++,
> Perl and OCaml (among others). And I've played around with Haskell.
> I think I'm more productive with FP than with C and Ada.

It would be most interesting to learn the specifics of what makes you
more productive using OCaml instead of Ada etc., language-wise.

> > A more important reason not to ignore functional programming
> > is [...] Recursion
>
> Recursion is the least of those reasons. What is IMHO more important
> is that you start not to think about mutating variables but about
> producing new values from given ones (a slight shift in perspective,
> but with real impact in the long run).

Yes, you replace thinking about updates to a variable with thinking
about the values passed, and their relations. But unless you write a
straight-line program, this won't work without recursion :-)

Thinking about function values and their combinations, like map +
filter, is another good thing to learn (and use). OTOH, FP style is
sometimes just assignment in disguise: in a sense it hides the
variable as a parameter (and relies on tail-call elimination).
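To make the map + filter point concrete, here is a minimal OCaml
sketch (the function name and the example task are mine, for
illustration): a small computation written as a pipeline of function
combinations rather than as a loop over a mutable accumulator.

```ocaml
(* Sum of the squares of the even numbers in a list, expressed as
   filter, then map, then a fold -- no mutable state anywhere. *)
let sum_even_squares xs =
  xs
  |> List.filter (fun x -> x mod 2 = 0)
  |> List.map (fun x -> x * x)
  |> List.fold_left (+) 0

let () = assert (sum_even_squares [1; 2; 3; 4] = 20)  (* 4 + 16 *)
```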
I don't think this is always easier to follow:

   function Sum(L: Cursor;
                Variable: Integer := Initial_Value) return Integer is
   begin
      if Has_Element(L) then
         return Sum(Next(L), Variable + Element(L));
      else
         return Variable;
      end if;
   end Sum;

   function Sum(L: Cursor) return Integer is
      C: Cursor := L;
      Variable: Integer := Initial_Value;
   begin
      while Has_Element(C) loop
         Variable := Variable + Element(C);
         Next(C);
      end loop;
      return Variable;
   end Sum;

(Yes, a true FP Sum function is composed, taking a binop as a
parameter for folding; in Lisp you make it a base-case-step macro ;-)

I still think that it is just a bit more difficult to follow the
first Sum. If you want to know how Sum is going to arrive at a
result, you have to follow the history of Variable through a series
of function calls with a series of changing arguments (the induction
step). In the second Sum you see the history of Variable without
following the mechanics of a function call, and it is still just one
place where Variable is modified. It's less hidden behind the
"function call screen".

Still, I much prefer the recursive, nested narrowing function in my
Binary_Search -- even though the loop-based variant is faster with
some Ada compilers :-) It's certainly easier to see that the first
Sum matches a primitive recursive function, if and when this is
important.

> >> I refuse
> >> to discuss merits of languages at a micro level like:
> >
> > The "micro" level is the level of production, change, and
> > correction.
>
> As you have noticed, FP appeals to
> mathematicians.

Yes, FP has a mathematical appeal. However, time and space are not
usually intrinsic parts of mathematical functions. This is what I am
trying to point out: a computer program operates in time and space;
it is not a mapping between two sets of values, even though you can
interpret the operating program this way. A "functional solution" is
still very helpful as a reference, just as you said.
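For comparison, the "composed" FP version of Sum alluded to above
might look like this in OCaml (a sketch; both versions compute the
same value, and the names are mine):

```ocaml
(* The tail-recursive Sum, the direct analogue of the first Ada
   version: the accumulator travels as a parameter. *)
let rec sum_rec acc = function
  | [] -> acc
  | x :: rest -> sum_rec (acc + x) rest

(* The composed version: Sum is just folding the binop (+) over the
   list, with 0 as the base case. *)
let sum_fold xs = List.fold_left (+) 0 xs

let () =
  assert (sum_rec 0 [1; 2; 3] = 6);
  assert (sum_fold [1; 2; 3] = 6)
```

The fold makes the "induction step" a named, reusable piece instead of
an explicit recursion you have to trace by hand.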
For example, a functional solution helps me stay sane maintaining one
of my longer Perl programs, which takes about 35 min on average to
compute a number of rankings from some larger database tables. :)

But sometimes a functional solution is only a starting point, a
necessary precondition. It provides guidance when writing the real
thing. At other times, a functional program is just fine, provided
the author follows the usual conventions: don't leave out that much;
it won't hurt if you say what you mean.

> I've a mathematical background [...]. FP just came naturally to me:
> it mirrors the way I think about a problem and about the solution
> to problems.

For example, a Haskell program can be an executable specification.
Unfortunately, this may not be good enough, because what you call the
"rest" is *not* optimization. It is meeting the requirements, even
when those requirements are "Start no sooner than 22:00h, be ready
before 23:00h." Time and space are intrinsic parts of a solution. If
the problem specification includes time and space, a complete
solution must in effect provide time and space values as well. (Make
self-referential considerations WRT time and space when computing, so
to speak.)

The meaning of the word "optimization" is, I think, to improve
existing solutions so they run faster, consume fewer resources, are
easier to understand, etc. Optimization does *not* mean turning
something that isn't a solution into a solution. This is what
mathematicians refuse to see, as far as I can tell. They stigmatize
time and space as being just "optimization" issues. They are not. In
programming, a function is more than an equation over some field that
excludes time and space.

> I fear you're full of preconceptions. It's different from what you
> do in Ada (or whatever your favourite imperative language is), so
> it must be wrong or there must be something missing.

Why would I be writing programs in OCaml, then?
> As an FP advocate, I suggest that the things not written down are
> not necessary.

FP error messages get better in the presence of explicit types.
Redundant semicolons can help the functional compilers a lot in
figuring out where there is an error in the program text. Not
necessary? Or are FP programmers of the compiler-writer type, who
hardly need a robust language?

> So those "savings" you address as a defect are actually
> a chance.
>
> But YMMV. As I said: I'm not in the business of convincing people
> WRT FP. You already indicated that you would not accept the typical
> reasoning of an FP protagonist. I can't offer you something
> different: it's shorter, I can see the problem more clearly, I can
> leave out redundant information, and so on.

That's the point: it's you who sees clearly, you leave out what seems
redundant to you, etc. But those other guys, trying to understand
your program, will have to repeat your arrival at that clear sight;
they will have to iterate the learning process that makes things seem
redundant to you, etc.

The usual answer I got when asking about this was: well, I need
educated colleagues with skills at about the same level as mine.
Sometimes that seemed simply true, sometimes it was snobbish, and in
some cases it seemed to express pride. It has also been an attempt by
a programmer to make himself irreplaceable.

> Listening to you justifying that every, every variable must be

that's an exaggeration

> declared with type and all, one wonders how mathematics itself ever
> could live without type declarations.

Mathematicians use explanatory sentences and bibliographic references
as a replacement for declarations. So they do declare. Only the
declarations are semi-formal, in many cases like the one you show in
your example. After "Let V be a vector space", V is declared, is in
scope, and references to V will just name V of type vector space.

> The same principle applies in FP. I fear it won't convince you.
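A small sketch of the point about explicit types, in OCaml (the
function and its annotation are my own illustration): OCaml will
infer the type on its own, but with an explicit annotation a mistake
inside the body is reported against this definition, rather than
surfacing as a confusing mismatch at some distant use site.

```ocaml
(* The explicit ": float list -> float" contract is "redundant" --
   the compiler could infer it -- but it pins down the author's
   intent and anchors error messages here. *)
let average (xs : float list) : float =
  List.fold_left (+.) 0.0 xs /. float_of_int (List.length xs)

let () = assert (average [1.0; 2.0; 3.0] = 2.0)
```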
Where FP authors follow the conventions of good style, they list a
function's type, or they add a semi-formal comment, or both. Why
would they do that if these things are redundant and should therefore
be left out?

> FP has set the cut-off at a
> different point than Ada. Question: Was that necessarily wrong?

No, not wrong. Leaving things out just has consequences.

> It fits me. Does that make me a worse programmer / designer /
> software engineer / mathematician? I don't think so.

Imagine an engineer writing programs and documenting his assumptions
even though he thinks they are redundant because they can be inferred
from some context. Worse or better? Why?

Imagine a script author who has to get some data-laundry job done.
Would he or she be well advised to write a program that can be
reused, with all bells and whistles, good types, OO encapsulation? Or
to just use a bunch of lists, tables, and regular expressions?
(What's the probability of a once-program becoming an
n-times-program, anyway, in reality?)

> > needs not be terminated. This leads to undecipherable error
> > messages when you forget to place one missing token to complete
> > an expression that is the last thing in a function definition.
>
> I fear that hardly happens to me in OCaml.

I think it's really not important what happens to you and me here.
What is important is what happens to the amount of money available to
pay us until we arrive at an error-free solution. How much time do
programmers spend correcting mistakes, and how do OCaml and Ada
compare on this in typical settings?

The error messages of the Erlang system can be even more obscure.
Some of them are printed at run time. Sometimes there isn't a message
at all... Still, the language is somewhat minimal, and Erlang can
help solve some problems quickly, provided the Erlang programmer
knows the language, the libraries, and the system and its quirks.
If you write Erlang programs, you *must* be able to say, "I fear that
hardly happens to me in" Erlang. Otherwise you will be lost tackling
errors you don't understand, because there is comparatively little
"redundant" information in Erlang programs.

> the presence of overloading and tagged
> types the error messages can become quite misleading, at least with
> GNAT.

They can be.

> But my experience is that it is the beginners that are most
> obsessed with the "broken syntax".

Of course the beginners complain! Just as when someone switches from
permissive C to unforgiving Ada. But it's not the syntax of Ada that
seems to cause the difficulties.

> Related to that are the repeated
> attempts on c.l.f. or c.l.l to suggest a new syntax surface for
> Lisp "without all those parentheses", implying that this would
> hugely further the proliferation of Lisp. There is, I think,
> already a c.l.l FAQ for this.

I know, and it has taken me some time and effort to make some
proponents of Lisp syntax see the problem in the first place. (Usual
suggestion: if you run into "syntax error at end of file", just fire
up your Lisp system's editor; some function will look "skewed" and
point you near to where a ')' is missing. Well...)

> Though the attempts to reform ML syntax happen less often, they
> happen, and I count them under the same heading as those Lisp
> reform attempts.

You cannot take pride in having mastered a syntax that is no
challenge. :-)

> But really: What would that buy me? Investing
> the same time into understanding more ML / OCaml / Haskell will
> earn me much more.

> Let me quote from the abstract of that paper:
>
> ...
>
> So we are talking about somebody intimately acquainted with the
> language and the research on that language, striving for an
> improvement.

That's why I quoted the paper. It does explain why ML is important,
and that too little attention has been given to syntax.
> I suggest you really read the paper you quoted:

I did; I usually read the papers I quote before I argue ;-)

> He has some nice
> things to say about the necessity of GC and the people who don't
> like the "bizarre syntax" of ML. At the end of that paragraph he
> says: "But in general, don't we have better things to argue about
> than syntax?"

Syntax structures our communication. We have important things to
achieve, and just ignoring syntax won't make us more effective. But
since we are starting to throw papers at each other, here is another,
more or less empirical one, talking about programmers discussing(!)
supposedly irrelevant syntax:

  "Of the pretense (syntax is irrelevant) and the actual reaction
  (syntax matters), the one to be believed is the actual reaction.
  Not that haggling over parentheses is very productive, but
  unsatisfactory syntax usually reflects deeper problems, often
  semantic ones: form betrays contents.

  "Experienced programmers, so the argument goes, will never make
  the error. In fact they make it often. A recent review of the BSD
  operating system source..."

  -- Bertrand Meyer, Principles of Language Design and Evolution, §8

The last sentence is about '=' in C comparisons. '=' has caused
problems in ML, too. Hm, perhaps everything coming out of Bell Labs
must continue Bell Labs traditions. SNOBOL4 has '=', C has it, so C++
has it; Aleph has it, and Limbo too, IIRC. So maybe ML has had to
have the '=' trouble maker, too.

> Your approach seems to be more the Olde FORTRAN Programmer's
> approach: I can do it in [...], so why must I use/learn another
> language?

Not at all. What makes you think so?

> > This costs time and money.
>
> Well -- every legacy feature does. Tell me, Ada has none :-).

In the case of OCaml, there is at least camlp4. I understand nobody
seems to want the revised syntax? Not cool? Herd instinct? Fear of
change? "No, it's not necessary, we have learn-ed ouR working
in-group syntax."
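To illustrate the '=' trouble on the ML side: in OCaml, '=' is
structural equality and '==' is physical (pointer) equality, and
picking the wrong one is a classic pitfall -- roughly the mirror
image of C's '=' vs '=='. A small sketch:

```ocaml
(* Two separately constructed lists: structurally equal (same
   contents), but not physically equal (distinct objects in memory). *)
let a = [1; 2; 3]
let b = [1; 2; 3]

let () =
  assert (a = b);         (* structural: contents compared *)
  assert (not (a == b));  (* physical: different allocations *)
  assert (a == a)         (* a value is physically equal to itself *)
```

Code that reaches for '==' out of C habit typically wants '='; the
compiler accepts both, so the mistake survives type checking.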
I only wish someone would win the lottery and have some language
designers work on the formal principles of sound syntax for
industrial programming languages. Instead, legacy syntax is
reintroduced again and again, by the same people who argue that it
doesn't matter because they have finally learned to work around the
broken syntax. So why not use the broken syntax again? Professional
programmers get bitten by the syntax bugs again and still continue to
claim this isn't important...

A few weeks ago a colleague explained that he always writes

  if (expr == false) {

because a '!' can be hard to see. I told him he could always use

  if (!!!expr) {

...