Newsgroups: comp.lang.ada
Date: Tue, 17 Nov 2015 02:49:57 -0800 (PST)
From: Hadrien Grasland
Subject: Re: Haskell, anyone?
Message-ID: <35d6dc4f-4f2e-4e75-925c-e4acc7c8f112@googlegroups.com>
In-Reply-To: <87si45msuz.fsf@nightsong.com>
References: <14533506-4289-4148-b8c4-e970f5778b26@googlegroups.com> <87si45msuz.fsf@nightsong.com>

On Tuesday, November 17, 2015 at 00:23:24 UTC+1, Paul Rubin wrote:
> > *** Rant warning, all FP fans please close your eyes for a second ***
>
> I peeked ;). The post is full of errors and misconceptions that might
> clear up with more experience.
Most certainly :) As I said before, even though I've been making regular attempts to understand the intricacies of FP for about a year now, this is the longest-lasting continuous attempt so far, and it's only been a month. I have already grasped that getting it right is going to take a lot more time.

> Ermph, that's an overstatement, I'd say that the tenet is to separate
> effects from denotational values. State mutation (like i/o) is
> considered an effect so FP tries to use it sparingly. But Ocaml has
> reference cells whose contents you can write to in ordinary expressions,
> and Haskell has cells you can access with i/o-like actions, plus ways
> to thread state through purely functional code.

If the goal were only to use side effects *sparingly*, I believe all skilled imperative programmers already do that. Kids learning imperative programming these days are repeatedly told that...

- Global variables are to be used with extreme caution
- Variable reuse is a terrible idea; variable scope should be narrowed instead
- If something can be const, it really should be
- Complex algorithms should be decomposed into simpler procedures and functions

The difference, as far as I can see, is that the philosophy of functional programming is to hunt down mutable variables and effects much further, sometimes, I would say, far beyond the point of diminishing returns.

For example, speaking of Haskell: when your mechanism for hiding side effects is conceptually so complicated that many learning resources feel a desperate need to sweep it under the rug or introduce it as late as possible, and when a significant part of your user base seems more obsessed with the religious-sounding concept of purity than with writing cool programs, I feel like there might be a problem.

This is one of the reasons why I picked OCaml this time around, actually.
I've read that it has a somewhat more pragmatic approach, encouraging the use of functional constructs where they work, but leaving imperative features around for those situations where they are the most appropriate answer to the problem at hand. This sounds like an attitude I am more likely to get behind: know many tools, and pick the right tool for the job.

> > "For all i such that 2 < i < N - 1, B[i] = A[i - 1] + A[i] + A[i + 1]"
> > - A copy of B with its first element set
> > - A copy of B with its first and second element set...
>
> No of course you wouldn't do anything like that. An idiomatic Haskell
> two-liner if A is a list would be
>
> b = zipWith3 add3 a (tail a) (tail (tail a)) where
>   add3 x y z = x+y+z
>
> or you could write something similar using recursion.

You know, mentally counting the number of programming concepts packed into this short code snippet and imagining myself trying to explain it to my C students from last year, I wonder if this is why Structure and Interpretation of Computer Programs was considered one of the hardest programming classes ever taught at MIT.

> For immutable
> arrays, you'd use toList and fromList in combination with something like
> that. There are also mutable array types which would let you write an
> imperative-looking loop.

Converting to and from lists sounds awful from a performance point of view, and performance is often the whole point of using arrays. I guess that, as usual, we have to hope that the compiler is clever enough.

> > "For all elements E of A, display E"
> > In a functional programming program, loops are considered evil,
>
> Typically you'd write something like "mapM_ display a" which is
> Haskell for running the display action over all the elements of a.
> You don't even need that intermediate variable E cluttering your code.

Ah, yes, indeed.
I'll give you that point :)

> > "let f : List a -> List =
> > .....if a is empty then
> > .........return empty list
> > .....else
> > .........display first element of a
> > .........return f(rest of a)"
> > Notice that this example is long, much more difficult to understand,
>
> That doesn't look like Ocaml to me, but I don't know Ocaml that well.
> The Haskell implementation if you didn't want to use mapM_ would be:
>
> f :: [a] -> IO ()
> f [] = return ()
> f (x:xs) = display x >> f xs
>
> which is probably shorter than comparable imperative code. Note that
> the type annotation (first line) is not required, since the compiler can
> figure it out if you omit it.

There is also pattern matching in OCaml, but in this situation, I did not feel like its conceptual complexity was worth the code savings. Adding the actual function call, it would look something like this, though I'm not sure about the exact syntax because I haven't been diving into OCaml's imperative features yet:

let rec f list = match list with
  | [] -> ()
  | x :: xs -> display x; f xs

f a

That's five lines. Comparable imperative code would be, as I said, one or three lines depending on the loop syntax: one line if the language allows loop one-liners (as in C derivatives), and three if every loop needs a block statement, as in Pascal derivatives. The program length and complexity are simply not comparable.

However, again, I will grant that I had forgotten about good old map, which will save us from the horrors of recursion this time around.

> > imagine how a typical functional programmer feels when he has to debug
> > this :
> > http://benchmarksgame.alioth.debian.org/u64q/program.php?test=pidigits&lang=ghc&id=4
>
> I'd say that code has been golfed a bit (since one of the benchmark
> metrics is how short the code is)

I believe that the Benchmarks Game has finally dropped that metric, thankfully.
But you are right that it might still have been in effect by the time this code was written.

> and also has a few speed optimizations
> that clutter things a little, but it is fairly straightforward Haskell,
> and the main obstacle to debugging is probably understanding the
> algorithm.

Hmm... it's a bit more than that, I'd say. Since imperative programs have straightforward control flow, debuggers can provide programmers with the very nifty abstraction of stepping through code instructions, iteratively checking that the program does what is expected. In functional programs, especially when lazy evaluation is used, I am not yet convinced that the debugging methodology can always be so straightforward, though I might not have met the right tool yet.

Personally, I tend to dislike long expressions and break them down into the shortest sub-expressions that are still meaningful to the problem at hand, because these are not only easier to understand, but also easier to debug: intermediate sub-results can easily be examined (given a good enough debugger) or printed manually. Note that this is not specific to imperative programs: in OCaml, I also make heavy use of let-bindings for this purpose. The main thing I dislike so far about OCaml syntax is that let-bindings tend to require a lot of nesting, which gives idiomatically formatted code a very strange diagonal shape:

let a = 42 in
  let b = 42 * a in
    let c = 24 * b in
      ...

> I'd guess that the sub-expressions were written and tested
> separately before being combined to that big function. That's not so
> safe in imperative coding since the side effects of the different
> sub-expressions could interfere when you combine them. But it's less of
> a problem in FP code that has no side effects.

As I said in the beginning, it really depends how you write imperative code.
Well-written imperative code really doesn't need that many side effects to get the job done.

In this situation, a good imperative programmer would not only implement portions of the algorithm as separate subroutines, but also leave them around in the final program as documentation. Compilers have become so good at inlining these days that there is usually little point in obscuring code with manual inlining.

> The Ada equivalent is here:
>
> http://benchmarksgame.alioth.debian.org/u64q/program.php?test=pidigits&lang=gnat&id=2
>
> It's 5x-10x as much code, depending on how you count. It's a pretty
> safe bet that the Haskell version took less time to code and debug.

The main reason why Ada is so verbose is that it asks you to be quite specific about everything you do (which is good for debugging, because it means the compiler and runtime can catch more errors), and it puts readability above ease of writing. I personally think that these two characteristics make it a good choice for long-lived codebases, but I will certainly agree that they also make the language a poor fit for prototyping and quick-hacking scenarios, where Python, C++, or domain-specific languages will be better tools for the job.

> That's silly, there's no such claim. There's a justified claim that FP
> makes it easy to avoid certain classes of bugs that happen all the time
> in imperative programs, but that's much weaker. You can also use FP
> techniques in imperative programming to avoid some of those bugs, and I
> do that all the time these days. That's why I think it's worth learning
> some FP even if your real work uses imperative languages.

...and part of the reason why I'm still learning FP, actually, in spite of all the learning pain :)

Just the other day, I encountered a C++ problem where passing a closure as a parameter to a function was the right thing to do, and any other alternative would have been needlessly clunky.
I'm not sure I would have thought about that if it weren't for all my past attempts at learning FP.

> > The reason why this is the case is that because type
> > inference is used throughout the whole code hierarchy, compilers can
> > only detect errors at the very bottom of it.
>
> It's not exactly like that, it's a unification algorithm so it's like
> discovering that a crossword puzzle has no solution. I've been told
> that ML/Ocaml programs are still ok relying on inference since ML's type
> system was historically simpler than Haskell's. Haskell programmers
> generally will write annotations (polymorphic ones when appropriate) on
> top-level functions, while usually letting the compiler infer types for
> the stuff inside the functions.

That's pretty much also the way I use C++11 type inference: no "auto" shall touch my function signatures, but I'm perfectly comfortable using it inside short code snippets where the types of variables are obvious.

> > C++ people are now shifting away from it, adding some extra layers of
> > type check to templates in a future version of the standard with
> > "concepts".
>
> That idea (bounded polymorphism aka type classes) has been in Haskell
> since forever, and the idea is to make the types more precise, not
> particularly to clean up error messages.

Hmm... I have yet to start learning Haskell again, but the last time I tried, I got the impression that Haskell type classes were pretty similar to the interface concept in object-oriented programming: stating that a type must have certain methods defined in order to be used in some context. Ad-hoc interfaces which do not require explicit type support exist in Go, for example.

From what I understood, what the C++ people are trying to do with concepts is quite a bit stronger: they want to build a language infrastructure which allows one to assert any static predicate about a type before allowing it to be passed as a parameter to a template.
That is to say, they want to be able to assert anything that can be verified at compile time about a type, not just whether it has some methods or not.

I find the idea quite interesting, but I'm curious about what the implementation will look like. C++ has a long history of coming up with pretty cool ideas, then producing an implementation which is insanely difficult to use correctly. Move semantics in C++11 are a great example of this.

> > even though many FP languages support type annotations, I have yet to
> > encounter real-world functional programs which take the time to use
> > them properly.
>
> Just what real-world functional programs have you looked at? Again I
> know that Ocaml programs tend to use less of them than Haskell programs
> but that's because they're of apparently less benefit in Ocaml.

Nothing in particular, just the stray snippets I have come across on the web while browsing for functional programs over time. It's true that the only very large and successful functional program I know of is Emacs, and its Lisp internals most certainly do not exhibit the latest and greatest that functional programming languages can do.

> > The great thing about FP, though, is that the resulting program will
> > be much easier to test, precisely because it has little to no mutable
> > state. But I'm not sure if all the pain you have to go through in
> > order to reach this point is worth it.
>
> I've generally found it easier to code stuff in Haskell than in
> low-level imperative languages, because of the higher abstraction,
> shorter code, and larger amount of runtime services integrated with the
> language. The main cost is that the Haskell code runs slower and uses
> more memory, but it still suffices for lots of things.

How is this different from, e.g., Python?
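Coming back to type classes for a second, here is a minimal sketch of the "interface-like" reading of them that I described above; the class and type names (Describable, Circle, Square) are purely my own, for illustration:

```haskell
-- A type class plays the role of an interface: any type that wants to
-- be passed to 'describe' must supply a 'name' method.
class Describable a where
  name :: a -> String

data Circle = Circle
data Square = Square

instance Describable Circle where
  name _ = "circle"

instance Describable Square where
  name _ = "square"

-- The constraint is checked at compile time: calling 'describe' on a
-- type with no Describable instance is rejected by the compiler,
-- loosely analogous to what C++ concepts aim for.
describe :: Describable a => a -> String
describe x = "a " ++ name x

main :: IO ()
main = do
  putStrLn (describe Circle)  -- prints "a circle"
  putStrLn (describe Square)  -- prints "a square"
```

The difference with an OO interface being, of course, that the instance is declared separately from the type, so you can retrofit a class onto a type you don't own.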
> > For example, my running average example couldn't be implemented with
> > the list comprehension syntax above, because it is a gather operation,
>
> averages = [x+y+z | x:y:z:_ <- tails a]
>
> works for me. Maybe that's nicer than the zipWith3 version I gave
> earlier.

Most certainly a lot nicer; now that's something I can imagine showing to first-year students without scaring them away from functional programming for their entire lives :)

Would it be correct of me to assume that if I asked you to handle end points by reusing edge input values, your idiomatic Haskell answer would be to append duplicates of the first and last input values to each end of the input list?

(After all, if we're going to take the performance hit of reading from a list, we'd better take advantage of its fast insertion capabilities.)

> The problem isn't that your opinion is negative, it's that it's
> uninformed. There are some well-informed critiques of Haskell around
> including Bob Harper's (he's a designer of SML and rails against Haskell
> all the time) and Ben Lippmeier's (he designed a language called
> Disciple that tries to address what he sees as Haskell shortcomings,
> that is quite interesting). But I think you've only taken a superficial
> look at the topic and come away with some strong opinions that aren't
> validly grounded in facts.

So far, I have the impression that what I dislike about functional programming is the obsession with absolute functional purity beyond the limit of practicality, which is not specific to Haskell but rather a general tendency of FP languages. Nevertheless, point taken; we would both benefit more from this argument after I spend a couple more months recursing into the pain of learning FP! :)
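P.S.: for the record, here is the edge-duplication idea spelled out as a small self-contained sketch. The comprehension is the one quoted above; the padding helper (padEdges) is my own guess at what the idiomatic answer might look like, not something from this thread:

```haskell
import Data.List (tails)

-- Three-element sliding sums, as in the quoted comprehension.
-- Only windows with at least three remaining elements match the
-- x:y:z:_ pattern, so the two edge elements produce no output.
slidingSums :: [Int] -> [Int]
slidingSums a = [x + y + z | x:y:z:_ <- tails a]

-- Edge handling by duplicating the first and last input values,
-- so that every original element gets its own window.
padEdges :: [Int] -> [Int]
padEdges [] = []
padEdges xs = head xs : xs ++ [last xs]

main :: IO ()
main = do
  print (slidingSums [1, 2, 3, 4])            -- [6,9]
  print (slidingSums (padEdges [1, 2, 3, 4])) -- [4,6,9,11]
```

With the padding in place, the output has exactly one sum per input element, which is what the running average needs before dividing by three.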