Haskell and Scheme: Which One and Why?

While I was waiting for stuff to install on my new machine, I was doing some browsing around the web, and came across an interesting article at a blog called "The Only Winning Move", titled [Scheme Death Knell?](http://theonlywinningmove.blogspot.com/2006/10/scheme-death-knell.html). It's not a bad article at all, but I disagree with its conclusions, and I thought it would be interesting to
discuss why.

The article was brought on by *another* blog post, this one at [Lambda the Ultimate](http://lambda-the-ultimate.org/), suggesting that someone translate Douglas Hofstadter's columns introducing Scheme, replacing the Scheme with Haskell.

Josh over at TOWM was slightly irritated by this idea, and concluded that the only reason
people are suggesting it is that Scheme is getting too old to be cool and trendy,
so they want to switch to something newer.

I disagree.

Haskell is a *very* different language from Scheme, and I think that there are some genuinely good
reasons for teaching Haskell instead of Scheme. I *also* think there are some genuinely good reasons for using Scheme instead of Haskell. It's not really clear to me which is better, but it's worth contrasting the two to help understand the tradeoff.

Syntax
--------

Syntax is in many ways a minor issue. But for people just learning to program, it *can* be a big deal. And it's *not* the case that simpler is always better. I've actually taught an introductory computing class using Scheme, and the syntax was a *big* problem.

I think that syntactically, Haskell is a better language for beginners. Haskell syntax is very redundant, which is *good* for people. Compare these two little snippets:

    (define (fact n)
      (if (= n 0)
          1
          (* n (fact (- n 1)))))

    fact n = if n == 0
               then 1
               else n * fact (n - 1)

Which is easier to read? And the Haskell can get even easier to read and write using
pattern matching:

    fact 0 = 1
    fact n = n * fact (n - 1)

Try this comparison:

    (define (fold init reducer lst)
      (if (null? lst)
          init
          (reducer (car lst) (fold init reducer (cdr lst)))))

versus:

    fold init reducer [] = init
    fold init reducer (l:ls) = reducer l (fold init reducer ls)
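
To make the shared behavior concrete, here's a quick trace of the Haskell version (the Scheme version computes exactly the same thing):

    -- fold 0 (+) [1,2,3]
    --   = 1 + fold 0 (+) [2,3]
    --   = 1 + (2 + fold 0 (+) [3])
    --   = 1 + (2 + (3 + 0))
    --   = 6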

The difference gets *much* more obvious with bigger pieces of code. Between the deliberate
redundancies in Haskell's syntax, and the way that pattern matching lets you decompose programs, the Haskell is significantly clearer.

There's another syntax thing that's a *lot* clearer with Haskell. Both Scheme and Haskell
are strongly oriented towards programming in a style where functions are values. When you work that way, one of the most fundamental things you want to do is create new functions. And one of the
most common kinds of functions you end up writing is partial applications: that is, versions of already existing
functions with *some* of their parameters bound. So, for example, suppose you want to search a list for
things with a key of 3. In Scheme, you'd pass a function (lambda (x) (= x 3));
in Haskell, you don't need to wrap it in a lambda; you can just write the function with its missing parameter left out, as an operator section. The function that compares to 3 is (3 ==). Which of those two is easier to read?
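
Here's a minimal sketch of the two styles side by side on the Haskell side (the names are hypothetical; filter is the standard Prelude function):

    -- Using an operator section: no lambda needed.
    threes :: [Int] -> [Int]
    threes = filter (3 ==)

    -- The same thing with an explicit lambda, Scheme-style:
    threes' :: [Int] -> [Int]
    threes' = filter (\x -> x == 3)

    -- threes [1,3,2,3]  ==>  [3,3]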

Typing
----------

This is going to be more controversial. I'm a big fan of strong typing; many other people aren't. But I'm going to try to avoid my personal opinion about what language I'd like to program in, and focus on the aspects that affect teaching.

Beginners are *very* prone to make certain kinds of mistakes as they learn. The idea that the string "123", the integer 123, and the floating point number 123.0 are three *different* things is actually remarkably tricky for beginners. I can't even begin to tell you how many times I've sat in a lab with a student getting a Scheme error that they couldn't understand
because they were mixing up data types. Strong typing is *really* good for catching a lot of
beginners' errors, and giving them sensible error messages. It's particularly good in languages like Haskell that have really powerful type inference systems, so that most beginners' programs don't need any actual type declarations.
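
As a small, hypothetical example of what that buys a beginner:

    -- No type declarations needed; inference works them out:
    double x = x + x          -- inferred: Num a => a -> a

    -- Mixing up types is caught at compile time, with a message
    -- pointing at the offending expression:
    -- bad = "123" + 1        -- rejected: a string is not a number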

On the other hand, in getting from the point of being a beginner to being really skilled in the language, Haskell's types *can* start to get in the way. That wonderful type inference comes at a cost: there's a fairly complicated system of type "classes" and "instances" to describe how types fit into the polymorphic type inference setup.
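
A glimpse of that machinery (a minimal sketch, not the full story):

    -- A class declares an interface; instances say which types support it.
    class Container f where
      empty  :: f a
      insert :: a -> f a -> f a

    -- Lists are one instance:
    instance Container [] where
      empty  = []
      insert = (:)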

Basic Semantics
----------------

At a very basic level, the semantics of Scheme and Haskell are extremely similar. They're both basically lambda calculus evaluators, and they both model data in very similar ways. There are two
main differences:

1. *Purity*: Haskell is pure lambda calculus: no assignments, no side effects; Scheme is
more of a hybrid, with pure semantics when you want them, and assignments when you need them.
2. *Evaluation Order*: Scheme uses eager evaluation; Haskell uses lazy evaluation. What that means is that in Scheme, a function call "(f x y)" will evaluate "x" and "y" before invoking "f"; but in Haskell, "f x y" will only evaluate "x" or "y" if and when their values are actually needed.
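
A small sketch of the difference, using a hypothetical function f:

    f x y = if x == 0 then 0 else y

    -- In Haskell, f 0 (error "boom") evaluates to 0: the second
    -- argument is never demanded. In eager Scheme, the equivalent
    -- call would evaluate the failing argument before f ever ran.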

Both of these are tradeoffs, and it's hard to say which is preferable.

The purity of Haskell is fantastic at times: there are definitely times, especially in simple beginner programs, where a pure program is much clearer than the corresponding non-pure/imperative program. Try, if you can, to remember your reaction the first time you saw "x=x+1". And again, from the beginner's perspective, the pure approach *prevents* beginners from making some common mistakes: if the language has no global variables, students can't misuse them. Once you're a skilled programmer, recognizing the places where a global is appropriate is something you're expected to be able to do. But if you teach beginners, in every class you'll find at least 20% who don't pass parameters the way they should - they put some or all of the values that should be parameters into globals.

From the other side, some things are *much* more difficult working in a pure style. Some algorithms are far more natural written in an imperative style; and much more importantly, many things like I/O are *intrinsically* based on state change. The pure approach generally uses monads to work around this; but there are problems with the monadic approach. Monads can be *extremely* difficult to understand, and even experts can have trouble when they need to mix different kinds of monadic actions. (For example, using a state monad to allow you to write some efficient state-based algorithm, mixed with I/O.)
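
To make that mixing concrete, here's a minimal sketch of layering state over I/O, assuming the standard mtl library (the countdown itself is a made-up example):

    import Control.Monad (when)
    import Control.Monad.State (StateT, evalStateT, get, put)
    import Control.Monad.Trans (lift)

    -- A stateful computation that also does I/O: the two kinds of
    -- effect must be layered explicitly, with 'lift' marking the seam.
    countdown :: StateT Int IO ()
    countdown = do
      n <- get
      lift (print n)                          -- I/O inside the state layer
      when (n > 0) (put (n - 1) >> countdown)

    main :: IO ()
    main = evalStateT countdown 3             -- prints 3, 2, 1, 0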

Laziness has a similar problem. It's very convenient to be able to write code without worrying
about what order things happen in. In fact, it's so convenient that *most* programming languages include *at least* one version of lazy operators for use in conditionals (except they call them "short-circuit" operators).
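
In Haskell, the short-circuiting falls out of laziness for free: (&&) is just an ordinary library function. A tiny sketch:

    safeToDivide :: Int -> Bool
    safeToDivide n = n /= 0 && (100 `div` n) > 5
    -- When n == 0, laziness means the division is never evaluated,
    -- so there's no divide-by-zero error.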

And again, from the other side, laziness can be incredibly confusing. Not knowing what order things are going to happen in can result in things happening in an unexpected order. Mix in monads in the wrong way, and you can get utterly incomprehensible results.

Advanced Semantics
---------------------

On a more advanced level, the semantics of Haskell are clearly superior. But I'm not saying that for the reasons that you usually hear in discussions about functional languages. What you normally hear is a story about correctness proofs, and how you can do correctness proofs easily in pure functional languages, but it's much harder in non-pure languages. I think that's a pile of nonsense. I mean, sure, on some level, it's true that it's easier to write a proof about a pure functional program than it is to write one about a program that's chock-full of assignments. But the simple fact of the matter is, outside of trivial classroom examples, I've never seen anyone actually write a correctness proof. I've rarely seen anyone *specify* the semantics of what they intend their program to do well enough to be able to write a correctness proof.

The reason that I like the semantics of Haskell is very simple: they're simple.

If you've got a class full of first year computer scientists, you *can* teach them to read and understand the full formal semantics of Haskell. You can make it completely non-mysterious. Everything can be explained by the standard lambda calculus α, β, and η rules, with no hidden complexities. It's all remarkably straightforward.
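
A sketch of what those rules look like, written with Haskell lambdas:

    -- α (consistent renaming of a bound variable):
    --   \x -> x + 1        is the same term as   \y -> y + 1
    -- β (applying a function substitutes the argument for the variable):
    --   (\x -> x + 1) 5    reduces to            5 + 1
    -- η (a function that just applies f is f):
    --   \x -> f x          is equivalent to      f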

In non-pure languages, that's just not the case. Even Scheme, which has by far the simplest and cleanest denotational semantics of any non-pure language I've ever seen, has enough incomprehensible gunk in its formal semantics that I wouldn't attempt to teach it before *at least* third year undergrad, and probably later.

A lot of people will say that that's no big deal: who needs to know the formal semantics anyway? And for *most* programmers, they're probably right. But for someone who wants to *really understand* what's going on, the fact that the semantics *are* approachable and comprehensible *is* a very good thing.

Conclusions
---------------

In the end, do I like the idea of re-writing the Hofstadter tutorials in Haskell, instead of Scheme?

I think it's a great idea. Haskell is a great language, and I think that, on balance,
it would be a great experience for interested beginners to have something like Hofstadter's
essays to guide them through learning to program in a language like Haskell. Not as a *replacement* for the Scheme ones, but as an alternative.

Comments
-----------

Well... the problem with Scheme is not really that it is old, I don't think. C is also extremely old (older, even, technically) but this doesn't hurt its popularity or practicality one bit. The problem with Scheme is nobody in the real world wants to use it. (or (are (reasons The) (obvious extremely)) (be should)).

By the way, you've got a sentence that abruptly ends with "but in Haskell 'f x y'". You may want to fix that.

Scheme:
Recently I often wonder why they don't make whitespace significant and do away with a lot of the parentheses, at least optionally. (I know of a niche language that does this but I don't remember its name; it's also rather like what Python does with braces, isn't it?)

Haskell:
It's true that monads can be a pain, and a huge one for beginners, but do-notation helps a little.
What Haskell definitely lacks is tutorials/documentation for people not already exposed to the lambda calculus, which is where I was when I started to learn it.

PS: I just love pattern matching.

The question of Scheme vs. Haskell, at least as a 'language to teach first-year computer scientists', I believe has less to do with formalism than you suggest. I know that when I was a freshman, I wasn't interested in learning the lambda calculus, and it was well beyond what I was capable of. On the other hand, the idea of list processing was straightforward enough to make it easy to learn.

Did I learn all of Scheme? Of course not; the point of learning Scheme, at least in my opinion as both a student and educator, is to get the student to think recursively. After one has learned to think recursively, educators can move on to teaching students other languages.

I don't know if teaching Haskell would make the learning of recursive approaches easier, but I do think you are overstating the need for students to fully understand the formalisms.

Coin: the rest of the sentence will be created on demand when you try to read it. You might want to use seq. ;-)

I love Haskell; I will never forget the moment I 'got' type classes, and I don't understand why other languages don't use them.
Pattern matching makes me smile too: you can often just copy recursive function definitions straight from a maths textbook. How cool is that?

By malpollyon on 24 Oct 2006

Daryl, Haskell's a descendant (via Gofer) of Miranda.

Mark, I was all prepared to argue that teaching beginning programming in functional languages is a bad idea because it trains people to rely on higher-level features that aren't available in most of the languages in industrial use. Then I remembered my first language was (Atari 800) Logo.

Is Hofstadter's Scheme tutorial online anywhere? A quick Google search didn't find the actual articles, although there were a few references to them.

As to teaching, I think one might choose different languages depending on the students. For Computer Science students, Scheme/Lisp/Haskell/etc. might be a good choice. For others I'd seriously look at an imperative language, probably Ruby.

By Scott LaBounty on 24 Oct 2006

Heh. I'm only now beginning to learn Lisp in school, and it's in a special-projects AI course in my third year. I taught myself Lisp after picking up Graham's pop-programming book (Hackers and Painters, I believe? Something like that.) He convinced me that it was a worthwhile language to learn, and I've come to be convinced. ^_^

I've never picked up Scheme, though from what I understand it's simply a Lisp dialect, correct? I know one of the big differences is that Scheme has a single namespace, so you don't have to mess around with funcall or apply as much. On the other hand, it's sort of nice to be able to create variables called list, or have access to the documentation for a symbol automatically.

Now you, Mark, have been steadily convincing me to try to pick up one of your pet strongly-typed languages. ^_^ Haskell seems to combine functional programming with the best part of Prolog - pattern matching. Cool. Though I can agree with TOWM that any language which purports to be 'pure' is likely a pathological language, though some may be more usable than others.

There are too many good ideas from all over the place for me to be able to justify sticking with a particular paradigm. Ignoring the clever bits, it's simply easier to program some things in particular paradigms, as I'm sure you know, Mark. I'm with Graham on embracing the language-as-tool paradigm - a good carpenter has many tools in his shed, and though he may not be an expert with all of them, he knows their uses and pulls them out when needed. This is why I enjoy the sloppiness of Common Lisp - not to mention the fact that it lets me write bad code at ten times the speed I would in C. ^_^

By Xanthir, FCD on 24 Oct 2006

The fact that there is a debate between Haskell's and Scheme's type systems at all just goes to show how good Haskell and Scheme both are as programming languages.
Most programming language debates are not really held on the merits of either language being defended, but are mostly cosmetic. That's especially true of "X vs. Lisp" or "X vs. Scheme".
Verity Stob once noted: "Lisp is still #1 for key algorithmic techniques such as recursion and condescension." Haskell is probably the one programming language that hard-core Lisp hackers feel they actually need to talk about on the merits rather than just dismissively feel superior to.
OK, tongue removed from cheek now.

By Pseudonym on 24 Oct 2006

Do we have to pick one? Both teach very valuable principles. As for what is used in the real world -- of course vocational schools should teach Java and C++. Computer Science isn't a vocational program though.

> Even Scheme, which has by far the simplest and cleanest denotational semantics of any non-pure language I've ever seen, has enough incomprehensible gunk in its formal semantics that I wouldn't attempt to teach it before at least third year undergrad, and probably later.

I don't know about this. My introductory CS course was taught in Scheme, and I didn't have any problem with its semantics (aside from the whiplash of coming from an exclusively C/BASIC background). From your examples of Haskell, I see your point that the language doesn't get in the programmer's way as much as Scheme, but compared to C++/Java, Scheme's gunk barely registers.

Also important is that LISP/Scheme have much more of a history and community than Haskell. An intro CS course should be in a language that the students have at least an outside chance of ever seeing again, and even in the AI-related fields Haskell is rare.

(I am a grad student who is currently teaching Java to undergrads, and it is painful to spend three weeks wrestling with syntax and saying, "Ignore all this 'public static void main(String[] args)' crap, I promise I'll explain it later". At the point in the semester that my students are learning for-loops, my intro class had already covered function-as-object, frames/closures, recursion (normal and tail), and was starting into trees and other data structures. And the worst part is, I know that my students could understand that stuff if they had the chance. But we're stuck with a language that didn't even have parameterized types until 2004 and needs 10 lines just to print "Hello World". Aargh.)

By SphericalCow on 24 Oct 2006

If you're talking about functional programming nothing can beat Haskell. It was designed to talk about FP.

But Scheme just rocks if you're talking about compilers. I don't know about that Template Haskell thing, though.

I recently (a couple of months ago) dove into Haskell full speed. What seduces me about the language is the tight connections to category theory - it embraces lambda calculus and a categorical way of thinking about programming so tightly that you can see the (for me) familiar categorical phenomena expressed in code.

I think that Scheme and Haskell cannot be compared. Scheme is still important because it is the language of the Wizard book, which is IMHO still the best introduction to Computer Science available. Scheme is unique for its flexibility to implement different programming paradigms on top of it. The Wizard book shows how to go from Scheme to OO to Prolog to pure FP to ... to imperative programming, all using the same simple language, namely Scheme.

I was impressed with Haskell and am still fascinated by it. But I found that my daughter, who is a CS student and who was first exposed to Scheme and only later to Haskell, had very little problem learning Scheme, whereas Haskell was very hard for her. This is partly due to the lack of a state-of-the-art IDE for Haskell (maybe this has changed in the meantime), but also because the type signatures, which are the only clue you get when something goes wrong, are very hard to decipher once you start using functionals. I am aware that this has nothing to do with the power and elegance of the language. But as long as there is no IDE which compares to something like, say, DrScheme, Haskell is certainly no beginner's language.

Also, I think that I/O, which is an essential concept for every programmer, should be very easy to understand in a language for beginners. This is definitely not the case with Haskell.

By Peter Rosenbeck on 24 Oct 2006

Obviously, a language should be chosen on its merits relative to the problem, instead of its age or coolness factor.

Concerning teaching students... It depends on whether you are teaching them programming, software engineering, or computer science. A programming class should teach using a language that will appear on a resume or in a job ad. A software engineering class should teach using a language that accentuates the chosen engineering model. A computer science class should teach using a language that exposes concepts (recursion, algorithmic complexity, etc.)

I started with FORTRAN in 1966, and pretty much ignored LISP until visits to MIT's AI Lab starting in 1973 convinced me that really good implementations of LISP were as fast as FORTRAN, and infinitely better for doing AI. I remember when specialized AI languages began being introduced, and Schemes and Frames were the hot new things (and a stoned grad student trying to explain his unified theory of Fremes). I've taught AI using LISP, but also written myself and/or had students write AI programs in APL, PDP-10 assembly language, SNOBOL, and even painfully in COBOL. At one level of abstraction, the language that you use doesn't matter. At another level of abstraction, it matters profoundly.

By way of the arguments about which languages are "old" -- I have not really programmed in anything newer than Python, but my son loved Haskell, except for problems with the IDE. My son (now in his supersenior year at university, soon to get his double B.S. in Computer Science and Applied Math) just dropped an AI class where the teacher made an ill-advised choice of programming languages, purportedly in a democratic response to student demand. Sometimes we teachers have to just say "I don't care if you don't like this; it's good for you; trust me."

Strangely, since my son just got a 99th percentile score on his LSAT, he'll probably drop out of the Math/Computers track, not apply to Grad School, and instead go to a top Law School. Based on my 15+ years as a paralegal specializing in writing Appellate Court and Supreme Court briefs and writs, Law has a logic, but it is NOT axiomatic. It is procedural, semantic, context-dependent, inconsistent, precedent-based, and gets fascinatingly metalinguistic at the Appellate Court and Supreme Court levels.

The meta-meta-logic goes something like this.
(1) If there is no syntactic or semantic disagreement between plaintiff and defendant, then the law means what the law says, interpreted by the Judge as ordinary common-sense English, and applied by conventional rules to the Facts (the facts being determined by the "Trier of Fact", which might be either judge or jury).
(2) If there is a syntactic or semantic disagreement between plaintiff and defendant, in their briefs with the properly formatted nomenclature and citations, then the exact language of the statute is painstakingly parsed and interpreted by the Judge in the context of previous published decisions where this conflict or one parallel to it has been weighed. [Note that there are, more and more often, decisions which are officially NOT published, or even published and then "depublished."]
(3) If the above process still leaves significant ambiguity or contradiction, then one uses a higher-order legal analysis in which one can now make deductions from (a) "the intent of the legislature" (using evidence such as staff memoranda during the writing and debate on the statute before it became law, or during amendment), (b) "public policy" models of what the effect might be on people and institutions.

I wish I had the time to rigorously show what I have been claiming for years. The use of the above 3-tiered process forces "the law" (meaning whatever the highest level courts have decided in the given State) to take chaotic trajectories, highly sensitive to initial conditions. What one commonly calls "the law" at any time is more akin to a chaotic attractor in the space of all possible laws. But should this (once I write it, maybe in collaboration with my son, or my coauthor Philip V. Fellman, whose PhD was on Law, and who teaches Chaos Theory to doctoral students in Business and Economics) be submitted to a Logic, Law, Computer Science, or Complex Systems journal or conference?

I disagree about the need to put off teaching Scheme until later in the CS curriculum. It's a very easy language to pick up, at least in the context of a class. I could see that it might get harder to pick up, the more experience you have with more standard languages like C++ or Java, but first-year CS students aren't as burdened by that background as people who only approach it later in their careers.

I characterize them this way:
Scheme: assembler-level metalanguage
ML: higher-level metalanguage
Haskell: highest-level metalanguage
with all the attendant comparisons between assembler, C, and C++.

Scheme lacks types (not type-safety), includes macros, and provides access to all the parts of an applicative-order lambda calculator (including continuations). There are few (no) obscure corner cases.

ML offers type-checked Scheme, but loses the macros. As a result, two-phase program construction becomes mandatory: modules+functors are values at one level and expressions at the next.

Haskell provides types and template meta-programming, eschews macros, and provides restricted access to the normal-order lambda calculator. The machine model is difficult (how many have studied the spineless tagless G-machine semantics?)

To write a language, assembly-level stuff is invaluable ...

I think it would be simply fantastic if Haskell had the wealth of resources afforded to Scheme. The Little/Seasoned/Reasoned Schemer series, HtDP, SICP, The Schemer's Guide, and various web sources provide such a wealth of information for learning core CS principles.

Hopefully Haskell will get as much attention. I'd love it personally for use in our home education curriculum. The Scheme library should cover the basics nicely though, so if Haskell doesn't get picked up in the same fashion, we should be alright.

You wrote: "but in Haskell f x y"

In fact: but in Haskell, "f x y" only evaluates x or y if and when they are needed.

    f x y = if x == 0 then 0 else y

If x is 0, y is never evaluated.

As a long-time Haskell programmer (basically since it came out):
- In general, I love it
- Monads are ok, but tend to get messy.
- Space leaks are a real pain to debug.
- It lacks the number of libraries Python can access.

Typechecking is usually very useful, something Python could do with. GHC is also blazingly fast. I usually prototype algorithms in Haskell, while GUI-type stuff gets done in Python (better GUI support, such as PyObjC).

My 5 year old son writes proofs for very basic algebraic expressions at the ghci (Glasgow Haskell) prompt - with my assistance of course. He will not be introduced to imperative programming until he is "ready" - much like you wouldn't introduce the concept of war or politics to a small child.

Why can my 5 year old achieve this objective? I have watched my own so-called "educated" (indoctrinated?) university students struggle at this very same task with a much lower output. My point - I wholeheartedly agree that there is a lot of unlearning of "gumph" to do before you can start learning the fundamentals of computer programming - my son simply does not have this burden and so can make progress seemingly faster than your average C/Java programmer.

Haskell was my first functional programming language, so there was a major learning curve to get my head around lambda calculus, (basic) category theory, lazy evaluation etc... one after the other.

However, after getting my head around all of those topics, I found that learning a non-pure functional language was not only easier, but a little boring. For example, I have since started learning Lisp and OCaml, and whenever I try to mix I/O actions with what would be other monadic actions in Haskell, it feels like Lisp and OCaml are... missing something. They're just not as much of a joy to program in as Haskell.

Staying on topic with which language to teach undergrads: I personally would've struggled to learn all of Haskell's features as well as trying to balance three or four other subjects' materials, so my vote would be for a dialect of Lisp or ML.

Josiah:

I didn't mean that the formalism of the language semantics is important for *every* beginning student studying CS. But there are *some* who'll be interested, and the fact that even the most formal, rigorous semantics in Haskell are approachable is a good thing.

I know that *I* would have loved it. When I learned Scheme in my third year, I struggled with the denotational semantics in the back of the R3RS, but I just didn't have the math to get it.

SphericalCow:

The *informal* semantics of Scheme are *mostly* pretty simple (with the exception of call/cc). What you're talking about in understanding the semantics of Scheme is that kind of informal semantics.

But informal semantics only go so far. There are two ways to really deeply understand a language: one is to study the formal semantics of the language; the other is to implement it. But implementing it has the problem that if you're starting from the *informal* semantics, then different implementors will wind up with slightly different concrete semantics.

Denotational semantics (which are the kind of formal semantics where Haskell has a huge edge over Scheme) aren't the only kind of formal semantics, but they're still pretty much the standard that others aspire to. So if you want to be able to really get every detail of the deep semantics, you probably want to see the denotational semantics of the language - and that's a pretty reasonable thing to do in Haskell.

(On the topic of why things like formal semantics matter, I'm reminded of a story from my grad school days. One of my professors was involved in the OSI network protocol fiasco. The canonical specification of the OSI protocols was informal prose, rather than either formal semantics or a reference implementation. The first time they tried to connect up machines with OSI networking stacks from different vendors, *no* pair of machines from two different vendors could actually work together. Because each had interpreted the informal semantics in slightly different ways, and those differences were enough to make them totally non-interoperable.)

PLT Scheme in its graphical incarnation, DrScheme, has a set of educational languages (most (all?) of which are designed for particular books or courses). There you'll find very beautiful constrained versions of Scheme, many purely functional.
I think that's a nice tool for teaching.

I find Haskell's type system helpful when reasoning about code that makes use of higher-order functions and monads. I've also been programming since the age of ten and still feel that syntax matters a lot -- beautiful code makes me happy. I guess I'm still a beginner. :/

By Johan Tibell on 25 Oct 2006

Coin: the age of C does hurt its applicability, and even its popularity, but not nearly as much as it should! We have learned a lot of very important things since C was made, relating to how a program ought to be structured and which constructs are prone to errors; we have found that entire classes of errors can be avoided with a little help from the language.

I downloaded a program to manage my iPod from Linux some weeks ago, written in C of course, and it segfaulted immediately at startup. I whipped out the good old GNU debugger and tried to figure out what went wrong. I didn't succeed, but in attempting it I came to appreciate how far we've come with issues like manual memory management, uninitialized pointers, unchecked access and casting, bounds violations...

I am not a scientist, I am an engineer. I have been very interested in functional languages, and I still am, I suppose, but it's pretty obvious to me that to do the work that my company's program does now, functional languages would be a pain. All of them. You'd need extremely skilled people to be able to use them for anything non-trivial, and even then you would have to sweat blood to do things that are easy in the languages made for our kind of software - Java and C#. Going the other way, using those for "academic" problems, would be easier, IMO, no matter what some people believe.

I'd like to see the lessons from functional programming applied outside academia. I want tools to reason about my models and my code. Alloy is just so sweet.

In terms of educational languages, going with Haskell would be dumb, unless you want everybody stumbling over the type system.

We went with PLT Scheme and didn't learn about mutation for a long time.

This meant a pure language with dead simple syntax. No need to worry about type theory when teaching algorithms because everything is a bignum, list, or an abstraction.

Once you reach advanced programming (waaay down the road), the use of the languages becomes more important - both complicated Haskell and Scheme programs are difficult to unravel (and a nightmare in most other languages).

FYI, if you look at modern PLT Schemers' code, you'll probably see heavy use of pattern matching. 'Match' is a step up from OCaml-style matching, though not significantly so.

Haskell is nice, but some of the control constructs are a tease, and I wish I had a full macro pre-processor there, à la Scheme or Lisp.

Why? Well, my latest complaint is that I've got something that's almost a Monad (and in fact may be a Monad in the category theory sense - I don't know enough about category theory and found the GM/BM posts on it unenlightening). However, according to GHC, it's not a Monad and so I can't use the nice do syntax with it. Sure, I can still do something like this:

    somefunction args >>= \somevar ->
      someotherfunction moreargs >>= \othervar ->
        ...

But that's ugly. (And I still can't make >>= have a lower precedence than $, which means extra parentheses.) I'd really like to be able to use the do syntax, but can't, because my Monad-like thing doesn't work on an arbitrary type a, but only on (Ord a) => a.

(Hey, why can't I format code in monospace type in a comment?)

@nostrademons: thanks, I'll try that this weekend :)
By the way, what would you think of a CS course where the only language that you see is Java?

Daniel: What fails you with GHC? For reference, a categorical monad is (definition fetched from Barr: Toposes, Triples and Theories - there it is called a "triple"):

A triple (T, η, μ) of an endofunctor T with two natural transformations η: Id → T and μ: T² → T, such that μ∘Tμ = μ∘μT and μ∘Tη = μ∘ηT = Id (normally given by one commuting square and two commuting triangles...)

So, for the correspondences between this and the Haskell Monad concept: the natural transformation η corresponds to the Monad class function return, and μ occurs in the use of >>=, but rather more implicitly: >>= takes something already in the monad, say in TA, and a function A → TB whose result thus ends up inside the monad, and then applies the function to the object, ending up with something in TTB, which μ then transforms into something in TB.
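
For anyone who'd rather see that correspondence in code, a sketch in Haskell (join lives in Control.Monad):

    import Control.Monad (join)

    -- η (unit)           ~  return :: Monad m => a -> m a
    -- μ (multiplication) ~  join   :: Monad m => m (m a) -> m a

    -- >>= is definable from μ and fmap, just as described above:
    bind :: Monad m => m a -> (a -> m b) -> m b
    bind m f = join (fmap f m)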

I went to Imperial College (www.doc.ic.ac.uk) and our first language was Haskell. I think the second one was Java, but I think they used (this was a while ago) a translator to allow people to write "simple Java" before they figure out all of the public static crap.

Haskell is a lovely first language; however, I've made at least one annoying mistake due to laziness, so it can also be quite strange :)

By Andrew Shuttlewood on 30 Oct 2006

BASIC didn't have function parameters. You assign to a particular global variable and GOSUB 3141. No local variables. It started to fall apart when you got to maybe 1,000 lines of code. YMMV.

Assembler was the same. But you'd come up with a convention and follow it. Registers were temporary, and could be used for local variables. There was only GO TO, but you'd code as if block structure existed. In short, you'd use discipline.

Are you into set theory or differential calculus? To me, imperative languages are like differential calculus, and Haskell and Scheme are like set theory. I don't like indirection in definition. My skills lie in the imperative. I want to know how the program works, down to how the CPU actually implements instructions in hardware. My current favorite language is C. And yet, some of my programs have found novel approaches to problems, and taught me what those were. That, in less than 1,000 lines of code, written for speed. Having tried it more than one way, it was easier to learn the precise semantics with a physical model than it was to learn to manipulate totally abstract semantics. Lisp and Forth make much more sense to me, now that I've seen how such languages can be implemented on real computers.

At no time was I ever confused that x = x + 1 had anything to do with algebra. Computers, in my mind, don't think; they act. This had to be an imperative command. The BASIC idea of adding verbiage - Let x = x + 1 - didn't help at all. That's exactly what my algebra teacher used for syntax. In algebra, y = x + 1 was always a statement of equivalence: a fact, not a command. At the heart of computers is copy, compare, and branch. Prolog and Sendmail notwithstanding.

Were I to pick a student's first language, I'd go with something very simple that could be mastered. Not Fortran, with its historic warts - like horrific symbolic manipulation. Not BASIC, without any localized scope. Most of the languages I know are pretty complex. I'm afraid that leaves only TECO.

> (or (are (reasons The) (obvious extremely)) (be should)).

You do realize that, when translated, that expands to:

reasons The are obvious extremely or be should

What point are you trying to convey here?

I can understand that prefix syntax isn't for everyone, but you're mixing prefix and postfix syntax wildly without reason.

Furthermore, in such a contrived example you would be applying or and are to strings, not to lists, unless quoted.

I'll agree that Scheme's syntax could use an optional whitespace notation, though.

> "it's pretty obvious to me that to do the work that my company's program does now, functional languages would be a pain. "

I always hear this from people who haven't written large programs in a functional style, which makes me glad I'm coding for my own hobby programs rather than trying to put bread on the table.

I would characterize FP as being extremely practical, given that the team actually understands it well - not as a drop-in panacea that will magically clean up codebases, especially codebases designed in a heavily OO or procedural style. To provide contrast, just imagine converting a large ML or Haskell program into Java. I personally think a transition either way would be a pain, though in my case I'd far prefer converting Java code to Haskell. Maybe I'm biased, but I can't imagine how someone could see the situation the other way around....

I am not so certain that the prefix notation of Scheme is something that should scare new programmers. Newcomers need a good curriculum to learn this kind of stuff correctly. It is true that the same "functional" code looks larger in Scheme and smaller in Haskell, but that is just because of pattern matching and similar sugar, right... Nothing special about it, I guess. I think Haskell, with its complicated monadic I/O stuff, is too much for a beginner even with a respectable math background. Scheme could be a pretty nice place to begin; the free SICP videos online go a long way. Maybe array-parallel languages like SAC or SISAL may be an interesting option for sophomores with some knowledge of vector algebra, numeric methods, etc. SISAL is practically dormant as of today. Quite a pity, actually.

#38: Scheme and Lisp are not lazy languages -- you must evaluate your arguments before you can apply the function. Therefore, the translation you propose is actually backwards; the original author has it right.

#37: BASIC on 8-bit home computers was definitely restricted to 70s-era BASIC implementations. 80s-era Dartmouth BASIC was compiled, had local variables, proper functions (although you still couldn't pass them around), etc. Don't let the BASIC of your Apple II+ tarnish your opinion of more advanced BASIC implementations.

In my opinion, every beginning computer student needs to learn assembly language first and foremost. Using assembly language, they need to write their own language compiler. The language needn't be too tough -- Forth or Scheme would be ideal.

THEN, once that is done, each student needs to use their language to write some simple programs, but here's the fun part: to write them also so that they work on other students' compilers too. This is the only reliable way to teach a student the value of language standards, the importance of libraries and re-usable code, etc. Once they have a feel for these issues, THEN they're ready for Java, C#, and other languages that strongly push objects, reusable component-oriented software, etc.

By Samuel A. Falvo II on 08 Aug 2007
