As someone who has enjoyed the Lisp language (in several flavors) for about 15 years now, I wanted to express some of my reactions at recently discovering Haskell, and why it has supplanted Lisp as the apple of my eye. Perhaps it will encourage others to explore this strange, wonderful world, where it looks like some pretty damn cool ideas are starting to peek over the horizon.

First, let me say that unlike many posts on the Lisp subject, I have nothing negative to report here. It’s not that I haven’t had my share of ups and downs with Lisp, but that if you want to know about those, look around. Most of what other bloggers have to say is dead on, so there’s little need to repeat it here.

I’ll just address some of the cooler aspects of Lisp, and how Haskell compares in response.

# Elegant syntax

While many dislike Lisp’s abundant parentheses, I fell in love with them. Perhaps it’s because I spend so much of my time working on compilers, and Lisp programs read like their parse trees. This “code/data equivalence” is beautiful. It makes it trivial to write DSLs, for example, since all you need to do is model the syntax tree as a series of Lisp data structures and then evaluate them directly. It removes the need for an intermediate parse-tree representation.

When I first approached Haskell, I was shocked at the amount of syntax I saw. Operators abounded — more even than in C — like: ->, =>, ::, $, $!, etc. The more I looked, the more operators there seemed to be, until I began to feel as lost as when I read Perl code.

What I didn’t realize is that in Haskell, much of the syntax you see is just special function names. There is very little “true” syntax going on; the rest is built on top of a highly expressive core. Lisp looks clean because nearly all its operators are used like functions. Haskell goes for an “infix optional” style, which allows you to call anything as either prefix or infix, provided you qualify the function name correctly:

(/= (+ 1 2) 4)        ; Lisp reads very logically
((/=) ((+) 1 2) 4)    -- Haskell can look almost identical!
1 ^ 4                 -- this is the infix form of ((^) 1 4)


Nothing can match Lisp’s rigorous purity, but once you see past the sugary veils, Haskell is pretty basic underneath as well. Almost everything, for both languages, boils down to calling functions.
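To make the “infix optional” idea concrete, here is a small sketch (the function name `addAll` is mine, purely illustrative): wrapping an operator in parentheses lets you call it prefix, and wrapping an ordinary named function in backticks lets you call it infix.

```haskell
-- Operators become prefix functions when parenthesized;
-- named functions become infix operators inside backticks.
addAll :: Int -> Int -> Int
addAll x y = x + y             -- an ordinary named function (illustrative)

main :: IO ()
main = do
  print ((+) 1 2)              -- prefix use of an operator
  print (1 + 2)                -- the usual infix form
  print (addAll 1 2)           -- prefix use of a named function
  print (1 `addAll` 2)         -- infix use of the same function
```

All four lines print 3; prefix and infix are just two spellings of the same call.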

# Macros

Another beauty of Lisp is its macro facility. I’ve not seen its like in any other language. Because the forms of code and data are equivalent, Lisp’s macros are not just text substitution; they allow you to modify code structure at compile-time. It’s like having a compiler construction kit as part of the core language, using types and routines identical to what you use in the runtime environment. Compare this to a language like C++, where, despite the power of its template meta-language, it employs such a radically different set of tools from the core language that even seasoned C++ programmers often have little hope of understanding it.

But why is all this necessary? Why do I need to be able to perform compile-time substitutions with a macro, when I can do the same things at runtime with a function? It comes down to evaluation: before a function is called in Lisp, each of its arguments must be evaluated to yield a concrete value. In fact, it requires that they be evaluated in order[1] before the function is ever called.

Say I wanted to write a function called doif, which evaluates its second argument only if the first argument evaluates to true. In Lisp this requires a macro, because an ordinary function call would evaluate that argument in either case:

(defun doif (x y) (if x y))        ; WRONG: both x and y have been evaluated already
(defmacro doif (x y) `(if ,x ,y))  ; Right: y is only evaluated if x is true


What about Haskell? Does it have a super-cool macro system too? It turns out it doesn’t need to. In fact, much of the coolness of Haskell is that you get so many things for free, as a result of its design. Not needing macros is one of those:

doif x y = if x then (Just y) else Nothing


Because Haskell never evaluates anything unless you use it, there’s no need to distinguish between macros and functions.
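One way to convince yourself of this (a sketch using the `doif` defined above): hand `doif` an argument that would crash if evaluated. Because the `False` branch never touches `y`, the program runs fine.

```haskell
doif :: Bool -> a -> Maybe a
doif x y = if x then Just y else Nothing

main :: IO ()
main =
  -- undefined throws an error if evaluated; laziness means it never is here.
  case doif False (undefined :: Int) of
    Nothing -> putStrLn "y was never evaluated"
    Just _  -> putStrLn "y was evaluated"
```

In Lisp this would blow up before `doif` was even entered; in Haskell the argument is simply never demanded.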

# Closures

The next amazing thing Lisp taught me about was closures. Closures are function objects which retain information from the scope they were constructed in. Here’s a trivial example:

(defun foo (x) (lambda (y) (+ x y)))

(let ((bar (foo 10)))
  (funcall bar 20))
=> 30


In calling foo, I’ve created a function object which adds two numbers: the number that was originally passed to foo, plus whatever number gets passed to that closure in turn. Now, I could go on and on about the possibilities of this mechanism, but suffice it to say it can solve some really difficult problems in simple ways. It’s deceptively simple, in fact.

Does Haskell have all this closurey goodness? You bet it does, in spades.

foo x = (\y -> x + y)        -- here \ means lambda
bar = foo 10
bar 20                       -- arguably cleaner syntax, no?
=> 30


In fact, Haskell even one-ups Lisp by making partial application as natural to use as an ordinary function call:

foo = (+)
bar = foo 10
bar 20
=> 30


This code doesn’t just make foo an alias for addition, which I could have done in Lisp as well. It says that foo returns a function object expecting two arguments. Then bar supplies one of those arguments, returning a closure which references the 10 and expects another argument. The final call provides the 20 to this closure and sets up the addition. The fact that I’m evaluating it in the interpreter loop causes Haskell to perform the addition and show me the result.

This combination of lazy evaluation with partial application leads to expressive capabilities I’ve frankly never experienced before. Sometimes it causes my head to spin a bit.
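To see how natural partial application becomes in everyday code, here is a short sketch (`addTen` is my own illustrative name): sections like `(+ 10)` are themselves functions, so they slot directly into higher-order functions like `map`.

```haskell
-- A section: (+) partially applied to 10 is already a one-argument function.
addTen :: Int -> Int
addTen = (+ 10)

main :: IO ()
main = do
  print (addTen 5)               -- 15
  print (map (+ 10) [1, 2, 3])   -- [11,12,13]
  print (map (`div` 2) [10, 20]) -- [5,10]
```

No lambda, no macro, no wrapper: the partially applied function is a first-class value like any other.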

# Parallelism

One thing about Common Lisp is that it harkens back to a day when computers were much simpler — before multi-threading, and before multiprocessor machines were cheap and common. Since it was designed at a time when there was One Processor to Rule Them All, it didn’t go to great lengths to consider how its design might affect the needs of parallelism.

Let’s take function argument evaluation, as a simple example. Because a function call in Lisp must evaluate all arguments, in order, function calls cannot be parallelized. Even if the arguments could have been computed in parallel, there’s no way to know for sure that the evaluation of one argument doesn’t cause a side-effect which might interfere with another argument’s evaluation. It forces Lisp’s hand into doing everything in the exact sequence laid down by the programmer.

This isn’t to say that things couldn’t happen on multiple threads, just that Lisp itself can’t decide when it’s appropriate to do so. Parallelizing code in Lisp requires that the programmer explicitly demarcate boundaries between threads, and that he use global locks to avoid out-of-order side-effects.

With Haskell, the whole game is changed. Functions aren’t allowed to have side-effects, and their value is not computed until needed. These two design decisions lead to situations like the following: Say I’ve just called a function and passed it a bunch of arguments which are expensive to compute. None of these operations need to be done in sequence, because none of them depend on the others for their value. If then I do something in my function which needs some of those values, Haskell can start computing the ones it needs in parallel, waiting on the completion of the whole set before returning the final result. This is a decision the language itself can make, as a by-product of its design.
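A caveat worth making explicit: in today’s GHC the programmer still marks where parallelism should happen, typically with `par`/`pseq` from the `parallel` package or with explicit threads; purity is what makes those marks safe to add, rather than the compiler deciding on its own. Here is a base-only sketch (the function names are mine) that evaluates two expensive, independent arguments on separate threads before combining them; because the computations are pure, running them concurrently cannot change the result.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- A deliberately expensive pure computation (illustrative).
expensive :: Int -> Int
expensive n = sum [1 .. n * 100000]

main :: IO ()
main = do
  box <- newEmptyMVar
  -- Evaluate one argument on a second thread ($! forces it there)...
  _ <- forkIO (putMVar box $! expensive 2)
  -- ...while computing the other here; purity guarantees no interference.
  let a = expensive 3
  b <- takeMVar box
  print (a + b)
```

Compile with `ghc -threaded` and run with `+RTS -N` to spread the work across real cores; without those flags the threads are still correct, just multiplexed onto one core.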

# Community

Lastly, the Haskell community is amazing. Newbies, you are welcome here. Its IRC channel is both a friendly and knowledgeable place, where newcomers are cherished and developed.

Likewise, the web resources and books I’ve read about Haskell so far have all been top-notch. You get the feeling people are fascinated by the language, and eager to share their joy with others. What a refreshing change. Lisp may have a rich history, but I think Haskell is the one with the future.

[1] http://www.lispworks.com/documentation/HyperSpec/Body/03_ababc.htm

### 41 Responses to “Hello Haskell, Goodbye Lisp”

1. Hi,

Re: lisp-like macros in Haskell — we have “template haskell” which basically does the same thing, although it’s not nearly as nice to use as lisp macros.

2. Nice post. I have dealt with Lisp before, but primarily I’m a C++ developer — and I’m also falling in love with Haskell, for the same reasons you document above.

Although I have only been using Haskell for a while in the work I do, I can definitely see how powerful it is just from the features available to me. It’s also making my C++ code writing better by introducing more functional programming principles in the code I write, so just learning Haskell I think makes someone a better programmer in many ways.

3. Well, I like Lisp (Scheme) & Haskell too. But note that the need for macros is only eliminated in some cases. In particular, typeclasses like monads & arrows have special notation which helps a lot in using them. If I am not wrong, there is no way for you to directly define, for your own custom typeclass, something like what the do notation does for monads.
So you still need macros, either via something like Template Haskell or Liskell.

4. “Haskell goes for an “infix optional” style, which allows you to call anything as either prefix or infix, provided you quality the function name correctly:”

should probably be

“Haskell goes for an “infix optional” style, which allows you to call anything as either prefix or infix, provided you _qualify_ the function name correctly:”

5. Nice post!

A nitpick: Lisp is a family of languages, as you know, and some members of the family don’t specify the order of argument evaluation.

6. Thank you for this post, it is probably the best explanation of basic Haskell syntax I’ve ever seen.

I didn’t switch myself because I like some Lisp features that I didn’t yet see in Haskell — the condition system and interactive development (I use SLIME, and anything else is somewhat painful).

BTW,

> Lisp itself can’t decide when it’s appropriate to do so

Not for function arguments, but remember let and let*, and other similar dual constructs — let is *parallelizable by default*, and you’d have to use let* to enforce a strict ordering.

8. Wow, you do indeed raise some valid points.

10. I don’t think you really did CL-style macros justice. They can be used for a lot more than just changing the order that arguments are evaluated in — you can create whole new syntactic constructs at will.

For one thing, this means that CL doesn’t tend to ‘lag behind’ in terms of language design, since if another language ever introduces something innovative then you can easily ‘extend lisp’ with macros to add that functionality. There is no need to wait for an updated compiler.

The other thing is that it allows you to build languages tailored to solving the particular problem at hand. DSLs are cool.

Having said that, I have issues with lisps that macros just don’t make up for, and love Haskell more in any case :-p

11. You’re indeed right, I couldn’t do CL justice in this regard. When I referred to being like a “compiler construction set”, I meant to imply a whole world of goodness. Being able to utilize the entire Lisp runtime at compile-time is something that just can’t be expressed in a few words like this.

12. I think you do a bit of a disservice to Lisp’s macros: the more interesting macros are not ones that simply delay evaluation of certain forms. More interesting is when a macro transforms something that doesn’t have any meaning into something that does. I give some examples of such macros in Practical Common Lisp (http://www.gigamonkeys.com/book/), in particular Chapter 24 on parsing binary files. Which is not to say that Haskell isn’t cool too.

13. You’re so right about that, Peter. Lisp’s macros can be used to transform arbitrary syntax at compile-time into something legal, which allows for extreme freedoms of expression. You can even implement whole DSLs by macro alone — which is just what LOOP does, for instance.

So I take back my assertion that its essential purpose is to control evaluation; it’s truly a thing of beauty that other languages should take note of.

14. I would think that many other languages *have* taken note — the issue is that macros only really work in Lisp because of the list-based syntax. You can certainly do them in languages with more ‘normal’ syntax (see Dylan and Nemerle, for example) but they’re far less pleasant to use.

There really isn’t a lot you can do about it, either, since it’s the trivial syntax of CL that makes CL macros so easy to use. I think we’ll eventually see reasonable macro systems for complex syntaxes, but AFAIK they haven’t arrived yet.

So, someone might say, why have complex grammars at all? They obviously aren’t *necessary*, since simple ones like those found in lisps are obviously usable, but by providing special syntax for common operations you can make the language more succinct and expressive. One of CL’s failings, in my opinion, is that although the syntax can more or less be adapted to work with anything, it’s still general and never gives you the optimal solution for anything. More specific syntaxes are less flexible, but usually far more expressive and succinct in their particular problem domain.

One day I hope to see a language which allows for specialised syntax, but still translates it into a clean AST which can be manipulated by macros at eval time. Maybe I should make a demo language… :-p

• Check out Ioke. I think it does what you mention regarding any syntax translating to one clean AST.

15. Great post, John!

You’ve mentioned top-notch Haskell books but haven’t told us which ones you had in mind. There are 10+ of them on Amazon and I am very much curious to know which ones you recommend.

Thanks.

Have you looked into Clojure? I wonder what your opinion of it would be. I’m doing some simple stuff in it, and so far the one thing that stops me from being productive is my lack of experience with any JVM-based system (not sure what to call sometimes, just to get, say, the current path, etc.).

18. I have looked at Clojure, actually. And while I liked it a lot, I don’t care for the JVM so much. That is, except for doing server-side work for clients, in which case it’s brilliant (I really love Maven). But since Haskell lets you write scripts, and compile into tiny, fast-running executables, there’s nothing about the JVM to get me excited for the places where I’d use Haskell.

It’s the same complaint I have with Groovy. I like it better than both Python and Ruby (both of which I still use), but the JVM almost completely eliminates it from my regular toolbox. It’s become relegated to server tasks at work, and writing unit tests for Java.

19. I never really understood this particular criticism of Clojure: it uses the JVM.

Every CL uses some VM; usually they don’t call it anything, but it’s always there, just as it’s there for Perl, Python and Ruby.

20. Your ‘comments’ to the code are flowing outside the visible area (at least on my Mac / Safari)…

For e.g. In this :

(defun doif (x y) (if x y)) ; WRONG: both x and y have been evaluated already

I can only see this :

(defun doif (x y) (if x y)) ; WRONG: both x and y have been

Hope it helps
Robins

21. It’s not VMs that I have an issue with, but the slowness of the JVM in particular. I’ll post a note about various running times of Hello, World in the next blog entry.

I’ll also note that Clojure apparently didn’t learn from Common Lisp’s obtuseness when it comes to easily building source files into standalone executables. I’ve been at it for 15 minutes now, and even [the documentation](http://clojure.org/compilation) is not proving helpful.

Simple things should be simple. You mean I have to evaluate a call to (compile) in a REPL just to get a class-file compiled? I ran “clj hello.clj” and was rewarded with no indication of why it didn’t run the main class in my source file. Compare all this to Haskell:

Or, to make a standalone binary:

ghc --make hello.hs

22. Just one more comment about Clojure/JVM — I too thought that the JVM is slow, and indeed the client version generates code fast, but the generated code is not fast.

But just switch to “java -server” and things look much, much better. The compilation is a little slower, though not terribly, but the produced code is comparable with C, if you have hinted Clojure at certain places.

23. As much as I love and use Clojure I have to agree. The way Clojure compiles is weird and I don’t like it.

24. I’ve been using Lisp for 33 years, since I wrote system software for
the Lisp Machine at MIT, and later as a co-founder of Symbolics. I’m
using Common Lisp again now, as part of a big team writing a
high-performance, highly-available, commercial airline reservation
system, at ITA Software. Recently, I started learning Haskell. It’s
fascinating and extremely impressive. It’s so different from the Lisp
family that it’s extremely hard to see how they could converge.
However, you can make a Lisp that is mostly-functional and gets many
of the parallelism advantages you discuss. We now have one that I
think is extremely promising, namely Rich Hickey’s Clojure.

If you want to program in Common Lisp, read Practical Common Lisp by
Peter Seibel, without question the best book on learning Common Lisp.
For Haskell, there is Real World Haskell by Bryan O’Sullivan et al.
It’s excellent and I highly recommend it.

All of the comments that I was going to make have been made very well
by others above, particularly regarding macros being used for syntactic
extension and making domain-specific languages.

Sam, above, wonders whether we’ll see reasonable macro systems for
complex syntax. I presume he means macro systems that can match the
power of Lisp’s macros. There is some progress being made in this
area. At the International Lisp Conference next week, there will be
an invited talk called “Genuine, full-power, hygienic macro system for
a language with syntax”. This is by David Moon, my long-time
colleague, who among many other things was one of the designers of
Dylan. He has been inventing a new programming language, roughly
along the lines of Dylan in some ways, and he’ll be talking about it
for the first time. I’m pretty sure he does not claim to have brought
the full power of Lisp macros to an infix-syntax language, but I think
we’ll find out that it’s another important step in that direction.

By the way, the conference also features a tutorial called “Clojure in
Depth”, by Rich Hickey himself, running five hours (in three parts),
“The Great Macro Debate” about the virtues and vices of Lisp macros,
and all kinds of other great stuff. We’ve closed online registration
but you can still register at the door. It’s at MIT (Cambridge MA).
See ilc09.org.

Clojure’s being written in terms of the JVM has an extremely important
advantage: it lets the Lisp programmer access a huge range of
libraries. Although there are a lot more great Common Lisp libraries
than many people realize, there is no way Common Lisp can ever keep up
with all the specialized libraries being developed for the JVM.

There are also two huge implementation advantages: Clojure’s
implementation can ride on the excellent JIT compilers and the
excellent garbage collectors of the various JVM implementations (have
you tried out JRockit?) rather than having to do this work all over
again.

Because your post showed so much depth of understanding, I was very
interested to hear how you felt about Clojure. I don’t understand it,
though.

It’s always been unclear to me precisely what people mean by “scripts”
and “scripting languages.” The terms are used widely, but with very
different meanings. For example, to some people, it seems that a
“scripting language” is one with dynamic typing!

As far as I’m concerned, nobody has a broader and deeper knowledge of
computer languages than Guy Steele. (I can back up that claim, if
anyone wants me to.) So I asked him, and here’s what he said:

“By me, the term ‘scripting language’ is not intrinsic, but extrinsic:
it describes the context and application for the language. That
context is typically some large mechanism or collection of facilities
or operations that may usefully be used one after another, or in
combination with one another, to achieve some larger operation or
effect. A scripting language provides the means to glue the
individual operations together to make one big compound operation,
which is typically carried out by an interpreter that simply ‘follows
the script’ a step at a time. Typically scripting languages will need
to provide at least sequencing, conditional choice, and repetition;
perhaps also parallelism, abstraction, and naming. Anything beyond
that is gravy, which is why you can put a rudimentary scripting
language together quickly.”

Steele’s answer seems in line with John Ousterhout’s explanation of
what Tcl was meant for. The idea is that you have two languages. At
the lower level, you have something like C: suitable for writing
programs that are very fast and work with the operating system, but
hard to use for anyone but a professional. At the higher level, you
have something like Tcl, which is easy to learn and use and very
flexible, and which can easily invoke functionality at the lower
level. The higher level acts as “glue” for the lower level. Another
example like this is Visual Basic, and the way that you can write C
programs that fit into VB’s framework.

In my own opinion, this kind of dichotomy isn’t needed in Lisp, where
the same language is perfectly suitable for both levels. Common Lisp,
as it is used in practice, is not so dynamic that it cannot be
compiled into excellent code, but is easy to write for the kind of
simple purposes to which Tcl is typically put. (Particularly for
inexperienced programmers who are not already wedded to a
C/C++/Java-style surface syntax.)

In your own case, you mention “tiny” and “fast-running” executables.
I am not sure why “tiny” matters these days: disk space is very cheap,
and the byte code used by the JVM is compact. Common Lisp programs
compiled with one of the major implementations, and programs written
for the Java Virtual Machine, execute at very high speed.

The fact that you distinguish between server-side and client-side
applications suggests to me that what you’re really talking about is
start-up latency: you’re saying that a very small program written for
the JVM nevertheless has a significant fixed overhead that causes
perceived latency to the user. Is that what you have in mind?

The last time this question came up, I did my own very quick and dirty
test. I tried running a simple program in Clozure Common Lisp, from
the command line, and I saw about 40ms of start-up latency on a
not-particularly-fast desktop running an old Linux release. A trivial
Python program took about 7ms. That’s better, but 40ms is not very
noticeable. (I suppose if you’re writing a long command line piping
together many “scripts” or running them in a loop, it would start to
add up.)

As a hypothetical question just to clarify your meaning: if there were
a JVM implementation that started up instantly, so that the speed of
execution of a small program would be the same as the speed of the
same code appearing in the middle of a long-running server process,
would that address your concern?

25. Hi Dan, you raise some great points. I’ve answered you in [another posting](http://www.newartisans.com/2009/03/the-jvm-and-costs-vs-benefits.html).

26. Nice article, quite typical of the trend. Lisp is a very good language per se, but the standards in application development are different now from what they were a few years ago, too bad lisp didn’t evolve.
For me Haskell is the Lisp of today. I went half insane when I began writing code with it. I was just feeling like I had the world in my hand. Laziness, pure functional code, and the type system are all great, but when you have a deadline, the beautiful syntax gets ugly when the bell rings and you have a performance standard to fit.
For an all purpose, functional flavored language i’d pick OCaml or Erlang. No romanticism here, both have a strange syntax, not beautiful nor elegant, Erlang is pretty slow and OCaml is fast when your code is imperative, but they are to me far more reliable than haskell or lisp (for the same amount of work).
Clojure is really well designed. But the underlying object-oriented JVM, and the trick of calling object methods, making them mutable in a pretty much functional language, seems to me a very weird approach (my 2 cents).

27. I love Haskell and am teaching it to my students. But I’m using Scheme for my current major research project because I need “programs as data”. I’m writing partial evaluators and also playing with domain-specific languages.

Haskell can also *embed* domain specific languages very nicely. See the “parser combinator libraries” or HaskellDB.

Finally, I’d like to point out that I have a version of “nice syntax” + “generic AST” in a new syntax notation called Gel. It parses like lisp but looks like Java.
See http://wcook.blogspot.com or the paper:
http://www.cs.utexas.edu/%7Ewcook/Drafts/2008/gel.pdf

28. Good post and very informative comments. Haskell, OCaml, Lisp, Clojure… every language has its pros and cons. One thing I am just thinking about Lisp; even if it’s not the best programming language of the world, Lisp has no equivalent as a building tool (that’s why Scheme or CL were mentioned before for DSLs). Haskell isn’t a good program for making programming languages because it was made perfect in its paradigm like Smalltalk or Oz in their genre (though they can do metaprogramming).

As an example of the metamorphosis of Lisp, Qi deserves a look too!

Lisp is dead, long live Lisp!

29. I must say, I very much liked Qi when I was studying it, but I’m attracted by the parallelism and “scripting speed” that Haskell offers. I also couldn’t get Qi to work with ECL, which prevented me from making completely standalone binaries.

30. > Haskell can start computing the ones it needs in parallel, waiting on the completion of the whole set before returning the final result. This is a decision the language itself can make, as a by-product of its design.

No, Haskell does not and cannot make such a decision itself. Please look at the basic examples. You, the programmer, must decide where to call pseq.

For the enterprising young PhD thesis writers, or for anyone with an extra decade on their hands, it may be possible to make a runtime ‘parallelizing optimizer’ in the spirit of Java HotSpot. Useful parallelizable sections can only be determined by profiling code at runtime — or (currently) by programmer intervention. To parallelize everything is to grind your program to a halt as it descends into a dark sludge of thrashing.

31. “Haskell isn’t a good program for making programming languages because it was made perfect in its paradigm”

I disagree. Haskell is very adept at making embedded domain specific languages. In Haskell, EDSLs are library modules, however they are much more powerful and flexible to program than in other languages (say Python or Ruby or C). There are several reasons for this. Despite the complete lack of a lisp-like macro system (not accounting for Liskell and Template Haskell), Haskell can ensure that new operators and functions can work predictably and safely with the various tools that it offers to the programmer:
1: Monads offer the programmer the ability to simulate state and (with continuations) create many diverse and useful flow control constructs.
2: Types (not the ones that the mainstream is used to) can essentially be used to abstract anything (an EDSL’s function can be abstracted as ‘type EdslFunc a b = …’) and nothing that can’t ‘understand’ the EdslFunc (like haskell prelude function or an IO action) is allowed to act on it.
3: Laziness lets the programmer define ‘infinite’ data structures and only evaluates data when it is absolutely needed. For example, applying the ‘length’ function on [1+2,4+4,6-3] does not evaluate the results of the expressions in the list, but it does confirm that the length is three. Naturally, I would not recommend using ‘length’ on a list of [1..] (which is an infinite list).

Granted, writing DSLs in lisp may resemble writing (or programming) a compiler. However, Haskell isn’t far behind. The only ‘problem’ with Haskell is that it can’t simulate Object Oriented Programming (not accounting for O’Haskell) with EDSLs (as well as other paradigms that don’t map well onto a purely functional model). IMHO, this is actually a good thing because concepts like OOP, while successfully hyped, present no real benefits over functional programming (or even procedural programming). It seems that OOP appears most attractive to glue-huffing, ruby-coding teenagers and curly-haired bosses. But that’s just me.

32. I agree with your comments, Nick. Haskell (and the monadic system) is powerful (enough) and you can do everything in a quite flexible way. As I like coding in dynamic imperative languages (Smalltalk and Factor are the last kings of imperative languages), I do not totally agree with you about OOP: objects (like a kind of ‘let over lambda’) are a good thing for procedural languages as they reduce side effects locally. As there is no kind of state (except through monads) in Haskell, the concept of OO is a heresy. A dynamically typed, OO, imperative-based DSL for the fast development phase and simulation (like the robot example in the SOE book) on a strongly typed, truly functional language is what I am looking for: Haskell will probably fulfill my goal for the next 10 years.

33. That there’s “no state” in Haskell is quite wrong; in fact, there’s plenty of state all over the place (though perhaps not quite as much as in other languages). The difference is the control over the state: specific bits of state in Haskell programs are restricted to specific places in the code in a much more comprehensive and, well, “bondage and discipline” way than in other languages. Techniques such as monads are ways to make this less painful.

And it is painful at times, because this way of working does come with a cost. That said, it has benefits, too, and I found those to outweigh the costs, once I’d learned Haskell reasonably well.

Where Haskell really shines (and why it can survive purity and still be a language in which it’s not too difficult to program) is in its use of various powerful mathematical abstractions (of which monads are only one: also consider functors, monoids, various forms of applicative stuff, functional reactive programming, and so on) and their pervasive use in the libraries. (I don’t personally find having a lot of libraries to be all that useful if I have to start programming in pseudo-Java again to use them.) I feel that before being able to fully evaluate Haskell, you need to learn to use at least some of these things effectively and build upon them. That opens the kind of new world that, for example, users of languages without macros just don’t understand when they don’t know Lisp.

I’m not saying you don’t lose things, too, when you use Haskell. Lisp’s macros offer some good things that Haskell just doesn’t do as easily. The real question, though, is, do you gain more than you lose?

34. I found this post searching for lisp and Haskell.

I know it’s two years old, but since you should have more experience with Haskell now: how does it feel working with a statically typed language after coming from a dynamic one?

• I have to say it really depends on the task. I like the feeling of completeness afforded by statically typed languages, but there are times when the language can feel like it’s getting in the way. I haven’t done as much yet with Haskell as I had hoped to, but most of my experiences with it to date have remained fun and worth doing.

35. Hi, Tim Daly mentions in this post (http://www.mail-archive.com/clojure@googlegroups.com/msg45050.html) the usefulness of the homoiconicity of Lisp (and also the limitations of Template Haskell in attempting to achieve this). Do you have any views in regard to this? Thanks

36. [...] Hello Haskell, Goodbye Lisp [...]

37. Just a random thought: Daniel Weinreb mentioned that the dichotomy between “scripting” and other languages is blurred by Lisp; when I first started learning Haskell and Lisp, I noticed this, too. Both languages have higher-order constructs that allow you to express things in an elegant way, yet both can be used to compile fast code.

Because of this, I’ve taken to calling these languages “transcendental”, because they transcend the categories we tend to try to pigeonhole languages in.

As a mathematician, I really like the mathematical purity of both Haskell and Lisp. While I also like the theoretical underpinnings of Haskell, I can’t help but enjoy the flexibility of Lisp — Haskell’s type system is a little too rigid for me, and it doesn’t quite make up for it in benefits.

Having said that, I still haven’t learned either language to nearly the level I would like!

38. About the partial application example you gave… I think you can quite easily implement it in lisp:

(defmacro reduce-function (new-fun-name fun &rest args)
  `(defun ,new-fun-name (&rest remaining-args)
     (apply ,fun ,@args remaining-args)))

So now you can use it:
(reduce-function new+ #'+ 10)
(new+ 20)

The only difference i can spot is that i must call reduce-function.

In general i believe that lisp is capable of letting the programmer implement new features and programming models..that’s lisp’s power, and it comes from macros, without underestimating Haskell (in which i’m new).

Please correct me if i understood something wrong

39. Best writing about Haskell I’ve seen so far. I saw that it has very good performance, but I got stuck in “OMG this is Perl again, gaaack” mode at first when I looked at it; this was very enlightening for me.