# Haskell: the Basics of Type Classes

So far, we've only brushed up against type classes. Today, we're going to take our first real look
at them in detail - both how to use them, and how to define them. This still isn't the entire picture around type classes; we'll come back for another look at them later. But this is a beginning - enough to
really understand how to use them in basic Haskell programs, and enough to give us the background we'll need to attack monads, which are the next big topic.

Type classes are Haskell's mechanism for managing parametric polymorphism. Parametric polymorphism
is a big fancy term for code that works with type parameters. The idea of type classes is to provide a
mechanism for building constrained polymorphic functions: that is, functions whose type involves a type parameter, but which need some constraint limiting the types that can be used to instantiate them - a way of specifying what properties the type parameter must have. In essence, it does very much the same thing for types that parameter type declarations do for values. A type declaration lets us say "the value of this parameter can't be just any value - it must be a value which is a member of this particular type"; a type-class constraint lets us say "the type parameter for instantiating this function can't be just any type - it must be a type which is a member of this type-class."

The type declarations in Haskell let us do this in a very flexible way, derived from something called the Curry-Howard isomorphism, which shows that type systems for lambda calculus are isomorphic to inference systems in some logic. The simply-typed lambda calculus has a type system related to simple propositional logic; Haskell, with its type variables, has a type system derived from first order predicate logic.

In fact, in Haskell, when you declare a type with a type variable, in the logic of the type
system, you've implicitly quantified the type with a universal quantifier. So a type "a -> b" in
Haskell is actually implicitly understood to mean: (∀ a,b : a → b).
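You can make that implicit quantifier visible in GHC (this sketch assumes GHC's ExplicitForAll extension; the function names are made up for illustration):

```haskell
{-# LANGUAGE ExplicitForAll #-}

-- The usual implicit form: a and b are quantified behind the scenes.
apply :: (a -> b) -> a -> b
apply f x = f x

-- The same type with the quantifier spelled out explicitly;
-- GHC treats the two signatures as identical.
applyExplicit :: forall a b. (a -> b) -> a -> b
applyExplicit f x = f x
```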

If you think back to predicate logic, you might recall that "such that" constructions like "∀ x such that x ∈ S: T(x) → U(x)" are not really primitives in the logic - they're just a shorthand for a logical implication: ∀ x : (x ∈ S) → (T(x) → U(x)). That's exactly what Haskell does with type-classes - it writes the constraint as a predicate (e.g., "Num a") followed by an implication symbol ("=>"). The reason for the separate implication symbol is
basically just to visually distinguish the part of the type that is a constraint on the type variables. So when you write something like "(Eq a) => a -> a -> Bool", what you're really saying is "for all types a such that a is in Eq, this function has type a -> a -> Bool".
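As a small reading exercise (this function is my own illustration, not from the prelude), here's a constrained type in action:

```haskell
-- "For all types a such that a is in Eq": the constraint lets the
-- body compare elements with ==, and nothing more - we couldn't,
-- say, add them without also requiring Num.
containsTwice :: (Eq a) => a -> [a] -> Bool
containsTwice x xs = length (filter (== x) xs) >= 2
```

So containsTwice 'a' "banana" is True ('a' occurs three times), while containsTwice 9 [1, 2, 3] is False.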

Enough theory - let's look at how we really use these things. Suppose we wanted to write a function
that added a list of values. Without using type classes, you might try to write it as:


>addValues :: [a] -> a
>addValues [x] = x
>addValues (x:xs) = x + addValues xs

If you were to try to compile that function, you'd get a compile error: it uses the "+" operation on values of type a, but the type declaration says that it works on any type a. What we need to do is add a constraint that selects types which support addition - like Num:


>addValues :: Num a => [a] -> a
>addValues [x] = x
>addValues (x:xs) = x + addValues xs



What we've done is choose the appropriate type predicate for selecting types that we can use in the function. And there's something important about that: the Haskell prelude and standard libraries define a lot of type classes. If you're not defining a type-class for something very specific to your application, there's a good chance that there's already a suitable type-class in the libraries, and you should use the library one, rather than defining your own. Your code will be much more portable and reusable if you take advantage of the type classes in the standard libraries.
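As a quick sketch of that advice (the function is a made-up example): rather than inventing a class for "things with a printable sum", combining two standard classes is all you need:

```haskell
-- Reusing the library's Num and Show classes instead of rolling
-- our own: Num gives us sum, Show gives us show.
describeSum :: (Num a, Show a) => [a] -> String
describeSum xs = "sum = " ++ show (sum xs)
```

Because it only mentions standard classes, this works unchanged for Int, Integer, Double, or any user-defined numeric type.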

Whether you're only going to use library classes, or define your own, you'll still need to be able to read type-class declarations, and read and write instances.

A type-class declaration consists of two basic parts: the first is a declaration of what operations (methods) an instance of the class must support; and the second is a set of default implementations of those methods.

For example here's the declaration for the type-class Ord, which is the class of all
ordered types:


class (Eq a) => Ord a where
    compare              :: a -> a -> Ordering
    (<), (<=), (>=), (>) :: a -> a -> Bool
    max, min             :: a -> a -> a

    compare x y | x == y    = EQ
                | x <= y    = LT
                | otherwise = GT

    x <= y  = compare x y /= GT
    x <  y  = compare x y == LT
    x >= y  = compare x y /= LT
    x >  y  = compare x y == GT

    -- Note that (min x y, max x y) = (x,y) or (y,x)
    max x y | x <= y    =  y
            | otherwise =  x
    min x y | x <= y    =  x
            | otherwise =  y



The first line declares the name of the class being declared, and any constraints on what types can be instances of it. The constraints are other type classes that a type must be an instance of before it can be made an instance of the new class. In the case of Ord, it says
"(Eq a) => Ord a", which means before a type can be an instance of Ord,
it must also be an instance of Eq. The type variable a will be used throughout the class definition to refer to an instance of type-class Ord.

After the declaration line, there is a list of function-type declarations. These declare the
methods that must be implemented in order to be an instance of the class. (We'll get to how
functions are made into type-class methods when we look at the instance declarations.)

What this says is: for a type to be an instance of Ord, it must first be an instance of Eq; and to be an instance of Ord, it needs to provide implementations
of methods for compare, <, <=, >, >=, max, and min. It then provides some default implementations - if an instance declaration doesn't specify an implementation of a method for its type, then the default from the class declaration will be used. There's one interesting thing to note about the default methods: a default method can use any operation specified in the type-class, or required by the class constraint. But the default methods are not guaranteed to make sense together: if you were to declare an instance of Ord without providing an implementation of either <= or compare, you'd wind up with all of the Ord operations being infinite loops! The default implementation of compare is written in terms of <=, and the default implementation of <= is written in terms of compare.
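To see why that matters, here's a toy class of my own (not from any library) with the same mutually-recursive-defaults pattern; the instance is safe only because it overrides one of the methods:

```haskell
-- Each default is written in terms of the other, mirroring the
-- compare/<= situation in Ord: an instance must override at least
-- one method, or every call loops forever.
class Parity a where
  isEven :: a -> Bool
  isOdd  :: a -> Bool
  isEven = not . isOdd
  isOdd  = not . isEven

-- Safe: we break the cycle by defining isEven directly,
-- and pick up isOdd from the default.
instance Parity Int where
  isEven n = n `mod` 2 == 0
```

An empty instance (`instance Parity Int`) would typecheck just fine - and then both methods would diverge at runtime.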

So what would an instance of class Ord look like? Let's try putting together the beginnings of an implementation of surreal numbers:


>-- A surreal number is a number with two sets of surreals:
>--    a left set which contains elements less than it, and
>--    a right set which contains elements greater than it.
>-- By definition, zero is the surreal with both sets empty.
>
>data Surreal = Surreal [Surreal] [Surreal] deriving (Eq, Show)
>srZero = Surreal [] []
>srPlusOne = Surreal [srZero] []
>srMinusOne = Surreal [] [srZero]
>srPlusOneHalf = Surreal [srZero] [srPlusOne]



Then, we implement the basic ordering operations on surreal numbers. We don't do anything special - just implement them as perfectly normal functions. As a reminder, here's the definition of ≤ for surreal numbers:

Given two surreal numbers x = {XL|XR} and y = {YL|YR}, x ≤ y if and only if:

1. ¬∃xi∈XL : y ≤ xi, and
2. ¬∃yi∈YR : yi ≤ x

>srleq :: Surreal -> Surreal -> Bool
>srleq x@(Surreal xl _) y@(Surreal _ yr) =
>      all (\yi -> not (srleq yi x)) yr &&
>      all (\xi -> not (srleq y xi)) xl



Now, we can bind this function to the type-class method, so that it will be used for surreal numbers:


>instance Ord Surreal where
>   x <= y = srleq x y


The where clause of the instance declaration contains a perfectly normal list of function declarations. The only reason for separating srleq from the instance declaration is that it allowed us to write a type declaration for it, and it made the code clearer to have a separate definition of the ≤ operation (with the explanation of what ≤ means for surreal numbers) than it would have been to put the implementation in-line.

With that declaration, we can now use all of the comparison operations defined in Ord on our surreal numbers:

*Main> srZero <= srPlusOne
True
*Main> srZero <= srMinusOne
False
*Main> srPlusOneHalf > srZero
True
*Main> srPlusOneHalf < srPlusOne
True
*Main> srMinusOne <= srZero
True
*Main> compare srZero srMinusOne
GT
*Main>


### Comments

This brings up an important point.
Mathematicians use the term "class" to mean something like a "set". The difference between a "class" and a "set" depends on your axiomatisation, but basically, a "class" is some kind of meta-thing which lets you talk about collections without invoking issues like Russell's paradox. If you haven't seen Russell's paradox before, it's worth reading about it before going on.
In a nutshell, Russell's paradox is based on constructing a "set" W which contains all sets which do not contain themselves. There are various ways to get around this, but most involve axiomatising sets such that W isn't a set. So we define a broader notion of collection, the "class", of which sets are but one variety. Roughly speaking, if it plays nice with logic, it's a "set", and if it doesn't, it's a "class".
Type theorists use this terminology in a similar way.
You can think of a type as a "set" of values. So, for example, the type "int" in C is a set of whole numbers which must at least contain -32,768 to 32,767. (The theory pedants present will note that it's slightly incorrect to refer to types as "sets" when you get to fancier types. I'll deal with this in a moment.)
A "class", then, is a "set" of types. You can see that from the article.
A complication is that in a more traditional OO language, the word "class" actually does three things.

class Foo { ... };

This declares a class (i.e. a set of types) called Foo. It also declares a concrete type called Foo. It also declares that the type Foo (and all its "subclasses") is a member of the class Foo.
In Haskell, these three operations are decoupled. You say "class" when you want to declare a class, "data" or "newtype" to declare a concrete type, and "instance" to declare that some type is a member of some class.
This makes for some extra expressiveness that you don't get in traditional OO languages, such as being able to declare a built-in type to be a member of a user-defined class. (In Java, this would require hacking the library source to import the new interface!)
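Here's a minimal sketch of that decoupling (the class name and instances are invented for illustration): a brand-new class, with built-in types made members after the fact, and no changes to the types themselves:

```haskell
-- A user-defined class ("class" declares the set of types)...
class Describable a where
  describe :: a -> String

-- ...and "instance" declarations adding the built-in Bool and Int
-- to it, without touching their definitions.
instance Describable Bool where
  describe True  = "yes"
  describe False = "no"

instance Describable Int where
  describe n = "the number " ++ show n
```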
Now, the nit earlier: I claimed that a type is a set of values. This isn't true in Haskell. To see why, consider the following type:

data Foo = Foo (Foo -> Bool)

This declares that the type Foo is isomorphic to the type 2^Foo. There are exactly no sets in the world which could possibly satisfy this isomorphism. Yet this is a valid Haskell type!
The reason why it works is that types are not sets; they have more structure than that. This is why the rich area of type theory is heavily based on category theory: it's the theory of mathematical structure.

By Pseudonym (not verified) on 16 Jan 2007 #permalink

"The simply-typed lambda calculus has a type system related to simple propositional logic; Haskell, with its type variables, has a type system derived from first order predicate logic." Wow, that's cool! I'd heard about the Curry-Howard isomorphism, but it's usually described as an isomorphism between proofs and programs. Suddenly it became slightly more graspable. Let me guess: these dependent type languages, like Epigram and Cayenne(?), have type systems related to higher order logic, then?

This series is great, not least because of Pseudonym's comments. Thanks!

It is not quite correct to use deriving (Eq) for the surreals; instead, I think you can use the following instance:

instance Eq (Surreal) where x == y = srleq x y && srleq y x

By Ørjan Johansen (not verified) on 16 Jan 2007 #permalink

I'm enjoying these tutorials as I tackle Haskell as my new language for 2007. Is there a way to get an RSS feed just for this category? If not, consider that a feature request!

Actually, Haskell's type system corresponds to _second_ order logic, not first order (pure second order, to be more precise). Dependent types correspond to first order logic.

By Alexey (not verified) on 18 Jan 2007 #permalink

I had a related question for you. How does the type system deal with bot? Say the == function has type (a -> a -> Bool), but it could also return bot if one of the arguments happens to be bot. Am I correct? Does this mean that all types share one member, bot - i.e., is bot common to all types (sets)? This also raises the question of what the value of (bot == bot) should be: T/F/bot? Should it also be bot? Or does each set have its own bot? Or is each instance of bot itself unique? Or is bot outside the scope of the type system - is it merely a run-time anomaly?

Zoheb: Yes, all types in Haskell are "lifted", and bottom == bottom is bottom. (Most modern Haskells provide mechanisms for dealing with "unboxed values", but this is an extension.)
The reason why each type doesn't need its own "bottom" is parametric polymorphism. Haskell has built-in functions such as:

undefined :: forall a. a
error :: forall a. String -> a

both of which return "bottom". I've put in the "foralls" to make the situation absolutely clear: For any type a, "undefined" is of type a.

By Pseudonym (not verified) on 18 Jan 2007 #permalink
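Pseudonym's point is easy to check. In this sketch (my own, with made-up names), the same undefined inhabits two different types, and laziness keeps a list of bottoms harmless as long as we only ask for its length:

```haskell
-- undefined fits anywhere, because its type (forall a. a) unifies
-- with any type we ask for:
brokenInt :: Int
brokenInt = undefined

brokenFlag :: Bool
brokenFlag = undefined

-- Laziness lets us build structures around bottom without touching
-- it; only forcing one of the bad values itself would diverge.
safeCount :: Int
safeCount = length [undefined, undefined, undefined]
```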

I am trying to figure out bot so that it makes sense mathematically.

So, it looks like each type/set needs to have more than one bot. Or else, clearly, bot == bot must return true. Each invocation of bot needn't necessarily be unique, otherwise bot == bot would have to return false. I don't know what implications this has for the referential transparency of a functional program. Or is the mistake in trying to map Haskell's == to equality in maths? Clearly, we can ask if two functions are equal in FOL (am I wrong here?) but not so in Haskell - it wouldn't get past the type checker. I have no knowledge of type theory/set theory, but it looks like bot arises when we make a mathematically invalid statement in Haskell, i.e. supplying an argument to a function that is outside its domain. Does type theory prevent such a situation from arising? It seems fairly difficult to do so, unless we allow type resolution algorithms to not halt.

Moreover, the presence of catch lets one continue after performing an apparently illegal operation. It looks more like a monad now. So, if I were to write my own == operator for my type that catches bot and returns true if both arguments are bot (false otherwise), have I created a set which has exactly one bot? Or should I just stop trying to map Haskell's == to a mathematical notion of equality? Or maybe types just don't correspond to sets, as you explained before!

"∀ x such that x ∈ S: T(x) → U(x)" are not really primitives in the logic - they're just a shorthand for a logical implication: ∀ x : (x ∈ T) → (T(x) → U(x)).
I think it should be this (S instead of T in the second):
"∀ x such that x ∈ S: T(x) → U(x)" are not really primitives in the logic - they're just a shorthand for a logical implication: ∀ x : (x ∈ S) → (T(x) → U(x)).

By Jan Hermanns (not verified) on 18 Jan 2007 #permalink

Zoheb, denotational semantics actually distinguishes strong equality and weak equality. The difference, as you might suspect, is in how they handle bottom.
In Haskell, strong equality is spelled "=", but weak equality is spelled "==". So using this notation, bottom = bottom would be true (or it would be, if "=" were a function), but bottom == bottom would be bottom.
If you're curious, the classic book on this topic is Joseph E. Stoy, Denotational Semantics: The Scott-Strachey Approach to Programming Language Semantics.
Christopher Strachey is a computer scientist more people should know about. He not only invented the distinction between l-values and r-values in programming languages, he also coined the term "currying". (So you can blame him for the fact that it's not called "Schönfinkeling".)
My favourite Strachey quote:

It has long been my personal view that the separation of practical and theoretical work is artificial and injurious. Much of the practical work done in computing, both in software and in hardware design, is unsound and clumsy because the people who do it have not any clear understanding of the fundamental design principles of their work. Most of the abstract mathematical and theoretical work is sterile because it has no point of contact with real computing.

By Pseudonym (not verified) on 19 Jan 2007 #permalink

Zoheb:

I think the key to getting over your confusion is to realize that bottom is not really a value. Bottom is a representation in the semantics for a failed computation - a function that doesn't return is treated as returning bottom. So you can't compare bottom to bottom - doing the comparison would require getting the function to complete and return a value - but bottom represents the fact that the function never completes and returns a value.
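In GHC, this shows up concretely: forcing the comparison raises the error that stands in for the failed computation, which we can only observe from IO by catching it. A sketch (my own; the function name is made up):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Forcing bottom == bottom never yields True or False; evaluation
-- just raises the error representing the failed computation.
bottomComparisonFails :: IO Bool
bottomComparisonFails = do
  r <- try (evaluate ((undefined :: Int) == undefined))
         :: IO (Either SomeException Bool)
  case r of
    Left _  -> pure True    -- the comparison bottomed out, as expected
    Right _ -> pure False   -- would mean we somehow got a real Bool back
```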

An excellent explanation of 'bottom' can be found in the book by Richard Bird, "Introduction to Functional Programming", second edition, Prentice Hall. I recommend the book to anyone wanting to learn Haskell.