Saturday, October 27, 2012

Haskell Types and Type Classes (2.5)


In category theory a category is composed of objects and morphisms between the objects.  A functor is a mapping from one category to another category.  There are two parts to every functor: a mapping for the objects in a category and a mapping for the morphisms.

Category theory works in Haskell because Haskell is composed of pure functions.  The Hask category is composed of types (the objects) and functions (the morphisms between the types).  The Functor type class in Haskell isn't exactly a true category theory functor.  The Functor type class only maps functions to functions (leaving out the types).  Additionally, it really only applies to one kind of functor ... like a functor from Hask to some subset of Hask.  So compared to a category theory functor it's kind of lame, but compared to the alternative of not having it, it's kind of awesome.
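Even the slimmed-down Haskell Functor is still expected to behave like the categorical one on the morphism side: a lawful instance should satisfy the two functor laws.  Here's a minimal sketch of those laws as checkable properties, using the standard Functor and fmap (the property names are made up; this spot-checks sample values rather than proving anything):

```haskell
-- Functor law 1: mapping the identity function changes nothing.
lawIdentity :: (Functor f, Eq (f a)) => f a -> Bool
lawIdentity x = fmap id x == x

-- Functor law 2: mapping a composition is composing the maps.
lawComposition :: (Functor f, Eq (f c)) => (b -> c) -> (a -> b) -> f a -> Bool
lawComposition g h x = fmap (g . h) x == (fmap g . fmap h) x
```

These are exactly the conditions a category theory functor's morphism mapping must satisfy; Haskell just can't enforce them, so instances are trusted to obey them.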


class KFunctor f where
    kfmap :: (a -> b) -> f a -> f b

data KEither a b = KLeft a | KRight b
    deriving Show

data KMaybe a = KJust a | KNothing
    deriving Show

instance KFunctor KMaybe where
    kfmap _ KNothing = KNothing
    kfmap f (KJust a) = KJust (f a) 

instance KFunctor (KEither e) where
    kfmap _ (KLeft e) = KLeft e
    kfmap f (KRight r) = KRight (f r) 

jabber x = x + 5


kfmap jabber $ KNothing         -- KNothing
kfmap jabber $ KJust 5          -- KJust 10
kfmap jabber $ KLeft "message"  -- KLeft "message"
kfmap jabber $ KRight 5         -- KRight 10


The KMaybe KFunctor is relatively straightforward (at least as far as these things go).  The (KEither e) KFunctor on the other hand isn't so obvious.  Writing out the code for (KEither e) KFunctor helped me a lot in figuring out what was actually going on.  So again remember that KFunctor can also be defined like so:

class KFunctor (f :: * -> *) where
    ...

And that KEither is going to have a kind of * -> * -> *, but if we curry it, then we will get the desired kind for the KFunctor ... ie (KEither e) :: * -> *.  Now it personally bothered me that I had no idea what the "e" was in the (KEither e).  But really the "e" doesn't actually matter because it doesn't show up in the type signature of kfmap ( ie (a -> b) -> f a -> f b ).  The consequence is that the function passed to kfmap ( ie the (a -> b) ) will never have any way of dealing with the "e".  Which is totally fine depending on what usage you want to get out of it.  In this functor's case we are more or less using (KEither e) the same way KMaybe is being used ... the only difference is that instead of KNothing you can actually have some sort of error message.  For a more complicated usage of KEither, KFunctor won't do the trick and you'll have to use something else.
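As a standalone illustration of that point (the definitions from above are repeated so the snippet compiles on its own), the same kfmap works no matter what the "e" is, precisely because kfmap has no way to touch it:

```haskell
class KFunctor f where
    kfmap :: (a -> b) -> f a -> f b

data KEither a b = KLeft a | KRight b
    deriving (Show, Eq)

instance KFunctor (KEither e) where
    kfmap _ (KLeft e)  = KLeft e
    kfmap f (KRight r) = KRight (f r)

-- The "e" is String in one example and Bool in the other;
-- kfmap never sees it, so the one instance covers both.
leftsSurvive :: KEither String Int
leftsSurvive = kfmap (+ 1) (KLeft "message")

rightsChange :: KEither Bool Int
rightsChange = kfmap (+ 1) (KRight 5)
```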

Thursday, October 25, 2012

Haskell Types and Type Classes (2) ... Kinda

What would the following type mean?

data Ikky a b = Ikky a b

Well, that would be a type Ikky that is parameterized by two other types.  So ...

Ikky Int Int
Ikky [Int] Float
Ikky (Ikky Int Int) [Int]

These are all valid instances of Ikky.  Now what about this:

data Ikky a b = Ikky (a b)

Today's topic is about Kinds ... the Type of a type.

The appropriate GHC compiler directive is:


{-# LANGUAGE KindSignatures #-}

With it we can rewrite "data Ikky a b = Ikky (a b)" into:

data Ikky (a :: * -> *) b = Ikky (a b)

Let's talk about Kinds in a bit more detail.

So most types we are aware of have a Kind of *.  * means that it is a normal type, so ...

Int
[Int]
Maybe Int

Are all of kind *.  However, how might we specify something like ...

Maybe

Well, we can sort of think about it as a function that takes a type as a parameter and returns another type as the result.  So * -> *.  This can be expanded into * -> * -> * and so on and so forth.  Also there's the (* -> *) -> * type of kinds, which are similar conceptually to functions like map ... they take a type "function" that goes from one type to another and they apply it to something, and then yield a new type.  And of course there's also * -> (* -> *), which, because the arrow is right associative, is really just * -> * -> * again ... so kinds work more or less the same as normal function types except with types as the parameters.
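To make those shapes concrete, here's a small sketch of declarations that inhabit a few of these kinds (the type names are made up; the kinds GHC assigns are in the comments):

```haskell
{-# LANGUAGE KindSignatures #-}

data Plain a = Plain a                       -- Plain  :: * -> *
data Two a b = Two a b                       -- Two    :: * -> * -> *
data Higher (f :: * -> *) = Higher (f Int)   -- Higher :: (* -> *) -> *
```

Higher is the map-like one: it has to be handed something of kind * -> * (such as Maybe), which it then applies to Int.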

Although it's not exactly the same.  In normal function types we see things like a -> a.  Here we just have a single *.  Doesn't really leave much room for parametric polymorphism on the type level.  And that's because it doesn't allow parametric polymorphism on the type level.  Why not?  Well, there's no mathematical reason why you can't do it.  However, for whatever reason they decided not to include polymorphic kinds or even user defined kinds in Haskell (actually I think recent GHC releases have introduced polymorphic kinds and user defined kinds ... but so far I haven't gone through the work of understanding their syntax, so let's just focus on normal kinds for now).

Okay, so data Ikky (a :: * -> *) b = Ikky (a b) would be used like ...

x = Ikky (Just 5)

And "x" would have a type like ...

x :: Ikky Maybe Int

I'm honestly not really sure why anyone would ever want to do that sort of thing, but I've probably just not encountered any problems that are best structured in such a manner.  There are some type classes that make use of this sort of thing, so it's probably more interesting to talk about how kinds are used with type classes, but first consider the following:

x = Ikky (Left 5)

Now Left belongs to the Either type.

data Either a b = Left a | Right b

Just specifying "Left 5" lets us know that we're dealing with "Either Int b", but it doesn't tell us what the "b" is.  However, that's exactly what the Ikky type is all about.

So ...

x = Ikky (Left 5)
x :: Ikky (Either Int) b

And you might say, "but what can we do with the 'b'".  And the answer is: not much.  We don't know what concrete type it is, but we can still put it into a container or other type constructors, or return it.  If that doesn't make much sense don't worry about it too much.  Just keep it in mind if you ever take a look at the Functor type class and see it applied to "Either e" or something like that.
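Here's a short sketch of what "put it into a container ... or return it" looks like in practice, using the standard Either (the function names are invented for the example):

```haskell
-- We know nothing about 'b', so we can't inspect it ...
-- but we can still wrap it up or hand it back untouched.
stash :: Either Int b -> [Either Int b]
stash x = [x]

rewrap :: b -> Maybe b
rewrap = Just
```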

Here's some type classes.

class Jabber1 (a :: *) where
    wocky1 :: a -> a

class Jabber2 (j :: * -> *) where
    wocky2 :: j a -> j b

class Jabber3 (j :: * -> * -> *) where
    wocky3 :: j a b -> j c d

data Ikky (a :: * -> *) = Ikky (a Int) 
data Ikky2 b (a :: * -> *) = Ikky2 (a b)

data Neep a = Neep a

class Jabber4 (j :: (* -> *) -> *) where
    wocky4 :: j a -> j a 

instance Jabber4 Ikky where
    wocky4 = undefined

x (Ikky Nothing) = undefined
x (Ikky (Just x)) = undefined 

instance Jabber4 (Ikky2 x) where
    wocky4 = undefined

All of that compiles.  The important thing is to sit down with it until you're confident that you get the hang of what's going on.


Anyway, here's the gist.

class Jabber a where
    wocky :: a -> a

This means that you need to hand the Jabber type class a nice first order type and the wocky function will map from that type to that type.

data Neep = A | B | C
data Blah a = Blah a

instance Jabber Neep where
    wocky a = undefined

instance Jabber (Blah Int) where
    wocky a = undefined

Now, then there are type classes like this:

class Jabber j where
    wocky :: j a -> j b

which can be more clearly written like ...

class Jabber (j :: * -> *) where
    wocky :: j a -> j b

This type class demands a higher order type.  So the "data Neep = A | B | C" isn't valid any more, but the "data Blah a = Blah a" is valid as long as you leave off what type the "a" is.

instance Jabber Blah where
    wocky a = undefined

This is a fundamentally different function from the earlier wocky.  The first wocky was moving some type to the same type.  This new higher order typed wocky is moving one type to another type, but within some other container type.  And of course you can make those kind signatures really complex, so the story only gets more complicated.


So the first question might be, "why do all of this?"  And the answer is probably along the lines of, "this is the type of things that mathematicians like to dream up."  However, "why do this in Haskell ... it seems really complex and abstract", seems like a much more pragmatic question, and the answer is unfortunately, "when you want to do complex and abstract things."  Originally, you probably could have gotten away with not thinking about this and using Haskell ... but at some point someone squinted their eyes and said, "hey Haskell sort of looks like a category", and the next thing you know there's a mountain of category theory concepts that are a core part of Haskell.  There are certain Monads and Functors which I could not make sense of at all until I went through and figured out what was going on at the Kind level.

Now personally, I like the idea of kinds a lot more than sub-typing.  Sub-typing is of course the thing that most programmers are familiar with ... or at least most programmers are familiar with sub-classing and basically they sometimes use it to handle situations where you want sub-typing.  Unfortunately, few languages have any sort of useful kind stuff going on, so I haven't really been able to experiment with it very much.  However, GHC has some sort of polymorphic kind support now, and I've gotten a renewed interest in assembly.  So I can stare at what Haskell is doing with kinds to see what I think about it.  Or, maybe ... I can get some sort of rudimentary programming language up and running and experiment with type systems on my own.

Thursday, October 4, 2012

Haskell Types and Type Classes (1)

Let's pretend we have a simply typed language.  We can begin by populating this language with some primitive types.

Int
Char
Float

This is a good start, but we might want to allow the user to create some types.

data ? = ?' <something>* (| ?'' <something>* )*

Okay, I'm abusing several notations pretty badly ... let me explain what we have here.

data is a keyword that tells the compiler that the user is defining a new type.

? is a place holder for whatever the new type name is going to be ... the user needs to fill this in for it to compile.

?' and ?'' (and even ?''' etc) are different constructors for this type.  We could just construct the type with a function by the same name as the type, but that's less interesting than letting the user provide many different possible constructor names.  Specifically, because some of the constructor names might take different parameters.

<something> is a preexisting type.  So we might see something like:

data Circle = Circle Float
data VennDiagram = Venn Circle Circle

The * on <something> indicates that we might have multiple somethings (or maybe zero) *and* they might be different somethings.  In the above example we have a constructor "Venn" that takes two circles as parameters.  

The ( ... )* indicates that we might have multiple constructors after the first one or we might have zero more constructors; each one following the same rules as the initial one.

Now we have primitive types and user defined types ... what about the types of functions?

<something> -> <something>

Okay, maybe an example that looks a little bit less pointy.

Int -> Int

Okay, cool.  This is a function that takes an Int and returns another Int.  So maybe negate or absolute value or something.  However we can make things a little bit more complex.

Int -> Int
Int -> Int -> Int
Int -> (Int -> Int) -> Int

There's a mathematical process known as currying which basically states that a function that takes multiple parameters is equivalent to a function that takes a parameter and then returns a function that takes the next parameter and so on.


λ x y z . x y z <=> λ x . λ y . λ z . x y z



Additionally, the arrow (->) is right associative which means that

Int -> Int -> Int <=> Int -> (Int -> Int)

And the upshot of that is that if you want to write a function type that indicates you take a function as a parameter you need parens ... so

Int -> (Int -> Int) -> Int 

That function type indicates a function that takes an integer, then a function that takes an integer and returns an integer, and finally returns an integer.
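Here's a quick compiling sketch of both ideas, currying and the parenthesized function parameter (the function names are made up for the example):

```haskell
add :: Int -> Int -> Int
add x y = x + y

-- Currying in action: partially applying add yields a new function.
addFive :: Int -> Int
addFive = add 5

-- Int -> (Int -> Int) -> Int: an Int, then a function, then an Int back.
applyThenAdd :: Int -> (Int -> Int) -> Int
applyThenAdd n f = f n + n
```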

The simply typed language works, but it leaves something to be desired.  If you want an array or a list or some other container, then you have to make it by hand ... once for each data type you want to use it with.  Hardly ideal.

So how about parametric polymorphism.  That means that instead of writing new data types that take specific existing data types, we get to write new data types that take arbitrary data types.

data List a = Cons a (List a) | Nil

The 'a' can be any data type ... this allows us to write functions that can operate over a List of any type.

List a -> Int

So that's probably a function that returns the number of items in the list.  And we can also do things like this:

List Int -> Int

This one is probably a function that returns the sum or product of all the Ints in a List of Ints.  This can also be generalized some.

List a -> ( a -> b ) -> List b

This function takes a List of "a"s and a function that can convert an "a" into a "b", and then returns a List of "b"s.  This is called map or select in many different languages.
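Here's what those three type signatures might look like implemented against the List type from above (repeated here so the snippet stands alone; the function names are made up, and the argument order is kept as in the signatures):

```haskell
data List a = Cons a (List a) | Nil
    deriving (Show, Eq)

-- List a -> Int: count the items.
len :: List a -> Int
len Nil         = 0
len (Cons _ xs) = 1 + len xs

-- List Int -> Int: sum all the Ints.
total :: List Int -> Int
total Nil         = 0
total (Cons x xs) = x + total xs

-- List a -> (a -> b) -> List b: the map/select from the text.
mapList :: List a -> (a -> b) -> List b
mapList Nil _         = Nil
mapList (Cons x xs) f = Cons (f x) (mapList xs f)
```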

What if we do something like this:

Pair a b -> (c -> c) -> Pair a b

This function takes a Pair type which contains one item of type 'a' and one item of type 'b' and then it takes a function from 'c' to 'c' and finally returns another Pair (after running the function over both the items in the Pair).  It seems about right, but it won't work in Haskell because the type checker will assume that 'c' is equal to 'a' (or actually 'b' if the first thing you do in the function is try to convert the second item in the Pair).  The type checker will then signal an error because you try to feed a function that takes an 'a' an item that has type 'b'.

To get around this you have to do something like this:

Pair a b -> ( forall c . c -> c ) -> Pair a b

This invokes Universal Quantification on the function argument itself (a so-called rank-2 type), which tells the type checker that the function you pass in must work for *every* 'c' ... so it can be applied to the 'a' and then applied again to the 'b'.  The whole point of going to parametric polymorphism was to allow functions that take arbitrary types after all, but the type checking algorithm has to be different if you're going to allow this type of higher order function type checking.  In the Glasgow Haskell Compiler (GHC), this option has to be activated using the following pragma:

{-# LANGUAGE Rank2Types #-}
or
{-# LANGUAGE RankNTypes #-}

Alternatively, you can also activate this compiler extension by using a command line flag.  

Other Haskell compilers might not support this option ... I don't really keep up with what all of them support, so you'll have to check in on it if you're looking into a compiler other than the Glasgow one.
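Here's a compiling sketch of the Pair example under RankNTypes (the Pair type and the function name are assumptions for the illustration).  Worth noting: the only total function of type forall c . c -> c is id, so this particular example is mostly about convincing the type checker.

```haskell
{-# LANGUAGE RankNTypes #-}

data Pair a b = Pair a b
    deriving (Show, Eq)

-- The forall lives inside the parentheses: the caller must supply a
-- function that works for every possible 'c', so the body may apply
-- it at type 'a' and again at type 'b'.
both :: Pair a b -> (forall c. c -> c) -> Pair a b
both (Pair a b) f = Pair (f a) (f b)
```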

So, parametric polymorphism is pretty nice, but sometimes we just want to do something specific when we have a specific type.  This is where type classes come in.

class X a where
    methodName :: a -> a

instance X Circle where
    methodName a = a

instance X Square where
    methodName a = a

(X a) => a -> a

The class keyword introduces a type class.

The instance keyword introduces a specific instance of this type class.

The (X a) => ... indicates that you have a function that can take any 'a' as a parameter, as long as that 'a' is also an instance of the type class 'X'.
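Putting those pieces together in one compiling sketch (Circle is from the earlier example; the doubled-up method is made up to show the constraint doing some work):

```haskell
class X a where
    methodName :: a -> a

data Circle = Circle Float
    deriving (Show, Eq)

instance X Circle where
    methodName a = a

-- (X a) => a -> a: usable on any type that has an X instance.
twice :: X a => a -> a
twice = methodName . methodName
```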

Okay, so I'm basically going to leave off here because things start to get really complicated.  Type classes bring up the issue of kinds (the type of your types).  And they also bring up the issue of "what if I want to do ..." to which the answer is "you can't ... unless you activate this compiler extension".  As far as I can tell the Haskell type classes in the standard are actually pretty simple (but allow powerful things to happen).  I haven't actually read the standard, but I'm just kind of inferring based off of all the compiler extensions GHC has.

The story that happens again and again with type systems is along the lines of, "why doesn't this work?"  And the answer is usually, "yes that makes perfect sense, but we didn't write the algorithm that way because it's hard or impossible or we wanted it to be easy for other implementations to match our specification or etc."  With Haskell it looks like they wanted to make the type system interesting and powerful, but also relatively easy to implement so that there would be lots of different implementations.  With GHC it looks like they took the liberty to implement a bunch of other interesting but difficult features on top of what already exists within Haskell.

I hope to cover a bunch of these extensions later (starting with what's available with type classes).  And yes this means that I'm going to be bumping into monads eventually ... although I'm going to try to avoid yet another "monads are really like ..." post.  The internet already has enough of those and I actually wanted to start a series like that before I realized what I had done and pulled the plug.  The gist, btw, is that monads are an abstract algebraic concept that is sort of related to Haskell because Haskell is only pure functions.  If you want to learn what a monad is, learn category theory.  If you want to learn how to use monads in Haskell, read a bunch of blog posts.

Wednesday, October 3, 2012

Dynamic vs static ... vs system f vs prototype inheritance vs macros vs eval vs ...

It's general knowledge in the programming world that there are two major types of programming languages: dynamically typed languages and statically typed languages.

It used to seem like there was an ongoing argument about which type of language was better.  However, recently it seems to me that no one really talks about this anymore.  People tend to use dynamic languages for some tasks and static languages for other tasks.  

A dynamically typed language will tend to look something like this:

function add( a, b )
    return a + b
end

class A
    _value
    function +( o )
        return o + _value
    end
end

class B
    _value
    function +( o )
        return 5 + _value
    end
end

add( B.new, A.new )

While a statically typed language will tend to look something like this:

int add( IAddable a, IAddable b )
{
    return a.add( b );
}

class A : IAddable
{
    ...
}

class B : IAddable
{
   ...
}

add( new A(), new B() );



They basically look the same, but the major differentiator is this: a statically typed language makes you classify your constructs ahead of time, preventing your program from compiling if the classifications do not match up correctly.  A dynamically typed language allows your program to run without checking your classifications, shutting down the program if a construct is asked to do an operation that it is unable to perform.

There are of course a bunch of different variations that make things even more complex.  The picture I have painted in the code above tells a story about writing a compiler with the machine in mind.  Classes are memory layout specifications, objects are pointers to memory slots, and functions are basically dereferences.  In this story there really isn't any difference between static and dynamic languages.  The static language is more verbose, but it protects you from simple errors and it gives compiler mandated hints to programmers as to what is going on.  The dynamic language is more permissive, but it does nothing to prevent trivial failures and is a maze of symbols without meaning.

I think things get much more interesting when you look at the variations.  Smalltalk, Self, and Lua all tell a story about ideal objects and messages that get delivered to them.  There's really no such thing as an operation on an object ... it's more like asking an entity "leche" and seeing if it understands what to do.  Or seeing what it will do.  On the other hand, ML and Haskell tell a story about mathematical rigor and functions.  You are no longer performing operations on anything really ... you're evaluating functions much in the same way as if you were performing it with pen and paper instead of silicon and electrons.

Finally, you add in "other features" and see how people begin to grow weary of the process.  The familiar slots and memory addresses melt away and much more complex questions arise.  Metaobjects allow objects to decide in a Turing complete way how they will react to your program itself.  You thought you knew what subclassing meant ... let's see how else we might define that.  Macros and syntax extensions change the very meaning of the programming language at compile time ... what is the best language to solve *this* problem?  What if we define data types not by their memory slots, but model them after algebra?  What happens if we try to generalize that process?  If all our functions are pure, doesn't that mean our language forms a category?  I wonder if that allows us to do anything useful.  Maybe we want to find a loophole in the halting problem; well, there's a solution to that pesky mathematical proof depending on how hard you are willing to work.  Or maybe you want to go the other way and evaluate arbitrary code after your program is already finished.

When it comes to most of the programmers I have interacted with, they seem to have a pretty good grounding in memory slots and operations upon them.  And, for the most part, this seems sufficient to create software that can be sold.  However, I feel a bit weary of trying to model more and more complex problems by thinking about how to fit them in a computer.  I would much rather be thinking about how to get the computer to fit my problem.