Sunday, July 15, 2012

Lambda calculus


Lambda calculus is pretty simple.  Here are a few expressions:


λ x . x


x


(λ x . x) x


x x


(λ x . x x) x


That's more or less every kind of expression you are likely to see in lambda calculus.




Functions are the only thing in lambda calculus.  You can create functions with λ and you can call functions.  Abstraction and application.




λ x . x x 
(λ x . x x) x


Function application is left associative.  So "x x" is the function 'x' taking itself as a parameter.  "x y z" can be rewritten "(x y) z", meaning that 'x' takes 'y' as a parameter and returns another function which takes 'z' as a parameter.  If you want things to evaluate from the right you have to use parentheses: in "x (y z)", the function 'y' takes the function 'z' as a parameter and then the result is passed to the function 'x'.
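
If it helps, here is a rough Python analogue (the functions are made up just for illustration; Python isn't lambda calculus, but its call syntax groups the same way):

# Application groups to the left, so x(y)(3) means (x(y))(3).
x = lambda f: lambda v: f(v) + 1     # takes a function, then a value
y = lambda v: v * 10                 # takes a value

print(x(y)(3))       # "x y z" with z = 3:  (x y) 3  ->  y(3) + 1  ->  31
print((x(y))(3))     # the same thing with the grouping written out explicitly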


Functions in lambda calculus only have one parameter, but multiple parameters can be simulated by having a function return another function that takes the next parameter (see currying for more information about this type of thing).


(λ x . λ y . x y)


It's not a big deal to just write functions that take multiple parameters and then assume that the fully written-out version uses currying to achieve the same effect.


(λ x y z . x z ( y z ) )
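
As a rough sketch, here's that term transcribed into Python (the 'k' function is my own made-up example argument, just to show the curried calls chaining):

# (λ x y z . x z ( y z )) written out as three nested one-parameter functions.
s = lambda x: lambda y: lambda z: x(z)(y(z))

# k plays the role of λ a . λ b . a; feeding it to s twice happens to give
# back whatever z is, which is enough to see the currying at work.
k = lambda a: lambda b: a
print(s(k)(k)("hello"))    # -> hello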




To calculate a function application you just replace the variable bound by the λ with the argument the function is being applied to.


(λ x . λ y . x y) (λ z . z) ( λ w . w )


= ( λ y . ( λ z . z ) y ) ( λ w . w )
= ( λ z . z ) ( λ w . w )
= ( λ w . w )
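
The same reduction can be checked mechanically; here it is transcribed into Python lambdas (for this particular example Python's evaluation order gives the same answer as the rewriting above):

term = (lambda x: lambda y: x(y))(lambda z: z)(lambda w: w)

# The result should behave like (λ w . w), the identity function.
print(term(42))    # -> 42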




What happens if one of the variable names in a function you are inserting is already used in your function?


( λ x y . x y ) ( λ y . y ) ( λ w . w )
= ( λ y . ( λ y . y ) y ) ... um ...


This doesn't seem right, because you can get a different answer than you would if the second function's variable names had been different from 'y'.  This is solved with alpha conversion, which basically says that you can rename a function's bound variables so that they don't conflict with another function's variables.


( λ x y . x y ) ( λ y . y ) ( λ w . w )
= ( λ y . ( λ y . y ) y ) ... quick alpha conversion 
= ( λ y . ( λ z . z ) y ) ( λ w . w )


There are also some other rules that involve Greek letters.  Beta reduction is function application done by rewriting a function's variable with the argument it's given.  This is just the function application we covered before.


There's also eta conversion, which says that a function that does nothing but pass its argument along to another function is equivalent to that function (as long as the variable doesn't show up free in that other function).  This comes in handy if you want to lazily or conditionally evaluate an equation.


( λ x . z x ) = z
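
In Python terms (with 'z' standing in as some arbitrary one-argument function of my own invention), wrapping a function this way doesn't change what it computes:

z = lambda n: n + 1              # a made-up stand-in for z
eta_expanded = lambda x: z(x)    # (λ x . z x)

print(z(10))              # -> 11
print(eta_expanded(10))   # -> 11, the same answer for any argument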




Also of note is the Y combinator.  So far I haven't mentioned any way to globally name a function, and that's because there is no way to globally name a function.  This is a problem: you can't have recursive functions, because a function has no global name it can use to call itself.  Lambda calculus doesn't have any other type of iteration, so things suddenly become kind of worthless unless we can find a way to pass a function into itself, so that it can call itself.


Y = λ f . ( λ x . f (x x) ) ( λ x . f (x x) )


The Y combinator doesn't really work in languages that evaluate all of a function's parameters before evaluating the function itself (however, it works fine in lazily evaluated languages, or if you are manually rewriting the function on paper).


Z = λ f . ( λ x . f ( λ y . x x y ) ) ( λ x . f ( λ y . x x y ) )


We just use the eta conversion trick we learned earlier to rewrite the Y combinator as an equivalent function, one that delays the self-application handed to 'f' until it is actually called.  I've seen this called the Z combinator in a couple of places.
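
Here is a small Python sketch of the Z combinator in action (Python is strict, so Y would loop forever here, but Z works; the factorial is just a made-up example to drive it):

# Z = λ f . ( λ x . f ( λ y . x x y ) ) ( λ x . f ( λ y . x x y ) )
Z = lambda f: (lambda x: f(lambda y: x(x)(y)))(lambda x: f(lambda y: x(x)(y)))

# Factorial written without naming itself; 'self' is the hook that Z wires
# back to the function so that it can call itself.
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(5))    # -> 120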




Numbers can be represented with functions, called Church numerals (after Alonzo Church, who invented lambda calculus).


Zero = λ f . λ v . v
One = λ f . λ v . f v
Two = λ f . λ v . f ( f v )
...
Ten = λ f . λ v . f ( f ( f ( f ( f ( f ( f ( f ( f ( f v ) ) ) ) ) ) ) ) )


This is the same idea behind the Peano axioms.  However, instead of having a number which is just a blob of data, these numbers represent calling a function on a value some number of times.  In other words, the number ten is like a loop which runs ten times.
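
Here's the encoding transcribed into Python, plus a little helper of my own for peeking at the results (it isn't part of the encoding, it just turns a Church numeral back into an ordinary int):

zero  = lambda f: lambda v: v
one   = lambda f: lambda v: f(v)
two   = lambda f: lambda v: f(f(v))
three = lambda f: lambda v: f(f(f(v)))

# Run the numeral with "add one" as f and 0 as v to get a Python int out.
to_int = lambda n: n(lambda k: k + 1)(0)

print(to_int(zero), to_int(one), to_int(two), to_int(three))    # -> 0 1 2 3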


Here's an increment function:


Inc = λ number . λ f . λ v . f ( number f v )


The intuition is pretty straightforward here.  Take a number, and then take the function and value that the number would have taken.  Run the number with the original f and v.  Finally, run the result one more time through f.  Conceptually it's like you've added one more f to the series of fs that originally created the number.
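
Transcribed into Python (reusing the encoding and the to_int helper from the sketch above, repeated here so it runs on its own):

# Inc = λ number . λ f . λ v . f ( number f v )
inc = lambda number: lambda f: lambda v: f(number(f)(v))

two    = lambda f: lambda v: f(f(v))
to_int = lambda n: n(lambda k: k + 1)(0)

print(to_int(inc(two)))    # -> 3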


After that addition isn't too difficult.


Add = λ number1 . λ number2 . number1 Inc number2


Here I'm just inserting the 'Inc' function's name instead of writing the whole thing out.  This isn't legal lambda calculus, but it is a good shortcut to make the equation recognizable.  I could just replace 'Inc' with the equation I wrote above to make my Add equation legal.


But anyway, what's going on with Add?  Well, I'm running the Inc function number1 times on the value number2.  In other words, I'm adding one to the second number a number of times equal to the first number.  Let's try it with normal numbers to get the intuition.


5 + 4
= 4 + 1 + 1 + 1 + 1 + 1
= 5 + 1 + 1 + 1 + 1
= 6 + 1 + 1 + 1
= 7 + 1 + 1 
= 8 + 1 
= 9


Okay, looks good.
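
The same check works in the Python sketch (again with the Inc name substituted in directly, and the helper functions repeated so it runs on its own):

# Add = λ number1 . λ number2 . number1 Inc number2
inc = lambda number: lambda f: lambda v: f(number(f)(v))
add = lambda number1: lambda number2: number1(inc)(number2)

to_int = lambda n: n(lambda k: k + 1)(0)
four   = lambda f: lambda v: f(f(f(f(v))))
five   = lambda f: lambda v: f(f(f(f(f(v)))))

print(to_int(add(five)(four)))    # -> 9, matching the trace above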


The decrement function is pretty crazy.  I had to look it up in order to figure it out, and even then I had to think about it for a very long time to convince myself about what it was doing and that it made sense.  There are a couple of different ways to do this; I'm picking the one I first got the hang of.


Dec = λ number . λ f . λ v . number ( λ a . λ b . b ( a f ) ) ( λ u . v ) ( λ x . x )


The best way to figure it out is probably just to run through it... Which I'm not going to do right now because it's going to take forever.  I'll try to write a simple lambda calculus machine sometime that works via parameter rewriting and saves all of its steps.  That way I can just implement decrement and then copy and paste the output.  
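
In the meantime, a quick Python transcription (helpers repeated from the earlier sketches) at least confirms that it behaves the way a decrement should:

# Dec = λ number . λ f . λ v . number ( λ a . λ b . b ( a f ) ) ( λ u . v ) ( λ x . x )
dec = lambda number: lambda f: lambda v: \
    number(lambda a: lambda b: b(a(f)))(lambda u: v)(lambda x: x)

to_int = lambda n: n(lambda k: k + 1)(0)
zero   = lambda f: lambda v: v
three  = lambda f: lambda v: f(f(f(v)))

print(to_int(dec(three)))    # -> 2
print(to_int(dec(zero)))     # -> 0 (decrementing zero stays at zero)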


There's also a bunch of other stuff that goes on with lambda calculus.  However, I don't appear to be that interested in it.  I like to know how the machine works, but I don't like wading through the endless commentary on the weird patterns that show up when the machine is actually working.  Sometimes that sort of thing is good to know because it will save you from spending your time figuring out things other people have already figured out (some of that stuff takes hundreds of years for people to figure out ... well, not for lambda calculus, because it hasn't been around that long, but things like Fermat's last theorem come to mind).  However, most of the time reading things other people have already figured out is boring, because it's like reading about how someone else had a very fun time solving a sudoku puzzle.  I want to be the one solving the sudoku puzzle, not reading about how someone else noticed that 3 was already used in this box so the correct number had to be 9.
