## Thursday, August 15, 2013

### Cognitive Complexity Classes: Shapes

To expound upon the different types of Cognitive Complexity Classes (CCCs), further attention must be paid to the types of shapes that algorithms may be composed of.  This is unfortunate because my current concept of a "shape" of an algorithm is overly vague.  So, to be able to focus on what makes any given CCC different from any other given CCC (or for that matter what makes any two CCCs the same … or even what types of sameness are needed to support this system), I first need to flesh out the idea of algorithmic shape.

#### Shapes on the cave wall

So my initial description of an algorithm's shape invoked the imagery of the input and output of a function.  While that might have been a good way to break the ice, that description is hardly ideal.  After all, if you have a function that takes a graph and then returns the exact same graph, it would be intellectually dishonest to describe it as having anything other than a scalar shape.  Similarly, if you have a TreeMap function that maps all the elements in a tree using a given function and then returns the same tree with different elements, that seems a lot like a List shape.  You are doing operations on trees, but the operation itself is very linear.  Given a function that traverses an entire tree and a function that traverses an entire list, the Map function is nearly identical between trees and lists (or for that matter scalars and graphs).
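To make that concrete, here's a minimal Python sketch (Python is my choice here, and encoding a tree as nested tuples is an assumption for illustration).  The traversal code differs between the two, but both maps have the same linear shape of work: visit every element exactly once, apply the function, and rebuild the container.

```python
def map_list(f, xs):
    # List shape: one linear pass over the elements.
    return [f(x) for x in xs]

def map_tree(f, t):
    # A "tree" here is either a leaf value or a tuple of subtrees
    # (a hypothetical encoding, just for illustration).
    if isinstance(t, tuple):
        return tuple(map_tree(f, child) for child in t)
    return f(t)

# Both operations touch every element exactly once and preserve
# the container's structure -- the same shape of work.
doubled_list = map_list(lambda x: x * 2, [1, 2, 3])
doubled_tree = map_tree(lambda x: x * 2, (1, (2, 3), 4))
```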

Comparison sorts are a pretty good class of algorithms to consider.  If you create a Merge sort algorithm that recursively breaks apart the list you want to sort and then glues it back together in the desired order, then you will end up with a tree shape across time.  On the other hand if you create a Heap data structure and then insert and remove all of the elements in your list, then you will end up with a tree shape across space.  This leads me to believe that comparison sort algorithms have a tree shape to them.
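Both pictures can be sketched in Python (again my language choice, not the post's): merge sort's recursive calls trace out a binary tree across time, while heap sort materializes a tree across space inside the standard-library `heapq` structure.

```python
import heapq

def merge_sort(xs):
    # The call structure is a binary tree "across time": each call
    # splits the list in two, recurses, then merges the results.
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def heap_sort(xs):
    # The heap is a tree "across space": the comparisons live in an
    # explicit tree-shaped structure instead of in the call tree.
    h = list(xs)
    heapq.heapify(h)
    return [heapq.heappop(h) for _ in range(len(h))]
```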

There are two interesting concepts that emerge from algorithmic shape that I want to discuss.  The first one is that different programming languages have different "dimensions", which are determined by the features available in them.  So I assert that a programming language that has recursion has one more dimension than a programming language that does not have recursion.  If you want to implement a comparison sort algorithm in a language without recursion (or you simply want to implement it without recursion), then you have to "flatten" out the comparison sort algorithm.  With recursion the algorithm can take its natural shape; without recursion you have to figure out how to represent the algorithm's "true form" in a flatter context.
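Here's what I mean by flattening, as a small Python sketch (assuming a tree encoded as nested tuples, purely for illustration): the same tree-shaped computation written once with recursion and once with an explicit stack standing in for the call stack.

```python
def tree_sum_recursive(t):
    # With recursion, the algorithm takes its natural tree shape.
    if isinstance(t, tuple):
        return sum(tree_sum_recursive(child) for child in t)
    return t

def tree_sum_flattened(t):
    # Without recursion, the tree shape must be simulated by hand:
    # an explicit stack carries the structure that the call stack
    # would otherwise give us for free.
    total, stack = 0, [t]
    while stack:
        node = stack.pop()
        if isinstance(node, tuple):
            stack.extend(node)
        else:
            total += node
    return total
```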

Now I'm kind of alluding to something that I'm sure is going to end up getting me in a lot of trouble.  Best case is that I escape by being painted as overly naive, ambitious, and/or idealistic.  So, flattening representations isn't that big of a deal.  We do this all the time.  We flatten 3D geometric shapes so that they can be represented on a 2D piece of paper.  We represent 2D matrices as arrays of 1D arrays.  I'm sure that "flattening" an algorithm's representation is something that can be formalized.  Now on the other hand, "true form" is clearly getting way too close to philosophic digressions or, even worse, poetic ruminations; I really don't see how I can hope to get away with that one without something concrete or at the very least wildly convincing.
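The matrix case made concrete, as a small Python sketch: a 2D matrix flattened into a 1D list, with a row-major index formula recovering any element.  Nothing is lost in the flattening; you just need the formula to read the structure back out.

```python
def flatten(matrix):
    # Flatten a 2D matrix into a 1D list, row by row (row-major order).
    return [x for row in matrix for x in row]

def at(flat, ncols, i, j):
    # Recover element (i, j) of the original matrix from the
    # flattened representation.
    return flat[i * ncols + j]

matrix = [[1, 2, 3],
          [4, 5, 6]]
flat = flatten(matrix)
```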

So, Yuri Gurevich has a very interesting series[1] on Microsoft's Channel 9 research website.  I'm sure I haven't devoted enough time to fully digest everything he said during that series, but a couple of things did stick that I think spell trouble for the direction I'm hoping to go in.  Turing argued very convincingly about Turing computable functions, but he didn't argue at all concerning Turing complete algorithms.  So while the fact that essentially every programming language is Turing complete means that we can compute any computable function in every programming language, it does not mean that we can implement every algorithm in every programming language.  And of course this makes sense if you think about it.  In the C programming language you could write an algorithm that writes a bunch of data to a file, calls a Unix command line utility to process that file, and finally reads the result from the Unix command to complete the algorithm.  Implementing this algorithm in lambda calculus is impossible.  Lambda calculus doesn't even recognize that files exist, let alone have facilities to call out to Unix command line utilities.

Professor Gurevich, if I recall correctly, argues that the class of digital algorithms cannot possibly contain the class of all algorithms.  Analog algorithms, for example, make use of the reals, which can't be contained in the natural numbers, let alone in digital memory.  Not to mention that any all-encompassing theory of algorithms would have to contain all future human innovation (who foresaw quantum computers back when the Greeks were first coming up with GCD?).

I'm fairly sure that I'm some flavor of constructivist[2], which I believe Professor Gurevich wasn't too keen on in his video series.  And I suspect that this might explain my idealistic "true form" sentiments.  After all, if the only things that *really* exist are things that can be constructed, then all you have to do is ensure that the Comparison Sort Problem Constructor is only called one time, and it is going to be pretty hard to argue that the result is anything other than the "true form" of comparison sorts.  Of course in that last sentence I just spilled the beans as to why the road ahead of me is likely going to be a tough one, but let's get back to that in a little bit.

First though, let's get to that second interesting concept that I mentioned quite a bit ago.  Let's go ahead and assume that my algorithmic shape idea has all the merit in the world.  And let's assume that programming language features really do correspond to different shapes.  In that case, if statements and simple pattern matching seem to correspond pretty closely to scalar complexity.  Loops, map, fold, unfold, et al. seem to fit nicely with list complexity.  Recursion and algebraic data types are pretty clearly tree complex.  I'm not quite sure what would correspond with graph complexity.  Some sort of state machine maybe?  Well, at the very least an actual graph would be graph complex.  Of course we've got an interesting progression here.
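The first three rungs of that ladder can be sketched in Python (a loose illustration, not a formal claim; the nested-tuple tree encoding is an assumption): a conditional for scalar complexity, a fold for list complexity, and recursion mirroring branching structure for tree complexity.

```python
from functools import reduce

def sign(x):
    # Scalar complexity: a single conditional on one value.
    return -1 if x < 0 else (1 if x > 0 else 0)

def product(xs):
    # List complexity: a fold collapses a sequence in one linear pass,
    # no recursion over structure required.
    return reduce(lambda acc, x: acc * x, xs, 1)

def depth(t):
    # Tree complexity: recursion mirrors the branching structure
    # of a tree encoded as nested tuples.
    if isinstance(t, tuple):
        return 1 + max(depth(child) for child in t)
    return 0
```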

Lists are a generalization of scalars.  You got one thing, now you got lots of things.  Trees are generalizations of lists.  Graphs are generalizations of trees.  And hypergraphs[3] are generalizations of graphs.  Now in my previous essay I suggested that maybe the most complex thing we have to worry about is graph complexity, because you can implement lambda calculus (a Turing complete language) with a graph reduction machine.  But on the other hand, let's go with the flow for a minute.  What kind of programming language features might show up to make graph complexity as easy to reason about as a Map operation with parametric polymorphism?  What about hypergraphs?  There might be some very powerful abstraction techniques out there, and these complexity shapes might provide some insight into developing them.

#### Constructivism and the overly poetic imagination

So mathematical constructivism is probably my type of thing.  Turing completeness kind of gets in the way of a constructive view of things.  My thinking goes kind of like this: if the only things that exist are things that can be constructed, then equivalence is just a hop, skip, and a jump away.  Just trace things back to where they were constructed, and if it's the same constructor, then you're in business.  Unfortunately, with Turing completeness we can't even know in general whether an arbitrary function halts [4,5].  Forget about showing equivalence between two of these indeterminately halting beasts.
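A toy Python sketch of that "trace it back to the constructor" intuition (everything here is hypothetical, invented just to make the idea concrete): values remember which constructor built them and with what arguments, and two values are equivalent exactly when their construction histories match.

```python
class Constructed:
    # A value that remembers its construction: the constructor's name
    # and the arguments it was built from.
    def __init__(self, name, *args):
        self.name, self.args = name, args

    def __eq__(self, other):
        # Equivalence = same constructor applied to equivalent
        # arguments, checked recursively through self.args.
        return (isinstance(other, Constructed)
                and self.name == other.name
                and self.args == other.args)

# Hypothetical constructors for a tiny tree type.
def Leaf(x):
    return Constructed("Leaf", x)

def Node(left, right):
    return Constructed("Node", left, right)
```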

With two Turing complete functions, we can't determine if they're the same thing.  But who's talking about functions?  I wasn't.  I was talking about algorithms, which are an even weaker concept than functions.  At least we have a definition for a Turing computable function.  Algorithms are pretty vague, and tomorrow a new class of them might show up out of nowhere.

But then again, who's talking about algorithms?  I wasn't.  Not really.  In my constructivist heart I was talking about some sort of "true form" not for algorithms, but for problems.  You can't have a true form without equivalence.  At least if you want to stay outside the realm of poetry.  I mean, how would you show that something was in its true form if you're not equating it to *the* true form?  And in Turing complete land you can't even have function equivalence, let alone algorithmic equivalence.  And Turing complete algorithms aren't even a thing.  How do you have a true form for something that isn't even definable?  Now I'm talking about Problems.  Like not even implementations of algorithms for something, but some sort of "higher plane of existence" where the Problem itself is a concrete thing.  Pretty ridiculous.

Although … on the other hand … Turing completeness kind of sucks.  I mean, don't get me wrong.  All things being equal, it's a pretty nice safety blanket.  After all, you wouldn't want to be stuck up the river without a Turing complete paddle to implement that function which, it turns out, can only be implemented on a Turing machine.  However, a function that can be generically shown to be total is a function that is a lot easier to understand.  If you had a choice between a function that was mathematically shown to always return a value no matter what value you pass to it and a Turing computable function that had a "quite impressive" 90% test coverage, you're going to choose Mr. Turing-Complete-Is-For-Chumps.  Granted, I might be overestimating the typical programmer's fear of ⊥; however, I like to think the informed decision is to ditch Turing completeness when you don't actually need it.

So perhaps my Problem-as-a-concrete-object idea isn't as far-fetched as it sounds.  You do have to get rid of Turing completeness, or at the very least have a subset of your workspace reserved for provably halting functions.  Hmm.  Homotopy type theory[6] supposedly says some interesting things about equivalence.  I guess it's time to hit the books.