Wednesday, August 6, 2014

two three one

The Feynman problem-solving algorithm was a joke that Murray Gell-Mann [0] made about the seemingly magical way Richard Feynman came up with solutions to problems (a trait noted by more than one person [12]).  The algorithm works like this:  1) write down the problem, 2) think very hard, 3) write down the solution.  While I think the reason this is a joke is pretty obvious, the frustrating thing is that occasionally this is the only honest way to proceed with solving a problem.  Looking back on the landscape of dead ends, blind alleys, and failed techniques, the victors are forced to concede that the answer really did come from the 20-minute break when they went off to grab some yogurt.  The solution came from a process that was not only discontinuous but, as far as anyone can tell, actively disjoint from the rest of reality.

I’ve got a different algorithm that I go through, although alas it seems rather less directed than Feynman’s:  1) think very hard, 2) write down the solution, 3) write down the problem.  Like … it’s nice when you realize that what you’ve been thinking about actually applies to things the rest of the world cares about; however, the process of getting there involves thoughts that are difficult to justify.

What I’ve been thinking about ‘recently’ feels like a *thing*, but unfortunately I’m pretty deep into the first step of Inman’s problem-solving algorithm.  This might simply be me rediscovering something that everyone else is already aware of, or it might be me noticing a bunch of suspicious-looking things that in reality have no actual relation to one another.  Hard to tell, but I want to try to express some of this.

This probably started with type theory, which then led to category theory.  It turns out you can use F-algebras [3] to convert a recursive data structure into a non-recursive data structure.  And I guess this is just the category theory description of work from domain theory [4] (check out lecture 2 about recursively defined programs).  Self-referential mathematical objects caused a lot of problems in set theory, so there was some interest in eliminating these problems from recursive definitions in lambda calculus in order to ensure there were no formal issues that would prevent it from being used as a model for denotational semantics.  And while I’m on the domain theory topic, I’ll go ahead and mention that domain theory’s creator, Dana Scott, also came up with a rather neat observation:  computable functions are just Scott-continuous functions [13].
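
To make the F-algebra bit concrete, here’s a minimal Haskell sketch of the standard construction (my toy example, not anything taken from [3] or [4]): the recursion in a list type gets factored out into a non-recursive “shape” functor, and a catamorphism folds the fixed point back down.

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A non-recursive "shape" functor for lists of Ints; the recursion has been
-- factored out into the type parameter r.
data ListF r = Nil | Cons Int r deriving Functor

-- Tie the recursive knot explicitly with a fixed point.
newtype Fix f = Fix (f (Fix f))

-- An F-algebra is just a function (f a -> a); the catamorphism folds a
-- Fix f down to an a using it.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (Fix x) = alg (fmap (cata alg) x)

-- Example: summing a "list" expressed as an algebra over ListF.
sumAlg :: ListF Int -> Int
sumAlg Nil        = 0
sumAlg (Cons n r) = n + r

example :: Int
example = cata sumAlg (Fix (Cons 1 (Fix (Cons 2 (Fix Nil)))))  -- 3
```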

Okay, so far we’ve seen two weird things.  The first is that there is some sort of relationship between recursive data and non-recursive data.  The second is that we can treat computability as a mathematical object.  Let’s move on.

While poking around in category theory and F-algebras, of course I couldn’t help noticing discussions regarding corecursion [2].  Specifically, an HN user, Pitarou, mentioned something about corecursion and transforms [1].  The gist of the comment is that there is an asserted, but not proved, path from a recursive algorithm to a stream algorithm via corecursion, and then from the stream algorithm to an iterative algorithm via deforestation techniques and tail call optimization.
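
Here’s roughly how I read Pitarou’s asserted path, as a toy Haskell sketch of my own (nothing here is from [1], and the names are mine): a recursive factorial, the same computation re-expressed corecursively as an unfold into a stream, and then the stream consumed by a strict left fold, which is the iterative loop.  Deforestation is the part that, in principle, removes the intermediate stream entirely.

```haskell
import Data.List (foldl', unfoldr)

-- Plain recursive version.
factRec :: Integer -> Integer
factRec 0 = 1
factRec n = n * factRec (n - 1)

-- Corecursive version: produce the stream [1..n] by unfolding a seed...
factors :: Integer -> [Integer]
factors n = unfoldr step 1
  where
    step k | k > n     = Nothing
           | otherwise = Just (k, k + 1)

-- ...then consume it with a strict left fold, i.e. an iterative loop.
factIter :: Integer -> Integer
factIter n = foldl' (*) 1 (factors n)
```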

Much like the first observation concerning a transform from recursive data structures to non-recursive data structures, we now see a transform from recursive algorithms to iterative algorithms.  Granted, we don’t see the same rigor as we do in domain theory’s explorations; however, the idea itself is the important bit.  Keep moving.

Derandomization [5] is a technique for converting a randomized algorithm into a non-randomized algorithm, possibly with some slowdown.  There are a bunch of techniques for accomplishing this, but I don’t think there’s any proof yet that all randomized algorithms can be derandomized (although I believe it is widely conjectured that this is the case).  This of course sounds vaguely similar to Pitarou’s recursive-algorithm-to-iterative-algorithm transform: a thing that sounds about right, but with no proof that it’s possible for all inhabitants of its space.
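
The crudest derandomization technique is simply enumerating every possible string of random bits and taking the majority answer: fully deterministic, but exponential in the number of random bits.  A toy Haskell sketch of that idea (the `algo` argument is a hypothetical randomized decision procedure, not any particular algorithm from [5]):

```haskell
-- A seed is a string of k random bits.
type Seed = [Bool]

-- Naive derandomization: run the (hypothetical) randomized decision
-- procedure on every one of the 2^k possible seeds and take the majority
-- vote.  Deterministic, but exponentially slower in k.
derandomize :: Int -> (Seed -> a -> Bool) -> a -> Bool
derandomize k algo x = 2 * yes > length seeds
  where
    seeds = sequence (replicate k [False, True])  -- all 2^k seeds
    yes   = length (filter (\s -> algo s x) seeds)
```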

Relatedly (because complexity theory), there is the subtitle to Scott Aaronson’s blog [6]:  “Quantum computers … can be simulated classically with exponential slowdown.”  While I’m aware that blog subtitles aren’t proofs, I have heard this before from several independent sources (and Scott Aaronson is a much better complexity theorist than I will probably ever be), so let’s call this one “proved” for the time being.
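
The “exponential slowdown” part is easy to see in the most direct classical simulation: an n-qubit state is a vector of 2^n complex amplitudes, and even a single one-qubit gate has to walk all of them.  A toy sketch, assuming the usual state-vector representation (my illustration, not anything from [6]):

```haskell
import Data.Bits (bit, testBit, xor)
import Data.Complex (Complex)

-- An n-qubit state is a vector of 2^n complex amplitudes; index i holds the
-- amplitude of basis state |i>.
type State = [Complex Double]

-- Apply a Hadamard gate to qubit q (counting from the least significant bit).
-- Even this single one-qubit gate touches all 2^n amplitudes, which is where
-- the exponential cost of classical simulation comes from.
hadamard :: Int -> State -> State
hadamard q amps = [ update i a | (i, a) <- zip [0 :: Int ..] amps ]
  where
    h = 1 / sqrt 2
    update i a =
      let b = amps !! (i `xor` bit q)  -- partner state with qubit q flipped
      in  if testBit i q then h * (b - a) else h * (a + b)
```

For instance, `hadamard 0 [1, 0]` gives roughly `[0.707, 0.707]`, the usual equal superposition of a single qubit.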

Interesting: we’ve got a relationship between recursive data and non-recursive data and a relationship between classical computers and quantum computers, both of which stand on firm theoretical grounding.  Additionally, we’ve got derandomization, which sounds good but is unproven, and a recursive-to-iterative transform that also sounds good but is unproven.  All of that seems vaguely like the same sorts of things are going on (in some sense of those words).

Along comes HN user tel, who starts a pretty interesting blog series concerning mutable algorithms in an immutable context [7].  Meanwhile, HN user jerf clarifies and indicates that a transform from mutable algorithms to immutable algorithms [8] really ought to exist.  I’m not sure that I buy jerf’s argument, but that’s just because I haven’t been able to get the whole algorithm into my head yet.  I think there’s some ambiguity in the description that will be resolved if I actually step through it manually with an example; however, I’m not sure what problem I’m actually trying to solve, so I haven’t convinced myself to go through that exercise yet.  Either way, I’m already dealing with several other transforms that don’t have “totality” proofs yet, so I’m perfectly happy placing this (rather incredible, even if only partial) transform in a pending state mentally for now.
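
For what it’s worth, the textbook version of this kind of transform (a minimal sketch of the standard trick, not tel’s construction from [7] or jerf’s argument from [8]) is to fake a mutable store by threading an immutable map through a State monad:

```haskell
import Control.Monad.State
import qualified Data.Map.Strict as M

-- Fake a tiny mutable store purely: "references" are just Int keys into an
-- immutable map, and the map is threaded through a State monad.
type Ref   = Int
type Store = M.Map Ref Int

newRef :: Int -> State Store Ref
newRef v = do
  store <- get
  let r = M.size store
  put (M.insert r v store)
  return r

readRef :: Ref -> State Store Int
readRef r = gets (\store -> store M.! r)

writeRef :: Ref -> Int -> State Store ()
writeRef r v = modify (M.insert r v)

-- An imperative-looking sum over a list, written against the fake store.
sumImperative :: [Int] -> Int
sumImperative xs = evalState go M.empty
  where
    go = do
      acc <- newRef 0
      mapM_ (\x -> readRef acc >>= \a -> writeRef acc (a + x)) xs
      readRef acc
```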

The point is that this is yet another computer science transform from one side of the fence to the other.  Normally the discourse concerns iterative vs recursive, or immutable vs mutable, or classical vs quantum.  However, again and again I’m bumping into arguments that these things are really the same thing, or that perhaps one of them is just the ‘bigger’ version of the other.  Okay, only two more things to work through and I think I’m done.

I’ve long thought that Lisp macros [10] and Λ abstraction in System F [11] feel really similar.  I’m not exactly sure what I’m suggesting here, but it feels sort of like it belongs in the same (or a similar) category as the rest of the transforms here.
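
I can’t do the macro half justice without switching to Lisp, but for reference, here’s what Λ abstraction looks like in practice via GHC’s explicit type applications: the capital lambda abstracts over a type the way the ordinary lambda abstracts over a value (my illustration only, not a claim about how deep the resemblance goes).

```haskell
{-# LANGUAGE ExplicitForAll, TypeApplications #-}

-- In System F notation:  pair = Λa. λ(x : a). (x, x)
-- The capital lambda abstracts over a *type*; the small lambda over a value.
pair :: forall a. a -> (a, a)
pair x = (x, x)

-- Type application supplies the type argument explicitly, the same way an
-- ordinary application supplies a value argument.
intPair :: (Int, Int)
intPair = pair @Int 3

strPair :: (String, String)
strPair = pair @String "hi"
```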

Finally, I wrote kind of a weird monadic parser [9].  I’ve also used a similar method to convert a data structure that had a lot of implicit properties into a data structure that made those properties explicit.  The idea being that maybe there’s some sort of relationship between unstructured data and structured data that can be represented by monads, and maybe that sort of thing is similar to some of the other ideas I’ve been encountering.
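
For flavor, here’s the shape these monadic parsers usually take (a minimal sketch, not the actual parser from [9]): unstructured input goes in as a String, and structured output plus the leftover input comes out.

```haskell
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> fmap (\(a, rest) -> (f a, rest)) (p s)

instance Applicative Parser where
  pure a = Parser $ \s -> Just (a, s)
  Parser pf <*> Parser pa = Parser $ \s -> do
    (f, s')  <- pf s
    (a, s'') <- pa s'
    Just (f a, s'')

instance Monad Parser where
  Parser p >>= k = Parser $ \s -> do
    (a, s') <- p s
    runParser (k a) s'

-- Primitive: consume one character satisfying a predicate.
satisfy :: (Char -> Bool) -> Parser Char
satisfy ok = Parser $ \s -> case s of
  (c:rest) | ok c -> Just (c, rest)
  _               -> Nothing

digit :: Parser Char
digit = satisfy (`elem` ['0'..'9'])

-- Example: impose structure on an unstructured "x,y" string.
pairP :: Parser (Char, Char)
pairP = do
  x <- digit
  _ <- satisfy (== ',')
  y <- digit
  return (x, y)

-- runParser pairP "3,7rest"  ==  Just (('3','7'), "rest")
```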

Short story long.  It looks a lot like *something* is going on between things that are normally thought of as opposite ends of the spectrum.