I’ve noticed that I have a much better time analyzing how code works than I do actually writing code. This touches on several ideas I’ve had in the past. Cognitive complexity classes so far haven’t turned out to be any sort of overarching “this is what problems really are” concept, but they might be a good way to describe how people approach, solve, and understand problems. Additionally, I have some unpublished ideas about the different ways people try to understand and mutate the world. Looks like I’m going to have to do a write-up about that soon. But the endgame is an expounding on how people are different and why that makes writing software hard, which I initially wrote about in .
It seems like it’s going to take a while to write everything up, but I’ve got a couple of things I want to get out here to start with. I’ll go back and fill in all the details that led up to this later.
Analysis is easier for me than actualization. It’s something along the lines of: I can analyze arbitrarily complex things and I enjoy the process, but when it comes to performing an action (like coding up some functionality) I want everything to be very simple. This has caused me problems with my interest in parsers, because parsers are kind of complex. Unless you use monadic parsers. Monadic parsers are pretty simple to use, but require either understanding category theory or a lot about how lambdas and higher-order functions work (or maybe a little of both). So I guess my point is that if I want to do something complex, I want to be using sub-lexical-scope-injecting, dependently typed macros. And that’s kind of weird. Also, this isn’t a “use cool new hip features because I’m a rockstar” type of thing. This is a “I can keep the type system, macro expansions, meta-object assignments, and higher-order anonymous functions straight in my mind, but I’ll take a cheese grater to my face if I have to deal with passing a widget to that function after assigning magic numbers to three properties in the right order and calling init on the ‘unrelated’ foofactory” type of thing. Hopefully a better explanation will be forthcoming in blog posts yet to come, but for now…
Three Steps For Writing Functions that Never Fail*
1) Always write functions that are obviously total
2) If the function can’t be total, write a DSL that generates correct inputs for the function. Only ever use the DSL and all inputs will be correct by construction.
3) If you can’t write a DSL because the inputs to the function come from some outside source (or some other situation prevents it), run all inputs through a parser that verifies they are valid for the function. Use a monadic parser composed of a bunch of simple, obviously total functions and monadic combinators that together form a correct-by-construction DSL. If at any point anything becomes too complex, recursively apply these rules.
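To make step 1 concrete, here’s a small sketch (the function names are mine, purely illustrative): a function is obviously total when every input of its declared type produces a result, with no case left undefined and no exception lurking.

```python
def clamp(n: int, low: int, high: int) -> int:
    """Obviously total: every (int, int, int) triple yields an int.
    The three branches exhaust all possibilities; nothing can fail."""
    if n < low:
        return low
    if n > high:
        return high
    return n


def head(xs: list) -> object:
    """Contrast: partial, not total -- undefined (raises IndexError)
    when xs is the empty list."""
    return xs[0]
```

The point isn’t that `head` is badly written; it’s that its failure case is invisible at the call site, while `clamp`’s totality can be checked by reading it.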
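Step 2 might look like this sketch (again, names are mine): the “DSL” is just a smart constructor that is the only way to build the function’s input type, so every value that reaches the function is valid by construction rather than by validation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Channel:
    """A color channel. Only ever built via channel() below."""
    value: int


def channel(n: int) -> Channel:
    # The DSL's single entry point: every Channel it builds is in
    # 0..255, so to_hex is total over the inputs it actually receives.
    return Channel(max(0, min(255, n)))


def to_hex(r: Channel, g: Channel, b: Channel) -> str:
    # No range checks needed here: correctness was established upstream.
    return "#{:02x}{:02x}{:02x}".format(r.value, g.value, b.value)
```

As long as callers go through `channel()` instead of constructing `Channel` directly, `to_hex` never sees an out-of-range value and never needs a failure path.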
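And step 3, sketched as a minimal monadic parser (a toy of my own, not any particular library): a parser is a function from input text to either `(value, rest)` or `None`, each building block is a small obviously total function, and `bind` is the monadic combinator that sequences them.

```python
def pure(x):
    """Lift a plain value into a parser that consumes nothing."""
    return lambda s: (x, s)


def char(c):
    """Parser for one specific character."""
    def p(s):
        return (c, s[1:]) if s[:1] == c else None
    return p


def digit(s):
    """Parser for a single decimal digit, yielding its int value."""
    return (int(s[0]), s[1:]) if s[:1].isdigit() else None


def bind(p, f):
    """Monadic bind: run p; on success, feed its value to f to get
    the next parser. Failure (None) short-circuits."""
    def q(s):
        r = p(s)
        if r is None:
            return None
        value, rest = r
        return f(value)(rest)
    return q


# A parser for "<digit>+<digit>". Only inputs that parse ever reach
# the addition, so the arithmetic itself never sees bad data.
add_expr = bind(digit, lambda a:
           bind(char("+"), lambda _:
           bind(digit, lambda b:
           pure(a + b))))
```

So `add_expr("1+2")` succeeds while `add_expr("1-2")` returns `None`: invalid outside input is rejected at the boundary, and everything past the parser can assume validity.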
I don’t know if all interesting programs can be written following this strategy (or even the smaller set of all programs *I’m* interested in), but I’ll see if I can’t look into that and write a follow-on blog post focusing on that question.
* - This process seems like it should work for me. Unfortunately, this gets into ‘people are different’ territory and we’ll have to wait until later for a digression into that.