Monday, April 27, 2015

Programming Language Feature Diagrams as Cognitive Complexity

After observing things in the software engineering world for about eight years now, I’m vaguely concerned that people approach problems in fundamentally incompatible ways.  The adage “the right tool for the job” makes a lot of sense when you compare web programming to embedded development, but I’m afraid that “the right tool for the person” is something we’re also going to have to come to terms with.

Regardless of how we come to terms with differences between individuals, I still really want a universal metric for measuring how difficult a problem actually is.  I’ve made several attempts at this [1][3], but so far I haven’t come up with anything that would let me assign a number to a problem indicating how difficult it is to cognitively comprehend.  An early attempt was Cognitive Complexity Classes [3].  Basically, I noticed that working with lists is pretty easy, but working with a graph is kind of difficult, so I hypothesized that you could classify algorithms based on some associated data structure that *matches* the algorithm.  I don’t think that idea is really valid; however, with my current line of thinking, it’s actually surprisingly close.

My most recent attempt at cracking this problem was Mental Comprehension Complexity [1].  This guy might actually be viable, but it would need a lot of refinement before it starts to yield results.  The gist is that I hypothesize that a function with simple inputs and outputs but a very complex internal nature would be difficult to comprehend.  Simple vs. complex would be measured by Kolmogorov Complexity [4] or something similar to it.  I don’t think Mental Comprehension Complexity is going to go very well, because in order to handle inputs/outputs that contain Turing-complete features (e.g. an object that has methods attached to it) you’ll need to do a bunch of work to represent the algorithm in terms of an effectively computable function (like maybe a Turing machine).  In the end I’m not sure the mathematical statements we’d be coming up with would actually apply to the medium that programmers typically work with (something closer to C#).  And that kind of misses the goal I was hoping for (mental or cognitive complexity, not some sort of algorithmic complexity).


Consider the number 5

So start by considering the number 5.  It’s about the simplest thing you can consider; if you encountered it in a programming context, you wouldn’t be intimidated by having to keep track of it mentally.  However, if we add constraints to the number 5, it begins to become more problematic.  Maybe it’s important that this number be odd.  Or maybe it’s important that it be prime.  Or maybe a triangular number.  As we add constraints, the amount of mental effort we have to put forth increases.  I think this is because with a single unconstrained number the space we have to consider is “any one of all the things”, but once our space has constraints we have to consider “one of the things, but not any of the following, based on this complex formula.”
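To make that concrete, here’s a minimal sketch in Python (the particular predicates are just stand-ins I picked; they aren’t part of any proposed metric).  Each extra predicate is one more rule you have to hold in your head before you can say whether a given value is legal:

```python
# Sketch: every added constraint shrinks the space of legal values and adds a
# rule the reader has to carry around.  These predicates are only illustrative.

def is_odd(n: int) -> bool:
    return n % 2 == 1

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def is_triangular(n: int) -> bool:
    # n is triangular if it equals k*(k+1)/2 for some k
    total, k = 0, 0
    while total < n:
        k += 1
        total += k
    return total == n

x = 5
constraints = [is_odd, is_prime, is_triangular]
print([check(x) for check in constraints])  # [True, True, False]
```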

So take another step up the Cognitive Complexity Class ladder.  A list of *things* is pretty simple to comprehend, but a list of things that all have constraints is going to be about as hard to comprehend as a constrained number.  And it can get so much worse.  Think about a sorted list.  Now each item in the list comes with rules that depend on the other items in the list.  This can get arbitrarily complex.  Perhaps the second element has certain constraints, but only if the fifth element possesses other constraints.
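Here’s a rough sketch of the difference in Python (the checks are invented for illustration).  A per-element constraint can be judged one item at a time; a cross-element constraint like sortedness, or the “second element depends on the fifth” case, forces you to think about the list as a whole:

```python
# Sketch: per-element constraints vs. constraints that relate elements to each
# other.  These checks are only illustrative, not part of a real metric.

def all_odd(xs):
    # per-element: each item can be judged on its own
    return all(x % 2 == 1 for x in xs)

def is_sorted(xs):
    # cross-element: every item is constrained by its neighbor
    return all(a <= b for a, b in zip(xs, xs[1:]))

def tangled(xs):
    # arbitrarily tangled: the second element must be even,
    # but only if the fifth element is larger than 100
    if len(xs) >= 5 and xs[4] > 100:
        return xs[1] % 2 == 0
    return True

xs = [1, 3, 5, 7, 9]
print(all_odd(xs), is_sorted(xs), tangled(xs))  # True True True
```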

If we ascend into tree data structures, we can have entire subtrees that have their own special rules based off of the constraints we place on their roots.  And we can go that much further with graph data structures.
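A binary-search-tree-style invariant is a handy (if standard) example of what I mean: every node puts bounds on everything below it, so a whole subtree’s legal values are determined by constraints placed at its root.  This is just a sketch, nothing specific to PLFDs:

```python
# Sketch: each node narrows the range of values its entire subtree may contain.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def valid_bst(node: Optional[Node], lo=float("-inf"), hi=float("inf")) -> bool:
    if node is None:
        return True
    if not (lo < node.value < hi):
        return False
    # the constraint on this node becomes a constraint on both subtrees
    return valid_bst(node.left, lo, node.value) and valid_bst(node.right, node.value, hi)

tree = Node(8, Node(3, Node(1), Node(6)), Node(10, None, Node(14)))
print(valid_bst(tree))  # True
```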

This isn’t Cognitive Complexity Classes, because that framework assumed you could see algorithms as having the “essence” of certain data structures.  However, I am borrowing the same basic progression to build an approximate feel for what complex, difficult-to-comprehend things look like.  I think the basic forces involved are the size of the space we are working in and the number of constraints that have been put on the elements in that space.  Additionally, I suspect the nature of the grouping of the elements in any given space is relevant as well.  If we have a simple property like even or odd, that’s not too hard to comprehend; if we have something harder to predict, like primality, then we have to spend additional effort to understand when we are looking at a valid element in that space.

However, this only covers static things like data.  Is there a way to extend the idea to handle things like objects with methods, or a function that accesses global mutable data?


Enter Programming Language Feature Diagrams (PLFD)

I came up with an idea a while back about how to talk about certain types of programming language features [2].  I didn’t really like the statement “exceptions are exceptional” because it didn’t actually explain anything that was going on.  So I drew a tree-shaped structure with a box at each node and said, “exceptions go up this thing until they hit a circle (representing a try/catch statement).”  I like PLFDs because they let you think of features in terms of what they do, not in terms of when you would use them.  Thinking of an exception as a dynamic stack communication device is better than thinking of it as being for exceptional situations, because in the latter case you just know to use them sparingly, but in the former case you know there needs to be a try/catch statement above you in the call stack or else you’ll crash the program (not the best user experience).  Additionally, when you see a subtree drawn out with an indicator of where the relevant try/catch lives, you know how you can refactor a program without having to worry about adding additional try/catch statements to retain the semantics of the original un-refactored program.
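To make the “dynamic stack communication device” reading concrete, here’s a small sketch (in Python rather than C#, with made-up function names).  The try/except plays the role of the circle in the diagram; as long as it stays above the whole subtree of calls, you can refactor the functions underneath it without changing where the error ends up:

```python
# Sketch: an exception travels up the call stack until some frame above it has
# a handler.  The function names here are invented for illustration.

def parse_record(line: str) -> int:
    # deep in the call tree: signals failure upward instead of returning a flag
    return int(line)  # raises ValueError on bad input

def load_file(lines):
    return [parse_record(line) for line in lines]

def main():
    try:  # the "circle": the handler sitting above the whole subtree of calls
        print(load_file(["1", "2", "oops", "4"]))
    except ValueError as err:
        print(f"bad input somewhere below: {err}")

main()  # prints: bad input somewhere below: invalid literal for int() ...
```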

These diagrams can be used to measure cognitive comprehension complexity in much the same way as we did above with the data structures.

I noticed that PLFDs are graphs (lazily evaluated graphs, I guess, but totally graphs) with a bunch of very strange constraints on them.  Consider a mutable reference.  When you initialize a mutable reference in a function and then pass it along to the functions that function calls, what you’re actually doing is setting up a bunch of different constraints between all the functions that make use of or alter that reference.  This is exactly the situation we had above with the data structures, except instead of a chunk of data that exists at one time we have a chunk of data that exists over time.  However, if you kept track of each value returned by every function call, you could fill out the entire call tree as if it were a data structure.
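Here’s a tiny sketch of that (Python, with an invented counter example).  The shared dictionary is the mutable reference; each function that touches it imposes ordering constraints on the others, and recording the value after each call is what “filling out the call tree” would look like:

```python
# Sketch: a mutable reference shared between functions is a data structure
# smeared out over time.  The functions here are invented for illustration.

def step_a(state: dict) -> None:
    state["count"] += 1   # only legal because run() set "count" before calling us

def step_b(state: dict) -> None:
    state["count"] *= 2   # its effect depends on what step_a already did

def run() -> dict:
    state = {"count": 0}  # the reference is initialized here...
    step_a(state)         # ...and every callee below is now constrained by it
    step_b(state)         # swapping these two calls changes the final value
    return state

print(run())  # {'count': 2}; calling step_b before step_a would give {'count': 1}
```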

Different features used in different ways imply different sets of constraints on the final program.  How big the space is and how many and what kind of constraints you have determine how hard it is to understand the program.

Gist

If you can create a programming language feature diagram for every feature in a programming language, then you should be able to measure the cognitive comprehension complexity of an entire program (including highlighting the sub-complexity of sub-parts of the program, assuming the program is that well structured).  For data I suspect you really need some sort of dependent types to let the system know how cognitively complex the values actually are.  But this system should be able to handle data structures, objects, and higher-order functions.  Because a method is just another diagram that travels around with data, you should be able to use this technique to handle higher-order (and Turing-complete) situations in the same terms the programmer is using, which will hopefully mean that any conclusions you draw from cognitive complexity will actually apply to the programs people write rather than to mathematical constructs used for proofs.
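As a rough illustration of “a method is just another diagram that travels around with data,” here’s a sketch (the Validated class is something I made up for this example).  Once a value carries behavior, comprehending the value means comprehending the attached behavior too, but it’s still the same space-plus-constraints picture:

```python
# Sketch: data that carries a behavior with it.  The class is invented purely
# to illustrate the idea; it isn't a proposed API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Validated:
    value: int
    check: Callable[[int], bool]   # the "diagram" that rides along with the data

    def ok(self) -> bool:
        return self.check(self.value)

plain = Validated(5, lambda n: True)              # easy: the space is "any int"
constrained = Validated(5, lambda n: n % 2 == 1)  # harder: you have to run the rule in your head

print(plain.ok(), constrained.ok())  # True True
```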

What the actual measurement is at the end of the day is something I still need to work out.  It seems like the output is going to be a space of some sort plus a description of how easy it is to select legal elements in that space.  Additionally, there might need to be some work done to identify which groupings of legal elements are easier or harder to wrap your head around.




