Thursday, May 26, 2016

Cognitive complexity

I’ve been worried about what I’ve come to call Cognitive Complexity for a while.  In short, cognitive complexity is why some problems are hard for people to solve.  This stands in contrast to what computer science is typically concerned with when it talks about complexity.  Algorithmic complexity in computer science measures how many steps of computation or how much space a problem takes to solve.  A simplistic solution to a problem may have very high algorithmic complexity yet be easy to understand, and vice versa.

One might hand-wave away Cognitive Complexity by saying everyone has different cognitive abilities and experience levels.  A complex problem might just reflect a lack of experience, or might require a specific perspective in order to become a simple problem.  However, I think there might be something more fundamental going on.  Certainly there are relative factors at work: experience, perspective, and skills will make a complex problem easy, while a simple but unfamiliar problem may be harder.  I think, though, that all things being equal there are problems that really do take more cognitive effort, even if you could somehow level the cognitive playing field.

There are several ways to look at this.  My focus is primarily on what makes different programming problems complex.  Maybe that’s a problem you’re trying to solve with a program, maybe it’s the way you’re trying to solve the problem with a program, or maybe it’s something you’re trying to understand about a program.  In the programming context, I think there are a few different ways you have to look at why something might be complex.  I’m going to talk about the first one here and add the others in future blog postings.

The first type of cognitive complexity I want to talk about is the complexity of the inputs and outputs of some sort of black box.  Maybe this is a mathematical function that is defined completely by the nature of its inputs and outputs, maybe it’s a procedure where you don’t have the source code, or maybe it’s some sort of transform where the implementation doesn’t tell you as much as the before and after pictures.

So far I don’t have an algorithm set up to generate a Black Box Cognitive Complexity metric.  And the rules I’m about to go over don’t have an absolute ordering of importance.

Given some sort of function (the function being your black box), you’ve got inputs and outputs.  From the inputs we can generate the set of valid inputs.  We’re going to call this the Valid Space.  For what’s coming next it’s probably useful to be able to construct a metric space [1] or a topological space [2] on the function’s input space (where the Valid Space is going to be some sort of subspace).
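To make the metric space idea a little more concrete, here’s a rough sketch in Python (a toy of mine, nothing rigorous): edit distance is one standard way to say how far apart two string inputs are, and it satisfies the metric axioms, which is all the structure we need to talk about holes and connectedness below.

    # A sketch of a metric on a string-valued input space: Levenshtein
    # edit distance.  It satisfies the metric axioms (non-negativity,
    # identity, symmetry, triangle inequality).
    def edit_distance(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                # deletion
                                curr[j - 1] + 1,            # insertion
                                prev[j - 1] + (ca != cb)))  # substitution
            prev = curr
        return prev[-1]

    print(edit_distance("NORTON", "MORTON"))  # 1: a single substitution apart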

Rule 1:  If Valid Space is equal to the set of all inputs (i.e. the domain), then you’re dealing with low complexity.  
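A quick sketch of Rule 1 (a deliberately trivial example): absolute value over the integers accepts every possible input, so the Valid Space is the whole domain and there are no edge cases to memorize.

    # Sketch: a black box whose Valid Space equals its entire domain.
    # Any int is a legal input, so there is nothing extra to remember.
    def absolute(x: int) -> int:
        return x if x >= 0 else -x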

Rule 2:  Maybe your Valid Space isn’t equal to the domain.  Well, the next best thing is to have a Valid Space with minimal holes [3][4].  A hole in the Valid Space messes with your intuition about the black box.  Some inputs are fine, but if you travel through the input space in an arbitrary fashion you’re going to encounter illegal inputs, which break your black box in either overt or covert ways.  With minimal holes you also minimize the edge cases you need to keep track of.
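Here’s a sketch of a hole (again a toy example): reciprocal is defined for every float except zero, and that single hole is exactly the edge case every caller has to remember.

    # Sketch: a Valid Space with one hole.  Every float except 0.0 is a
    # valid input; wander through the input space arbitrarily and you
    # will eventually fall into the hole.
    def reciprocal(x: float) -> float:
        return 1.0 / x  # raises ZeroDivisionError at the hole x == 0.0

    # The hole surfaces as an edge case the caller must track:
    def safe_reciprocal(x: float):
        return None if x == 0.0 else 1.0 / x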

Rule 3:  Your Valid Space should be path connected, or at least connected [5].  Having a Valid Space that’s path connected means, roughly, that there exists some function that will let you index through the entire space.  This isn’t as good as having no holes in your Valid Space (because then you can jump around arbitrarily), but it’s the next best thing: instead of memorizing the entire Valid Space, you just have to memorize the function that generates it.  Of course, if you can’t be path connected, being connected is probably next best, because then your Valid Space is at least one “chunk”.  Not having to worry about special disjoint regions of inputs seems beneficial to overall understanding.
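A sketch of what I mean by memorizing the generating function instead of the space (a made-up example): suppose a black box only accepts powers of two.  You don’t memorize {1, 2, 4, 8, ...}; you memorize the one function that walks through all of it.

    from itertools import islice

    # Sketch: rather than memorizing the Valid Space {1, 2, 4, 8, ...},
    # memorize the function that generates it.  One "path" visits every
    # valid input.
    def valid_inputs():
        n = 1
        while True:
            yield n
            n *= 2

    print(list(islice(valid_inputs(), 6)))  # [1, 2, 4, 8, 16, 32]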

Rule 4:  The function from your Valid Space to the output of the black box should be continuous [6].  The general intuition of a continuous function is that small changes in inputs correspond to small changes in outputs.  This is beneficial because it makes life easier on your intuition.  If changing the input to your black box just a little results in radically different outputs, then it’s much harder to use those outputs for some other black box.  Predictability leads to lower cognitive complexity.
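A sketch of that intuition (toy functions; note that Python randomizes string hashes between runs, so the hashed values below vary): a continuous black box keeps nearby inputs producing nearby outputs, while a hash-like box gives your intuition nothing to hold onto.

    # Sketch: continuous vs. wildly discontinuous black boxes.
    def scaled(x):
        return 2.5 * x  # continuous: nearby inputs -> nearby outputs

    def hashed(x):
        # Discontinuous: nearby inputs -> unrelated outputs.
        return hash(str(x)) % 1000

    for x in (1.00, 1.01):
        print(x, scaled(x), hashed(x))
    # scaled(1.00) and scaled(1.01) stay close; the hashed outputs don't.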

Rule 5:  The function from your Valid Space to the output of the black box should “preserve paths”.  I don’t have a great definition for this one right now, but I think it’s roughly the same as saying that your black box should be a fully faithful functor [7].  The basic idea is that if you have some sort of system with relationships or transforms and you send that system through the black box, then the resulting system should have the same sort of relationships (or else you should expect trouble).

My motivating example for Rule 5 is as follows:  Let’s assume that there exists a black box that takes a spoken language and produces a corresponding written language.  In spoken English, the letter “N” sounds like the letter “M”.  Now let’s use our black box to create written English.  In written English the letter “N” also looks like the letter “M”.  This is what I mean by “preserving paths”.  The spoken “sounds like” relationship is converted into the written “looks like” relationship, and it exists for the same letters in both systems.  This seems like a good thing for English, because if you misspeak a sound (“N” vs “M”) it will be written in such a way that the miswritten word still sounds similar to the correctly written word.

An example of where paths are not preserved, resulting in a problem, is name lookups in databases.  The black box we’re considering is either a written-English-to-database-lookup function or a spoken-English-to-database-lookup function.  In both written and spoken English, “N” and “M” are similar and can be confused with one another.  However, a database lookup function considers “N” and “M” to be completely separate entities.  The relationship between the two letters/sounds is destroyed.  This is problematic because if your name is misspoken or miswritten, it will be very hard for a person to notice that something has gone wrong, but the database will fail to locate your personal information.  Rule 5 asserts that black boxes that do not preserve paths will create situations that are more cognitively complex.
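Here’s a sketch of that failure mode (hypothetical names and records, using Python’s standard difflib for the fuzzy side): an exact dictionary lookup destroys the “N”/“M” similarity, while a fuzzy lookup roughly preserves it.

    from difflib import get_close_matches

    records = {"NORTON": "account 1", "MILLER": "account 2"}

    # Exact lookup: "N" and "M" are completely separate entities, so a
    # single misheard letter silently loses the record.
    def exact_lookup(name):
        return records.get(name)

    # Fuzzy lookup: similar strings stay near each other, roughly
    # preserving the "sounds like"/"looks like" path through the box.
    def fuzzy_lookup(name):
        matches = get_close_matches(name, list(records), n=1, cutoff=0.8)
        return records[matches[0]] if matches else None

    print(exact_lookup("MORTON"))  # None: the relationship is destroyed
    print(fuzzy_lookup("MORTON"))  # 'account 1': the path survives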
