My recent goal has been to identify the problems that are hard for people to devise solutions for. You may think, “Hard for you, but easy for me. And isn’t intelligence versatile and non-transitive? So perhaps something might also be easy for you and hard for me. If this is the case then how do you plan to have some *universal* metric for how difficult things are for people to accomplish?” And I think this is a reasonable thing to consider. Can we really create some sort of ordered lattice of problems where the higher you ascend the harder things get *regardless* of who you are or what experience you have? Well, I had a hypothesis for trying to accomplish just that.
Kolmogorov complexity describes how complex mathematical objects are. I don’t think I can use Kolmogorov complexity itself for my purposes, but I wanted something similar. Basically, given an input or an output, I want to be able to rank how much mental energy you have to expend to think about it (at least relative to other inputs and outputs).
Define a hard problem as any problem whose solution is implemented by a function where the inputs and outputs have a low mental comprehension complexity relative to the structures internal to the function. The idea here is that this function is hard to comprehend because considering it solely in terms of its inputs or outputs will miss significant internal features. Failure to be mindful of these internal features makes the function easier to implement incorrectly (some inputs result in invalid internal states) or easier to use incorrectly (expecting some outputs that will never occur because of certain features of the internal state). In order to correctly implement and utilize this difficult function you have to be able to project the complex internal nature of the function onto the inputs and the outputs. The bigger the disparity and the greater the internal complexity, the harder the function is to comprehend.
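As a small illustration of that disparity (the functions and the state machine here are invented for the example), consider two functions with the same trivial signature. The first has internals no richer than its contract; the second hides a little state machine whose states never appear in the inputs or outputs, which is exactly where the incorrect-implementation and incorrect-use failure modes live:

```python
def is_even_length(s: str) -> bool:
    # Internals are no more complex than the str -> bool contract.
    return len(s) % 2 == 0

def is_valid_token(s: str) -> bool:
    # Same simple signature, but the internals track a state machine
    # whose states ("start", "ident", "dash", "error") never appear in
    # the inputs or outputs. A caller reasoning only about str -> bool
    # misses that some inputs drive the machine into states the author
    # never intended - the "invalid internal states" described above.
    state = "start"
    for ch in s:
        if state == "start":
            state = "ident" if ch.isalpha() else "error"
        elif state == "ident":
            if ch.isalnum():
                state = "ident"
            elif ch == "-":
                state = "dash"  # a dash must be followed by a letter
            else:
                state = "error"
        elif state == "dash":
            state = "ident" if ch.isalpha() else "error"
    return state == "ident"
```

By the definition above, the second function is the harder one, even though nothing about its inputs or outputs advertises that.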
If you can find a mathematically solid definition for the hard problem in step 2, then you can build a machine that recognizes hard problems (in terms of people being able to find solutions for them). This is the first goal: a machine for recognizing hard problems.
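To make the shape of such a machine concrete, here is a sketch under a loud assumption: it uses compressed size as a crude stand-in for comprehension complexity, which is nowhere near the mathematically solid definition the step asks for. The function names and the threshold ratio are invented for illustration:

```python
import zlib

def description_complexity(text: str) -> int:
    """Crude proxy for comprehension complexity: the compressed size
    of a textual description. This is an assumption standing in for
    the real (undefined) complexity measure."""
    return len(zlib.compress(text.encode("utf-8")))

def looks_hard(io_spec: str, internals: str, ratio: float = 2.0) -> bool:
    """Flag a problem as 'hard' when the complexity of its internals
    dwarfs the complexity of its input/output spec by `ratio`."""
    return description_complexity(internals) > ratio * description_complexity(io_spec)
```

The point is only the comparison it performs: hardness is judged by the gap between the internal description and the input/output description, not by either in isolation.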
Try to see if the machine from step 3 can be adapted to generate an infinite number of unique hard problems. The naive solution is to just generate a random problem and then use the machine from step 3 to decide if the problem is hard. If the problem is hard, you return it; if it is not hard, you generate another random problem. I’m not sure if this machine is particularly useful by itself, but how we use it in step 5 is important.
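The naive generate-and-filter loop can be sketched directly. Both callables here are assumptions: `is_hard` stands in for the hypothetical recognizer machine, and `random_problem` for some random problem generator, neither of which is specified above. The integer "problems" in the demo are purely illustrative:

```python
import random

def generate_hard_problem(is_hard, random_problem, max_tries=10_000):
    """Naive rejection sampling: draw random candidate problems and
    return the first one the recognizer flags as hard."""
    for _ in range(max_tries):
        candidate = random_problem()
        if is_hard(candidate):
            return candidate
    raise RuntimeError("no hard problem found within the try budget")

# Toy stand-ins just to exercise the loop: "problems" are integers
# and "hard" means divisible by 17.
hard = generate_hard_problem(lambda n: n % 17 == 0,
                             lambda: random.randint(0, 999))
```

Note that this gives no guarantee of *unique* problems, and its running time blows up if hard problems are rare under the random generator.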
Begin giving people the hard problems generated in step 4. Some people will be able to solve familiar hard problems simply because they have seen them before. But if the hard problem definition really does encode what it means for a problem to be hard, then eventually you’ll generate a hard problem that they do not know how to solve from experience. This step is really only describing a way to frustrate people, but the end goal is to find or train a person who can successfully solve hard problems.
Encounter a person who can consistently solve hard problems even when they are not familiar with them. The hope is that their technique will either be to find a way to decrease the required internal complexity of the difficult problem OR to find a way to divide the hard problem into two or more non-hard problems. With any luck these two techniques (or any others that show up) can be adapted to arbitrary hard problems. This is the second goal: a method for dealing with arbitrary hard problems.
Of course, if you encounter a person who can easily solve hard problems, but who does so without making the problems more approachable or being able to explain why they find them easy, then we haven’t really gained anything. Although I suppose you could just hire that person to solve your hard problems.
But now what?
Unfortunately I’m not sure my hypothesis is likely to be viable. To start with, it seems like you could make a locally hard problem easy by making the inputs and outputs of the function more complex. The end result is that hardness increases globally, but becomes harder to detect locally. This means that if my technique were successful, it might end up making software solutions *harder* to work with.
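A small sketch of that trade-off (names invented for illustration): the hidden state of a "hard" function can be threaded through its signature instead, which makes the internals trivial but the input/output contract correspondingly heavier. The hardness moves; it doesn't disappear:

```python
from typing import Tuple

def moving_average_hidden():
    """Simple contract (float -> float), complex internals: the
    history list is state the caller never sees."""
    history = []
    def add(x: float) -> float:
        history.append(x)
        return sum(history) / len(history)
    return add

def moving_average_step(total: float, count: int,
                        x: float) -> Tuple[float, int, float]:
    """Trivial internals, heavier contract: the state (total, count)
    is now the caller's problem, threaded through every call."""
    total, count = total + x, count + 1
    return total, count, total / count
```

The second version would score as "easy" under the local internals-versus-interface measure, even though every caller now shoulders the complexity the first version kept inside.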
The second thing to consider is that inputs and outputs that include objects with methods or first-class functions are hard to assess for complexity. A first-class function might look very simple, but its internal nature might include global mutable state that is accessed concurrently in several different places.
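For instance (a deliberately artificial example, names invented), a function value whose type looks like `() -> int` can smuggle arbitrary hidden state along with it:

```python
_request_count = 0  # global mutable state, invisible in any signature

def next_request_id() -> int:
    """Looks like a pure () -> int function, so any complexity measure
    applied to its type sees almost nothing. In fact it mutates a
    module-level counter, so passing this function around as an input
    or output carries that hidden state (and, in a concurrent program,
    its race hazards) along with it."""
    global _request_count
    _request_count += 1
    return _request_count
```

Any assessment that only inspects the interface of such a value will systematically underestimate its comprehension cost.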
And the third issue is that because some inputs or outputs might be hard to assess, you really need to take a very abstract mathematical view on things. Kolmogorov complexity itself seems to have mathematical limitations that we don’t want (it is uncomputable, for one), and the alternatives don’t seem much better. The root of the problem is that we have to evaluate inputs and outputs in terms of effectively computable mathematical objects. We’re no longer really talking about the constructs that people work with when they try to solve problems, but recursive topological spaces. I’m not sure that we’re going to end up with results that actually reflect how mentally difficult it is to comprehend a problem, because there may be effective abstractions of an otherwise complicated mathematical object, and that’s going to throw off all of our assessments of complexity.