Wednesday, May 30, 2012

Limitations

Computers can be frustrating sometimes. Staring at a spinning hourglass (or a spinning beachball if you are a Mac user) gets annoying very quickly. The problem is, after waiting patiently for a few seconds, you do not know what you are supposed to do. Is that spinning hourglass ever going to stop, or will it keep spinning forever? Sometimes it starts spinning when you ask your computer to do a simple task, like opening a file, and you start wondering why such a simple task is taking so long. Eventually you give up and start canceling everything you asked your computer to do. Sometimes canceling the tasks stops the spinning hourglass. But sometimes it does not. Then you start randomly hitting keys on your keyboard and yelling at your computer. Finally, you give up and reboot the computer by turning its power off. Not a very pleasant experience.

You know what would be great? If there were a way of guaranteeing that when you run a program on your computer, the program eventually terminates. Then you would know that, when the spinning hourglass appears, it will eventually disappear and the program will finish the task you asked it to do. But how can you guarantee such a thing? If you rely on software developers manually checking the programs, then you are likely to have human errors. The best option would be to develop a program-checking-program. This program-checking-program would take a program as input and automatically determine whether the input program eventually terminates or not. There would be no room for human error.

It turns out it is not possible to write such a program-checking-program. And, in fact, there is a proof that shows that this is a problem that can never be solved using a computer. The speed of the computer's microprocessor or the size of its memory does not change this result. It is a fundamental limitation of computers!

How can one prove such a result? The basic idea is the following: assuming that there is a program-checking-program leads to a contradiction. Since the existence of a program-checking-program leads to a contradiction, we conclude that it cannot exist. It is not too difficult to explain how the contradiction arises. Let's assume that there is a program-checking-program. Now, let's write a new program using the program-checking-program. The new program first asks the program-checking-program the following question about itself: "Do I terminate?". After the new program gets the answer, it does the following: if the answer was yes, then it does not terminate (it just keeps spinning the hourglass forever); if the answer was no, then it immediately terminates. The point is, this new program we construct does exactly the opposite of what the program-checking-program says it is supposed to do. This is a contradiction, because we were assuming that the program-checking-program can figure out whether any given program terminates or not. Therefore, no such program can exist.
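The argument above can be sketched in a few lines of code. Everything here is hypothetical by design: the whole point is that the `halts` function below can never actually be written, so the stub only marks the assumption, and the sketch just shows how the self-referencing program would use it.

```python
# A sketch of the contradiction. Assume, hypothetically, that we are
# handed a program-checking-program `halts(program)` that returns True
# if calling `program()` eventually terminates and False otherwise.
def halts(program):
    # No such function can exist; this stub only marks the assumption.
    raise NotImplementedError("the halting problem is uncomputable")

def contrary():
    # Ask the checker about ourselves, then do the exact opposite.
    if halts(contrary):
        while True:   # the checker said we halt, so loop forever
            pass
    else:
        return        # the checker said we loop, so halt immediately
```

Whatever answer `halts(contrary)` gives about `contrary` is wrong, which is exactly the contradiction described above.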

Based on this result we know that there are problems we cannot solve with computers. We call such problems "uncomputable" problems. Computers do many wonderful things, but they have their limits: they cannot solve uncomputable problems. The problem I described above is called the "halting problem," the problem of figuring out whether a program terminates (halts) or not. Once computer scientists figured out that the halting problem is uncomputable, they went on to identify many other problems that are uncomputable. Assume that we are working on a new problem, and we show that, given a program that solves it, one could solve the halting problem. Then we can conclude that this new problem is also uncomputable, since we know that the halting problem is uncomputable. So there is no point in trying to solve that problem using a computer. For example, in general it is not possible to write a program that determines whether a mathematical theorem is provable or not, because it is possible to show that if there were such a program, we could use it to solve the halting problem. So, we know that we cannot replace mathematicians with computers.

In addition to uncomputable problems that can never be solved with computers, there are also some problems that are solvable using computers, but that we do not know how to solve efficiently. The programs we write to solve these problems are able to find the solution eventually, but they can take a very long time. One of the most famous problems that we do not know how to solve efficiently is called the traveling salesman problem. The problem is simple to state: there is a traveling salesman who has to visit a set of cities, and he wants to minimize the length of his trip. The input to the problem is a list of cities and the distances between them. As output, we want a route that covers all the cities and has the minimum length. The most efficient algorithm we have for this problem is not very efficient. Every time we add a city to the list, the time it takes to compute the minimal path can double. For example, if this algorithm computes the shortest path for 100 cities in 1 second, for 101 cities it could take 2 seconds, for 102 cities 4 seconds, for 105 cities half a minute, for 110 cities about seventeen minutes, for 120 cities about twelve days, for 130 cities about 30 years, and for 160 cities more time than has passed from the big bang to now. We call this type of increase exponential: the time it takes to solve the problem doubles when we increase the size of the problem by just one. For some problems, like the traveling salesman problem, the best solution we have is an exponential-time solution. Problems of this type are called intractable, since it can be infeasible to solve them for large problem instances.
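To make the blow-up concrete, here is a minimal brute-force solver that checks every possible ordering of the cities. The function names and the distance table are made up for illustration, not a standard library API. With n cities it examines (n-1)! candidate tours, which grows even faster than doubling per added city.

```python
from itertools import permutations

def tour_length(tour, dist):
    # Total length of a round trip visiting the cities in order,
    # including the final leg back to the starting city.
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def shortest_tour(cities, dist):
    # Fix the starting city and try every ordering of the rest.
    start = cities[0]
    best = None
    for rest in permutations(cities[1:]):   # (n-1)! candidate tours
        tour = [start] + list(rest)
        if best is None or tour_length(tour, dist) < tour_length(best, dist):
            best = tour
    return best

# Hypothetical distances between four made-up cities.
dist = {
    'A': {'B': 1, 'C': 4, 'D': 3},
    'B': {'A': 1, 'C': 2, 'D': 5},
    'C': {'A': 4, 'B': 2, 'D': 1},
    'D': {'A': 3, 'B': 5, 'C': 1},
}
best = shortest_tour(['A', 'B', 'C', 'D'], dist)   # tries 3! = 6 tours, length 7
```

With 4 cities there are only 6 orderings to try; with 20 cities there are already more than 10^17, which is why this approach collapses so quickly.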

One thing to note here is that when computer scientists analyze algorithms for solving a problem, they typically focus on the worst-case scenario. The question they want to answer is: what is the longest possible time an algorithm can take given an input of a certain size? For example, we may have an algorithm that solves the traveling salesman problem for the cities in California very fast. But the question is, can it solve all possible instances of the traveling salesman problem fast? And, as I stated above, the best algorithms we have for this problem take an exponential amount of time in the worst case.

Interestingly, checking a claimed solution is much easier than finding one. We cannot efficiently check the claim that a particular path is the shortest one, but we can efficiently check a more modest claim: that a particular path visits every city exactly once and has length at most some target value. We just follow the path and add up the distances. So, if we guess a solution, we can check efficiently whether it is good enough. The problem is, since there is an exponential number of possible paths, guessing and checking takes an exponential amount of time to find the correct solution. And the best algorithms we know for this problem are, in the worst case, not much better than guessing and checking.
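The asymmetry between finding a tour and checking one can be seen in code. A hypothetical checker (the name `verify_tour` and the distance table are my own illustration) needs only one pass over the claimed tour, no matter how many tours exist in total:

```python
def verify_tour(tour, cities, dist, bound):
    # A claimed solution is easy to check: the tour must visit each
    # city exactly once, and its total length must fit within `bound`.
    if sorted(tour) != sorted(cities):
        return False
    length = sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))
    return length <= bound

# Example with three hypothetical cities.
d = {'X': {'Y': 2, 'Z': 5}, 'Y': {'X': 2, 'Z': 3}, 'Z': {'X': 5, 'Y': 3}}
ok = verify_tour(['X', 'Y', 'Z'], ['X', 'Y', 'Z'], d, 10)   # 2 + 3 + 5 = 10
```

Finding a good tour may require looking at exponentially many candidates, but each individual candidate is cheap to verify, which is exactly the guess-and-check situation described above.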

The traveling salesman problem belongs to a class of problems for which we do not have an efficient solution, but for which solving any one of them efficiently would imply that we can solve all of them efficiently. In some sense these problems are equivalent: using a solution to one of them, one can solve all the others. One of the biggest open questions in computer science is figuring out whether there is an efficient solution for this class of problems. This is not known. Although nobody has been able to find an efficient solution to these problems yet, no one has been able to prove that there is no efficient solution either.

4 comments:

  1. A question I have always had about whether or not a computer chooses to terminate a program revolves around the amount of memory the CPU is currently using at the time the program crashes. I've heard varying answers: background applications do hinder additional applications and vice versa. However, explaining the unsolvable problem concept through the travelling salesman analogy is an interesting way of interpreting how the memory required for a seemingly simple process can exponentially increase, rapidly.

  2. The traveling salesman problem makes me realize that even though computers are strong enough to solve much higher-level math, the traveling salesman problem sounds like a simple problem that could be done by a human being but cannot be done efficiently by a computer. I was surprised.

    -Zhuocheng Yu

  3. I wonder if there are built-in limits to how long a program will work on a problem. In the case of the traveling salesman, the program is decently efficient at calculating the route for a certain number of cities. Wouldn't a simple solution be to limit the number of cities that a user can input? And for problems that a program simply cannot compute, would it be possible (or is it already done?) that the program has a set of rules that limits the amount of time the program will work on a single problem before terminating or halting the process? It is still surprising that computers cannot efficiently find the right route in the traveling salesman problem, it seems like the type of problem a computer could compute much faster than a human.

  4. I agree that waiting for a computer to respond is very frustrating. I have experienced that many times, and my usual solution is to force the computer off. I wonder if doing so hurts the computer itself and the program, because several times after restarting the computer, I felt that the program ran a bit differently, but I couldn't tell where or how.
