This article is about a frequent claim that computers have been proven to be inherently limited, incapable of “deciding” certain things, such as whether a program can loop indefinitely (the so-called “halting problem”). Such a claim is often accompanied by remarks along the lines of “Therefore humans will always be necessary to maintain them”, or “This is why antiviruses cannot be perfectly reliable”, or even “That is why it is impossible to write a program that detects bugs in another program”.
The belief that these are mathematically proven facts is wrong, and this page explains why. If you have never heard or believed that rumor, you can stop reading here. Otherwise, let me elaborate.
The theory: the halting problem is decidable for computers
If computers were Turing machines, they would be finite Turing machines, that is, Turing machines with a finite amount of memory. The halting problem is decidable for finite Turing machines. Here is a sketch of the proof:
- A Turing machine with a finite amount of memory can only be in a finite number of configurations. Call this number N.
- If the machine runs for N instructions without halting, it has visited N+1 configurations, so by the pigeon-hole principle it has been in the same configuration twice. Since the machine is deterministic, it will repeat that cycle forever: you know that it will never stop.
- Conversely, if the program stops within the first N instructions, then you know that it does not loop for the input you provided.
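The proof sketch above can be turned into an actual (if wildly impractical) decision procedure. Here is a minimal sketch in Python, modeling a machine as a deterministic step function over a finite set of configurations; the names `halts`, `step`, `halted`, and `max_configs` are my own illustrative choices, not an established API:

```python
def halts(step, initial, halted, max_configs):
    """Decide the halting problem for a deterministic finite-state machine.

    step:        function mapping a configuration to the next configuration
    initial:     starting configuration
    halted:      predicate telling whether a configuration is terminal
    max_configs: N, an upper bound on the number of distinct configurations
    """
    config = initial
    for _ in range(max_configs + 1):
        if halted(config):
            return True
        config = step(config)
    # We ran more steps than there are configurations: by the pigeon-hole
    # principle some configuration repeated, and since the machine is
    # deterministic it will now cycle forever.
    return False

# Toy machine: a counter modulo 10. It reaches state 7, but never state 42.
print(halts(lambda s: (s + 1) % 10, 0, lambda s: s == 7, 10))   # True
print(halts(lambda s: (s + 1) % 10, 0, lambda s: s == 42, 10))  # False
```

The decider never gets stuck: it either observes the machine halting, or gives up after N+1 steps knowing a configuration must have repeated.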
It is true that memory capacity has increased considerably over the years. But it has always remained finite, and will remain so for the foreseeable future. As long as it stays finite, the halting problem is decidable.
It is also true that N is huge for current computers. One can prove things about the worst-case asymptotic complexity of the problem in terms of N, but even that says nothing about the actual amount of time and memory a perfect antivirus would need to decide on the latest thing I downloaded. And since we do not know how N will vary anyway, studying asymptotic complexity does not even let us forecast how this problem will evolve.
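To get a feeling for how huge N is: if the entire state of a machine fits in k bits, there are 2^k possible configurations. A quick back-of-the-envelope computation for a hypothetical toy machine with just 1 KiB of state (real computers have billions of times more):

```python
# Each distinct bit pattern of the machine's state is one configuration,
# so a machine whose whole state fits in `bits` bits has 2**bits of them.
def num_configurations(bits):
    return 2 ** bits

# A toy machine with only 1 KiB (8192 bits) of state:
n = num_configurations(8 * 1024)
print(len(str(n)))  # → 2467: the bound N already has thousands of digits
```

So the decision procedure is sound, but the bound it relies on is astronomically beyond any feasible running time.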
A more concrete example with humans
This may help put things in a different light: Turing machines were originally meant to model human computation processes (cf. Turing’s biography by Andrew Hodges).
And we only live for a finite amount of time, so if we think of ourselves as Turing machines, we can only run a finite number of instructions. A Turing machine that runs a finite number of instructions can only use a finite amount of memory. Therefore, we can think of ourselves as finite Turing machines. And yet many of us solve the halting problem for some real-life programs.
So existing finite Turing machines are solving the halting problem for programs running on other existing finite Turing machines.