(These are excerpts from my book "Intelligence is not Artificial")
The Dangers of Machine Intelligence: Speed Limits for Machines?
In our immediate future i don't see the danger of machines that are conceptually difficult to understand (superhuman intelligence), but i do see the danger of machines so fast that controlling them will be a major project in itself. We already cannot control a machine that computes millions of times faster than our brain, and this speed will keep increasing for the foreseeable future.
That's not to say that we cannot understand what the machine does: we perfectly understand the algorithm that is being computed. In fact, we wrote it and fed it into the machine. It is simply computed at a much higher speed than the smartest mathematician could ever achieve. When that algorithm leads to some automatic action (say, buying stocks on the stock market), the human being is left out of the loop and has to accept the result. When thousands of these algorithms (each perfectly understandable by humans) are run at incredible speed by thousands of machines interacting with each other, humans have to trust the computation. It's the speed that creates the "superhuman" intelligence: not an intelligence that we cannot understand, but an intelligence vastly "inferior" to ours that computes very quickly. The danger is that nobody can make sure that the algorithm was designed correctly, especially when it interacts with a multitude of other algorithms.
The only thing fast enough to check such an algorithm is another algorithm. I suspect that this problem will be solved by introducing the equivalent of speed limits: algorithms will be allowed to compute at only a certain speed, and only the "cops" (the algorithms that stop other algorithms from causing problems) will be allowed to run faster.
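A "speed limit" of this kind is already a familiar engineering pattern: rate limiting. The sketch below is purely illustrative (the class and all names in it are invented for this example, not taken from the book); it shows one common way, a token bucket, to cap how many operations per second a governed algorithm may perform, while a supervising "cop" process would simply run without such a limiter.

```python
import time

class SpeedLimiter:
    """Token-bucket throttle: caps how many operations per second
    a governed algorithm may execute. Illustrative sketch only;
    the class and method names are hypothetical."""

    def __init__(self, ops_per_second: float, burst: float = 1.0):
        self.rate = ops_per_second   # tokens refilled per second
        self.capacity = burst        # maximum tokens the bucket can hold
        self.tokens = burst          # start with a full bucket
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        """Non-blocking check: True if one operation may proceed now."""
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# A hypothetical trading bot limited to 2 orders per second:
limiter = SpeedLimiter(ops_per_second=2.0, burst=2.0)
executed = sum(1 for _ in range(100) if limiter.try_acquire())
# In a tight loop, only the initial burst gets through;
# the remaining attempts are refused until tokens refill.
```

The asymmetry the text proposes maps directly onto this: every ordinary algorithm runs behind a `try_acquire` gate, while the policing algorithms are exempt and can therefore always out-pace what they monitor.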