Increasingly bizarre typos?

I make weird typos when writing. Sometimes I substitute an entirely different word in place of the correct one; other times I simply skip a word entirely. Both kinds of typos are more common than misspelling a word, indicating that the typo mechanism is operating at a higher level than the spelling or typing itself.

This parallels some of the intuition people have about deep neural networks, which is backed up by pretty pictures of what different neurons see. According to the intuition, a deep neural network for classifying images starts with low level, local features of images (gradients, edge detectors) and moves layer by layer towards high level features (biological vs. inorganic, fur vs. hair, golden retriever vs. labrador retriever).

A neural network trained to generate images rather than classify them operates in reverse: it starts with high level features and moves towards lower and lower level features until it spits out pixels at the end. As a consequence, the kinds of errors produced by such a network depend on the layer at which the error first occurred: an error near the pixel level will be localized and fairly boring; an error at a higher level might create an extra eye in the middle of a limb (as in Deep Dream).

Humans are similar, at least phenomenologically: we can make typos at a variety of scales. I can misspell words, skip words, say grammatically correct sentences with small logical errors, write entire programs based on faulty assumptions, harbor incorrect political views for decades, and so on. There’s error correction, so most of the mistakes are caught quickly. Not all, though, and some of the mistakes that are not caught quickly are subject to positive feedback rather than correction, and remain mistakes for years or decades.

At the highest levels, many of our biggest mistakes are due to invalid or imprecise connections between different ideas. For example, the U.S. political system has unified the ideas of fiscal conservatism, religiousness, and uncontrolled gun ownership into one party, which means that mistakes in one of those issues blur over into mistakes in the others. This is a typo at a very high level, both in terms of ideas and numbers of people.

This blurring of somewhat but not exactly related concepts is one of the keys to intelligence, so it’s going to remain with us forever. One thousand years from now, the superintelligent AIs of the future will still have implicit associations between only vaguely related ideas. Indeed, they will be much more creative than we are, so they will likely have more implicit associations between even less related ideas than we do. They will also have much deeper thought processes, in the sense of having more layers with a greater gap between low level and high level features.

Will they have even weirder typos? No.

Error correction

Error correction is perhaps the single most important idea in computer science. For example, it is more important than the idea of a bit, since without error correction it would be impossible to approximate a discrete choice like a bit on top of the analog physical world. The most important fact about error correction is:

  • A reasonably small amount of error correction is the same as an infinite amount (almost).

In other words, if you’re building a computer system out of a huge number of little pieces, you need to have error correction for each of the pieces, but you do not need more error correction because you have a lot of them. That’s not exactly true: a computer with a lot of pieces needs a little more, but most of what you need is already needed by a very simple computer. For example, say we have a physical transistor which makes errors 10% of the time. That’s a lot of errors, so it might take quite a few transistors in a little error correction circuit to drive the errors down to 1% of the time, and then 0.1% of the time. However, once you’re there, adding more copies of the error correction machinery will drive the error rate down exponentially, so it doesn’t take very much to get it down so far that no errors will ever be detected (a practically infinite amount of error correction).
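To make that concrete, here is a toy version of the construction (my own sketch with an invented helper, not von Neumann’s actual scheme): triple each component, take a majority vote, and then apply the same trick recursively to the voted result. Assuming independent faults and a fault-free voter, one level of voting maps an error rate $p$ to $3p^2(1-p) + p^3$, so below the threshold each extra level roughly squares an already-small error rate:

    # Toy illustration of recursive majority voting. Assumptions: faults are
    # independent and the voter itself never fails (von Neumann's real analysis
    # lets the voter fail too, which shrinks the threshold but not the conclusion).

    def majority_error(p):
        """Probability a 3-way majority vote is wrong when each copy fails with probability p."""
        return 3 * p**2 * (1 - p) + p**3

    p = 0.10  # a very noisy component: wrong 10% of the time
    for level in range(6):
        print(f"{level} levels of voting: error rate {p:.1e}")
        p = majority_error(p)

Running this prints roughly 1.0e-1, 2.8e-2, 2.3e-3, 1.6e-5, 7.6e-10, 1.8e-18: five levels of redundancy already push the error rate far below anything that would ever be observed in practice.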

This principle is true for classical computers as proved by von Neumann, and it is true for quantum computers as proved by Ben-Or and Aharonov. In both cases, there is a finite threshold $\epsilon \gt 0$ such that if the error rate of each gate can be pushed below $\epsilon$, error correction can push the overall error rate exponentially close to 0 with a moderate amount of extra work.
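For concreteness, a standard way to state this scaling (a textbook concatenated-code bound, not tied to either specific proof) is that if the physical error rate $p$ is below the threshold $p_{\mathrm{th}} = \epsilon$, then $k$ levels of concatenation give a logical error rate of roughly

$$ p_L \approx p_{\mathrm{th}} \left( \frac{p}{p_{\mathrm{th}}} \right)^{2^k}, $$

which falls doubly exponentially in $k$, while the overhead grows only polynomially in $\log(1/p_L)$.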

Unfortunately, this is not true of humans. We have some error correction, but it is unevenly distributed and in many areas we don’t have nearly enough to hit the threshold. However, there’s no reason to believe a self-designed AI wouldn’t take advantage of the threshold theorem(s), so whenever it desired it would make no errors (with arbitrarily high probability).

There are two main caveats to that statement which need to be explored: the definition of an “error” and “whenever it desired”. First, when considering fuzzy, human-type intelligence, the concept of “error” is pretty fuzzy, and in general defining “error” is the same as being able to formally specify AI. However, in the same way that $\mathrm{P} \ne \mathrm{NP}$ would mean that verifying a solution is easier than finding one, recognizing an error in a thought is often much easier than having the thought in the first place; whenever this is the case error correction can work. The really problematic errors (such as not recognizing climate change) are caused not by a failure to define the error (by scientists) but by a failure of our mechanisms for propagating information through the rest of the system (politics). That latter bit is what error correction fixes.

Second, error correction is in apparent conflict with our previously discussed mechanism of intelligence: implicit associations between slightly related ideas. To some degree I think this is true: there are presumably valuable creative thoughts which cannot be well translated into a formalized setting. However, I think in practice the conflict is illusory, because the ideal level of error hovers on the boundary between chaos and rigor.

Just the right amount of error correction

If there is a lot of error, long trains of thought are impossible: any interesting thought will decay into noise in a few steps. Too little error is more subtle, since by the principle above it doesn’t take too many extra resources to have almost no errors at all. The actual problem with too little error is that it isn’t very “nimble”: interesting thoughts that aren’t the most interesting thought might decay away before they can be explored and amplified. For a formal analogy, we can imagine a process that can either converge to zero or explode to infinity depending on a scale parameter; the interesting behavior is in the thin middle regime where it neither explodes nor converges. Another analogy is a PID controller (for a robot or heater, say): a very high damping coefficient means errors are hard to introduce but the response is molasses-like, a very low damping coefficient causes oscillations, and a damping coefficient near the critical value gives the fastest possible response to both errors and new settings.
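The controller analogy is easy to check numerically. Below is a minimal sketch (my own illustration; the system, the parameter choices, and the helper name settling_time are all invented for the example) of the step response of a standard second-order system, $\ddot{x} + 2\zeta\omega\dot{x} + \omega^2 x = \omega^2 r$ for a step in the setpoint $r$, under several damping ratios $\zeta$:

    # Rough 2% settling time of a second-order system under different damping
    # ratios, integrated with a simple semi-implicit Euler step. Illustrative
    # only; parameters are arbitrary.

    def settling_time(zeta, omega=1.0, setpoint=1.0, tol=0.02, dt=1e-3, t_max=120.0):
        """Last time the state is outside a 2% band around the setpoint."""
        x, v, t = 0.0, 0.0, 0.0
        last_outside = 0.0
        while t < t_max:
            a = omega**2 * (setpoint - x) - 2.0 * zeta * omega * v
            v += a * dt
            x += v * dt
            t += dt
            if abs(x - setpoint) > tol * setpoint:
                last_outside = t
        return last_outside

    for zeta in (0.1, 0.7, 1.0, 5.0):
        print(f"zeta = {zeta:>3}: settles in ~{settling_time(zeta):.1f}s")

With these (arbitrary) parameters, the near-critical damping ratios settle within several seconds, while both the underdamped and overdamped extremes take tens of seconds: too much chaos and too much rigor are both slow.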

For a less formal analogy, consider an artist or musician making a new piece. Many of the good artists know quite a bit of theory, and can apply that theory to avoid low level “mistakes” in the work. However, depending too much on theory can interfere with creativity, and even where the theory is useful it may be best applied by first internalizing it into semi-conscious reflexes and then applying the reflexes.

An interesting aspect of a level of error correction that hovers between too little and too much (between error-prone creativity and plodding rigor) is that with a little extra work it is possible to push it to one side. That is, if we have a thought at the critical error rate, but can revisit and replay the thought process, more work should let us push the thought into rigorous, explainable territory, at which point any remaining errors can be detected and driven to zero. This is the real reason why I think AI will be essentially immune to bad errors: if we accept that the ideal error rate is near the threshold value, there is no reason not to go the rest of the way for important decisions.

Of course, humans can do this too: an art critic can push a little further to explain aspects of a creative work using art theory, and a mathematician can push a little further to turn intuitive conjectures into proofs. We just don’t do it enough. We’ve evolved something pretty close to a critical error rate, but since it’s evolved it’s uneven, and our processes for correcting errors at higher levels are similarly uneven. Hopefully we get closer to the threshold before we explode.
