I make weird typos when writing. Sometimes I substitute an entirely different word for the correct one; other times I simply omit a word. Both kinds of typos are more common than misspelling a word, which suggests the typo mechanism operates at a higher level than the spelling or typing itself.
This parallels some of the intuition people have about deep neural networks, which is backed up by pretty pictures of what different neurons see. According to the intuition, a deep neural network for classifying images starts with low level, local features of images (gradients, edge detectors) and moves layer by layer towards high level features (biological vs. inorganic, fur vs. hair, golden retriever vs. labrador retriever).
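To make the "low level, local features" end of that intuition concrete, here is a minimal sketch (using NumPy, with a hand-written Sobel-style kernel as an assumed stand-in for what an early conv layer often learns) of a single convolutional filter responding to a horizontal edge:

```python
import numpy as np

# A hand-written analogue of a filter an early conv layer often learns:
# a Sobel-style horizontal edge detector.
kernel = np.array([[-1, -2, -1],
                   [ 0,  0,  0],
                   [ 1,  2,  1]], dtype=float)

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as conv nets use)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: dark top half, bright bottom half -> one horizontal edge.
image = np.zeros((6, 6))
image[3:, :] = 1.0

response = convolve2d(image, kernel)
# The filter fires strongly only in the rows straddling the edge.
```

The point of the sketch is just the first rung of the hierarchy: a purely local operation that knows nothing about retrievers or fur, yet picks out exactly the kind of gradient/edge structure that deeper layers are believed to compose into higher-level features.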
I am writing this in Mac OS X, having momentarily given up getting Linux satisfactorily configured on my laptop. So, in the spirit of escapist fantasy and cracking nuts using sledgehammers, I am going to write about what a world with strong AI would be like. Warning: I am in a very lazy, rambling mood.
Say we get strong AI. This means we understand intelligence sufficiently to be able to replicate it digitally. We’re going to completely ignore any potential speed advantages: pretend that this new strong AI has exactly the same effective intelligence as a normal human when running on conventional hardware. However, like everything digital, intelligence is now repeatable, shareable, and mixable.
In the scale free government post, one of the completely unresolved issues was what to do about the federalism axis. There are two scale free extremes to choose from: completely uniform democracy and pure libertarianism (i.e., anarchy). This post will ramble about the anarchy option without getting anywhere very useful.
Anarchy would only work if the universe is such that the middle ground can be efficiently simulated by ad-hoc coordinated groups. Recall that the goal isn’t actual anarchy, which is absurd, but a system with as few foundational rules as possible.
I’ve been reading The Age of Spiritual Machines by Kurzweil, and got to the obligatory section where he pontificates about the philosophical issues behind consciousness copying and transfer. In the process I thought of a nice analogy between one of the practical issues involved and bungee jumping.
Imagine that your brain is scanned and an improved replica of yourself is created. There are now two copies, and for practical reasons (say population control), you only get to keep one. The question is: what do you do with the old body? Is killing it murder?