Increasingly bizarre typos?

I make weird typos when writing. Sometimes I substitute an entirely different word in place of the correct one; other times I simply omit a word. Both kinds of typos are more common than misspelling a word, indicating that the typo mechanism is operating at a higher level than the spelling or typing itself. This parallels some of the intuition people have about deep neural networks, which is backed up by pretty pictures of what different neurons see.

No more experts

I am writing this in Mac OS X, having momentarily given up on getting Linux satisfactorily configured on my laptop. So, in the spirit of escapist fantasy and cracking nuts with sledgehammers, I am going to write about what a world with strong AI would be like. Warning: I am in a very lazy, rambling mood. Say we get strong AI. This means we understand intelligence sufficiently well to be able to replicate it digitally.

Would anarchy work?

In the scale-free government post, one of the completely unresolved issues was what to do about the federalism axis. There are two scale-free extremes to choose from: completely uniform democracy and pure libertarianism (i.e., anarchy). This post will ramble about the anarchy option without getting anywhere very useful. Anarchy would only work if the universe is such that the middle ground can be efficiently simulated by ad-hoc coordinated groups.

Exponentially harder isn't hard enough yet

In well-designed cryptographic security systems, the attacker needs to do exponentially more work than the defender in order to read a secret, forge a message, etc., subject to appropriate hardness assumptions. Maybe the same is true for many non-computer security-ish systems as well, like choosing good representatives in a voting system or avoiding overpaying for advertised merchandise, and we simply haven't reached the level of intelligence as defenders at which the attackers' exponential effort becomes prohibitive.
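Here is a toy sketch of that asymmetry in Python (hypothetical names, not tied to any real cipher): the defender's work to pick a key grows linearly in the key length n, while a brute-force attacker's expected work grows like 2^n.

    import secrets

    def defender_pick_key(n_bits: int) -> bytes:
        """Defender's work: generate n random bits, roughly O(n)."""
        return secrets.token_bytes((n_bits + 7) // 8)

    def attacker_expected_guesses(n_bits: int) -> int:
        """Attacker's expected brute-force work: about 2^(n-1) guesses."""
        return 1 << (n_bits - 1)

    for n in (40, 80, 128):
        print(f"{n}-bit key: defender handles {n} bits, "
              f"attacker expects ~{attacker_expected_guesses(n):.2e} guesses")

Each extra key bit costs the defender a constant amount of work and doubles the attacker's bill; the open question is whether voting or shopping have analogous knobs to turn.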

People vs. ants

There’s a weird notion in some science fiction (or in some futuristic predictions) that if super intelligent beings existed, they might view us in the same way that we view ants. The main implication is that they would show little to no interest in talking to us. This is absurd. If ants could talk, thousands of biologists would spend a great deal of their time talking to them. I don’t think they would view us as pets either; people already try to endow their pets with as much intelligence and consciousness as possible, and others campaign for their rights, etc.

Consciousness transfer and bungee jumping

I’ve been reading The Age of Spiritual Machines by Kurzweil, and got to the obligatory section where he pontificates about the philosophical issues behind consciousness copying and transfer. In the process I thought of a nice analogy between one of the practical issues involved and bungee jumping. Imagine that your brain is scanned and an improved replica of yourself is created. There are now two copies, and for practical reasons (say, population control), you only get to keep one.