Posts

Against long term thinking

The Long Now Foundation is a wonderful organization advocating for long-term thinking. Specifically, by long term they mean the next ten thousand years: “The Long Now Foundation was established in 01996 to develop the Clock and Library projects, as well as to become the seed of a very long-term cultural institution. The Long Now Foundation hopes to provide a counterpoint to today’s accelerating culture and help make long-term thinking more common.”

Increasingly bizarre typos?

I make weird typos when writing. Sometimes I substitute an entirely different word in place of the correct one; other times I simply omit a word. Both kinds of typos are more common than misspelling a word, which suggests that the typo mechanism operates at a higher level than spelling or typing itself. This parallels some of the intuition people have about deep neural networks, which is backed up by pretty pictures of what different neurons see.

You can't always get what you want

(This is an expanded version of a Facebook comment, because Jeremy asked.) Recently I came across an article about opposition to housing development in San Francisco. The headline positions of everyone involved are uninteresting: housing advocates want more affordable housing, housing developers want less. The really interesting bit is more subtle: at one point the developer says they’re still trying to figure out what the community wants and is immediately booed.

No more experts

I am writing this in Mac OS X, having momentarily given up getting Linux satisfactorily configured on my laptop. So, in the spirit of escapist fantasy and cracking nuts using sledgehammers, I am going to write about what a world with strong AI would be like. Warning: I am in a very lazy, rambling mood. Say we get strong AI. This means we understand intelligence sufficiently to be able to replicate it digitally.

Rootstrikers conference summary

I went to the Rootstrikers conference yesterday, which consisted of a few panel debates/discussions plus questions from the audience. I also got to hang out in a bar at a table with Lawrence Lessig for a half hour or so after the conference, which was pretty cool. I’ll summarize the conference here, and include links for anyone who wants to follow the movement or get actively involved. Stepping back: there are two core ideas behind Rootstrikers: (1) representative democracy in the United States is being corrupted by the influence of money and money-connected lobbying, and (2) even if this corruption isn’t the most important problem to solve, it is the FIRST problem to solve, since it is blocking satisfactory progress on nearly every other issue (climate change, tax reform, health care costs, etc.).

The one charity theorem

The normal scheme for donating to charities is to divide money up among several different charities. The following argument shows why this strategy is often wrong. Both the statement and the proof will be extremely informal: One charity theorem: Assume we have a fixed amount of money to divide among $n$ charities. Assume that utility is a smooth function of the performances of the charities, which in turn depend smoothly on the amount of money each receives.
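To sketch the first-order step this setup is heading toward (my reconstruction under the stated smoothness assumptions, not the post’s full proof): write utility as $U(x_1, \dots, x_n)$, where $x_i$ is the money given to charity $i$. A donor who is small relative to the charities and splits a total of $\epsilon$ with weights $w_i$ changes utility by

$$
\Delta U = \epsilon \sum_{i=1}^{n} w_i \frac{\partial U}{\partial x_i} + O(\epsilon^2), \qquad w_i \ge 0, \quad \sum_{i=1}^{n} w_i = 1.
$$

A linear function on the simplex is maximized at a vertex, so unless two marginal utilities tie exactly, the optimal small donation goes entirely to the single charity with the largest $\partial U / \partial x_i$.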

Would anarchy work?

In the scale-free government post, one of the completely unresolved issues was what to do about the federalism axis. There are two scale-free extremes to choose from: completely uniform democracy and pure libertarianism (i.e., anarchy). This post will ramble about the anarchy option without getting anywhere very useful. Anarchy would only work if the universe is such that the middle ground can be efficiently simulated by ad hoc coordinated groups.

Toothpaste and amortized complexity

A past girlfriend and I would occasionally (cheerfully) quibble over the optimal strategy for extracting toothpaste. It occurred to me recently that the disagreement was fundamentally about amortized vs. worst case complexity. Being lazy, I tend to squeeze the toothpaste out of the front of the tube, optimizing the time spent in the moment and reducing the degree of control required since pressure is exerted near the toothbrush. She would carefully squeeze the tube from the back, maintaining a flat region that would slowly grow as the toothpaste emptied.
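For readers who want the computer science side of the analogy spelled out, the standard textbook example of this tradeoff is appending to a dynamic array; here is a minimal Python sketch (the mapping to toothpaste is mine, not the post’s):

```python
# Amortized vs. worst case: appending to a dynamic array.
# Most appends are cheap, but an occasional resize copies everything,
# like finally having to flatten the whole front-squeezed tube at once.

class DynamicArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None]

    def append(self, x):
        if self.size == self.capacity:
            # Worst case: an O(n) copy. Doubling the capacity makes this
            # rare enough that n appends cost O(n) total, i.e. O(1)
            # amortized per append.
            self.capacity *= 2
            new_data = [None] * self.capacity
            new_data[:self.size] = self.data
            self.data = new_data
        self.data[self.size] = x
        self.size += 1

arr = DynamicArray()
for i in range(1000):
    arr.append(i)  # occasionally O(n), but O(1) amortized
```

Front-squeezing is the amortized strategy: cheap in the common case, with a rare expensive cleanup. Back-squeezing pays a slightly higher constant cost every time in exchange for a flat worst case.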

Not everything happens for a reason

The phrase “everything happens for a reason” came up in a couple of contexts recently (conversation with a friend, Radiolab, etc.). It’s a good example of an obviously false statement that contains plenty of useful insight, and is interesting to think about in that context. We’ll get the pedantic point out of the way first: “everything happens for a reason” is literally true in the sense that the future happens for the reason that is the past.

Exponentially harder isn't hard enough yet

In well-designed cryptographic systems, the attacker must do exponentially more work than the defender in order to read a secret, forge a message, etc., subject to appropriate hardness assumptions. Maybe this is true of many security-like systems outside of computers as well, such as choosing good representatives in a voting system or avoiding overpaying for advertised merchandise, and we simply haven’t yet reached the level of intelligence as defenders at which the attackers’ exponential effort becomes prohibitive.
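As a toy illustration of that asymmetry (a sketch I’m adding, not anything from the post, and emphatically not a real protocol): the defender’s work below is linear in the key length $n$, while the brute-force attacker’s work grows as $2^n$.

```python
import secrets

def defender_make_key(n_bits):
    # Defender: O(n_bits) work to generate a random key.
    return secrets.randbits(n_bits)

def attacker_brute_force(n_bits, check):
    # Attacker: up to 2**n_bits guesses to recover the key.
    # `check` stands in for "try decrypting with this guess".
    for guess in range(2 ** n_bits):
        if check(guess):
            return guess

n = 20  # tiny key so the demo finishes; realistic sizes make this infeasible
key = defender_make_key(n)
assert attacker_brute_force(n, lambda g: g == key) == key
```

Each extra bit the defender adds costs them a constant amount of work and doubles the attacker’s; the post’s question is why so few non-cryptographic defenses manage the same trick.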