Posts

Thank you for holding my duck

There’s a story I like to tell, which I vaguely remembered as originating at Bell Labs or Xerox PARC. A researcher had a rubber duck in his office. When he found himself stumped on a problem, he would pick up the duck, walk over to a colleague, and ask them to hold the duck. He would proceed to explain the problem, often realizing the solution himself in the middle of the explanation. Then he would say, “Thank you for holding my duck”, and leave.

Lessons from Lyndon Johnson

I’m in the middle of the third book in Robert Caro’s biography of Lyndon Johnson. In brief, Caro’s thesis is that (1) Lyndon Johnson cares only about power, and (2) Lyndon Johnson is spectacularly skilled at politics. Moreover, (2) holds in a strong sense: Johnson is not simply skilled at politics, but far more skilled than nearly everyone around him. As a result, Johnson’s life is an example of asymmetric play in a theoretically symmetric game, and a beautiful illustration of how such asymmetric play is equivalent to the game itself having asymmetric rules.

Morality does not come from within (retracted)

Retraction (15 April 2022): Greg Egan has kindly explained on Twitter that I was misinterpreting the narrator’s statements, and specifically that the “from within” part means that morality is in part a result of human internal mental processes, but that those processes of course condition on the external world. I am happy to stand corrected! Post prior to retraction: Greg Egan’s short story “Silver Fire” is about people falling back from secular values.

A constructive critique of Sapiens and Homo Deus

Thanks to a recommendation from Dandelion Mané, I recently read “Sapiens” and “Homo Deus” by Yuval Noah Harari. Both books are wonderful breaths of fresh air and perspective. “Sapiens” is organized as a history of the species Homo sapiens, tracing us from our evolutionary separation from other primates through the cognitive revolution, the agricultural revolution, and the rest of history to the present. From this historical background, “Homo Deus” attempts to extrapolate into the future, in particular asking how our morality and goals will evolve with technology.

Against long term thinking

The Long Now Foundation is a wonderful organization advocating for long-term thinking. Specifically, by “long term” they mean the next ten thousand years: The Long Now Foundation was established in 01996 to develop the Clock and Library projects, as well as to become the seed of a very long-term cultural institution. The Long Now Foundation hopes to provide a counterpoint to today’s accelerating culture and help make long-term thinking more common.

Increasingly bizarre typos?

I make weird typos when writing. Sometimes I substitute an entirely different word in place of the correct one; other times I simply omit a word. Both kinds of typos are more common than misspelling a word, indicating that the typo mechanism operates at a higher level than the spelling or typing itself. This parallels some of the intuition people have about deep neural networks, which is backed up by pretty pictures of what different neurons see.

You can't always get what you want

(This is an expanded version of a Facebook comment, because Jeremy asked.) Recently I came across an article about opposition to housing development in San Francisco. The headline positions of everyone involved are uninteresting: housing advocates want more affordable housing, housing developers want less. The really interesting bit is more subtle: at one point the developer says they’re still trying to figure out what the community wants and is immediately booed.

No more experts

I am writing this in Mac OS X, having momentarily given up getting Linux satisfactorily configured on my laptop. So, in the spirit of escapist fantasy and cracking nuts using sledgehammers, I am going to write about what a world with strong AI would be like. Warning: I am in a very lazy, rambling mood. Say we get strong AI. This means we understand intelligence sufficiently to be able to replicate it digitally.

Rootstrikers conference summary

I went to the Rootstrikers conference yesterday, which consisted of a few panel debates/discussions plus questions from the audience. I also got to hang out in a bar at a table with Lawrence Lessig for a half hour or so after the conference, which was pretty cool. I’ll summarize the conference here, and include links for anyone who wants to follow the movement or get actively involved. Stepping back: there are two core ideas behind Rootstrikers: (1) representative democracy in the United States is being corrupted by the influence of money and money-connected lobbying, and (2) even if this corruption isn’t the most important problem to solve, it is the FIRST problem to solve, since it is blocking satisfactory progress on nearly every other issue (climate change, tax reform, health care costs, etc.).

The one charity theorem

The normal scheme for donating to charities is to divide money up among several different charities. The following argument shows why this strategy is often wrong. Both the statement and the proof will be extremely informal: One charity theorem: Assume we have a fixed amount of money to divide between $n$ charities. Assume that utility is a smooth function of the performances of the charities, which in turn depend smoothly on the amount of money each receives.
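A minimal sketch of the key step the excerpt gestures at, with notation introduced here rather than taken from the post (a total budget $B$, allocations $x_i \ge 0$ with $\sum_i x_i = B$, and a smooth utility $U$), assuming each donation is small relative to the charities’ overall budgets so that a first-order approximation is accurate:

$$U(x) \approx U(0) + \sum_{i=1}^n \frac{\partial U}{\partial x_i}(0)\, x_i,$$

which is linear in $x$. A linear function on the simplex $\{x \ge 0 : \sum_i x_i = B\}$ attains its maximum at a vertex, so under this approximation the optimal strategy gives the entire budget to the single charity with the largest marginal utility $\frac{\partial U}{\partial x_i}(0)$, rather than dividing it up.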