Retraction (15 April 2022): Greg Egan has kindly explained on Twitter that I was misinterpreting the narrator’s statements: specifically, the “from within” part means that morality is in part a result of human internal mental processes, but those processes of course condition on the external world. I am happy to stand corrected!
Post prior to retraction:
Greg Egan’s short story “Silver Fire” is about people falling away from secular values. It’s the near future, and organized religion is fading away but “the saccharine poison of spirituality” is taking its place. The main character is a medical researcher, and most of the plot deals with spirituality in conflict with reliable science. In the background, the researcher worries about her daughter, who thinks science is boring and much prefers alchemy.
Thanks to a recommendation from Dandelion Mané, I recently read “Sapiens” and “Homo Deus” by Yuval Noah Harari. Both books are wonderful breaths of fresh air and perspective. “Sapiens” is organized as a history of the species Homo Sapiens, tracing from our evolutionary separation from other primates through the cognitive revolution, the agricultural revolution, through the rest of history to the present. From this historical background, “Homo Deus” attempts to extrapolate into the future, in particular asking how our morality and goals will evolve with technology.
The Long Now Foundation is a wonderful organization advocating for long-term thinking. Specifically, by long term they mean the next ten thousand years:
The Long Now Foundation was established in 01996 to develop the Clock and Library projects, as well as to become the seed of a very long-term cultural institution. The Long Now Foundation hopes to provide a counterpoint to today’s accelerating culture and help make long-term thinking more common. We hope to foster responsibility in the framework of the next 10,000 years.
I make weird typos when writing. Sometimes I substitute an entirely different word in place of the correct one; other times I simply omit a word. Both kinds of typos are more common than misspelling a word, indicating that the typo mechanism is operating at a higher level than the spelling or typing itself.
This parallels some of the intuition people have about deep neural networks, which is backed up by pretty pictures of what different neurons see. According to the intuition, a deep neural network for classifying images starts with low level, local features of images (gradients, edge detectors) and moves layer by layer towards high level features (biological vs. inorganic, fur vs. hair, golden retriever vs. labrador retriever).
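The intuition is easy to see in code. Here is a minimal PyTorch sketch (my own illustration; the layer sizes and class count are arbitrary, not from any particular network) of a small image classifier whose stacked convolutional layers mirror that low-level-to-high-level progression:

```python
# A toy image classifier: each successive layer sees a wider receptive
# field, so early layers can only compute local features while later
# layers can represent increasingly global, high-level ones.
import torch
import torch.nn as nn

model = nn.Sequential(
    # Layer 1: 3x3 filters over raw pixels -- gradients and edge detectors.
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    # Layer 2: combinations of edges -- textures and simple shapes.
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    # Layer 3: combinations of textures -- object parts (fur, eyes, ears).
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    # Head: whole-object categories (golden retriever vs. labrador).
    nn.Flatten(), nn.Linear(64 * 4 * 4, 10),
)

x = torch.randn(1, 3, 32, 32)  # one fake 32x32 RGB image
print(model(x).shape)          # torch.Size([1, 10]) -- ten class scores
```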
(This is an expanded version of a Facebook comment, because Jeremy asked.)
Recently I came across an article about opposition to housing development in San Francisco. The headline positions of everyone involved are uninteresting: housing advocates want more affordable housing, housing developers want less. The really interesting bit is more subtle: at one point the developer says they’re still trying to figure out what the community wants and is immediately booed. What he means is that they are trying to figure out how the community would prefer to allocate a fixed amount of housing affordability across different levels of affordability: 50% of market, 80% of market, etc.
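To make that tradeoff concrete, here is a toy calculation (all numbers are invented for illustration, not from the article): with a fixed subsidy budget, pricing units at 50% of market funds far fewer units than pricing them at 80% of market.

```python
# Toy model with invented numbers: each below-market unit consumes subsidy
# equal to the gap between market rent and the affordable rent, so a fixed
# budget buys either a few deeply affordable units or many shallow ones.
market_rent = 3000   # hypothetical market rent, dollars per month
budget = 60000       # hypothetical subsidy budget, dollars per month

for fraction in (0.5, 0.8):             # 50% vs. 80% of market
    gap = market_rent * (1 - fraction)  # subsidy per unit per month
    units = int(budget // gap)
    print(f"{fraction:.0%} of market: {units} units (${gap:.0f}/unit/month)")
# 50% of market: 40 units ($1500/unit/month)
# 80% of market: 100 units ($600/unit/month)
```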
I am writing this in Mac OS X, having momentarily given up getting Linux satisfactorily configured on my laptop. So, in the spirit of escapist fantasy and cracking nuts using sledgehammers, I am going to write about what a world with strong AI would be like. Warning: I am in a very lazy, rambling mood.
Say we get strong AI. This means we understand intelligence sufficiently to be able to replicate it digitally. We’re going to completely ignore any potential speed advantages: pretend that this new strong AI has exactly the same effective intelligence as a normal human when running on conventional hardware. However, like everything digital, intelligence is now repeatable, shareable, and mixable.
I went to the Rootstrikers conference yesterday, which consisted of a few panel debates/discussions plus questions from the audience. I also got to hang out in a bar at a table with Lawrence Lessig for a half hour or so after the conference, which was pretty cool. I’ll summarize the conference here, and include links for anyone who wants to follow the movement or get actively involved.
Stepping back: there are two core ideas behind Rootstrikers: (1) representative democracy in the United States is being corrupted by the influence of money and money-connected lobbying, and (2) even if this corruption isn’t the most important problem to solve, it is the FIRST problem to solve, since it is blocking satisfactory progress on nearly every other issue (climate change, tax reform, health care costs, etc.). Lessig has a variety of talks laying this out; the most recent one at TED is particularly well done (I hadn’t seen it until yesterday), and I highly recommend it:
The normal scheme for donating to charities is to divide money up among several different charities. The following argument shows why this strategy is often wrong. Both the statement and the proof will be extremely informal:
One charity theorem: Assume we have a fixed amount of money to divide between $n$ charities. Assume that utility is a smooth function of the performances of the charities, which in turn depend smoothly on the amount of money each receives. In the limit of a small amount of money, it is optimal to give to only one charity. Moreover, with overwhelming probability, it is never optimal to give to more than one charity.
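Here is a sketch of the first-order argument (my reconstruction from the statement above; the notation is mine). Write $x_i \ge 0$ for the money given to charity $i$, with $\sum_i x_i = M$, and let $U(x_1, \ldots, x_n)$ be the utility. For small $M$, smoothness lets us linearize:

$$U(x) \approx U(0) + \sum_{i=1}^n g_i x_i, \qquad g_i = \frac{\partial U}{\partial x_i}(0).$$

Maximizing the linear term over the simplex $\{x : x_i \ge 0, \sum_i x_i = M\}$ puts everything on the largest coefficient: give all of $M$ to $i^* = \arg\max_i g_i$. Splitting between two charities can be optimal only if their marginal utilities $g_i$ are exactly tied, and for a generic smooth utility function an exact tie happens with probability zero, which is where the “with overwhelming probability” qualifier comes from.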