Thanks to a recommendation from Dandelion Mané, I recently read “Sapiens” and “Homo Deus” by Yuval Noah Harari. Both books are wonderful breaths of fresh air and perspective. “Sapiens” is organized as a history of the species Homo Sapiens, tracing from our evolutionary separation from other primates through the cognitive revolution and the agricultural revolution, and on through the rest of history to the present. From this historical background, “Homo Deus” attempts to extrapolate into the future, in particular asking how our morality and goals will evolve with technology.
The core theme of both books is the separation between three types of reality:
- Objective reality, which exists independent of our believing in it. Physics, biology, evolution, etc.
- Subjective reality, which exists only in the mind of the believer.
- Intersubjective reality, which exists in the mind of many believers at once.
Intersubjective reality includes such ideas as cities, money, religion, and companies. Money is an imagined construct whose meaning vanishes if everyone stops believing in it, but it does not vanish if only I stop believing in it (though it does slightly decrease in value).
Harari’s notion of religion includes not only Buddhism, Islam, Christianity, Hinduism, Judaism, etc., but also Communism, Liberal Humanism, Nazism, and Science. This is true. The idea that Science is not a religion is roughly the Aristotelian view: the idea that we can derive the structure of the world from pure logic, with no assumptions required. This is not the case: one needs religious assumptions such as “objective reality exists”, “the world is explainable in simple terms”, “induction is correct”, and so on.
So we need belief systems – religion – and religions are not objective. But religions are influenced by the objective world, and change to the objective world forces change in religions. The major classical religions have one by one acknowledged evolution and been changed by it. The same applies to the religion of Liberal Humanism, which enshrines individualism, free will, and self determination. The solidity of these concepts is being eroded by neuroscience, behavioral economics, and the increasing technical power of advertising and marketing.
Harari is worried about the future. “Homo Deus” asks what comes after Liberal Humanism once the intersubjective notion of free will dissolves. He does not have a good answer, and he is rightfully concerned that society is not focusing on the question.
So “Sapiens” lays out the history of our species, and “Homo Deus” worries that we are heading blindly into a future of changing morality. It’s a good question to ask. Harari doesn’t know the answer. Unfortunately, Harari misses a huge part of the story, which prevents him from talking about the space of solutions in depth.
What Harari is missing: derived morality
Here’s how I think about morality, using Harari’s language:
- To start, you need a religion.
- The religion lets you derive various principles about the world, and about how one should behave.
- One uses the derived principles to make decisions.
Unfortunately, Harari has a poor understanding of the boundary between religion and derived principles. Conveniently, Harari has a single sentence which perfectly illustrates the problem:
No amount of data and no mathematical wizardry can prove that it is wrong to murder, yet human society cannot survive without such value judgements.
This is stated as a fact; decoupled, he means two things:
- You can’t derive that murder is wrong from theory.
- If humans did not believe murder was wrong, society would not survive.
He’s identifying “derive” with the Aristotelian notion of proving something from pure logic. This is a meaningless notion: one always needs axioms, and the useful notion of derive is to produce one statement from another. With the right notion, his statement is itself a derivation of the “wrongness of murder” from the “wrongness of society not surviving”.
Is that last bit objectively true? No: it’s a religion. But it’s potentially a simpler religion, and the history of the scientific revolution is in part a replacement of complex religions with simpler religions.
Does this answer his existential crisis? No: it doesn’t say what the religion should be, and there are many mistakes one can make in thinking about simplicity of religion (see below). However, it does highlight the problem: Harari doesn’t seem to realize that the intersubjective is composed of many levels, with factual or semifactual arguments relating the different levels. In this example, our religion (one level of the intersubjective) is “society should survive”. “Murder is wrong” is also part of our religion, and is usually taken as an axiom (a commandment, if you will). The fact that “murder is wrong” is part of the religion, however, does not mean that it can’t be approximately derived from other parts.
To summarize, Harari’s paraphrased position is:
You need a religion. The current religions fall apart at our current and upcoming level of technology, and it’s not clear what will replace them. We should think hard about this.
This is exactly true but misses the blur between religion and derived religion, in large part because historically the boundary was a lot blurrier than it is today (though many people today also don’t get it).
It’s interesting to contrast this with Sam Harris’ position from “The Moral Landscape”:
Science can be used to derive most of morality, and increasingly most over time. Not everything, but most. Also my religion is that morality is about improving human well being and if you don’t agree with my religion you are unreasonable and incompetent.
Harris’ position is only partially true, but it highlights the problem with Harari’s. The right position is a mixture of Harari’s views and some of Harris’: Harris is right that morality can be derived, but wrong that “human well being” is the one true religion.
In any case, missing this critical piece of the story, Harari’s discussion of possible futures contains quite a bit of nonsense. The subtlety is that some of this nonsense is Harari’s and some of it is legitimately shared by the religious adherents he is describing.
The dangers of simple religions
Before we get into the details of where “Homo Deus” goes wrong, it’s important to point out some perils of derived religion. Derived religion is about turning simple religions plus facts about the world into complicated religions. This can go wrong in a few ways:
One can start to have, as a religion, “religion should be as simple as possible”. This is one of the guidelines of my religion, for instance, but it’s not an absolute. There’s a reason the quote is “simple as possible, but no simpler”.
One can imagine that one has a simple religion but misinterpret the actual underlying goals. For example, libertarians claim the goal is freedom, often to the point of religion. To justify this goal, they often say things like “freedom makes humans happier”, or “freedom lets people thrive”. Well, which is it? Freedom or thriving? What if we have evidence that thriving is sometimes inconsistent with maximal freedom? If we act based only on the simple religion, we may fail to achieve our true goals.
One can make incorrect inferences from the simple religion to a complicated derived principle. Harari takes it as given that “don’t murder” is important for civilization surviving. Certain Evolutionary Humanists (Nazis) instead concluded that civilization surviving required murdering all the Jews. The first inference is correct, the second is not. In general (not in that case), the process of inference can be complicated and subtle, since it requires understanding emergent phenomena for which we usually can’t do controlled experiments.
To elaborate on the difficulty of inference: we can sometimes do it, but usually only when the surrounding context is taken as fixed. Abortion is a good example: advocates can say things like “abortion doesn’t appear to lead to bulk devaluation of adult life” or even sometimes “countries that allow abortions have fewer abortions”. This seems conclusive, but there is almost always an escape hatch reachable by (1) widening the context and (2) appealing to a bit of religion: “yes it seems like abortion doesn’t have bad consequences, but it weakens the moral fabric and takes one further from God’s laws”. To combat this, one also needs a bit of religion, though a bit I agree with: “entropy is a common result of complex mixing operations, so without evidence to the contrary the short term disappearance of signal is evidence there will be no long term effect” and “we can find a consistent religion to replace Christianity even if abortion weakens it”.
Help, my religion is dissolving
Harari’s worry in “Homo Deus” is that the modern dominant religion, Liberal Humanism, is dissolving under the forces of science and technology. Different religions have been dissolving for tens of thousands of years; this time is different because of the speed and because of our newfound ability to destroy ourselves if we get the transition wrong.
Like all religions, Liberal Humanism is a composite religion with a bunch of different beliefs, some of them derived from others. Some of these derivations were made consciously by humans, some happened by cultural evolution. Since Harari misses the concept of derived religion, his discussion of Liberal Humanism misses some historical and game theoretic context. It also misses some of the solution space: if we understand how different parts of Liberal Humanism derive from others, we may be able to preserve some parts even as others dissolve.
In particular, Harari spends a lot of time discussing how Liberal Humanism dissolves once algorithms know us better than we know ourselves. Liberal Humanism supposedly says one should look inside oneself for answers. First, this is not what Liberal Humanists said, at least not to the extreme Harari is taking it. He repeatedly says the Liberal Humanist ideal is to go to the ballot box, look deep inside at your emotions, and pick the candidate which resonates the best. But this idea, while certainly a large part of how people vote in practice, would have horrified the Founders. Their ideal was an informed (also white, male, land owning) citizen who would look at the facts of how policies would affect them, and act according to a combination of their own and their society’s actual interests.
The issue is that “look inside oneself” is a derived value, caused by two principles:
1. An outsider does not know you as well as you do.
2. An outsider does not share your interests.
(1) is a factual statement which is dissolving. (2) is a combination of a factual statement and a religious reference to “your interests”. These interests are left blurry, but probably religiously include Life, Liberty, Happiness, etc. In any case, once (1) dissolves the entirety of (2) still holds, both the misalignment of values and the religious reference to interests.
Can we preserve (2)? Well, it would require having a picture of what the values are, but it also requires engineering the world to avoid massive information asymmetry and power asymmetry. We’re not doing a great job of this engineering yet, but it’s a natural inference if one tries to preserve as much of Humanism as possible. Harari does not discuss information asymmetry as a derived religious value, because he takes it as given that, at least at first, Google shares our interests.
His discussion of free will is similar. The underlying trend is that science is showing free will to be a suspect concept. However, the reason free will was a useful concept historically, and is still useful today if less so, is that free will is what you get when you can’t predict behavior from outside. That is, it’s a derived or emergent concept that emerges once you coarsen enough to not know the details of someone’s thoughts.
There are a couple inferences from this. One is that free will is still a useful concept for more complex entities that are too complex for other entities to know their thoughts. The superintelligences of the future, if they exist, will still have free will or a partial equivalent as a concept, since the laws of physics prevent one place from having all the information. Second, free will is a useful concept even in asymmetric situations where Alice knows enough to see through Bob’s free will, but Bob does not know enough to see through his own free will. For example, the disappearance of free will in terms of cognitive neuroscience does not immediately invalidate the use of free will as a concept in the criminal justice system: if people think they themselves have free will because they can’t model themselves very well, then free will can rationally be used as an argument for punitive responses to crime in terms of its preventative effects. If you want to argue against punitive imprisonment, you can’t just say “free will is a myth”; you need something real like “punishment doesn’t prevent crime very well”.
Actually, the Bible wasn’t written by God. QED.
Another strange section of “Homo Deus” is his discussion of the factual claims that underlie religions. Harari states that, while much of the purpose of religion is moral statements about the world, all religions make assumptions about objective reality. As I noted above, science assumes objective reality exists, induction works, etc. Christianity assumes the Bible was written by God. If you want to argue against a religion, he suggests targeting the factual claims it makes, and presenting evidence against them.
But Harari basically stops there, with “Christianity assumes the Bible was written by God”. He then goes off on an interesting but largely irrelevant historical discussion of the evidence that, in fact…drumroll…the Bible was not written by God. For example, linguistic analysis shows that different parts were written at different times, by different people. Take that, Christianity!
Of course those are the wrong facts to target. Harari is taking “The Bible is written by God” and the closely related “The Bible is literally true” as atomic statements. Once you’ve assumed the whole Bible is true, everything else follows. For example, I know the Bible is literally true, and the Bible says homosexuality is bad, so homosexuality is bad. But if you actually ask a fundamentalist Christian why homosexuality is bad they won’t stop at this argument. They’ll say gay people have more mental illnesses. They’ll say they’re more likely to be pedophiles. They’ll say children with gay parents grow up warped. These are the factual claims one should be targeting.
Again, the issue is that he’s missing the concept of derived religion. Sure, Christianity says the Bible is true, but it also makes a bunch of factual claims about derivations from one set of moral precepts to another. Since it’s a patchwork of claims, they can be peeled off semi-independently. The core “the Bible is written by God” claim is actually quite robust, since it can be easily modified without breaking the whole derivation structure.
It’s strawman time
Religions dissolve in pieces. There are still a lot of Catholics, even though the part about humans being created in one go has dissolved into an acceptance of evolution. Indeed, the first sentence of the relevant Wikipedia page is
Since the publication of Charles Darwin’s On the Origin of Species in 1859, the attitude of the Catholic Church on the theory of evolution has slowly been refined.
I would like Liberal Humanism to dissolve slowly too. The advantage of slowness is that we don’t have a complete answer for what to replace it with, so slower is safer. And the way towards a slow dissolve is to understand how the different parts of Liberal Humanism derive from one another, so we can understand which parts to keep.
Or, absent a concept of derived religion, we could talk about strawmen.
The weirdest part of “Homo Deus” is Harari’s hypothetical “what comes next” religions. They’re not pretty, and they’re not meant to be: Harari is worried about the future, and he’s worried because he doesn’t have a good answer for what replaces Liberal Humanism. However, his candidate replacements aren’t just bad: they fail the same tests that are killing off the current religions.
Take Dataism. Dataism, according to Harari, is the belief that the goal is more data processing. The problem is that he is proposing this as a replacement for Humanism as the concept of individualism dissolves, but his Dataist concepts have been nonsense since Shannon’s first paper on information theory. What does data processing mean? Well, it presumably requires a definition of “data”, but the whole insight of Shannon is that you get coherent definitions only once you assume a separation between data and noise. A random process is not necessarily data: it might be noise, or it might not be, and the only way to know is to measure how well it can be used to predict another data source. Presumably one should use interesting data sources for this comparison, but then you get an infinite regress only resolvable by injecting some religion which chooses the interesting data sources. And you can’t just say “any data”: the time symmetry of physics means the total information of the universe is preserved with time, which is not very interesting as a goal. You can’t propose Dataism as a religion to replace the dissolving Humanism if Dataism comes predissolved.
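Shannon’s point can be made concrete with a small sketch (the distributions below are invented for illustration): entropy alone cannot separate data from noise, because pure noise has maximal entropy; what distinguishes data is mutual information with another source.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information(joint):
    """I(X;Y) in bits, given a joint distribution {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(
        p * math.log2(p / (px[x] * py[y]))
        for (x, y), p in joint.items()
        if p > 0
    )

# A fair coin has maximal entropy...
print(entropy([0.5, 0.5]))  # → 1.0 bit

# ...but if X and Y are independent fair coins, X tells us nothing
# about Y: it is noise relative to Y, despite its high entropy.
independent = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
print(mutual_information(independent))  # → 0.0

# Whereas if Y is a noisy copy of X (flipped 10% of the time),
# X predicts Y, and so counts as data relative to Y.
noisy_copy = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
print(mutual_information(noisy_copy))  # → ~0.53 bits
```

The regress Harari misses is visible here: mutual information is always relative to a second source, so “maximize data processing” is undefined until some religion picks which sources count as interesting.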
In fairness to Harari, some of this silliness is intentional, since some of the confusion about Dataism and the other strawman replacement religions is shared by actual adherents. Harari is right to be frightened by Dataism; indeed, it bears a striking resemblance to Paperclipism (“the goal is more paperclips”). But having spent a whole book talking about dissolving religions, not mentioning the predissolution of Dataism is a major lapse.
Change religion slowly, and change objective reality
I am more optimistic about the future than Harari. Not optimistic enough to not worry about it – I join an AI Safety Team on Monday – but hopeful. And fundamentally, the reason I am more hopeful than Harari is that I think large chunks of our current religion (our morality) can survive intact across the next period of rapid technological change. This is not to say our current religion is ideal, but I don’t have anything better to replace it with, so it will have to do.
So how can we let religion dissolve slowly? There are two ways:
1. Break it into pieces, analyze how different pieces derive from other pieces. Figure out which pieces survive in our changing objective reality, and which we do actually want to keep. Try to keep them, let the others fall away.
2. Sometimes, adjust objective reality so that a piece that we want to keep but is in danger of dissolving doesn’t dissolve.
Most of this post has been about (1), so I won’t say much more. I’m not saying (1) is easy, and the answers are not going to be simple. A good approximation to human morality achievable today will be moderately high dimensional. Sometimes we can do the derivations and then pick the simple axioms, but even in the “murder is wrong” case the terms require definitions, and there are a lot of cases and caveats. This is not to say the simple low dimensional principle doesn’t exist, but due to the pitfalls discussed above I don’t think we will find it in time, safely.
What about changing reality? Take free will. As discussed above, free will is a meaningful emergent concept in situations where an agent can’t be externally predicted. This is important because the outsider attempting prediction generally has different goals. The free will approximation breaks down when information is highly asymmetric: secretive government surveillance, advanced machine learning-based advertising, etc. But information asymmetry can be fought: EFF, journalists, others. Usually it’s better to argue for this in terms of explicit consequences, but it also preserves the environment that makes free will a meaningful emergent concept, so it’s part of the story of how religion dissolves.