You can't always get what you want

(This is an expanded version of a Facebook comment, because Jeremy asked.)

Recently I came across an article about opposition to housing development in San Francisco. The headline positions of everyone involved are uninteresting: housing advocates want more affordable housing, housing developers want less. The really interesting bit is more subtle: at one point the developer says they’re still trying to figure out what the community wants and is immediately booed. What they mean is that they are trying to figure out how the community would prefer to allocate a fixed amount of housing affordability across different levels of affordability: 50% of market, 80% of market, etc.

I think this is one of the most problematic failures of modern political discourse: not admitting that a desired quantity is bounded or has to be traded off against other desired quantities. The same failure applies to security vs. privacy, taxes (who do you tax vs. how much), libertarianism (freedom vs. externalities), rationality (optimal policy making vs. democratic fairness), etc., etc.

It’s important that this is a failure of discourse, not just of policy. No two people will have a common ordering for the importance of all desired quantities: unless each person is willing to suspend their ordering by freezing some of the quantities and considering how the others change, constructive conversation is impossible. Even if everyone did agree on the relative importance of issues, freezing some of the quantities is a vast simplification that can lead to useful insights about the best solution, even if the final solution does not freeze those quantities.

Here’s a recent experience that highlights how easy it is to talk past one another even if everyone fundamentally agrees. A friend and I were discussing different algorithms for picking a set of representatives that accurately captured a group’s views. I kept using the preamble “fix a number $r$ of representatives”. My friend pointed out that this was an absurd assumption: obviously the best number of representatives is problem dependent, and sometimes we need more representatives to capture the range of opinions. I responded that if we let the number vary freely, the best answer is clearly to pick everyone as a representative, which doesn’t achieve our goal of picking representatives at all. My response came across as an attack on a weird strawman. It took us surprisingly long to figure out what the other was talking about, since from different perspectives we were both correct: the ideal number of representatives varies, but freezing it is a useful simplifying assumption to start out.
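To make the “fix $r$” framing concrete, here is a toy greedy heuristic of my own (a k-medoids-style sketch, not the algorithm from our actual conversation): treat each person’s views as a point, fix $r$, and pick representatives to reduce the total distance from each person to their nearest representative. All names and the opinion-vector setup are illustrative.

```python
import math

def pick_representatives(opinions, r):
    """Greedily choose r representatives (as indices into opinions),
    trying to minimize the total distance from each person to their
    nearest representative.  A toy k-medoids-style heuristic."""
    n = len(opinions)
    assert 0 < r <= n

    def cost(chosen):
        # Total distance from each person to their nearest representative.
        return sum(min(math.dist(o, opinions[c]) for c in chosen)
                   for o in opinions)

    chosen = []
    for _ in range(r):
        # Add whichever remaining person most reduces the total cost.
        best = min((i for i in range(n) if i not in chosen),
                   key=lambda i: cost(chosen + [i]))
        chosen.append(best)
    return chosen, cost(chosen)

# Two tight clusters of opinion: with r fixed at 2, the greedy pick
# takes one representative from each cluster.
opinions = [(0.0,), (0.1,), (5.0,), (5.1,)]
reps, total = pick_representatives(opinions, 2)
```

Note that if $r$ is allowed to vary freely, the cost is trivially minimized at $r = n$: everyone represents themselves and the cost drops to zero, which is exactly the degenerate answer that makes fixing $r$ a useful starting assumption.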

“If we fix A, then …”

The above rambling can be summarized into the following rule for communication:

Rule 1: If someone says, “If we fix $A$, then $B$”, they are not saying that $A$ should be fixed. Listen to the rest of the sentence.

For example, if I say “If we fix total taxation, …” and you start yelling before I’ve reached the end of the sentence because you strongly believe taxes should go up or down, all hope is lost. What I want to talk about is contained in the “…”. Even if you think that the level of taxes is the most important thing, you should still listen to the “…”: it might be something we both agree on.

There is an equally important flip side to Rule 1:

Rule 2: If someone follows Rule 1, it does not mean they have accepted that $A$ should be fixed.

Rule 2 is important because it is one reason why Rule 1 isn’t followed in practice. Following Rule 1 means you have to join the other party in a hypothetical world which may not conform to your views, or even to reality. If joining the conversation counts as an admission of defeat, following Rule 1 is bad. Sound bite culture is a problem here, since a quote pulled out of a weird hypothetical world may reflect negatively on the speaker when taken out of context. To use a mathematical analogy: it would be like pulling a sentence out of the middle of a proof by contradiction and yelling, “Look what they’re crazy enough to believe!”

Let’s try to apply these rules to the political examples above.

Security vs. privacy

This one is really about security vs. law enforcement access vs. privacy vs. complexity, and part of the reason everyone keeps talking past each other is the refusal of at least one side (I am biased) to have subconversations with only 2 or 3 dimensions at a time. Examples are trotted out in support of the view that more law enforcement access means more security, then shot down by those on the privacy side. But obviously more law enforcement access means more security in a limited sense: if someone on the privacy side doesn’t admit that, they will look like an extremist to anyone on the security side. The question is how much access helps, what kind helps, when, etc.

The security side generally ignores the issue of protocol complexity, arguing for tricky split key policies allowing privacy against anyone without a court order. It’s obvious that ignoring complexity is silly, but beware: theoretical CS is on the side of the authorities. In 10 or 20 years, the complexity issue will vanish as automated proof eliminates the bug penalty for linking together multiple protocols. Moreover, there’s presumably a finite amount of money, probably less than a billion dollars, which would suffice to make a complicated split key protocol more secure than most existing software. “Complexity is bad!” is not a sufficient argument: the question is how bad, and whether the cost is worth it when traded off against the other qualities.

In any case, the complexity argument is unnecessary: the “we don’t trust the authorities” argument is future proof and strong given the existence of other countries.

Taxes: who vs. how much

Given that U.S. politics is strongly polarized between “No more taxes!” and “More taxes!”, why on Earth don’t we pass bills that freeze the total level of taxation but shift around the relative amounts? Pretty much everyone agrees that jobs are good and pollution is bad. Taxing something means people will do less of it, and a sane political system would act accordingly.

A lot of this falls to Rule 2. Politicians who have gotten elected on simplistic “No new taxes!” stances are unwilling to support bills which do not cut the overall level, because doing that would somehow count as admitting defeat. Interestingly, Norquist’s Taxpayer Protection Pledge claims to not be this simplistic, explicitly allowing proposals that move taxes around without increasing the total level. The link from this claim to the pledge itself is, hilariously, broken, but once you get through their safeguards the pledge itself does seem to back up this claim:

I, NAME, pledge to the taxpayers of the state of STATE, and to the American people that I will (1) oppose any and all efforts to increase the marginal income tax rates for individuals and/or businesses; and (2) oppose any net reduction or elimination of deductions and credits, unless matched dollar for dollar by further reducing tax rates.

On the other hand, a literal interpretation of the pledge suffers from a ratcheting phenomenon where it is possible to shift tax revenue from anti-labor to anti-pollution but impossible to shift it back, since the pledge specifically targets the marginal individual tax rate. Even if this ratcheting is in a good direction, it bans experiments, changes in circumstances, and all kinds of subtlety. What if we shifted taxes towards pollution but then eliminated pollution for other reasons: would taxes be allowed to shift back? Would signing a bill that shifts taxes between income and pollution adaptively be compatible with the pledge? Probably not, in which case someone who wants taxes to go up would have trouble collaborating with a pledge-taker on a tax shifting bill.

Libertarianism: freedom vs. externalities

A while ago I saw an appalling statement in a Facebook conversation: someone said they used to believe more strongly in libertarian principles before deciding the situation was more complicated, and someone else said something like “It must be relaxing to have given up consistency.”

Obviously freedom is good. But freedom is also multidimensional: there are over 7 billion dimensions of freedom even if you only count humans today and have only one dimension per human. Property rights do not cleanly choose whose freedom matters more in every situation, nor does the idea of violence vs. non-violence. The world is fuzzy, and believing in fuzziness is not inconsistency.

But obviously freedom is good, which means that if there is a way to achieve more freedom without losing too much in return, we should do that. If we could achieve a higher quality health care system for most with less regulation, but the cost was that $N$ people had no health care, should we do it? Yes, depending on $N$, and the threshold is not zero.

Rationality

This one is the most fun. One of the things you should be willing to trade off against other things is rationality itself. These other things include time, simplicity, and inclusiveness. It’s irrational to spend too much time trying to be rational, or to never use faster, slightly irrational heuristics. Less rational, simpler models are often better: they have fewer possibly broken moving parts, are less vulnerable to overfitting, and are easier to communicate to others. A political system that weeded out irrational views would necessarily be less inclusive, which has downsides even if the weeding process were perfect: those left out would be less satisfied and more prone to revolt, etc.

Rule 0: If you do not accept that your desired quantities cannot be optimized without trading off something else, you are doomed to talk nonsense.
