(This is an expanded version of a Facebook comment, because Jeremy asked.)
Recently I came across an article about opposition to housing development in San Francisco. The headline positions of everyone involved are uninteresting: housing advocates want more affordable housing; housing developers want less. The really interesting bit is more subtle: at one point the developer says they’re still trying to figure out what the community wants, and is immediately booed.
The normal scheme for donating to charities is to divide money up among several different charities. The following argument shows why this strategy is often wrong. Both the statement and the proof will be extremely informal:
One charity theorem: Assume we have a fixed amount of money to divide between $n$ charities. Assume that utility is a smooth function of the performances of the charities, which in turn depend smoothly on the amount of money each receives. Then, for a generic utility function, the optimal division gives all of the money to a single charity.
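Here is a minimal sketch of why an all-to-one answer comes out, in my own notation (the symbols $x_i$, $M$, $U$, and $c_i$ are mine, not necessarily the original argument's). Write $x_1, \dots, x_n \ge 0$ for the amounts given to the $n$ charities, with $\sum_i x_i = M$ fixed. If the donation is small compared to each charity’s budget, utility is approximately linear in $x$,
$$U(x) \approx U(0) + \sum_{i=1}^n c_i x_i, \qquad c_i = \left.\frac{\partial U}{\partial x_i}\right|_{x=0},$$
and a linear function on the simplex $\{x \ge 0,\ \sum_i x_i = M\}$ is maximized at a vertex: all of the money goes to a charity with the largest marginal utility $c_i$, and exact ties between the $c_i$ are non-generic.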
In the scale-free government post, one of the completely unresolved issues was what to do about the federalism axis. There are two scale-free extremes to choose from: completely uniform democracy and pure libertarianism (i.e., anarchy). This post will ramble about the anarchy option without getting anywhere very useful.
Anarchy would only work if the universe is such that the middle ground can be efficiently simulated by ad-hoc coordinated groups.
In well-designed cryptographic security systems, the attacker needs to do exponentially more work than the defender in order to read a secret, forge a message, etc., subject to appropriate hardness assumptions. Maybe this is true for many non-computer security-ish systems as well, like choosing good representatives in a voting system or avoiding overpaying for advertised merchandise, and we simply haven’t reached the level of intelligence as defenders at which the attackers’ exponential effort becomes prohibitive.
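As a toy illustration of the work asymmetry (my example, not from the post): the defender below does work linear in the key length, while a brute-force attacker does work exponential in it.

```python
import secrets

# Toy illustration of defender/attacker work asymmetry (illustrative only):
# picking an n-bit key costs O(n), brute-forcing it costs up to 2**n guesses.

def defender_pick_key(n_bits: int) -> int:
    """Linear work in n_bits: generate a random key."""
    return secrets.randbits(n_bits)

def attacker_brute_force(n_bits: int, check) -> int:
    """Exponential work in n_bits: try every possible key until one checks out."""
    for guess in range(2 ** n_bits):
        if check(guess):
            return guess
    raise RuntimeError("key not found")

if __name__ == "__main__":
    n = 20  # small enough that the brute force finishes quickly
    key = defender_pick_key(n)
    found = attacker_brute_force(n, lambda g: g == key)
    assert found == key
    print(f"{n}-bit key: defender work ~ {n}, attacker work up to {2**n} guesses")
```

Doubling the key length roughly doubles the defender’s cost but squares the attacker’s search space.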
Consider the following simplified version of Texas hold’em, with two players Alice and Bob (a code sketch of the rules follows the list):
Alice and Bob are each dealt two private cards.
Alice posts a small blind of 1, Bob posts a big blind of 2.
Alice either folds, calls, or raises by any amount $\ge 2$.
Bob either calls or folds.
Five more shared cards are dealt, and the winner is determined as usual.
Both Alice and Bob have infinite stack sizes, so only expected value matters.
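Here is a minimal sketch of that betting structure in code. It is my sketch, not the post’s: the `equity` argument is a hypothetical helper returning Alice’s expected share of the pot at showdown (ties counted as half), which a real implementation would compute with a hand evaluator.

```python
import random
from typing import Callable, Tuple, Union

# Minimal sketch of the simplified hold'em hand above (illustrative only).
# `equity` is a hypothetical helper: Alice's expected share of the pot at
# showdown given both hole hands and the board.

RANKS = "23456789TJQKA"
SUITS = "cdhs"
DECK = [r + s for r in RANKS for s in SUITS]

Action = Union[str, Tuple[str, int]]  # "fold", "call", or ("raise", amount)

def play_hand(alice_policy: Callable[[list], Action],
              bob_policy: Callable[[list, int], str],
              equity: Callable[[list, list, list], float]) -> float:
    """Play one hand; return Alice's expected profit in chips (small blind = 1)."""
    cards = random.sample(DECK, 9)
    alice, bob, board = cards[:2], cards[2:4], cards[4:]

    alice_in, bob_in = 1, 2                   # small blind 1, big blind 2
    action = alice_policy(alice)
    if action == "fold":
        return -alice_in
    alice_in = 2                              # calling matches the big blind
    if isinstance(action, tuple) and action[0] == "raise":
        assert action[1] >= 2                 # raises must be at least 2
        alice_in += action[1]

    if bob_policy(bob, alice_in - bob_in) == "fold":
        return bob_in                         # Alice picks up Bob's big blind
    bob_in = alice_in                         # Bob calls

    pot = alice_in + bob_in
    return equity(alice, bob, board) * pot - alice_in

if __name__ == "__main__":
    # Example with trivial policies and a coin-flip equity stand-in.
    ev = play_hand(lambda a: ("raise", 4),
                   lambda b, to_call: "call",
                   lambda a, b, board: 0.5)
    print("Alice's expected profit this hand:", ev)
```

Returning an expected value at showdown, rather than sampling a winner, matches the “only expected value matters” assumption above.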
This is a follow-up to the previous post about health insurance, elaborating on the fact that it can be bad to let individuals make choices about their insurance policy. I stated without much detail that “assuming sufficient options and perfect competition, the result of this individual choice would be exactly the same as if the insurers were allowed to use knowledge of $K$.” The “sufficient options” assumption is important (and not necessarily realistic), so more explanation is warranted.
Yes, it’s an extreme title, but it’s true. The idea of insurance is to average risk over a large group of people. If advance information exists about the outcomes of individuals, it’s impossible for a fully competitive free market to provide insurance.
In particular, free markets cannot provide health insurance.
To see this, consider a function $u : S \to \mathbb{R}$ which assigns a utility value to each point of a state space $S$.
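To connect this to the claim above, here is a sketch of the standard way such an argument goes (my illustration; the post’s own continuation may differ). Insurance is valuable only because $u$ is concave: for wealth $W$ and a random loss $X$, Jensen’s inequality gives
$$u\big(W - \mathbb{E}[X]\big) \;\ge\; \mathbb{E}\big[u(W - X)\big],$$
so a risk-averse individual is happy to swap the random loss for a certain payment of $\mathbb{E}[X]$, and an insurer pooling many independent individuals can offer roughly that trade. But if advance information (presumably the $K$ of the follow-up excerpt above) pins down each individual’s expected loss, competition forces each premium toward $\mathbb{E}[X \mid K]$, and in the limit where $K$ determines the outcome there is nothing left to average: the premium is just the loss itself, which is no insurance at all.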
Good discussion with Ross today, resulting in one nice, concrete idea.
Consider the problem of suggesting policy improvements to the government. In particular, let’s imagine someone has a specific, detailed policy change in mind related to health care, financial regulation, etc. Presumably, the people who know the most about these industries are (or were) in the industries themselves, so you could argue that they can’t be trusted to propose ideas that aren’t just self-serving.
Krugman correcting a flaw in an Obama speech about the energy bill. It’s very unfortunate that the president didn’t get this right, since externalities are the whole point behind cap and trade legislation. If he isn’t able to articulate this point consistently and correctly, he won’t (and shouldn’t) be able to convince anyone. Moreover, any bill that comes out of this process that isn’t based on people understanding (and admitting to understanding) externalities will likely be hopelessly flawed.
When people discuss the future of computers and software, a common worry is that it will become increasingly difficult to produce correct software due to the ongoing surge in complexity. A common joke is to imagine what cars would be like if they were as buggy as software. I believe these fears are groundless, and that they arise from a misunderstanding of the reason why current software is full of bugs.