Another move (now to New Zealand), another round of book disposal. I’ve been transitioning to ebooks anyway, so my physical collection will keep diminishing over time. Now I just need to lug them all to The Strand:
Update: A few more:
This is a follow-up to the previous post about health insurance, elaborating on the point that it can be bad to let individuals make choices about their own insurance policies. I stated without much detail that “assuming sufficient options and perfect competition, the result of this individual choice would be exactly the same as if the insurers were allowed to use knowledge of $K$.” The “sufficient options” assumption is important (and not necessarily realistic), so more explanation is warranted.
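To sketch why (the notation beyond $K$ is mine, not from the original post): suppose each person privately knows their risk type $K$, and insurers, barred from using $K$, instead offer a menu of policies with one actuarially fair policy for each possible type. Given enough options, each person selects the policy priced for their own type, so in equilibrium everyone ends up paying

$$\text{premium} = E[\text{cost} \mid K],$$

exactly what an insurer who could observe $K$ would charge. The “sufficient options” assumption does the real work here: with too coarse a menu, different types pool together and the equivalence breaks down.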
Yes, it’s an extreme title, but it’s true. The idea of insurance is to average risk over a large group of people. If advance information exists about the outcomes of individuals, it’s impossible for a fully competitive free market to provide insurance.
In particular, free markets cannot provide health insurance.
To see this, consider a function $u : S \to \mathbb{R}$ which assigns a utility value to each point of a state space $S$. For example, one of the elements of $S$ could be “you will have cancer in 23 years”. This outcome is bad, so the corresponding $u(s)$ would be a large negative number.
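To sketch where this setup leads (the notation beyond $u$ and $S$ is my own): a person facing a probability distribution $p$ over $S$ has expected utility

$$E[u] = \sum_{s \in S} p(s)\, u(s).$$

Insurance raises $E[u]$ by moving money from good states to bad ones, which helps whenever $u$ is concave (i.e., the person is risk averse). But if advance information reveals which $s$ will occur for each individual, competition forces each person’s premium to match their known individual cost, there is no risk left to average across people, and the trade that made insurance valuable disappears.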
I gave a short presentation on the ideas behind duck at DESRES today. Here are the slides. Caveat: I made these slides in the two hours before the presentation.
I like conciseness. Syntactic sugar increases the amount of code I can fit on a single screen, which increases the amount of code I can read without scrolling. Eye saccades are a hell of a lot faster than keyboard scrolling, so not having to scroll is a good thing.
However, I recently realized that simple absolute size is actually the wrong metric with which to judge language verbosity, or at least not the most important one.
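As a toy illustration (my example, not from the original post), here is the same computation in Python with and without one common piece of syntactic sugar:

    # Desugared: an explicit loop collecting the squares of the even numbers.
    squares = []
    for n in range(20):
        if n % 2 == 0:
            squares.append(n * n)

    # Sugared: a list comprehension expressing the same thing in one line.
    squares = [n * n for n in range(20) if n % 2 == 0]

The comprehension buys back four lines of screen space, but both versions express the same handful of ideas (iterate, filter, square, collect), which hints at why raw size isn’t the whole story.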
Here are links related to a few interesting studies that came up in a discussion with Ross. I figured I’d post them here so I have somewhere to point other people:
Walter Mischel did a study where he put children in a room, gave them a single marshmallow, and told them that if they held off from eating the marshmallow for a while, they would get two marshmallows later. He then left the room and watched via hidden camera to see how long they would hold out. Several years later, he happened to do a follow-up study on the same kids and discovered that the time they had held out was strongly correlated with their grades, SAT scores, whether they went to college, etc. Here are some links:
Good discussion with Ross today, resulting in one nice, concrete idea.
Consider the problem of suggesting policy improvements to the government. In particular, let’s imagine someone has a specific, detailed policy change related to health care, financial regulation, etc. Presumably, the people who know the most about these industries are (or were) in the industries themselves, so you could argue that they can’t be trusted to propose ideas that aren’t just self-serving. Maybe it’s possible for someone to build a reputation for trustworthiness, but that’s hard, and ideally the merit of an idea wouldn’t depend on who proposed it. Instead of relying on reputation, we’ll remove the issue entirely by making the suggestion box anonymous.
Here’s a summary of the differences between typed functional languages and unsafe languages:
Kudos to anyone who knows what this means.
Krugman correcting a flaw in an Obama speech about the energy bill. It’s very unfortunate that the president didn’t get this right, since externalities are the whole point behind cap and trade legislation. If he isn’t able to articulate this point consistently and correctly, he won’t (and shouldn’t) be able to convince anyone. Moreover, any bill that comes out of this process that isn’t based on people understanding (and admitting to understanding) externalities will likely be hopelessly flawed.
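For the record, the textbook version of the point (my gloss, not Krugman’s or the speech’s): burning carbon imposes a marginal external cost $c_e$ on everyone else that the emitter never pays, so

$$\text{social cost} = \text{private cost} + c_e,$$

and the entire purpose of cap and trade is to close that gap by making emitters buy permits whose market price stands in for $c_e$.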
Imagine a computer stored in a box with a single small hole connecting it to the outside world. We can run programs inside the box and receive the results through the hole. In fact, in a sense, the results are all we can see: if the program makes efficient use of the hardware inside, the size of the hole will prevent us from knowing exactly what went on inside the box (unless we simulate the workings of the box somewhere else, but then the box is useless).
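As a small illustration of the point (my example, not from the post), here are two internally different “boxes” in Python that are indistinguishable through the hole:

    # Two boxes that differ completely on the inside but look identical
    # through the hole: all we can observe is the sorted output.

    def box_a(xs):
        # Insertion sort: quadratic time, lots of internal shuffling.
        out = []
        for x in xs:
            i = 0
            while i < len(out) and out[i] < x:
                i += 1
            out.insert(i, x)
        return out

    def box_b(xs):
        # The built-in sort: entirely different machinery inside.
        return sorted(xs)

    data = [3, 1, 4, 1, 5, 9, 2, 6]
    assert box_a(data) == box_b(data)  # The hole shows the same answer either way.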