Another move (now to New Zealand), another round of book disposal. I’ve been transitioning to ebooks anyways, so my physical collection will keep diminishing over time. Now I just need to lug them all to The Strand:

Anderson, Starfarers
Appel, Modern Compiler Implementation in ML
Ariely, Predictably Irrational
Bak and Newman, Complex Analysis
Ballantyne, Capacity
Ballantyne, Recursion
Banks, Excession
Banks, Transition
Baxter, Manifold Time
Bear, Darwin’s Radio
Bowman, Outdoor Emergency Care
Bradbury, Fahrenheit 451
Chapman, Trials of the Monkey
Cherryh, Alliance Space
Cherryh, Regenesis
Clarke and Baxter, Firstborn
Clarke and Baxter, The Light of Other Days
Cramer, Einstein’s Bridge
Diamond, Guns, Germs, and Steel
Eggers, A Heartbreaking Work of Staggering Genius
Ferguson, The Ascent of Money
Foley et al.
This is a followup to the previous post about health insurance, elaborating on the point that it can be bad to let individuals make choices about their insurance policy. I stated without much detail that “assuming sufficient options and perfect competition, the result of this individual choice would be exactly the same as if the insurers were allowed to use knowledge of $K$.” The “sufficient options” assumption is important (and not necessarily realistic), so more explanation is warranted.
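To make the “sufficient options” assumption concrete, here is a minimal two-type sketch (my numbers and notation, not from the original post). Suppose $K \in \{h, \ell\}$ with loss probabilities $p_h > p_\ell$ for a loss of size $L$, and the market offers full coverage priced for high-risk buyers alongside partial coverage $a < 1$ priced for low-risk buyers, with $a$ chosen small enough that high-risk individuals strictly prefer full coverage:

$$\text{premium}(\text{full}) = p_h L, \qquad \text{premium}(\text{partial}) = p_\ell \, a L.$$

Each individual’s choice then reveals their $K$, and zero-profit competition prices each policy at the type-conditional expected cost, which is exactly the premium insurers would charge if they could observe $K$ directly.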
Yes, it’s an extreme title, but it’s true. The idea of insurance is to average risk over a large group of people. If advance information exists about the outcomes of individuals, it’s impossible for a fully competitive free market to provide insurance. In particular, free markets cannot provide health insurance. To see this, consider a function $u : S \to \mathbb{R}$ which assigns a utility value to each point of a state space $S$.
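For context, here is the standard averaging argument in the special case where states are wealth levels (my sketch, using notation not in the post): a risk-averse individual has concave $u$, so for initial wealth $w$ and a random loss $X$, Jensen’s inequality gives

$$E[u(w - X)] \le u(w - E[X]),$$

meaning the individual prefers paying the certain premium $E[X]$ to facing $X$ directly. Pooling many independent losses makes the insurer’s average payout nearly certain, which is what makes a premium near $E[X]$ sustainable; advance information about individual outcomes is precisely what breaks this pooling.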
I gave a short presentation on the ideas behind duck at DESRES today. Here are the slides. Caveat: I made these slides in the two hours before the presentation.
I like conciseness. Syntactic sugar increases the amount of code I can fit on a single screen, which increases the amount of code I can read without scrolling. Eye saccades are a hell of a lot faster than keyboard scrolling, so not having to scroll is a good thing. However, I recently realized that simple absolute size is actually the wrong metric with which to judge language verbosity, or at least not the most important one.
Here are links related to a few interesting studies that came up in a discussion with Ross. I figured I’d post them here so I have somewhere to point other people.

Marshmallows and Delayed Gratification

Walter Mischel did a study where he put children in a room, gave them a single marshmallow, and told them that if they held off from eating the marshmallow for a while, they would get two marshmallows later.
Good discussion with Ross today, resulting in one nice, concrete idea. Consider the problem of suggesting policy improvements to the government. In particular, let’s imagine someone has a specific, detailed policy change related to health care, financial regulation, etc. Presumably, the people who know the most about these industries are (or were) in the industries themselves, so you could argue that they can’t be trusted to propose ideas that aren’t just self-serving.
Here’s a summary of the differences between typed functional languages and unsafe languages:
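The summary itself isn’t reproduced here; as a hypothetical stand-in (my example, not the post’s actual summary), here is the flavor of one such difference in Haskell, where the type system refuses a mix-up that an unsafe language would accept as a silent reinterpretation of bits:

```haskell
-- Hypothetical illustration (not the post's actual summary): distinct
-- newtypes prevent conflating values that share a runtime representation.
newtype Meters  = Meters Double
newtype Seconds = Seconds Double

-- Dividing distance by time is fine; swapping the arguments is a
-- compile-time type error rather than a silently wrong answer.
speed :: Meters -> Seconds -> Double
speed (Meters d) (Seconds t) = d / t

main :: IO ()
main = print (speed (Meters 9.8) (Seconds 3.0))
-- speed (Seconds 3.0) (Meters 9.8)  -- rejected by the compiler
```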
Kudos to anyone who knows what this means.
Krugman correcting a flaw in an Obama speech about the energy bill. It’s very unfortunate that the president didn’t get this right, since externalities are the whole point behind cap and trade legislation. If he isn’t able to articulate this point consistently and correctly, he won’t (and shouldn’t) be able to convince anyone. Moreover, any bill that comes out of this process that isn’t based on people understanding (and admitting to understanding) externalities will likely be hopelessly flawed.
Imagine a computer stored in a box with a single small hole connecting it to the outside world. We are able to run programs inside the box and receive the results through the hole. In fact, in a sense results are all we can see; if the program makes efficient use of the hardware inside, the size of the hole will prevent us from knowing exactly what went on inside the box (unless we simulate the workings of the box somewhere else, but then the box is useless).
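As a toy illustration of why results are all the hole reveals (my example, not from the original post), here are two programs that do wildly different amounts of work inside the box yet are indistinguishable from outside, since they emit the same answer:

```haskell
-- Two internally different "boxes" with identical observable output.
sumTo :: Int -> Int
sumTo n = sum [1 .. n]  -- O(n) work happens inside the box

sumTo' :: Int -> Int
sumTo' n = n * (n + 1) `div` 2  -- closed form: almost no work at all

main :: IO ()
main = print (sumTo 1000000 == sumTo' 1000000)  -- True: the hole can't tell them apart
```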