Archive for March, 2009

Coin Perfection

Saturday, March 28th, 2009

Say you want to flip a perfect coin, but the only coins available are imperfect. First, let’s consider the case where we don’t know the details of the available coins, and would prefer not to measure them.

The simplest strategy is to flip a bunch of coins and determine whether the total number of heads is even or odd. As long as a reasonable number of the coins have a non-negligible chance of landing either way, the probability of an even number of heads will be close to one half.
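Here's a quick simulation of the parity strategy (a sketch of mine, not from the post; the biases below are made-up examples):

```python
import random

def parity_flip(biases):
    """Flip one coin per bias and return the parity of the number of heads."""
    heads = sum(random.random() < p for p in biases)
    return heads % 2  # 0 = even number of heads, 1 = odd

# Hypothetical imperfect coins: all biased, none deterministic.
biases = [0.3, 0.8, 0.6, 0.55, 0.7]

trials = 200_000
odd = sum(parity_flip(biases) for _ in range(trials))
print("P(odd parity) ~", odd / trials)  # should be close to 0.5
```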

Here is a measure of “coin perfection” which behaves additively with respect to the parity algorithm:

$$G(p) = \log|2p - 1|$$

I.e., if you have two coins with probabilities of heads p and q, and r is the probability that flipping both coins gives the same result, then

$$G(r) = G(p) + G(q)$$

since

$$\begin{aligned}
e^{G(r)} &= |2r - 1| \\
&= |2(pq + (1-p)(1-q)) - 1| \\
&= |2pq + 2 - 2p - 2q + 2pq - 1| \\
&= |4pq - 2p - 2q + 1| \\
&= |(2p-1)(2q-1)| \\
&= e^{G(p)}\,e^{G(q)}
\end{aligned}$$

In particular, $G(0) = G(1) = 0$ and $G(1/2) = -\infty$ as one would expect; flipping a deterministic coin gets you no closer to perfection, and flipping a single perfect coin makes the parity perfectly random regardless of the other coins.
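A quick numerical check of the additivity, using two arbitrary example biases:

```python
from math import log

def G(p):
    """Coin perfection: G(p) = log|2p - 1|."""
    return log(abs(2 * p - 1))

p, q = 0.3, 0.8                  # arbitrary example biases
r = p * q + (1 - p) * (1 - q)    # probability the two flips match (even parity)
print(G(r), G(p) + G(q))         # the two values should agree
```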

If we know the exact probability of heads p, we can do better than this and produce a perfect coin in a finite expected number of flips. By information theory the best we can hope for is an expected 1/H(p) flips per output bit, where

$$H(p) = -p \lg p - (1-p)\lg(1-p)$$

is the entropy of the coin. Unfortunately determining the exact minimum appears rather complicated, so I’m not going to try to finish the analysis right now.
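For concreteness, one classic scheme that produces a perfect coin in a finite expected number of flips is von Neumann's trick: flip twice, call heads-then-tails "heads", tails-then-heads "tails", and start over otherwise. It isn't the optimal scheme alluded to above (it doesn't even need p to be known, and it uses an expected 1/(p(1-p)) flips per bit rather than 1/H(p)), but it's a useful baseline. A sketch:

```python
import random

def biased_flip(p):
    """One flip of a coin with heads probability p."""
    return random.random() < p

def fair_flip(p):
    """Von Neumann's trick: flip twice; HT -> heads, TH -> tails, else retry.
    Uses an expected 1/(p(1-p)) flips, versus the entropy bound of 1/H(p)."""
    while True:
        a, b = biased_flip(p), biased_flip(p)
        if a != b:
            return a

print(sum(fair_flip(0.7) for _ in range(100_000)) / 100_000)  # ~0.5
```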

Wow

Thursday, March 26th, 2009

http://www.ted.com/talks/eric_lewis_strikes_chords_to_rock_the_jazz_world.html

The energy and momentum of the ground

Tuesday, March 17th, 2009

Imagine that you throw a ball in the air, and it lands on the ground and stops (i.e., the collision is inelastic). What happened to the energy and momentum? The answer for energy is completely different from that for momentum, which is a trivial but interesting illustration of the differences between them. Also, we get fun with infinities.

Assume a ball of mass m and velocity v hits the ground and stops. Before it hits, it has energy $\frac{1}{2}mv^2$ and momentum $mv$. After it hits, the energy and momentum of the ball are both zero, so by conservation they must have gone elsewhere.

Momentum is easy: the momentum went into the ground. This is possible without the ground having noticeably moved, since the ground’s mass is some huge value M. The resulting velocity of the ground is the minute $mv/M \approx 0$, but this zero is canceled out by the huge mass to get the finite momentum $M \cdot mv/M = mv$. Thus, it is perfectly consistent to model the ground as a rigid body with infinite mass, zero velocity, and nonzero momentum p. Only 3 new variables are required to account for this ground momentum and make our system closed with respect to conservation of momentum.

Energy is different: it is impossible to transfer energy to a large, rigid object. The kinetic energy of the ground after the impact is

$$\frac{1}{2}M\left(\frac{mv}{M}\right)^2 = \frac{m^2v^2}{2M} \approx 0$$

In other words, $M = \infty$ and $V = 0$ give $\frac{1}{2}MV^2 = \infty \cdot 0 \cdot 0 = 0$ since the zeros outnumber the infinities. It is impossible to store energy in the ground by moving it as a whole.
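To put rough numbers on this (a made-up ball and the Earth's mass):

```python
m, v = 0.1, 5.0       # ball: 0.1 kg landing at 5 m/s (made-up numbers)
M = 5.97e24           # mass of the Earth in kg

V = m * v / M                 # ground velocity after absorbing the ball's momentum
print(M * V)                  # momentum of the ground: exactly m*v = 0.5 kg m/s
print(0.5 * M * V**2)         # kinetic energy of the ground: ~2e-26 J
print(0.5 * m * v**2)         # kinetic energy the ball had: 1.25 J
```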

What happens instead is that the energy goes into smaller scale phenomena, typically sound waves or heat. Both cases involve small amounts of matter moving extremely rapidly, which we can roughly characterize as mass 0, velocity $\infty$. The key is that we can store as much energy as we like in these tiny phenomena without ever making a dent in momentum, since here $0 \cdot \infty = 0$: holding the energy $\frac{1}{2}mv^2$ fixed, the momentum $mv = \sqrt{2m \cdot \frac{1}{2}mv^2}$ vanishes as $m \to 0$.

We can summarize this situation as follows:

  1. It is possible to hide momentum in a large, motionless object without expending any energy.
  2. It is possible to hide energy in a bunch of small objects without using any momentum.

The fact that energy hides in small places makes it harder to deal with in general; there are a lot of small places, which is why the law of conservation of energy has been modified many times to account for new terms no one had previously noticed. Momentum is simpler: if you lose some momentum, all you have to do is look around for the gigantic object (e.g., the Earth).

Multigrid is the future

Wednesday, March 11th, 2009

Currently, we (or at least I) don’t know how to do multigrid on general problems, so we’re stuck using conjugate gradient. The problem with conjugate gradient is that it is fundamentally about linear systems: given $Ax = b$, construct the Krylov subspace $b, Ab, A^2b, \ldots$ and pick out the best available linear combination. It’s all in terms of linear spaces.
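For reference, here's a bare-bones conjugate gradient sketch (a minimal implementation for a symmetric positive definite A, not tied to any particular simulation code), which makes the "everything is a linear space" flavor explicit:

```python
import numpy as np

def conjugate_gradient(A, b, iters=50, tol=1e-10):
    """Solve A x = b for symmetric positive definite A.
    Each iteration implicitly extends the Krylov subspace span{b, Ab, A^2 b, ...}
    and keeps the best solution found in that subspace."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rr = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# Tiny example: a random SPD system.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)
b = rng.standard_normal(5)
print(np.allclose(conjugate_gradient(A, b), np.linalg.solve(A, b)))
```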

Interesting human scale physics is mostly not about linear spaces: it’s about half-linear subspaces, or linear spaces with inequality constraints. As we start cramming more complicated collision handling into algorithms, these inequalities play a larger and larger role in the behavior, and linearizing everything into a CG solve hurts.

Enter multigrid. Yes, it’s theoretically faster, but the more important thing is that the intuition for why multigrid works is less dependent on linearity: start with a fine grid, smooth it a bit, and then coarsen to handle the large scale cheaply. Why shouldn’t this work on a system that’s only half linear? There are probably a thousand reasons, but happily I haven’t read enough multigrid theory yet to know what they are.

So, I predict that eventually someone will find a reasonably flexible approach to algebraic multigrid that generalizes to LCPs, and we’ll be able to advance beyond the tyranny of linearization.

Counting votes

Wednesday, March 4th, 2009

A few days ago I was in another discussion where someone raised the question of why presidential elections are so close. In the interests of avoiding these discussions in future, here’s a histogram of the popular vote margin over all presidential elections (data from [1]):

Conclusion: presidential elections aren’t very close, and we should stop looking for explanations of why they are.

Plates

Sunday, March 1st, 2009

Here’s a combinatorial problem I found recently: given a 100 story building and two identical plates, what is the worst case number of drops required to determine the lowest floor from which a plate breaks when dropped (if a plate is dropped and does not break, it is undamaged)? The answer is 14, which I’ll state without proof.
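A quick dynamic programming check of that answer: with d drops and 2 plates you can cover at most d(d+1)/2 stories, and 14 is the least d with d(d+1)/2 ≥ 100. A sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def floors(drops, plates):
    """Maximum number of stories above the ground that can be fully analyzed
    with the given budget of drops and plates."""
    if drops == 0 or plates == 0:
        return 0
    # Drop from the best floor: if the plate breaks we recurse below,
    # if it survives we recurse above, plus the floor we just tested.
    return floors(drops - 1, plates - 1) + floors(drops - 1, plates) + 1

def min_drops(stories, plates):
    d = 0
    while floors(d, plates) < stories:
        d += 1
    return d

print(min_drops(100, 2))  # 14
```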

In the interest of completely useless generalization, we can ask what happens if we add more plates. Specifically, what is the cost C(s) to analyze an s story building if we can buy as many plates as we like? Say each plate costs as much as one drop.

First we need to figure out how many drops it takes for a given number of plates, or (roughly) equivalently the number of floors we can analyze for a given number of plates and drops. If we count the zeroth story as a floor to make the formulas slightly prettier, the answer is that n drops and k plates can analyze a building with S(n,k) floors, where

$$S(n,k) = \binom{n}{0} + \binom{n}{1} + \cdots + \binom{n}{k} = \sum_{p \le k} \binom{n}{p}$$

(This can be proved by induction on the number of plates.) Thus, S(n,k) is the number of subsets of [1..n] with at most k elements, which turns out to have no closed form expression (Graham et al.). Therefore, we’ll have to settle for asymptotics. The best estimates I could find are given in Worsch 1994. Since the details vary depending on the relative growth of n and k, we first need a rough idea of the growth rate of n/k.
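A quick check of this formula against the obvious recurrence (the recurrence counts only the stories above the ground, so it should come out to S(n,k) − 1):

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def floors(n, k):
    """Stories above the ground analyzable with n drops and k plates."""
    if n == 0 or k == 0:
        return 0
    return floors(n - 1, k - 1) + floors(n - 1, k) + 1

def S(n, k):
    """S(n,k) = C(n,0) + C(n,1) + ... + C(n,k)."""
    return sum(comb(n, p) for p in range(k + 1))

# Counting the zeroth story as a floor, the two should agree.
print(all(floors(n, k) + 1 == S(n, k) for n in range(12) for k in range(6)))
```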

For a building of s floors, brute force binary search requires $n = k = \lg s + 1$, so $C(s) < 2 + 2\lg s$ and $n, k \le n + k = C(s) < 2 + 2\lg s$. Since at least $\lg s$ tests are required regardless of the number of plates, we have $\lg s \le n < 2 + 2\lg s$. Lacking an obvious lower bound for k, I’ll assume for the moment that $k = \Theta(\lg s) = \Theta(n)$. Numerical evidence indicates that n/k is in fact between 2 and 3.

In the case where n/k is a constant of at least 2, Worsch 1994 gives

$$S(n,k) = (C(x) + O(1))^n \approx C(x)^n$$

where

$$C(n/k) = C(x) = x^{1/x}\left(\frac{x}{x-1}\right)^{\frac{x-1}{x}}$$

We can now use this approximation to find the optimal ratio x by optimizing n+k subject to S = s. For convenience, let $D(x) = C'(x)/C(x)$ be the logarithmic derivative of C. Applying Lagrange multipliers, we have

$$\begin{aligned}
E &= n + k - \lambda\left(C(n/k)^n - s\right) \\
E_n &= 1 - \lambda\left(n C^{n-1} C D \tfrac{1}{k} + C^n \log C\right) = 1 - \lambda C^n (Dx + \log C) \\
E_k &= 1 + \lambda n C^{n-1} C D \tfrac{n}{k^2} = 1 + \lambda C^n D x^2
\end{aligned}$$

Equating derivatives to zero and solving gives $\log C + Dx + Dx^2 = 0$. Filling in the details,

$$\begin{aligned}
\log C &= \frac{1}{x}\log x + \frac{x-1}{x}\log\frac{x}{x-1} = \log x + \frac{1}{x}\log(x-1) - \log(x-1) \\
D &= \frac{C'}{C} = \frac{1}{x} - \frac{1}{x^2}\log(x-1) + \frac{1}{x(x-1)} - \frac{1}{x-1} = -\frac{1}{x^2}\log(x-1) \\
\log C + Dx + Dx^2 &= \log x + \frac{1}{x}\log(x-1) - \log(x-1) - \frac{1}{x}\log(x-1) - \log(x-1) \\
&= \log x - 2\log(x-1) = 0 \\
\log x &= 2\log(x-1) \\
x &= (x-1)^2 \\
0 &= x^2 - 3x + 1 \\
x &= \frac{3 + \sqrt{5}}{2}
\end{aligned}$$

Thus, $n \approx 2.62k$, $S(n, n/2.62) \approx 1.94^n$, and $C(s) \approx (1 + 1/x)\log s/\log C(x) \approx 1.44 \lg s$.
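A quick numeric check of the constants:

```python
from math import sqrt, log

x = (3 + sqrt(5)) / 2                            # optimal ratio n/k, about 2.618
C = x**(1 / x) * (x / (x - 1))**((x - 1) / x)    # growth base C(x), about 1.944
print(x, C)

# Cost coefficient: C(s) ~ (1 + 1/x) * log(s) / log(C(x)) ~ 1.44 * lg(s)
print((1 + 1 / x) / (log(C) / log(2)))           # about 1.44
```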

Lazy vs. strict vs. lenient?

Sunday, March 1st, 2009

Here’s an interesting short paper by Wadler:

To paraphrase, the simplest model of lazy evaluation is given by Church’s original $\lambda$ calculus, and the simplest model of strict evaluation is given by Plotkin’s $\lambda_v$ calculus. $\lambda$ is problematic because it gives a poor notion of the cost of programs, and $\lambda_v$ is problematic because it is not complete. These flaws can be removed at the cost of slightly more complicated models, and the resulting models turn out to be extremely similar. Specifically, the only difference between them is the following law, let x = M in N -> N (where x does not appear free in N), which holds with laziness and fails for strictness. In other words, the difference between strict and lazy languages is exactly what one would expect: in a lazy model unused terms can be ignored, and in a strict model they must be evaluated to make sure they don’t loop forever.
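To make the difference concrete in a strict language, here's a tiny Python sketch that simulates the lazy side with thunks: the strict version of let evaluates the unused binding and diverges, while the lazy version never forces it.

```python
def diverge():
    """A term M that loops forever."""
    while True:
        pass

def strict_let(m, body):
    """Strict 'let x = M in N': evaluate M first, even if the body ignores it."""
    x = m()              # this call never returns when m diverges
    return body(x)

def lazy_let(m, body):
    """Lazy 'let x = M in N': pass M as an unevaluated thunk."""
    return body(m)       # the thunk is only run if the body forces it

print(lazy_let(diverge, lambda x: 42))    # prints 42: let x = M in N -> N
# strict_let(diverge, lambda x: 42)       # would hang forever
```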

This is a fairly obvious result: all it says is that lazy languages are the same as strict ones except that it’s okay to have an infinite loop if the result is unused. It’s still an interesting point to emphasize though, since it highlights the importance of trying to come up with alternate evaluation schemes that combine the advantages of lazy and strict (e.g., Tim Sweeney’s discussion of lenient evaluation).