Consider the following simplified version of Texas holdem, with two players Alice and Bob:
1. Alice and Bob are each dealt two private cards.
2. Alice posts a small blind of 1, and Bob posts a big blind of 2.
3. Alice either folds, calls, or raises by any amount $\ge 2$.
4. Bob either calls or folds.
5. Five more shared cards are dealt, and the winner is determined as usual.
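As a rough illustration (my own sketch, not from the original post), here is the payoff structure of this game in Python, reading "raises by $r$" as Alice betting a total of $2 + r$, and treating Alice's showdown win/tie probabilities as hypothetical inputs supplied from elsewhere:

```python
def alice_ev(action, bob_calls, p_win, p_tie):
    """Alice's expected net winnings for one terminal line of play.

    action    -- 'fold', 'call', or a raise amount r >= 2 (total bet 2 + r)
    bob_calls -- whether Bob calls when facing Alice's call or raise
    p_win, p_tie -- Alice's showdown win/tie probabilities (hypothetical inputs)
    """
    if action == 'fold':
        return -1.0                       # Alice forfeits her small blind
    bet = 2.0 if action == 'call' else 2.0 + action
    if not bob_calls:
        return 2.0                        # Bob folds and Alice wins his big blind
    p_lose = 1.0 - p_win - p_tie
    return (p_win - p_lose) * bet         # showdown for a pot of 2 * bet; ties push

# e.g. Alice raises by 10, Bob calls, and Alice wins the showdown 60% of the time:
print(alice_ev(10, True, 0.6, 0.0))       # (0.6 - 0.4) * 12 = 2.4
```

An equilibrium for this game is then a choice of action distribution per hand for Alice and a calling rule per hand for Bob such that neither side gains by deviating.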
Brief summary in case you just want the data: here’s a table of exact win/loss/tie probabilities for every pair of two-card preflop hands in Texas holdem:
Eugene d’Eon and I have been playing around with computing Nash equilibria for extremely simplified versions of heads-up Texas holdem poker. For those who don’t know the details: in Texas holdem each player is dealt two cards face down, followed by five shared cards dealt face up (with betting at various points in between).
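For a rough sense of what "exact" means here (my own back-of-the-envelope, not from the post): once both two-card hands are fixed, 48 cards remain, so every possible board can be enumerated directly:

```python
from math import comb

boards_per_matchup = comb(48, 5)              # boards once both two-card hands are fixed
matchups = comb(52, 2) * comb(50, 2) // 2     # unordered pairs of disjoint two-card hands
print(f"{boards_per_matchup:,}")              # 1,712,304
print(f"{matchups:,}")                        # 812,175
print(f"{boards_per_matchup * matchups:.3e}") # ~1.4e12 showdowns before exploiting suit symmetry
```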
I like conciseness. Syntactic sugar increases the amount of code I can fit on a single screen, which increases the amount of code I can read without scrolling. Eye saccades are a hell of a lot faster than keyboard scrolling, so not having to scroll is a good thing.
However, I recently realized that absolute size is the wrong metric by which to judge language verbosity, or at least not the most important one.
Imagine a computer stored in a box with a single small hole connecting it to the outside world. We are able to run programs inside the box and receive the results through the hole. In fact, in a sense results are all we can see; if the program makes efficient use of the hardware inside, the size of the hole will prevent us from knowing exactly what went on inside the box (unless we simulate the workings of the box somewhere else, but then the box is useless).
When people discuss the future of computers and software, a common worry is that it will become increasingly difficult to produce correct software due to the ongoing surge in complexity. A common joke is to imagine what cars would be like if they were as buggy as software. I believe these fears are groundless, and that they arise from a misunderstanding of the reason why current software is full of bugs.
I just spent a day tracking down a nasty MPI bug in some fluids code, and the cause is amusing enough to record. A process was interleaving message receives from two of its neighbors, one above and one to the right. Each message contained a band of data one grid cell thick. Both messages were sending into the same array, so the two regions intersected at the corner of the domain.
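Here is a minimal sketch of the geometry (not the original fluids code, and the indices are purely illustrative): two ghost bands, each one cell thick, written into the same 2D array necessarily share the corner cell, so the value there depends on which message lands last:

```python
import numpy as np

n = 4
grid = np.zeros((n, n))

# the one-cell-thick regions the two messages write into (illustrative indices)
top_row = (0, slice(None))        # band from the neighbor above
right_col = (slice(None), n - 1)  # band from the neighbor to the right

# simulate the two receives arriving in some order
grid[top_row] = 1.0               # message from above
grid[right_col] = 2.0             # message from the right

# grid[0, n-1] lies in both bands, so its value depends on arrival order
print(grid[0, n - 1])             # 2.0 here; 1.0 if the two writes are swapped
```

With non-blocking receives posted into the same array, that arrival order is whatever the MPI runtime happens to deliver.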
If you assume fixed word length, integers can be sorted in linear time using radix sort. The following paper gives a stunningly elegant generalization to essentially arbitrary data structures, including lists, sets, bags, etc.:
Henglein, F. (2008). “Generic Discrimination: Sorting and Partitioning Unshared Data in Linear Time”. In Proceedings of ICFP 2008.
Wow. The most interesting aspect of this to me is that it provides some extra hope about the practicality of functional containers. Right now everyone is using hash tables, which are fast but extremely lacking in functionality (e.
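To get a feel for the idea, here is a rough Python sketch of the core trick (my own illustration, not Henglein’s Haskell code, and the names `disc_nat` and `disc_pair` are mine): a discriminator takes (key, value) pairs and partitions the values into groups of equal keys without hashing or comparisons, and discriminators compose over pairs (and similarly over sums, lists, and so on):

```python
def disc_nat(pairs, bound=256):
    """Discriminate small-integer keys with one bucketing pass (linear time)."""
    buckets = [None] * bound
    order = []                            # keys in order of first appearance
    for k, v in pairs:
        if buckets[k] is None:
            buckets[k] = []
            order.append(k)
        buckets[k].append(v)
    return [buckets[k] for k in order]

def disc_pair(disc_a, disc_b):
    """Discriminate pair keys: group on the first component, then refine each
    group on the second component."""
    def disc(pairs):
        groups = disc_a([(a, (b, v)) for (a, b), v in pairs])
        return [g for group in groups for g in disc_b(group)]
    return disc

# values grouped by equal (byte, byte) keys, with no hashing or ordering needed
disc_byte_pair = disc_pair(disc_nat, disc_nat)
print(disc_byte_pair([((1, 2), 'a'), ((1, 3), 'b'), ((1, 2), 'c')]))
# [['a', 'c'], ['b']]
```

Sorting falls out by making the base discriminator emit its buckets in key order rather than first-appearance order, and the paper extends the same scheme to recursive types.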
Currently, we (or at least I) don’t know how to do multigrid on general problems, so we’re stuck using conjugate gradient. The problem with conjugate gradient is that it is fundamentally about linear systems: given $Ax = b$, construct the Krylov subspace $b, Ab, A^2 b, \ldots$ and pick out the best available linear combination. It’s all in terms of linear spaces.
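For reference, here is a minimal unpreconditioned CG sketch (not from the post): the $k$-th iterate is the best approximation, in the $A$-norm, within the Krylov subspace built from $b$:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Minimal CG for symmetric positive definite A."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b.copy()                  # residual b - A x (x starts at zero)
    p = r.copy()                  # search direction
    rr = r @ r
    for _ in range(max_iter):
        if np.sqrt(rr) < tol:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)     # step length along p
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p # new A-conjugate direction
        rr = rr_new
    return x

# usage on a small SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # ~ [0.0909, 0.6364]
```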
Interesting human-scale physics is mostly not about linear spaces: it’s about half-linear subspaces, or linear spaces with inequality constraints.
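As a tiny illustration of the difference (my own toy example, not from the post), adding even a single bound constraint like $x \ge 0$ turns the linear solve into the constrained minimization of $\tfrac12 x^T A x - b^T x$, which the Krylov machinery alone no longer handles; a crude projected-gradient sketch:

```python
import numpy as np

def projected_gradient(A, b, steps=500):
    """Minimize 0.5 x^T A x - b^T x subject to x >= 0 (A symmetric positive definite)
    by taking gradient steps and projecting back onto the constraint set."""
    step = 1.0 / np.linalg.norm(A, 2)     # safe step size: 1 over the largest eigenvalue
    x = np.zeros(len(b))
    for _ in range(steps):
        x = np.maximum(0.0, x - step * (A @ x - b))   # descend, then clip to x >= 0
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, -2.0])                 # the unconstrained solution has a negative component
print(projected_gradient(A, b))           # ~ [0.25, 0.0]: the constraint is active
```

Real constrained solvers are far smarter than this, but the inequality is exactly the part that the pure linear-space picture doesn’t see.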