I’ve used Thomas Wang’s integer hash functions for years for various purposes. Using techniques invented by Bob Jenkins for general hashing (e.g., hashes of strings), Wang derived several hashes specialized for fixed-size integer input. His 64-bit version is:
#include <stdint.h>

uint64_t hash(uint64_t key) {
    key = (~key) + (key << 21); // key = (key << 21) - key - 1;
    key = key ^ (key >> 24);
    key = (key + (key << 3)) + (key << 8); // key * 265
    key = key ^ (key >> 14);
    key = (key + (key << 2)) + (key << 4); // key * 21
    key = key ^ (key >> 28);
    key = key + (key << 31);
    return key;
}
Key properties include avalanche (changing any input bit changes about half of the output bits) and invertibility. Recently I wanted to make explicit use of the inverse, in order to verify that zero would never arise as the hash of a given set of inputs. This property would allow me to initialize the hash table in question (which takes up several gigabytes) with zeros and avoid an explicit occupied bit on each entry. Thus, I needed inverse_hash(0).
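Since every step of hash is invertible, the whole function can be undone step by step: each xor-shift is inverted by a few repeated shifts, and the multiplications by 265 and 21 are inverted by multiplying with their inverses modulo $2^{64}$. Here is a sketch of the resulting inverse (the constants are the modular inverses of 21 and 265; treat this as lightly checked rather than battle-tested):

uint64_t inverse_hash(uint64_t key) {
    uint64_t tmp;

    // Invert key = key + (key << 31)
    tmp = key - (key << 31);
    key = key - (tmp << 31);

    // Invert key = key ^ (key >> 28)
    tmp = key ^ (key >> 28);
    key = key ^ (tmp >> 28);

    // Invert key *= 21
    key *= 14933078535860113213ull;

    // Invert key = key ^ (key >> 14)
    tmp = key ^ (key >> 14);
    tmp = key ^ (tmp >> 14);
    tmp = key ^ (tmp >> 14);
    key = key ^ (tmp >> 14);

    // Invert key *= 265
    key *= 15244667743933553977ull;

    // Invert key = key ^ (key >> 24)
    tmp = key ^ (key >> 24);
    key = key ^ (tmp >> 24);

    // Invert key = (~key) + (key << 21)
    tmp = ~key;
    tmp = ~(key - (tmp << 21));
    tmp = ~(key - (tmp << 21));
    key = ~(key - (tmp << 21));

    return key;
}

With this, inverse_hash(0) is the unique key that hashes to zero, which can then be checked against the set of inputs in question.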
Consider the following simplified version of Texas holdem, with two players Alice and Bob (a sketch of the resulting payoffs follows the rules):
Alice and Bob are each dealt two private cards.
Alice posts a small blind of 1, Bob posts a big blind of 2.
Alice either folds, calls, or raises by any amount $\ge 2$.
Bob either calls or folds.
Five more shared cards are dealt, and the winner is determined as usual.
Both Alice and Bob have infinite stack sizes, so only expected value matters.
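To make the payoff structure concrete, here is a small sketch of Alice’s net result for each line of play, measured in blinds. Two assumptions not spelled out above: a raise of $r$ brings Alice’s total contribution to $2 + r$, and if Alice just calls, Bob has nothing to call and simply checks to showdown.

// Alice's expected net winnings for each action, given showdown probabilities
// p_win and p_tie for her hand against Bob's (assumptions as noted above).
double alice_fold(void) { return -1; }                  // forfeits her small blind
double alice_call(double p_win, double p_tie) {         // both players have 2 in the pot
    double p_lose = 1 - p_win - p_tie;
    return 2 * p_win - 2 * p_lose;
}
double alice_raise_called(double r, double p_win, double p_tie) {
    double p_lose = 1 - p_win - p_tie;                   // both players have 2 + r in the pot
    return (2 + r) * p_win - (2 + r) * p_lose;
}
double alice_raise_folded(void) { return 2; }            // Bob forfeits his big blind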
Brief summary in case you just want the data: here’s a table of exact win/loss/tie probabilities for every pair of two-card preflop hands in Texas holdem:
Eugene d’Eon and I have been playing around with computing Nash equilibria for extremely simplified versions of heads-up Texas holdem poker. For those who don’t know the details, in Texas holdem each player is dealt two cards face down, followed by five cards face up which are shared between all players (with betting at various points in between these cards). More details are available on Wikipedia.
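For reference, here is a hypothetical sketch of how one entry of such a table could be computed exactly: fix both players’ hole cards and enumerate all C(48,5) = 1,712,304 possible boards. Cards are assumed to be encoded as integers 0 to 51, and evaluate7 is a stand-in for any standard 7-card hand evaluator.

// Hypothetical sketch: exact win/loss/tie counts for one preflop matchup.
// evaluate7 is a placeholder for any 7-card hand evaluator (higher = stronger).
extern int evaluate7(const int cards[7]);

void preflop_counts(const int alice[2], const int bob[2],
                    long *win, long *lose, long *tie) {
    int used[52] = {0};
    used[alice[0]] = used[alice[1]] = used[bob[0]] = used[bob[1]] = 1;
    *win = *lose = *tie = 0;
    // Enumerate all C(48,5) = 1,712,304 boards from the remaining cards.
    for (int a = 0; a < 52; a++) if (!used[a])
    for (int b = a + 1; b < 52; b++) if (!used[b])
    for (int c = b + 1; c < 52; c++) if (!used[c])
    for (int d = c + 1; d < 52; d++) if (!used[d])
    for (int e = d + 1; e < 52; e++) if (!used[e]) {
        int ha[7] = {alice[0], alice[1], a, b, c, d, e};
        int hb[7] = {bob[0],   bob[1],   a, b, c, d, e};
        int sa = evaluate7(ha), sb = evaluate7(hb);
        if (sa > sb) ++*win; else if (sa < sb) ++*lose; else ++*tie;
    }
}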
I like conciseness. Syntactic sugar increases the amount of code I can fit on a single screen, which increases the amount of code I can read without scrolling. Eye saccades are a hell of a lot faster than keyboard scrolling, so not having to scroll is a good thing.
However, I recently realized that simple absolute size is actually the wrong metric with which to judge language verbosity, or at least not the most important one.
Imagine a computer stored in a box with a single small hole connecting it to the outside world. We are able to run programs inside the box and receive the results through the hole. In fact, in a sense results are all we can see; if the program makes efficient use of the hardware inside, the size of the hole will prevent us from knowing exactly what went on inside the box (unless we simulate the workings of the box somewhere else, but then the box is useless).
When people discuss the future of computers and software, a common worry is that it will become increasingly difficult to produce correct software due to the ongoing surge in complexity. A common joke is to imagine what cars would be like if they were as buggy as software. I believe these fears are groundless, and that they arise from a misunderstanding of the reason why current software is full of bugs.
I just spent a day tracking down a nasty MPI bug in some fluids code, and the cause is amusing enough to record. A process was interleaving message receives from two of its neighbors, one above and one to the right. Each message contained a band of data one grid cell thick. Both messages were received into the same array, so the two regions intersected at the corner of the domain.
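For concreteness, here is a hypothetical sketch of that receive pattern (grid sizes, neighbor ranks, and tags are all made up): a one-row band from the neighbor above and a one-column band from the neighbor to the right, both posted into the same array and sharing the single corner cell.

#include <mpi.h>

#define NX 64  // assumed local grid dimensions
#define NY 64

double grid[NY][NX];

void post_ghost_receives(int above, int right, MPI_Request req[2]) {
    // Row band from the neighbor above: NX contiguous doubles along the top edge.
    MPI_Irecv(&grid[NY - 1][0], NX, MPI_DOUBLE, above, 0, MPI_COMM_WORLD, &req[0]);

    // Column band from the neighbor to the right: NY doubles, strided by NX.
    MPI_Datatype column;
    MPI_Type_vector(NY, 1, NX, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);
    MPI_Irecv(&grid[0][NX - 1], 1, column, right, 1, MPI_COMM_WORLD, &req[1]);
    MPI_Type_free(&column);  // deferred until the pending receive completes

    // Both receive regions include grid[NY-1][NX-1], the corner cell.
}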
If you assume fixed word length, integers can be sorted in linear time using radix sort. This paper gives a stunningly elegant generalization of this to essentially arbitrary data structures, including lists, sets, bags, etc.:
Wow. The most interesting aspect of this to me is that it provides some extra hope about the practicality of functional containers. Right now everyone is using hash tables, which are fast but extremely lacking in functionality (e.g., they’re not sorted). In many languages they’re also nondeterministic across multiple program runs for security reasons, which is horrible for debugging. Maybe some future abstraction along the lines of this paper (generalized tries?) will make ordering-based containers competitive with hash tables.
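As a reference point for the fixed-word-length case mentioned above, here is a minimal sketch of byte-wise LSD radix sort for 32-bit keys: four counting-sort passes, each O(n), so O(n) overall.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

void radix_sort_u32(uint32_t *a, size_t n) {
    uint32_t *tmp = malloc(n * sizeof *tmp);
    if (!tmp) return;  // allocation failure: leave the input untouched
    for (int shift = 0; shift < 32; shift += 8) {
        size_t count[257] = {0};
        for (size_t i = 0; i < n; i++)       // histogram of the current byte
            count[((a[i] >> shift) & 0xff) + 1]++;
        for (int b = 0; b < 256; b++)        // prefix sums give bucket start offsets
            count[b + 1] += count[b];
        for (size_t i = 0; i < n; i++)       // stable scatter into tmp
            tmp[count[(a[i] >> shift) & 0xff]++] = a[i];
        memcpy(a, tmp, n * sizeof *a);
    }
    free(tmp);
}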
Currently, we (or at least I) don’t know how to do multigrid on general problems, so we’re stuck using conjugate gradient. The problem with conjugate gradient is that it is fundamentally about linear systems: given $Ax = b$, construct the Krylov subspace $b, Ab, A^2 b, \ldots$ and pick out the best available linear combination. It’s all in terms of linear spaces.
Interesting human scale physics is mostly not about linear spaces: it’s about half-linear subspaces, or linear spaces with inequality constraints. As we start cramming more complicated collision handling into algorithms, these inequalities play a larger and larger role in the behavior, and linearizing everything into a CG solve hurts.
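For reference, here is a bare-bones sketch of the plain CG iteration described above, for a dense symmetric positive definite matrix (dense storage, no preconditioning, and a zero initial guess are simplifications). Starting from $x_0 = 0$, every iterate lies in the Krylov subspace spanned by $b, Ab, A^2 b, \ldots$.

#include <stdlib.h>
#include <math.h>

static double dot(const double *x, const double *y, size_t n) {
    double s = 0;
    for (size_t i = 0; i < n; i++) s += x[i] * y[i];
    return s;
}

// Solve A x = b for dense SPD A (row-major, n x n) by conjugate gradient.
void cg(size_t n, const double *A, const double *b, double *x, int max_iter, double tol) {
    double *r = malloc(n * sizeof *r), *p = malloc(n * sizeof *p), *Ap = malloc(n * sizeof *Ap);
    if (!r || !p || !Ap) { free(r); free(p); free(Ap); return; }
    for (size_t i = 0; i < n; i++) { x[i] = 0; r[i] = b[i]; p[i] = b[i]; }
    double rr = dot(r, r, n);
    for (int k = 0; k < max_iter && sqrt(rr) > tol; k++) {
        for (size_t i = 0; i < n; i++) {                      // Ap = A * p
            Ap[i] = 0;
            for (size_t j = 0; j < n; j++) Ap[i] += A[i * n + j] * p[j];
        }
        double alpha = rr / dot(p, Ap, n);                    // step length
        for (size_t i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
        double rr_new = dot(r, r, n);
        double beta = rr_new / rr;                            // next search direction
        for (size_t i = 0; i < n; i++) p[i] = r[i] + beta * p[i];
        rr = rr_new;
    }
    free(r); free(p); free(Ap);
}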