The Verbosity Difference

I like conciseness. Syntactic sugar increases the amount of code I can fit on a single screen, which increases the amount of code I can read without scrolling. Eye saccades are a hell of a lot faster than keyboard scrolling, so not having to scroll is a good thing. However, I recently realized that simple absolute size is actually the wrong metric with which to judge language verbosity, or at least not the most important one.

Haskell vs. C

Consciousness vs. efficiency

Imagine a computer stored in a box with a single small hole connecting it to the outside world. We are able to run programs inside the box and receive the results through the hole. In fact, in a sense results are all we can see; if the program makes efficient use of the hardware inside, the size of the hole will prevent us from knowing exactly what went on inside the box (unless we simulate the workings of the box somewhere else, but then the box is useless).

Two Quotes

These are very refreshing. From Hudak et al., “A History of Haskell” [1]:

“The fact that Haskell has, thus far, managed the tension between these two strands of development [as a mature language, and as a laboratory in which to explore advanced language design ideas] is perhaps due to an accidental virtue: Haskell has not become too successful. The trouble with runaway success, such as that of Java, is that you get too many users, and the language becomes bogged down in standards, user groups, and legacy issues.”

Sorting in linear time

If you assume a fixed word length, integers can be sorted in linear time using radix sort. This paper gives a stunningly elegant generalization of that result to essentially arbitrary data structures, including lists, sets, bags, etc.: Henglein, F. (2008). “Generic Discrimination: Sorting and Partitioning Unshared Data in Linear Time”. Wow. The most interesting aspect of this to me is that it provides some extra hope for the practicality of functional containers. Right now everyone is using hash tables, which are fast but extremely lacking in functionality (e.g., they cannot be traversed in order).
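To get a feel for the idea, here is a minimal sketch of a discriminator in Haskell: a function that groups keyed values by key without ever comparing keys. This is my own toy illustration, assuming keys in [0, n), not Henglein's library code; the point is that discriminators built this way compose, which is the direction the paper generalizes in.

import Data.Array (accumArray, elems)

-- Bucket discriminator for integer keys in [0, n): groups values by
-- key in one linear pass, with no comparisons. Each bucket is reversed
-- because accumArray prepends, which restores stability.
discNat :: Int -> [(Int, v)] -> [[v]]
discNat n kvs =
  filter (not . null) . map reverse . elems $
    accumArray (flip (:)) [] (0, n - 1) kvs

-- Discriminators compose: to discriminate pairs, discriminate on the
-- first component, then refine each group by the second.
discPair :: Int -> [((Int, Int), v)] -> [[v]]
discPair n kvs =
  concatMap (discNat n) (discNat n [ (a, (b, v)) | ((a, b), v) <- kvs ])

-- Sorting falls out by keying each element with itself.
dsort :: Int -> [Int] -> [Int]
dsort n xs = concat (discNat n [ (x, x) | x <- xs ])

For example, dsort 10 [3,1,4,1,5] yields [1,1,3,4,5] without a single comparison; the paper extends the same compositional trick to sums, recursive types, and on to sets and bags.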

Lazy vs. strict vs. lenient?

Here’s an interesting short paper by Wadler: “Lazy vs. strict”, ACM Computing Surveys, June 1996. To paraphrase, the simplest model of lazy evaluation is given by Church’s original $\lambda$ calculus, and the simplest model of strict evaluation is given by Plotkin’s $\lambda_v$ calculus. $\lambda$ is problematic because it gives a poor notion of the cost of programs, and $\lambda_v$ is problematic because it is not complete. These flaws can be removed at the cost of slightly more complicated models, and the resulting models turn out to be extremely similar.
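As a concrete reminder of how the two evaluation orders differ observably, here is a toy Haskell example (mine, not from the paper): lazy evaluation finds an answer that strict evaluation would never reach.

-- Lazily, an argument that is never used is never evaluated, so this
-- program prints 1. Under strict (call-by-value) semantics the
-- argument would be evaluated before the call and the program would
-- diverge; uncommenting the `seq` line simulates that order.
loop :: Int
loop = loop

constOne :: Int -> Int
constOne _ = 1

main :: IO ()
main = do
  print (constOne loop)                 -- lazy: prints 1
  -- print (loop `seq` constOne loop)   -- strict order: diverges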

Strict aliasing and std::vector

Something just occurred to me about the gcc implementation of std::vector in C++. Internally, an instance of vector needs to store (1) a pointer to the start of the memory, (2) the current vector size, and (3) the current buffer size. However, (2) and (3) can be represented either as integers relative to the start pointer or as absolute pointers past the end of the active and buffer spaces. Specifically, the two representations are (start, size, capacity), storing the last two as integers, and (start, finish, end_of_storage), storing all three as pointers; gcc’s libstdc++ uses the three-pointer form.

User Error

Gah. I’m trying to write some straightforward combinatorial code in Haskell, and I finally got to the point where I had to start using nontrivial algorithms to make it acceptably fast. When I run those algorithms for the first time, I get the following gem:

*Main> drops 100 2
*** Exception: divide by zero

The “drops” function is now 15 lines of mutually recursive Haskell, and the interactive interpreter conveniently informs me that there was a division by zero.
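For anyone who hasn’t hit this, the frustrating part is how little context the exception carries. A hypothetical two-line repro (not the actual drops code) produces exactly the same message:

-- Hypothetical repro, not the real drops function: GHCi prints only
-- "*** Exception: divide by zero", with no indication of which call
-- site (here, the `div` inside mean applied to an empty list) failed.
mean :: [Int] -> Int
mean xs = sum xs `div` length xs

main :: IO ()
main = print (mean [])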