I like conciseness. Syntactic sugar increases the amount of code I can fit on a single screen, which increases the amount of code I can read without scrolling. Eye saccades are a hell of a lot faster than keyboard scrolling, so not having to scroll is a good thing.
However, I recently realized that simple absolute size is actually the wrong metric with which to judge language verbosity, or at least not the most important one.
Consider the evolution of a chunk of C++ code. We start with a single idea, and encode it as a single class to encapsulate the structure. We add a class declaration, some constructors and a destructor, perhaps even a private operator= to disallow copying. Fine. After this boilerplate, we add various methods to the class to encode the actual behavior. The class also develops a few fields, because fields let us easily share data between the related methods.
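To make this concrete, here's roughly the shape of that first class. The names are invented for illustration; the point is the boilerplate around them, written in the pre-C++11 style the rest of this post assumes.

```cpp
#include <map>
#include <string>

// One idea, one class. All names here are hypothetical.
class ConfigParser {
public:
    explicit ConfigParser(const std::string& path);
    ~ConfigParser();

    // The actual behavior.
    bool parse();
    std::string lookup(const std::string& key) const;

private:
    // Pre-C++11 idiom for disallowing copies: declare, never define.
    ConfigParser(const ConfigParser&);
    ConfigParser& operator=(const ConfigParser&);

    // Fields that let the related methods share data.
    std::string path_;
    std::map<std::string, std::string> values_;
};
```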
Next we have another idea. Conceptually the new idea is distinct from the original one, so we should really make a new class. However, we’ve just gone through all the work of setting up a C++ class, with its constructors, destructor, private operator=, access specifiers, etc., and it’d be a shame to have to redo all that effort. Maybe it won’t be so bad if we just add the new idea into the same class…
Boom. Now we have two ideas merged into the same class. You can’t pass around one idea without passing around the other. You can’t rewrite one without analyzing dependency chains to make sure the class fields don’t overlap between concepts. After a while, we start to forget that the ideas were ever really distinct. That’s right: the language has actually made us stupider.
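Continuing the hypothetical example from above, the merged class ends up looking something like this: the config-parsing idea and, say, a logging idea now share one declaration, and the fields of the two concepts sit side by side.

```cpp
#include <map>
#include <string>

// Two ideas in one class (hypothetical). Neither idea can travel alone now.
class ConfigParser {
public:
    ConfigParser(const std::string& path);
    ~ConfigParser();

    // Idea 1: parsing.
    bool parse();
    std::string lookup(const std::string& key) const;

    // Idea 2: logging, bolted on because the boilerplate already existed.
    void logWarning(const std::string& message);

private:
    ConfigParser(const ConfigParser&);            // still non-copyable
    ConfigParser& operator=(const ConfigParser&);

    std::string path_;                            // used by parsing
    std::map<std::string, std::string> values_;   // used by parsing
    std::string logPath_;                         // used by logging
    int warningCount_;                            // used by logging
};
```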
You can’t blame the programmer here. We were only maximizing our local utility. We might be smart, but we’re not omniscient, and we can’t always be bothered to follow style manuals. The problem also can’t be ascribed to the overall verbosity of C++; it’s quite possible that the code would be larger if it was written in C, since C++ class syntax, fields, etc. really can make for smaller (source) code.
The problem is that the marginal cost of adding a new class is greater than the marginal cost of extending an existing class. If it were easier to make a new class, we would have done so. But we would also have made a new class if it were harder to add methods to an existing class, because then the trade-off would have been different. In other words, what matters is the difference in verbosity between the “right way” and the “wrong way”, not the absolute level of verbosity.
The conclusion, then, is that any abstraction with a large startup cost but a low marginal cost is bad, because people end up merging distinct ideas into a single instance of it in disgusting ways. Examples include interfaces (adding one more method is easier than splitting one interface into two), Haskell type classes (see fail), and monads (once you’ve converted your code into monadic form, making the monad do something else is easy).
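Here’s a hypothetical C++ sketch of the interface case; the names are made up, but the pattern should look familiar: one more pure virtual is cheaper than splitting the interface, so every implementer pays for it.

```cpp
#include <string>

// Hypothetical interface that grew instead of splitting.
class Storage {
public:
    virtual ~Storage() {}
    virtual void write(const std::string& data) = 0;
    virtual std::string read() = 0;
    // Later addition: only some backends can compact, but one more pure
    // virtual here was cheaper than a separate Compactable interface.
    virtual void compact() = 0;
};

// Backends that can't compact are now forced to stub it out.
class InMemoryStorage : public Storage {
public:
    void write(const std::string& data) { buffer_ += data; }
    std::string read() { return buffer_; }
    void compact() { /* nothing sensible to do here */ }
private:
    std::string buffer_;
};
```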
Similarly, any abstraction which merges two benefits into one language construct is also bad, even if the extra benefits are free. The best example of this is inheritance, which merges the benefits of code reuse and subtyping. If I’m making a new class, and it would be really convenient to be able to call one of the methods of an old class, I may end up inheriting from that class in order to save typing even if subtyping makes no sense. By contrast, if I’d been writing the same code in C, that function I really wanted to call would probably just be a function, and I’d just call it. Object-oriented programming makes you stupider.
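A hypothetical sketch of that inheritance shortcut, with invented names: the new class has no business being a subtype of the old one, but inheriting is the cheapest way to get at the method we want.

```cpp
#include <string>

// Hypothetical old class with one genuinely convenient method.
class Report {
public:
    std::string formatCurrency(double amount) const;
    // ...plus a pile of report-specific state and behavior...
};

// The new idea is not a kind of Report, but inheriting is the cheapest
// way to call formatCurrency(), so the bogus "Invoice is-a Report"
// subtyping relationship comes along whether we want it or not.
class Invoice : public Report {
public:
    std::string total() const { return formatCurrency(amount_); }
private:
    double amount_;
};

// The C-style alternative: the thing we wanted was just a function.
std::string format_currency(double amount);
```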
Happily, it’s easy to notice when you’re running into one of these language flaws. Most of us have a good sense for what the right way of doing things is. If we set out to write a new piece of code, the right way will generally be the first thing that comes to mind, but then we’ll remember that doing it the right way is hard. We’ve probably trained ourselves not to notice this conflict after years of painful compromise, so all we have to do is untrain ourselves.