## Rootstrikers conference summary

April 21st, 2013

I went to the Rootstrikers conference yesterday, which consisted of a few panel debates/discussions plus questions from the audience. I also got to hang out in a bar at a table with Lawrence Lessig for a half hour or so after the conference, which was pretty cool. I’ll summarize the conference here, and include links for anyone who wants to follow the movement or get actively involved.

Stepping back: there are two core ideas behind Rootstrikers: (1) representative democracy in the United States is being corrupted by the influence of money and money-connected lobbying, and (2) even if this corruption isn’t the most important problem to solve, it is the FIRST problem to solve, since it is blocking satisfactory progress on nearly every other issue (climate change, tax reform, health care costs, etc.). Lessig has a variety of talks laying this out; the most recent one at TED is particularly well done (I hadn’t seen it until yesterday), and I highly recommend it:

Lawrence Lessig, TED 2013

The main question is what to do if you believe this argument. I’m going to focus on what to do with a small amount of time. There are a few different efforts attacking the problem, and each one could use a small amount of time in different ways. The common thread in all of these is (1) provide public funding for elections and (2) block lobbying and large donor funding from swamping the public funding:

## The American Anti-Corruption Act

I just read the full text of it online here. The plan is to get one million “citizen co-sponsors”, then seek co-sponsors in Congress and attempt to pass the bill. It consists of a variety of explicit “no corruption” provisions, the best of which forces Congresspeople to recuse themselves from legislation regulating their significant donors. It also gives each citizen a \$100 tax rebate which they can donate to any candidate who agrees to fairly strict per-donor limits (\$500 / person, in particular); more discussion of this below. All the provisions seem good, except possibly banning Congresspeople from raising money while their house is in session, which may be unworkable and/or unnecessary given the other provisions.

If you agree with the provisions, please co-sponsor it even if you believe it is vanishingly unlikely to pass directly! If a sufficient number of people get on board, it will generate a lot of media attention even if Congress ignores it, which could further boost support for the general movement. Moreover (I asked Lessig about this at the bar), if it fails at the federal level it can be used as a model for legislation at the state level.

Signing on is easy, though you should read either the summary or hopefully the full text first. Presumably it’ll generate a small amount of email spam, which you can largely ignore until it moves to the next stage of attempted introduction to Congress. In the best case I can imagine a SOPA/PIPA-ish spamming of Congress by the entire internet when this happens; the main point of tolerating the email spam would be to know when to participate in such (call/email representatives, etc.).

## State-level efforts

Several states have passed publicly financed election laws with tremendous effect. In several, the vast majority of candidates sign on to public financing, which requires them to accept money only in small amounts from individuals, which are then matched by the state. Common Cause has a summary of which states provide public financing.

Other states are very close. In particular, in New York both the governor and the majority of the legislature are on board with the idea. The Brennan Center has a description of one proposed plan. If you live in New York, please email or call your representative and ask them to support this effort! Passing public financing in one of the largest states would be huge.

In California, there is a proposed bill called the California Disclose Act, which would expand disclosure requirements in a few different ways. For example, political television ads would be required to clearly state their three largest donors. You can help this effort by contacting your state representative and asking them to support it.

Other states may have similar efforts: look them up!

## Constitutional Amendments

There are two main organizations focused on a constitutional amendment for federally funded elections: Wolf PAC and Move to Amend. There are two arguments for a constitutional amendment: (1) the Supreme Court has started to block campaign finance reform as unconstitutionally infringing on the First Amendment, and (2) Congress is so horrifically corrupt that there is no chance of Congress-based reform. The downside is that constitutional amendments are very hard.

Wolf PAC is lobbying state legislatures to pass a resolution calling for an Article 5 Constitutional Convention, with the goal of an amendment that (1) bans corporate personhood and (2) implements federal public funding of elections. Move to Amend is similar. The corporate personhood ban makes me a bit angry, and I was rather turned off by how demagogic Move to Amend’s David Cobb sounded when he spoke at the conference. Huge individual donors (who often own corporations) seem just as bad as corporations to me, and it turns out only 11% of SuperPAC (I think) money in the last presidential election came from corporations; the rest was from individuals.

However, Wolf PAC has made some interesting progress pushing on state legislatures to call a convention, and they seem a good route if you want to contribute a small portion of time. Cenk Uygur said that often it only takes a handful of calls from constituents (say, 5 people) to turn a state legislator into a co-sponsor of their plan. The convention itself is fairly safe: once called it has the power only to propose the amendment, which must then be ratified. Moreover, there is historical precedent for Congress freaking out if this kind of thing seems likely to happen and passing amendments themselves, which could speed up the process.

Thus, with the proviso that corporate personhood may be an irrelevant emotional push-button, Wolf PAC (and possibly Move to Amend) may be a good use of a small amount of time, either to call one’s own representatives or to call people in key states to get them to call their representatives in turn.

## Or just Rootstrikers

If you’re unsure which option is best but want to follow the movement, a good default is to just sign up with the Rootstrikers mailing list, which will keep you informed of how it and related projects are developing.

## More conference summary

The conference itself was organized into the following four sessions:

### Different mechanisms for public financing

There was a panel discussion debating the merits of three different mechanisms for public funding: matching funds, vouchers, and tax rebates. To be meaningful, all three would disallow candidates who accept public financing from raising additional campaign money, and would limit individual donations to a small amount (between \$100 and \$500, typically). Matching funds would multiply each individual donation by some amount (say 6x), while vouchers and tax rebates would both provide a small amount of free money per citizen to donate to the candidate of their choice. Vouchers and tax rebates have the obvious benefit that extremely poor people can still participate. Tax rebates also fit perfectly into the historical narrative of America’s anti-tax streak: the Republican advocating for tax rebates expressed this beautifully as “No taxation without representation”, meaning no taxation without representation in the funding part of elections.

One suggested advantage of matching funds is that small amounts of money could be provided to start candidates off, but this fits in just as well to the tax rebate scheme. In fact, if you believed as an individual citizen that a tax rebate scheme didn’t do enough of this, you could simply donate your rebate to a general pool of small candidate helper money established by an authorized 3rd party, which would then donate to an appropriate candidate, solving this problem without need for special provisions.

Thus, tax rebates seem like the best scheme. They are also the one included in the American Anti-Corruption Act, which is great.

### Campaign finance reform as a civil rights issue

The next discussion consisted of various people arguing that this sort of political corruption and campaign finance reform can be understood as fitting into the narrative of civil rights, since African Americans, Latinos, etc. are disproportionately absent from the tiny number of significant donors. This seems quite true, and articulating it could build support for campaign finance reform from those focused on civil rights issues. Unfortunately, due to the polarized nature of the civil rights debate, linking the two issues has the potential downside of turning away many people on the right.

Lessig expressed this concern in an interesting way in his keynote speech at the end. The founders certainly lacked a modern understanding of the dangers of discrimination based on race, gender, etc. However, one thing they did understand was class, and the constitution was explicitly intended to prevent takeover of the government by some sort of aristocracy. This has clearly failed, and the advantage of interpreting the corruption issue in this context is that it fits into a narrative that has existed in a reasonably correct form since the beginning of the country.

Luckily, we can do both, so there’s not really a conflict.

### Constitutional amendments and corporate personhood

This session was more of an actual debate, between three people advocating for a constitutional amendment as the sole focus and one person arguing for a focus on state efforts (in particular the strong possibility of public financing in New York). As mentioned, I was a bit turned off by the pro-amendment side’s emphasis on corporate personhood as opposed to the actual solution of public financing of elections. The more important details are above.

### Lessig’s keynote

Lessig then played his TED talk and then gave a related in-person talk, both of which were quite good. The TED talk is great even if you’ve seen similar talks of his, so definitely worth watching.

## The one charity theorem

March 12th, 2013

The normal scheme for donating to charities is to divide money up among several different charities. The following argument shows why this strategy is often wrong. Both the statement and the proof will be extremely informal:

One charity theorem: Assume we have a fixed amount of money to divide between $n$ charities. Assume that utility is a smooth function of the performances of the charities, which in turn depend smoothly on the amount of money each receives. In the limit of a small amount of money, it is optimal to give to only one charity. Conversely, with overwhelming probability, it is never optimal to give to more than one charity.

Proof: Let the utility function be $u(X) = u(x_1, \dots, x_n)$, where $x_1 + \cdots + x_n = T$ are the amounts of money given to each charity. Since $u$ is smooth and $T$ is small, we can linearize to get $u(X) \approx u(0) + \nabla u \cdot X$, where $\nabla u$ is the gradient of $u$. This linearized version is maximized by giving all money to the charity which maximizes $\partial u / \partial x_i$. Moreover, as long as this maximum value is distinct, which occurs with overwhelming probability if we imagine that $u$ is a bit noisy, the maximizer is unique.

## Practical matters

It’s important to discuss when this result applies in practice and when it doesn’t. First, uncertainty does not matter, including uncertainty about the values of $\partial u / \partial x_i$. All kinds of uncertainty are simply folded into the utility function $u$, which results in more smoothness rather than less. Thus, the result applies even if you don’t know what your preferences really are; in this case, just make a good guess and give all the money to that charity.

Similarly, multiplexing in time also does not matter to some extent: even if we don’t know about the future, we can simply list “charities in the future” as one of the entries and apply the theorem. In particular, saving up money and donating it in larger chunks may be better than numerous small donations over time if one expects to have more knowledge (and therefore more accurate utility) in the future.

There are two valid reasons the theorem may not apply: failure of smoothness and failure of smallness. Smoothness is easy: if two charities each need a small amount of money to meet a certain goal, donating a small amount of money to both may be optimal. However, this applies only if the discontinuity can be accurately predicted. For example, Kickstarter projects have a threshold amount which must be reached to take effect, but an individual donating a small amount of money will still have a smooth utility function due to the unknown amount of other people’s donations.

The more important condition is smallness; when it fails, utility becomes a fully nonlinear function, which in particular increases the likelihood of discontinuities. Smallness can fail either because the charity itself is small, or because utility depends on something intrinsic to the donation rather than the performance of the charity. If you derive personal satisfaction or reputation based on the number of charities you donate to, independent of the amount, the result does not apply (and also you are part of a problem). If the charities themselves are small, so that smaller donations to several have both a significant effect and significant diminishing returns, great. This is one of the reasons why microfinance is such a great idea.

On the other hand, even if there are too many charities for the theorem to apply for each individually, it’s possible that it does apply to entire classes of charities. It may not be rational to donate to both microfinance and any other charity, for example.

Also note that the same argument applies when money is replaced with donations of time, though it’s much easier for time to have personal utility terms which break smallness.

## Rootstrikers

This argument is one of the main reasons why Rootstrikers is a good idea in principle: given a small amount of influence on a large system such as government, it is highly likely that focusing on one problem to the exclusion of all else is the only rational course. This applies to those with full-time politically related jobs (e.g., Lessig), to donations of money, and (to a lesser extent, as noted above) to donations of small amounts of time, thought, etc.

While nothing in the argument says that Rootstrikers is the best way to go about this task, or that Rootstrikers is the best organization with this plan, remember that uncertainty does not invalidate the theorem. If you are small, pick one. Do not hedge charities.

## Would anarchy work?

December 10th, 2012

In the scale free government post, one of the completely unresolved issues was what to do about the federalism axis. There are two scale free extremes to choose from: completely uniform democracy and pure libertarianism (i.e., anarchy). This post will ramble about the anarchy option without getting anywhere very useful.

Anarchy would only work if the universe is such that the middle ground can be efficiently simulated by ad-hoc coordinated groups. Recall that the goal isn’t actual anarchy, which is absurd, but a system with as few foundational rules as possible.

Here’s a This American Life episode describing a typical sham treaty between the U.S. and the Dakota Indians in Minnesota which later turned into a war. I haven’t listened to the whole thing yet, since one part already struck me as illustrative: the initial treaty was negotiated by mistranslating a key portion from English to Dakota, so that the Dakota didn’t realize what they were going to sign. Then, during the actual signing, the Dakota were asked to sign an extra document (they thought it was another copy of the same treaty) giving away most of the settlement money for “debt” purposes.

Thus, the first prerequisite for anarchy to work is for all sides to have a fairly similar level of legal/game theoretic/political knowledge. At the moment, this isn’t remotely true at the level of individuals; the easiest example is the software license agreements we all idly scroll past. It’s also not true en masse: if it were, political advertising would have only informational effects. However, at least at the level of information, it’s possible to imagine technological or cultural improvements that could improve the situation. Cryptography is a good example (as mentioned here); a cryptographic defender can be exponentially weaker than an attacker and still remain secure.

Back to the Dakota: after the treaty, they were given a 20 mile strip of land around a river. This was sufficient for farming but not for their traditional hunting lifestyle. Imagine they had kept enough land for hunting. This (cough, arguably) would go against the societal interest of the surrounding (new) majority population, since hunting is not an efficient use of land in terms of population density. However, Dakota on sufficient land for hunting would presumably have been quite self sufficient. As it happened, they didn’t play the game well enough, fell into debt, and were tricked into the treaty. If they had been smarter and kept clear of the surrounding economy, the only external pressure available would have been military. Let’s set that aside for a minute.

Thus, the next prerequisite for anarchy to work is some effective form of externally available pressure. There’s not much point if the external pressure has to be military (this basically reduces to normal government). The next best thing is probably boycotts or their variants (sanctions, tariffs, etc.), which large groups of people would have to jointly agree to. Setting aside the difficulties in setting up such an agreement, boycotts would not have been effective in the case of the Dakota; they would simply have laughed them off and gone back to hunting. The modern economy isn’t even close to self sufficiency, for two reasons: (1) the massive amount of capital required for high technology projects such as semiconductors, and (2) comparative advantage. It would be nice if (1) went away for fragility’s sake, but it’s also possible that (2) could soften, especially if transportation costs increase, energy and food become more local, the internet remains free, etc. These changes would be great, but would also reduce the amount of external pressure available to combat carbon and other pollutants (imagine if the self sufficient Dakota were spewing tons of CO$_2$ into the sky).

And then, of course, the Dakota were crushed by industrial military power. The main game theoretic problems that military power would pose to an anarchist structure are (1) economies of scale and centralization and (2) the imbalance between defense and attack. The effectiveness of guerrilla warfare on home turf suggests that (1) may not be a problem. If a widely dispersed and self organizing military works, it would fit right in. I don’t have anything useful to say about (2). If (1) favors guerrillas and (2) isn’t too biased towards attack, the same techniques used to structure other forms of pressure seem like they should work fine for the military.

## Toothpaste and amortized complexity

December 2nd, 2012

A past girlfriend and I would occasionally (cheerfully) quibble over the optimal strategy for extracting toothpaste. It occurred to me recently that the disagreement was fundamentally about amortized vs. worst case complexity.

Being lazy, I tend to squeeze the toothpaste out of the front of the tube, optimizing the time spent in the moment and reducing the degree of control required since pressure is exerted near the toothbrush. She would carefully squeeze the tube from the back, maintaining a flat region that would slowly grow as the toothpaste emptied. The main advantage of her strategy is that toothpaste is always at hand, and every iteration is fast. In contrast, squeezing from the front pushes toothpaste both out of the tube and backwards towards the other end. Occasionally, this must be fixed by rolling the back of the tube forwards. Each fix up step takes much longer than a normal toothpaste extraction, but the average time spent might be lower since a normal squeeze is faster. We never did the experiment, so I’m not sure which strategy actually wins on average.

However, the real problem is heterogeneous parallel computation: if two people squeeze a toothpaste tube from the front, their thresholds for when to perform a fix up step will almost certainly differ (I had significantly more finger strength). The result is unfair: the person with the lower threshold will end up doing most of the work. Relatedly, in a parallel context the time lost during a fix up step can amplify, since more than one agent can stall waiting for fix up on a shared resource to complete. Incidentally, in contrast to the computational case, for toothpaste it might be even worse if the fix up step falls to the second person.

Some of these points definitely came up, but I don’t remember the details. The important thing is that I immediately caved. :)

## Not everything happens for a reason

November 20th, 2012

The phrase “everything happens for a reason” came up in a couple contexts recently (conversation with a friend, Radiolab, etc.). It’s a good example of an obviously false statement that contains plenty of useful insight, and is interesting to think about in that context.

We’ll get the pedantic out of the way first: “everything happens for a reason” is literally true in the sense that the future happens for the reason that is the past. What people are usually implying is “everything happens because of a simple event in the future”. It isn’t worth wasting time tearing apart that absurdity.

What’s more interesting is why people say such a thing. Here are some possible related statements:

1. I choose not to regret the past.
2. Every event contains benefits.
3. We’ll make it work.

I was imagining writing more, but I think that sums it up. You do not need to be irrational to be optimistic and positive.

## Exponentially harder isn’t hard enough yet

July 2nd, 2012

In well designed cryptographic security systems, the attacker needs to do exponentially more work than the defender in order to read a secret, forge a message, etc., subject to appropriate hardness assumptions. Maybe this is true of many non-computer security-ish systems as well, like choosing good representatives in a voting system or avoiding overpaying for advertised merchandise, and we simply haven’t reached the level of intelligence as defenders at which the exponential effort required of attackers becomes prohibitive. Extra intelligence is a ways off, but faster and simpler access to information is closer, and may have similar effects. In any case, a hopeful thought.

## Inverse of a hash function

March 18th, 2012

I’ve used Thomas Wang’s integer hash functions for years for various purposes. Using techniques invented by Bob Jenkins for general hashing (e.g., hashes of strings), Wang derived several hash functions specialized for fixed size integer input. His 64-bit version is

```
uint64_t hash(uint64_t key) {
  key = (~key) + (key << 21); // key = (key << 21) - key - 1;
  key = key ^ (key >> 24);
  key = (key + (key << 3)) + (key << 8); // key * 265
  key = key ^ (key >> 14);
  key = (key + (key << 2)) + (key << 4); // key * 21
  key = key ^ (key >> 28);
  key = key + (key << 31);
  return key;
}
```

Key properties include avalanche (changing any input bit changes about half of the output bits) and invertibility. Recently I wanted to make explicit use of the inverse, in order to verify that zero would never arise as the hash of a given set of inputs. This property would allow me to initialize the hash table in question (which takes up several gigabytes) with zeros and avoid an explicit occupied bit on each entry. Thus, I needed `inverse_hash(0)`.

Our function is the composition of the functions on each line, so we need to invert each one. Multiplication by 21 and 265 is easy; both numbers are odd, and therefore have multiplicative inverses mod $2^{64}$. The rest of the lines are invertible because they’re Feistel functions: they break the key into two pieces, leave one piece alone, and run the other piece through an invertible function that depends on the first. For example, the line `key = key ^ (key >> 24)` leaves the top 24 bits alone. Once you know the top 24 bits, you can reconstruct the next 24 bits with an xor, and one more round gives the remaining bits. The full inverse is

```
uint64_t inverse_hash(uint64_t key) {
  uint64_t tmp;

  // Invert key = key + (key << 31)
  tmp = key - (key << 31);
  key = key - (tmp << 31);

  // Invert key = key ^ (key >> 28)
  tmp = key ^ key >> 28;
  key = key ^ tmp >> 28;

  // Invert key *= 21
  key *= 14933078535860113213u;

  // Invert key = key ^ (key >> 14)
  tmp = key ^ key >> 14;
  tmp = key ^ tmp >> 14;
  tmp = key ^ tmp >> 14;
  key = key ^ tmp >> 14;

  // Invert key *= 265
  key *= 15244667743933553977u;

  // Invert key = key ^ (key >> 24)
  tmp = key ^ key >> 24;
  key = key ^ tmp >> 24;

  // Invert key = (~key) + (key << 21)
  tmp = ~key;
  tmp = ~(key - (tmp << 21));
  tmp = ~(key - (tmp << 21));
  key = ~(key - (tmp << 21));

  return key;
}
```

The cleverness of the original hash function is that each invertible step is also extremely fast. The inverse is slower, but only moderately.

Finally, I did indeed luck out:

```
inverse_hash(0) = 0x7ffffbffffdfffff
```

This isn’t a valid pentago board in my packed representation, so zero initialization works. This turned out to be obvious in hindsight: all but the first step in the hash function leave zero alone, and the last step of the inverse is a complement on the lower 21 bits, which is enough to know that `inverse_hash(0)` can’t be a valid pentago board. It’s still cool to have the full inverse, though.

I tried to email Thomas Wang in case he hadn’t had the occasion to write down the inverse explicitly, but unfortunately his HP email bounced. Impermanent email addresses make me sad.

## Incremental revolution

January 30th, 2012

The previous post described possible ways of removing artificial scale parameters from a political system, the most important being a way to remove the representational scale dependency via “direct democracy plus scripting” (for which I still need a better name). This post will describe how one might try to achieve such a system. Besides the obvious reason for such a discussion, the transition from one system to another provides an excellent thought experiment to evaluate the merits of both current and future systems. Here are two proposed principles which encapsulate why:

1. A good political system should be able to take over an existing inferior one from the inside out, gradually.
2. A really good political system would be excellent at being taken over from the inside out by a better system.

The second one is supposed to sound backwards.

The first principle says that any kind of full revolution is an extreme measure, one to avoid if at all possible. Moreover, the need for revolution implies a limit on how much better the better system could be: if it were better for everyone, for example, no one would complain about the switch. Of course, this is never the case: there are always those who benefit disproportionately from the status quo. In the past (e.g., last October) violent revolutions have been necessary to overcome the resulting opposition, since the large majority of people who prefer the new system have little to no official power. However, at least in this country we have a system that at least theoretically derives all power from individuals. Overall it’s worked quite well, and the first thought in response to any problem should be whether we can make progress using the existing system rather than fighting it directly.

Incidentally, Le Guin’s world of anarchists in The Dispossessed completely fails the first principle, at least according to the majority of the inhabitants, who are terrified that any interaction with the conventional capitalist parent world would destroy their society. The same applies to American fears of communism: if we really believed in the superiority of capitalism over communism, we wouldn’t have been so terrified of secret communist plots and takeovers.

The second principle says that if a system is really good, it should admit the fact that it’s highly unlikely to be the best, and should provide mechanisms for calm and efficient transition to anything better which arises without sacrificing stability. Since “better” is a wildly subjective term, and will vary over place, time, issue, etc., flexibility is key.

So, how would one go about trying to get to direct democracy plus scripting? To recap the previous post, the idea is to let everyone vote directly on every issue, and recover practicality by allowing individuals to assign their votes to others, often on an issue by issue basis. The assignment mechanisms are completely open: anyone who proposes a new method for collecting and organizing votes could implement it, collect votes from those who think the new mechanism is superior, and start wielding political power. Assignment is also optional: an individual who decides to vote directly on some issue is free to do so. In practice vote assignment would be implemented by nonbinding messages from the representative to the individual (or lower level representative), so there’s no need to codify any of the particular details or mechanisms of vote assignment into the basic structure.

Trying to achieve this with anything like a constitutional amendment would be a terrible idea. First, it would fail, since it’s way too different from how government is structured currently. Second, even if it did somehow succeed, it would immediately fail, since the practical success of the scripting bit would depend on a large ecosystem of independent assignment mechanisms, small and large scale networks of friends and advisors and representatives, etc. Instead, the right approach is to start at the other end: pick a single city and a single city council seat, say, and try to elect one representative that agrees to vote exactly according to the collective will of the system. To make it easy, choose the easiest possible city and district, ideally one that’s fairly well off and has a fairly large technical population, in order to reduce the effort required to get people internet access and educate them about how it works. In the beginning the system would be very far from completely fair: not everyone would know about it, and not all that did would be able to use it. However, the fraction of people in a district who currently maintain influence over their representative is minuscule, so I doubt we’d be worse off in that regard.

The basic infrastructure, or rather the first implementation of that infrastructure, would need to be in place before this first representative is elected. Technical issues aside for the moment, part of this infrastructure is the interpersonal networks of representatives required to make the process efficient, and we need some way of bootstrapping these networks. To do this, we need a stream of hypothetical issues to vote on, which can be found either via an interested city councilperson or simply by sending people to every single city council meeting to take notes. Only once these networks are in place and functioning smoothly (gauged by asking people to vote on whether their networks are functioning smoothly) could we proceed to the next step of actually trying to elect someone.

At this point I run out of steam, since my knowledge of how politics works in practice at local levels is nonexistent. I’m not at all sure such a system really makes sense at a very small scale; for one, the issues it’s trying to tackle may be less important (more trust, less distance between voter and representative, etc.). However, I’m fairly confident that if it failed at a small scale, it would fail at a large scale, and failure is good if it highlights problems or indicates early that the entire approach is wrong.

Happily, I have at least one hiking friend who’s fairly involved in local Marin politics, and has the added benefit of being one of the rare people who seems to read this blog. Hopefully this post can at least fuel interesting conversation on an upcoming hike.

## Scale free government

January 22nd, 2012

I read The Dispossessed again recently, which is a wonderful book by Ursula K. Le Guin about a society of anarchist/revolutionaries where ideally everyone shares everything and is never compelled to do anything by anyone else. In practice all sorts of societal and structural compulsions arise, and half the book is about struggling with these internal contradictions (the other half argues how much better it is, contradictions and all, than the alternative).

Reading that kind of thing always makes me want to think about what an ideal political system would look like. Hopefully such an ideal system would also be simple, in the sense of having few (and therefore general) rules. Simplicity isn’t necessarily a good thing, since it’s usually a bad idea to carry anything in politics to extremes. However, all else equal, simpler systems mean fewer opportunities for mistakes, easier analysis, etc. Moreover, an ideal simple system would hopefully be able to avoid extremes by way of complicated policies derived through simple (or simply regulated) decision making processes.

So, let’s talk about various ways a governmental system could be simple. To provide some sort of unification, we’ll focus down from “simple” to “scale free”, where a system is scale free if it avoids some sort of tunable parameter. The different scales heavily overlap, so there will be some repetitiveness. Here goes.

## Representational vs. direct

All current large scale democracies are (mostly) representational, in the sense that citizens vote for representatives who then vote on the actual policies. Scale dependency is typically minimized by having representatives at a variety of different scales, with various levels of power and electoral bases.

The natural way to avoid representational scale dependence is to switch to direct democracy, where each person votes on every single policy. To make this practical, representatives can be reintroduced as an optional, “convenience” feature, allowing citizens to assign their votes to other people. For extra flexibility, these assignments could be issue specific, which is useful if you believe one friend is very knowledgeable about financial policy but has terrible environmental views, for example. The vote assignments could also be hierarchical, funneling upwards from individuals through knowledgeable friends to politically active knowledgeable friends to local representatives or policy wonks or whoever else has enough time and interest to actually do the final voting. Or not: anyone would be free to vote directly if they feel like it.
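To make the delegation mechanics concrete, here’s a minimal sketch of issue-specific, transitive vote assignment. All names and data structures here are hypothetical illustrations of the idea, not a proposed implementation:

```python
# Sketch of issue-specific, transitive vote delegation.
# Delegation chains are followed until someone has voted directly;
# cycles and dead ends count as abstention.

def resolve_vote(voter, issue, delegations, direct_votes):
    """Follow one voter's delegation chain for one issue."""
    seen = set()
    current = voter
    while current not in seen:
        seen.add(current)
        if (current, issue) in direct_votes:        # someone voted directly
            return direct_votes[(current, issue)]
        current = delegations.get((current, issue))  # follow the chain
        if current is None:
            return None  # no delegate and no direct vote: abstain
    return None  # delegation cycle: abstain

# Alice trusts Bob on finance but votes directly on the environment;
# Bob in turn delegates finance to Carol, who actually votes.
delegations = {("alice", "finance"): "bob", ("bob", "finance"): "carol"}
direct_votes = {("carol", "finance"): "yes", ("alice", "environment"): "no"}

print(resolve_vote("alice", "finance", delegations, direct_votes))      # yes
print(resolve_vote("alice", "environment", delegations, direct_votes))  # no
```

Note that the delegation table is per-issue, so the financial-expert-with-terrible-environmental-views case above falls out for free.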

The key is that all the extra hierarchical stuff would be open and extensible; anyone could propose a new scheme for aggregating votes, implement it, and start collecting power without explicit permission or legal authority. Presumably someone would set up a joke site that collected assigned votes and voted on policy based on the results of Google keyword searches, and a few people would give it their votes, and it would have actual political power. And presumably most people would assign their votes in serious ways (or not vote).

We can think of this system as “direct democracy plus scripting”. Clearly I need a better name.

A typical worry about direct democracy is that individuals are too lazy and ill-informed to make reasonable decisions on actual policy issues. However, due to the scripting we can’t be worse off than normal representative democracy if the individual is both lazy and ill-informed; they’ll assign their votes to a better informed representative. People interested in power would scramble to set up easily noticeable catchalls to collect exactly these votes, tailored specifically for people who want to strictly follow some party line (itself chosen by an arbitrary, possibly scripted method). Voters who are ill-informed but also motivated are problematic, but no more problematic than in representative democracy. Moreover, many problems associated with current instantiations of direct democracy, such as referendums, immediately disappear. First, if you don’t believe in referendums, assign all your votes to representatives. If an issue is passed by the equivalent of a referendum but most people decide it should have been decided representationally by someone more knowledgeable, they simply reassign their votes.

All the issues associated with reliable and private voting still apply: we don’t want people to buy or sell votes, for example. However, various secure schemes for voting exist (see e.g. [RS07]), and voting more often doesn’t make these any less sound. It does require some level of guaranteed occasional internet access; that requires money, but likely not very much at least in this country. Moreover, any problems with voting irregularities are massively reduced by voting more often: if a vote goes wrong, vote again.

## Federalism and nations

Our second scale dependency is the amount of power controlled by uniform, large scale policy (federal or national laws) vs. heterogeneous, small scale policy (state and local laws). At the small extreme of this scale, we have individual rights, where no one else is granted control over a certain class of an individual’s actions. The other extreme is uniform national or global policy. We clearly need a large dose of the small extreme: individual rights promote diversity of ideas, culture, life, etc., and diversity is both fundamentally good and (perhaps equivalently) vitally important to long term survival. John McCarthy has (or, sadly, had) a great quote about this further up the scale:

Civilization might recover from the damage of a nuclear war, but … it might never recover from world government, there being no chance of external intervention.

Unfortunately, for better or worse, a wide variety of issues can only be efficiently resolved by large scale uniform policy. These include externalities (global warming), public goods (infrastructure and basic scientific research), any type of insurance against predictable future events (such as health care), and education (I don’t know the best way to abstract this one).

By analogy with our previous scale elimination, we could try to eliminate the federalism scale dependence by pushing all the way to one end of the scale, and inventing a flexible scheme for finding practical middle grounds. Flexibility is key, since federalism has to be decided entirely on a case by case basis: the power of speech should be almost purely local, and the global level of carbon emissions should be, yes, global.

At the moment, I have no idea whether there’s a way to make either of these extremes work. That is, I see no obvious proof that either could not work. Pure libertarianism could work in a system where (1) no one individual has sufficient power to do dramatic harm by acting alone and (2) the vast majority of people are reasonable. In cases where a particular issue actually warrants libertarianism, we’re done. If global coordination is required, the majority of reasonable people would look at the issue and decide to collectively organize on that issue, e.g., by making their decisions based on the above flexible voting scheme. Some fraction of people would refuse to comply, and they would be appropriately shunned or ignored by the others. For example, someone who decided to burn coal would either find no one to work with, or no one to sell the energy to, because most people would have agreed that burning coal is a really awful idea. Again, I’m not saying this clearly works, only that it’s not clear that it couldn’t work with (1) a sufficiently educated populace and (2) flexible, low overhead schemes for issue specific collective organization.

The reverse extreme of pure global voting could also work, as long as the majority of people realize that local control is often a good thing. Again, possible, though I have no idea about the details.

## Time scales

Good policy must be stable over a reasonable time scale, in order to ensure predictability for both those who work in government and those affected by the policy. Policy stability should not be confused with long term planning: it’s possible to make long term plans even if the policies are adjusting rapidly based on new knowledge. As with federalism, the correct practical time scale is entirely dependent on the issue.

Removing the time scale dependence from the system itself is easy: we set it to zero. Anyone who wants to propose a vote on an issue can do so at any time. Voters (i.e., everyone, or their hierarchically chosen representatives) need time to ponder their decisions, which they do simply by voting “no” for a while, and then either leaving it “no” or switching it to “yes”. A default vote can be set either to “no” always, signifying a preference for stability, or to “no vote”, signifying a preference for a reduced quorum on a particular issue or family of issues. These defaults would be part of the scripting level, and therefore completely extensible.
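As a rough sketch of how explicit votes and scripted defaults might combine (the names, and the particular rule that a proposal passes when “yes” outnumbers “no” among participants, are my own illustrative assumptions):

```python
# Sketch: tallying a proposal where each voter has either cast an explicit
# vote or falls back to a scripted default. Defaulting to "no" favors
# stability; defaulting to abstention effectively reduces the quorum on
# that issue, since abstainers don't count either way.

def tally(voters, explicit, defaults):
    """Return True if the proposal passes (more yes than no)."""
    counts = {"yes": 0, "no": 0, "abstain": 0}
    for v in voters:
        choice = explicit.get(v, defaults.get(v, "abstain"))
        counts[choice] += 1
    return counts["yes"] > counts["no"]

voters = ["a", "b", "c", "d", "e"]
defaults = {"a": "no", "b": "abstain", "c": "abstain"}  # scripted defaults
explicit = {"d": "yes", "e": "yes"}                     # votes actually cast

print(tally(voters, explicit, defaults))  # True: 2 yes vs. 1 defaulted no
```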

The key is that as long as the space of decisions voters can make is sufficiently rich, we’ll naturally avoid rapid flip flopping even on contentious, equally divided issues. Even on binary decisions, voter preferences will almost always be smoothly distributed from strongly in favor, to weakly in favor, through to weakly and strongly opposed. The distribution may be sigmoidal, with most voters on one side or the other, but there should be a decent population of voters in the middle. Most voters are reasonable people, and will place nonzero value on policy stability. For voters sufficiently close to the middle, the value of stability will trump their weak preferences towards one side or the other, so they’ll vote “no” or “same” or whatever else is required to prevent flipping. The resulting policy will therefore tend to change on a time scale where the noise (or rational changes) in voter preferences balances the fraction of people who value stability more. I keep picturing a galaxy viewed edge-on when thinking about this: a large one dimensional space with a bulge in the middle where another dimension kicks in.

Another somewhat orthogonal way to reduce time scale dependence is to convert as many issues as possible from binary decisions into smooth parameter choices. For example, the limit or price on various pollutants may need to fluctuate fairly quickly to take account of new information or temporal events (weather, etc.). As long as enough people agree that some limit is warranted, so that the necessary monitoring and infrastructure can be put in place, the limit itself can vary dynamically based on some continuous time voting scheme. One such continuous scheme would be for everyone to post a suitable ordering of the real numbers, and have the system constantly recalculate instant runoff voting as subsets of individuals change their preferences (which could themselves be scripted). The requirement that people agree on some limit is a serious one, though, and is essential in any area where enforcement or compliance involves significant overhead. Not all issues can be made smooth.
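A much-simplified stand-in for that continuous scheme: instead of full orderings of the reals plus instant runoff, assume each voter’s preference is single-peaked around one ideal value. Under single-peakedness, the median ideal point beats any alternative in a pairwise vote (the median voter theorem), so the limit can simply track the median as voters or their scripts update. The names and numbers below are illustrative:

```python
import statistics

# Continuously adjustable pollution limit, tracked as the median of the
# voters' ideal values. Recomputing on every preference change gives the
# zero-time-scale behavior described above.

preferred_limits = {"a": 10.0, "b": 12.0, "c": 40.0, "d": 15.0, "e": 11.0}

def current_limit(prefs):
    """Recompute the limit whenever any voter's preference changes."""
    return statistics.median(prefs.values())

print(current_limit(preferred_limits))  # 12.0

preferred_limits["a"] = 14.0            # one voter's script updates
print(current_limit(preferred_limits))  # 14.0
```

Note that a single extremist (voter “c” at 40.0) barely moves the result, which is one reason median-style aggregation is attractive for smooth parameters.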

Incidentally, both these mechanisms for recovering correct time scales work by expanding the space of voter decisions. I think similar tricks will arise frequently, each different, so it’s vitally important that our system doesn’t require us to know them all in advance.

## Majority rule vs. consensus

Any decent governmental system must include mechanisms for protecting minorities and minority rights from the will of the majority. There are at least two approaches to protecting minorities: federalism and consensus. Minorities can be protected under federalism by giving the minority in question some of the power over themselves directly, most importantly in the case of the individual. Of course, federalism can also damage minority rights, such as when certain states disagree with a right the broader majority supports. However, we’ve already discussed federalism, and I’ve admitted that I don’t know how to make it safely scale free, so I’ll set it aside again.

The other mechanism is requiring a supermajority in order to enact or change a particular policy, somewhere between pure majority rule and pure consensus. There are two logical reasons to require a supermajority: a desire for stability and a belief that past decisions were more accurate than present decisions. I personally don’t believe in the second reason: that is, I think our knowledge and decision making abilities are gradually improving over time, at least if we average out the noise. Stability is critical, but we’ve already covered it in our discussion of time scales. Therefore, very tentatively, we can resolve the scale choice between majority rule and consensus by dialing it all the way to pure majority rule.

In order for pure majority rule to work, it’s vitally important that our system allow general principles with wide support to overrule individual, local decisions. For example, at least in the U.S., a vast supermajority of voters will support the general principle of free speech, but pockets of voters, whether in time, place, or issue, may want to choose otherwise. I’m not sure whether this overruling needs to be built in at a fundamental level; it may be sufficient to let voters know when they’re about to vote in violation of a larger principle that has already been decided, or perhaps only one that they’ve already voted for. Moreover, there are some rights issues such as abortion where both sides believe their position derives from some general principle. Of course, I’d love for decisions on issues like abortion to always be decided in accord with my personal views, but this may be too much to hope for. More stability definitely isn’t always the answer: see gay marriage.

To summarize, I think defaulting to majority rule is likely the way to go, but only if we could be sure the rights of minorities would be sufficiently protected.

## Anonymity

In our current system individual votes are anonymous, but the votes of regions as a whole and the votes of their representatives are public. It’s almost certainly a terrible idea to remove the anonymity of the individual, due to the dangers of voter coercion, buying and selling votes, etc. However, in our proposed system individual votes are the only type of voting built in; everything else is built up in a flexible, hierarchical manner. Therefore, we need to check whether we can recover the advantages conferred by non-anonymity at higher levels in our current system.

Regional vote tallies are almost certainly unnecessary as a built-in feature: the news loves them, and they’re useful for analyzing and understanding the spread of opinions in a society, but in both cases exit polls are perfectly sufficient replacements.

Lack of anonymity of votes by representatives is necessary in our current system so that voters know whether to vote people back into office, and the same applies in our scheme so that individuals know where to assign their votes. However, there’s no need to build this requirement into the system itself. It’s quite easy to set up a representative so that every high level vote is communicated back to the individual, simply by giving the representative no direct power: the representative votes by sending a message back to the individual (or rather the individual’s computer or hosted voter account) to do the actual voting. Even if we give the voters the choice of doing it some other way, such as by handing representatives cryptographic subkeys which allow them to vote anonymously, the vast majority of voters will naturally prefer to know where their votes are going, and will assign accordingly.
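A tiny sketch of this “no direct power” relay (hypothetical names throughout; a real system would sign and verify these messages rather than use shared dictionaries):

```python
# Sketch of a powerless representative: the delegate only publishes a
# position, and each follower's own account casts the actual vote by
# mirroring it, so every voter retains a record of where their vote went.

published = {}  # delegate -> announced position on the current issue
ballots = {}    # voter -> vote actually cast by the voter's own account
log = {}        # voter -> which delegate the voter mirrored

def announce(delegate, position):
    """The delegate's only power: publicly stating a position."""
    published[delegate] = position

def mirror(voter, delegate):
    """The voter's account casts the vote; the delegate never touches it."""
    ballots[voter] = published[delegate]
    log[voter] = delegate  # full audit trail stays with the voter

announce("rep1", "yes")
mirror("alice", "rep1")
mirror("bob", "rep1")
print(ballots)        # {'alice': 'yes', 'bob': 'yes'}
print(log["alice"])   # rep1
```

Since the ballots live in the voters’ accounts rather than with the delegate, switching representatives is just a change to which position gets mirrored next time.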

It’s also easy to make the choice of representative anonymous. It’s even easy to make the representatives not know how many people support them: the representative can announce their views publicly, and completely unconnected individuals can vote accordingly by reading a public website (automatically or manually). However, some degree of representative knowledge is useful to reduce wasted effort and allow responsiveness to constituents. Voters will want both, so the correct level of knowledge should develop naturally.

Incidentally, lack of anonymity of representative votes has plenty of disadvantages in our current system, since it allows lobbyists and special interest groups to indirectly buy votes. The same disadvantages apply to some degree here, but are greatly ameliorated by the ability to switch representatives quickly and choose different representatives for different issues.

## One person, one vote

Let’s end with a fun one. Right now, the concept of “one person, one vote” is a tremendously good idea, since one human individual is such a natural scale. All the other animals are too stupid to vote, and single individuals pretty much stay single individuals. All this goes out the window as soon as we get strong AI. Machines will have to start voting reasonably soon thereafter, and machines and software have a tendency to copy themselves. It’s likely bad if a machine makes a thousand copies of itself and gets a thousand votes as a result. Conversely, if a thousand machines decide to merge together into a single superintelligent, jointly decision making cluster, do they now get only one vote?

As with pretty much everything else in this post that I have no idea how to solve, this one is related to federalism. The need for “one person, one vote” goes away completely in the case of pure libertarianism, since the thousand copies simply do what they like, and the joined superindividual does likewise. If the pure libertarian solution doesn’t work (which may be likely, but as noted I see no obvious proof), I’m not sure what to do.

## Notes and conclusion

I’d love to hear other people’s opinions on this stuff, especially the direct democracy plus scripting approach. I hope we get to something like it eventually, and it might even be possible to start pushing towards it now. The first step would be to set up the infrastructure to allow people to easily assign votes. If enough people are interested in the result, the next step would be to try to elect a representative who agrees to vote in accord with the collective decision of the system. If it works, it would increase interest, and the system could expand further and eventually take over the surrounding political system from the inside out.

Incidentally, if we do get closer to an ideal political framework, the whole notion of “protest” might vanish, in that someone who wants to protest could take direct action instead. Or at least protest would take a different form, with different connotations (fighting from the inside rather than the outside).

Finally, part of this was written sitting in a booth next to a bunch of people dancing (in a class beyond my level), which turns out to be a lot more fun than writing from home. Maybe I should find more places like that.

## The problem

January 18th, 2012

From CBS News:

On the one hand, [Obama's] administration has defended a free, open Internet as it watched repressive regimes fall in the Middle East with help from social media such as Twitter. It has also been a proponent of the concept of “net neutrality,” which prevents Internet service providers from slowing online traffic that comes from file-sharing sites known to trade in pirated content.

On the other hand, Obama and other Democrats have gone to Hollywood dozens of times to raise campaign funds over the years.