The Anonymous, Recursive Suggestion Box
Good discussion with Ross today, resulting in one nice, concrete idea.
Consider the problem of suggesting policy improvements to the government. In particular, let’s imagine someone has a specific, detailed policy change related to health care, financial regulation, etc. Presumably, the people who know the most about these industries are (or were) in the industries themselves, so you could argue that they can’t be trusted to propose ideas that aren’t just self-serving. Maybe it’s possible for someone to build a reputation of trustworthiness, but that’s hard and would ideally be unrelated to the actual ideas proposed. Instead of relying on reputation, we’ll remove the issue entirely by making the suggestion box anonymous.
Now we have an anonymous suggestion box on a website. People go to it and propose ideas. There are a few good ideas, and a vast number of bad, malicious, and nonsensical ones (including spam). Eliminating the spam is easy (I have a single, completely public email address and get roughly one spam message per day, from which I conclude that the spam issue is solved). In order to eliminate the bad or malicious ideas, we need to be able to judge their correctness in a logical manner. For this, we rely on distributed intelligence: other people are allowed to judge whether each idea is good or bad. To get loaded words out of the picture, let’s replace “good” and “bad” with the words “true” and “false”. “Ideas” become propositions of the form “Implementing this idea would be good” (yes, “good” is still there, but keep reading).
Let’s assume voting isn’t a completely reliable system for determining the truth or falsity of ideas (otherwise, we’re done). Therefore, some true propositions will get a lot of false votes, and vice versa. To solve this, we allow people to propose arguments for or against each proposition. These take the form of a statement, like “That proposition is false because the author is a moron”, together with a more detailed argument for why the statement is true. Now we let people vote on two more things:
1. Whether the truth of the statement would imply that the original proposition is true or false.
2. Whether the argument for the truth of the statement itself is sound.
If we get enough votes in favor of both (1) and (2), we conclude that the original idea is true (or false), and discount the votes for or against the original idea.
This is the key part, so I’ll restate it. If we have propositions p, q, and r, where q and r are the two votes above on an argument about p, then enough votes for both q and r override any votes against p. You can’t kill a good idea unless you can kill the arguments for it as well.
Now we have to get recursive. What if q gets a lot of votes, but is actually wrong? Then you let people propose arguments for the falsity of q, and so on. What if there are two competing arguments which appear to contradict each other? Then you let people propose arguments about why there isn’t a contradiction. There are a lot of logical issues to deal with, but people can post arbitrary arguments written in normal human languages and we have the full power of human intelligence to judge them, so we’re not limited by artificial logical restrictions. This isn’t a formal proof system.
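To make the recursion concrete, here is a minimal sketch of the override rule in Python. All names (`Proposition`, `passes`, `evaluate`) and the 60% threshold are my own illustrative assumptions, not part of the proposal; in particular, this toy version accepts the first winning argument and ignores contradictions between competing arguments.

```python
from dataclasses import dataclass, field

THRESHOLD = 0.6  # assumed fraction of votes needed to accept a claim


@dataclass
class Proposition:
    text: str
    votes_true: int = 0
    votes_false: int = 0
    # Each argument is (implies_true, relevance, soundness): whether the
    # statement, if true, would make this proposition true or false, plus
    # the two sub-propositions people vote on. Those sub-propositions can
    # themselves carry arguments, which is where the recursion comes from.
    arguments: list = field(default_factory=list)


def passes(p: Proposition) -> bool:
    """Do the direct votes favor p at the threshold?"""
    total = p.votes_true + p.votes_false
    return total > 0 and p.votes_true / total >= THRESHOLD


def evaluate(p: Proposition) -> bool:
    """An accepted argument (relevance and soundness both hold, checked
    recursively) overrides p's direct votes; otherwise fall back to them."""
    for implies_true, relevance, soundness in p.arguments:
        if evaluate(relevance) and evaluate(soundness):
            return implies_true
    return passes(p)
```

So an idea buried under false votes still comes out true if a single argument for it survives recursive scrutiny, which is exactly the "you can't kill a good idea" property.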
Unfortunately, we are limited by what happens along the full recursive tree. If people lie about the propositions all the way down, and manage to flood away all the counterarguments, the system will fail. However, this is basically a spam problem, and can be solved in the usual way: if you detect that someone is consistently voting opposite the correct answer, you flag them as malicious and discount their votes. This rule is circular, but that’s what probabilistic analysis is for: we take all the data and compute the most likely joint assignment of truth values to propositions and spam flags to people. There’s some threshold of validity that you need to achieve in order for such a solver to converge to the correct answer, but that level of trust is often quite low thanks to network effects and self-reinforcement. In other words, contradictions don’t fit together.
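One crude way to break the circularity is a fixed-point iteration: estimate truth values from weighted votes, then re-weight each voter by agreement with the current consensus, and repeat. This is a simplified sketch under my own assumptions (the post doesn't specify a solver); `votes` maps `(voter, proposition)` pairs to claimed truth values, and the names and thresholds are illustrative.

```python
def infer(votes, n_rounds=10, spam_threshold=0.5):
    """Jointly estimate truth values and spam flags from raw votes."""
    voters = {v for v, _ in votes}
    props = {p for _, p in votes}
    weight = {v: 1.0 for v in voters}  # start by trusting everyone equally
    truth = {}
    for _ in range(n_rounds):
        # Step 1: weighted majority vote per proposition.
        for p in props:
            score = sum(weight[v] * (1 if val else -1)
                        for (v, q), val in votes.items() if q == p)
            truth[p] = score >= 0
        # Step 2: a voter's weight becomes their agreement rate with the
        # current consensus; consistently wrong voters sink toward zero.
        for v in voters:
            mine = [(q, val) for (u, q), val in votes.items() if u == v]
            agree = sum(val == truth[q] for q, val in mine)
            weight[v] = agree / len(mine)
    spam = {v for v in voters if weight[v] < spam_threshold}
    return truth, spam
```

With mostly-honest voters this converges quickly: the honest majority pins down the truth values, which in turn exposes anyone voting the opposite way across the board.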
Since this is a website, we have to identify whether the “users” are actually people. We could do this conventionally with a system like reCAPTCHA, but since we’re in recursive mode it’s much cooler to instead ask users to judge the correctness of randomly selected propositions. If you want to vote on whether a proposition is true or false, or propose a new proposition, you need to spend a little time judging the ideas of others. If someone comes up with a way to trick this system by writing a program that can judge the truth or falsity of arbitrary English propositions, this discussion may be obsolete (thanks to Ross for this particular bit of reasoning).
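One hypothetical shape for that gate: users bank credit by judging randomly assigned propositions, and each vote or proposal spends a fixed amount of it. The class, method names, and the constant `K_JUDGMENTS` are all assumptions for illustration; a real system would also check those judgments against the emerging consensus.

```python
import random

K_JUDGMENTS = 3  # assumed number of judgments required per vote


class Gate:
    def __init__(self):
        self.credit = {}  # user -> banked judgment count

    def assign(self, user, propositions):
        """Hand the user a randomly selected proposition to judge."""
        return random.choice(propositions)

    def record_judgment(self, user):
        """Bank one unit of credit for a completed judgment."""
        self.credit[user] = self.credit.get(user, 0) + 1

    def may_vote(self, user):
        """Spend K_JUDGMENTS of banked credit to cast one vote."""
        if self.credit.get(user, 0) >= K_JUDGMENTS:
            self.credit[user] -= K_JUDGMENTS
            return True
        return False
```

The nice property is that every attempt to game the vote produces useful judging work as a side effect, rather than a throwaway CAPTCHA solution.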
Other issues probably abound, but they can be fixed by allowing people to suggest improvements to the system. If deemed reasonable, these ideas can be implemented and tested in parallel with the existing system, resulting in a potentially large number of competing systems for determining truth values from the same data set. The data set itself could probably be made freely available (under a suitable license), so that others could build competing systems.
I don’t think this system would be all that difficult to implement. Thanks to the previous paragraph, if it reached a sufficient level of quality it would start to improve itself. Maybe that would even get scary.
Of course, if we apply this to a realm like politics, the truth or falsity of various statements will be very controversial, and different people will have legitimately different opinions. This can be solved by adding side conditions to the statements, like “If you believe in flat tax systems, we should do this” or “If you believe that health care is a basic human right, we should do that.” More importantly, however, there is a vast range of ideas that any rational person should agree to. Statements like “the proposed health care bills do not include death panels”, and “given an otherwise equivalent choice between taxing a public good and a public evil, we should tax the latter.” I think it’s fair to say that the U.S. would be better off if we could agree on the statements that don’t need side conditions.
Note: I’ve done zero checking to see if this has been proposed or implemented before (this discussion happened just now), so I’m curious if anyone knows related references or links.
Another note: presumably this would be set up as a nonprofit supported by donations of some kind. If this system actually existed, I would probably be willing to donate at least $1000.