Credible Neutrality As A Guiding Principle
Consider the following:
- People are sometimes upset at governments spending 5% of GDP to support specific public projects or specific industries, but those same people are often not upset at those same governments causing much larger reallocations of capital by enforcing property rights
- People are sometimes upset at blockchain projects that directly allocate (or “premine”) many coins into the hands of recipients hand-picked by developers, but those same people are often not upset at the billions of dollars of value printed by major blockchains like Bitcoin and Ethereum into the hands of proof of work miners
- People are sometimes upset at social media platforms censoring or deprioritizing content with specific disfavored political ideologies, even when they themselves disagree with those ideologies, but those same people are often not upset that ride-sharing platforms kick drivers off the platform if their ratings are too low
One possible reaction to some of these situations is to shout “gotcha!” and bask in the glory of having seemingly unmasked a hypocrite. And indeed, sometimes this reaction is correct. In my view, it is genuinely a mistake to treat carbon taxes as statist interventionism while treating government enforcement of property rights as mere enforcement of natural law. It is also genuinely a mistake to treat miners working to secure a blockchain as laborers doing Real Thermodynamic Work worthy of compensation, while treating any attempt to compensate developers improving the blockchain’s code as an act of “printing free money”.
But even if attempts to systematize one’s intuitions often go astray, deep moral intuitions like this are rarely entirely devoid of value. And in this case, I would argue that there is a very important principle that is at play, and one that is likely to become key to the discourse of how to build efficient, pro-freedom, fair and inclusive institutions that influence and govern different spheres of our lives. And that principle is this: when building mechanisms that decide high-stakes outcomes, it’s very important for those mechanisms to be credibly neutral.
Mechanisms are algorithms plus incentives
First, what is a mechanism? Here I use the term in a way similar to how it is used in the game theory literature on mechanism design: essentially, a mechanism is an algorithm plus incentives. A mechanism is a tool that takes in inputs from multiple people and uses these inputs to determine things about its participants’ values, so as to make some kind of decision that people care about. In a well-functioning mechanism, the decision made by the mechanism is both efficient, in the sense that the decision is the best possible outcome given the participants’ preferences, and incentive-compatible, meaning that people have an incentive to participate “honestly”.
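To make this a little more precise, here is one standard formalization from the mechanism design literature (a sketch; the notation is mine, not from this essay). A mechanism $M$ maps the participants’ reported preferences $v = (v_1, \dots, v_n)$ to an outcome $M(v)$. It is incentive-compatible (in the dominant-strategy sense) if truthful reporting is optimal for each participant $i$ no matter what everyone else reports:

$$u_i(M(v_i, v_{-i}), v_i) \geq u_i(M(v_i', v_{-i}), v_i) \quad \text{for all } v_i',\ v_{-i}$$

and efficient if $M(v)$ maximizes total welfare $\sum_i u_i(o, v_i)$ over all possible outcomes $o$.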
It’s easy to come up with examples of mechanisms. A few examples:
- Private property and trade. The “inputs” are users’ ability to reassign ownership through donation or trade, and the “output” is a (sometimes formalized, sometimes only implied) database of who has the right to determine how each physical object is used. The goal is to encourage production of useful physical objects and put them into the hands of people who make best use of them.
- Auctions. The input is bids, the output is who gets the item being sold and how much the buyer must pay (see the second-price auction sketch after this list).
- Democracy. The input is votes, the output is who controls each seat in the government that was up for election.
- Upvotes, downvotes, likes and retweets on social media. The input is upvotes, downvotes, likes and retweets, the output is who sees what content. A game theory pedant may say that this is only an algorithm, not a mechanism, because it lacks built-in incentives; but future versions may well have built-in incentives (and some past versions did; see Slashdot’s meta-moderation).
- Blockchain-awarded incentives for proof of work and proof of stake. The input is what blocks and other messages participants produce, the output is which chain the network accepts as canonical, and rewards are used to encourage “correct” behavior.
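As a concrete illustration of “algorithm plus incentives”, here is a minimal Python sketch (with hypothetical bidder names and amounts) of a sealed-bid second-price auction, the textbook incentive-compatible mechanism: the highest bidder wins but pays the second-highest bid, so bidding your true value is a dominant strategy.

```python
def second_price_auction(bids):
    """Sealed-bid second-price (Vickrey) auction.

    bids: dict mapping bidder name -> bid amount.
    Returns (winner, price): the highest bidder wins, but pays only
    the second-highest bid. Because the price paid does not depend
    on the winner's own bid, bidding one's true value is a dominant
    strategy: this is what incentive compatibility looks like.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

# Hypothetical example: Carol values the item most and wins,
# but pays Bob's bid of 70 rather than her own 90.
print(second_price_auction({"alice": 50, "bob": 70, "carol": 90}))
# -> ('carol', 70)
```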
We are entering a hyper-networked, hyper-intermediated and rapidly evolving information age, in which centralized institutions are losing public trust and people are searching for alternatives. As such, different forms of mechanisms – as a way of intelligently aggregating the wisdom of the crowds (and sifting it apart from the also ever-present non-wisdom of the crowds) – are likely to only grow more and more relevant to how we interact.
What is credible neutrality?
Now, let us talk about this all-important idea, “credible neutrality”. Essentially, a mechanism is credibly neutral if just by looking at the mechanism’s design, it is easy to see that the mechanism does not discriminate for or against any specific people. The mechanism treats everyone fairly, to the extent that it’s possible to treat people fairly in a world where everyone’s capabilities and needs are so different. “Anyone who mines a block gets 2 ETH” is credibly neutral, “Bob gets 1000 coins because we know he’s written a lot of code and we should reward him” is not. “Any post that five people flag as being bad does not get shown” is credibly neutral, “any post that our moderation team decides is prejudiced against blue-eyed people does not get shown” is not. “The government grants a 20-year limited monopoly to any invention” is credibly neutral (though there are serious challenges around the edges in determining what inventions qualify), “the government decides that curing cancer is important, and so appoints a committee to distribute $1 billion among people trying to cure cancer” is not.
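To see how “credibly neutral” can be read directly off a rule’s design, here is a toy Python sketch (the field names are hypothetical) contrasting the two moderation rules from the examples above:

```python
FLAG_THRESHOLD = 5  # the only parameter; applies identically to every post

def visible_neutral(post):
    # Credibly neutral: the outcome depends only on participant
    # inputs (how many people flagged the post), never on who
    # wrote it or which viewpoint it expresses.
    return post["flags"] < FLAG_THRESHOLD

def visible_not_neutral(post, moderation_team_verdicts):
    # Not credibly neutral: a specific outcome ("no posts the
    # moderation team judges prejudiced") is written into the rule,
    # and outsiders cannot verify how the verdicts are reached.
    return not moderation_team_verdicts.get(post["id"], False)
```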
Of course, neutrality is never total. Block rewards discriminate in favor of those that have special connections that give them access to hardware and cheap electricity. Capitalism discriminates in favor of concentrated interests and the wealthy, and against the poor and those who rely heavily on public goods. Political discourse discriminates against anything caught on the wrong side of social desirability bias. And any mechanism that corrects for coordination failures has to make some assumptions about what those failures are, and discriminates against those whose coordination failures it underestimates. But this does not detract from the fact that some mechanisms are much more neutral than others.
This is why private property is as effective as it is: not because it is a god-given right, but because it’s a credibly neutral mechanism that solves a lot of problems in society - far from all problems, but still a lot. This is why filtering by popularity is okay, but filtering by political ideology is problematic: it’s easier to agree that a neutral mechanism treats everyone reasonably fairly than it is to convince a diverse group of people that some particular blacklist of disallowed political viewpoints is correct. And this is why on-chain developer rewards are viewed more suspiciously than on-chain mining rewards: it’s easier to verify who’s a miner than it is to verify who’s a developer, and most attempts to identify who is a developer in practice easily fall prey to accusations of favoritism.
Note that it is not just neutrality that is required here; it is credible neutrality. That is, it is not enough for a mechanism to avoid being designed to favor specific people or outcomes over others; it is also crucially important for the mechanism to be able to convince a large and diverse group of people that it at least makes that basic effort to be fair. Mechanisms such as blockchains, political systems and social media are designed to facilitate cooperation across large and diverse groups of people. In order for a mechanism to actually serve as this kind of common substrate, everyone participating must be able to see that the mechanism is fair, and everyone participating must be able to see that everyone else can see that the mechanism is fair, because everyone participating wants to be sure that everyone else will not abandon the mechanism the next day.
That is, what we need is something like a game-theoretic concept of common knowledge - or, in less mathematical terms, a widely shared notion of legitimacy. To achieve this kind of common knowledge of neutrality, the neutrality of the mechanism must be very easy to see - so easy to see that even a relatively uneducated observer can see it, even in the face of a hostile propaganda effort to make the mechanism seem biased and untrustworthy.
Building credibly neutral mechanisms
There are four primary rules to building a credibly neutral mechanism:
- Don’t write specific people or specific outcomes into the mechanism
- Open source and publicly verifiable execution
- Keep it simple
- Don’t change it too often
(1) is simple to understand. To go back to our previous examples, “Anyone who mines a block gets 2 ETH” is credibly neutral, “Bob gets 1000 coins” is not. “Downvotes mean a post gets shown less” is credibly neutral, “prejudice against blue-eyed people means a post gets shown less” is not. “Bob” is a specific person, and “prejudice against blue-eyed people” is a specific outcome. Now of course, Bob may genuinely be a great developer who was really valuable to some blockchain project’s success, and deserves a reward, and anti-blue-eyed prejudice is certainly an idea I, and hopefully you, don’t want to see becoming prominent. But in credibly neutral mechanism design, the goal is that these desired outcomes are not written into the mechanism; instead, they are emergently discovered from the participants’ actions. In a free market, the fact that Charlie’s widgets are not useful but David’s widgets are useful is emergently discovered through the price mechanism: eventually, people stop buying Charlie’s widgets, so he goes bankrupt, while David earns a profit and can expand and make even more widgets. Most bits of information in the output should come from the participants’ inputs, not from hard-coded rules inside of the mechanism itself.
(2) is also simple to understand: the rules of the mechanism should be public, and it should be possible to publicly verify that the rules are being executed correctly. Note that in many cases, you don’t want the inputs or outputs to be public; this article goes into the reasons why a very strong level of privacy, where you cannot even prove how you participated if you want to, is often a good idea. Fortunately, verifiability and privacy can be achieved at the same time with a combination of zero knowledge proofs and blockchains; see here for more details.
(3), the idea of simplicity, is ironically the least simple. This post on “central planning as overfitting” goes into many of the arguments much more deeply, but it’s worth summarizing. The simpler a mechanism is, and the fewer parameters it has, the less space there is to insert hidden privilege for or against a targeted group. If a mechanism has fifty parameters that interact in complicated ways, then for any desired outcome you can likely find parameters that achieve it. But if a mechanism has only one or two parameters, this is much more difficult. You can create privilege for very broad groups (“demagogues”, “the rich”, etc.) but you cannot target a narrow group of people, and your ability to target specific outcomes declines further with time, as there is more and more of a “veil of ignorance” between you at time A, creating the mechanism, and your beneficiaries at time B, in whatever specific situation might let them disproportionately benefit from it.
And this brings us to rule (4), not changing the mechanism too often. Changing the mechanism is a type of complexity, and it also “resets the clock” on the veil of ignorance, giving you the opportunity to adjust the mechanism to favor your particular friends and attack your particular enemies with the most up-to-date information about what unique positions these groups are in and how different adjustments to the mechanism would affect them.
Not just neutrality: efficacy also matters
A common fallacy of the more extreme versions of the ideologies that I alluded to at the beginning of this post is a kind of neutrality maximalism: if it can’t be done completely neutrally, it should not be done at all! The fallacy here is that this viewpoint achieves narrow-sense neutrality at the cost of broad-sense neutrality. For example, you can guarantee that every miner will be on the same footing as every other miner (12.5 BTC or 2 ETH per block), and that every developer will be on the same footing as every other developer (with no remuneration beyond thanks for their public service), but what you sacrifice is that development becomes highly under-incentivized relative to mining. It is unlikely that the last 20% of miners contribute more to a blockchain’s success than its developers, and yet that’s what the current reward structures seem to imply.
Speaking more broadly, there are many kinds of things that society needs to produce: private goods, public goods, accurate information, good governance decisions, goods we don’t value now but will value in the future, and so forth. Some of these things are easier to create credibly neutral mechanisms for than others. And if we adopt an uncompromising narrow-sense neutrality purism that says only extremely credibly-neutral mechanisms are acceptable, then only those problems for which such mechanisms are easy to create will be solved. The community's other needs will see no systematic support at all, and so broad-sense neutrality suffers.
Hence, the principle of credible neutrality must be augmented with another idea, the principle of efficacy. A good mechanism is also a mechanism that actually solves the problems we care about. Often, this means that developers of even the most obviously credibly-neutral mechanisms should be open to critique, as it’s entirely possible for a mechanism to be both credibly neutral and terrible (as patents are often argued to be).
Sometimes, this even means that if a credibly neutral mechanism to solve some problem has not yet been found, an imperfectly neutral mechanism should be adopted in the short term. Premines and time-limited developer rewards in blockchains are one example of this; using centralized methods for detecting accounts that represent a unique human and filtering out others when decentralized methods are not yet available is another. But recognizing credible neutrality as something that is very valuable, and striving to get closer to that ideal over time, is nevertheless important.
If one is truly concerned about an imperfectly neutral mechanism leading to loss of trust or political capture, then there are ways to adopt a “fail-safe” approach to implementing it. For example, one can direct transaction fees, rather than issuance, toward developer funding, creating a “Schelling fence” that caps how large the funding can grow. One can add time limits, or an “ice age” where the rewards fade away over time and must be renewed explicitly. One can implement the mechanism inside of a “layer 2” system, such as a rollup or an eth2 execution environment, that has some network-effect lock-in but can be abandoned with coordinated effort if the mechanism goes astray. When we foresee a possible breakdown in voice, we can mitigate the risks by improving freedom of exit.
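As a sketch of the “ice age” idea (the parameters here are made up for illustration), a developer reward that decays on a fixed schedule unless explicitly renewed might look like:

```python
INITIAL_REWARD = 1000    # hypothetical per-period developer reward
DECAY_PER_PERIOD = 0.9   # reward shrinks 10% each period by default

def dev_reward(period, last_renewal_period=0):
    """Reward fades geometrically since the last explicit renewal.

    Unless the community actively votes to renew (resetting
    last_renewal_period), the reward trends toward zero, so the
    default outcome is "no special privilege" rather than
    permanent capture by whoever set up the mechanism.
    """
    periods_since_renewal = period - last_renewal_period
    return INITIAL_REWARD * DECAY_PER_PERIOD ** periods_since_renewal

# Ten periods with no renewal: the reward has decayed to ~349.
print(dev_reward(10))
```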
Credibly neutral mechanisms for solving many kinds of problems do exist in theory, and need to be developed and improved in practice. Examples include:
- Prediction markets, eg. electionbettingodds.com as a “credibly neutral” source of probabilities of who will win near-future elections (see also Scott Alexander on this topic)
- Quadratic voting and funding as a way of coming to agreement on matters of governance and public goods (see the matching sketch after this list)
- Harberger taxes as a more efficient alternative to pure property rights for allocating non-fungible and illiquid assets (eg. see thread: capped Harberger taxes for domain names)
- Peer prediction, a much more formalized version of the “meta-moderation” mentioned above
- Reputation systems involving transitive trust graphs
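For instance, the quadratic funding rule mentioned above has a simple closed form: a project’s total funding is the square of the sum of the square roots of its individual contributions, with a matching pool paying the difference. A minimal Python sketch (with hypothetical contribution amounts, and ignoring the pool-size cap that real implementations need):

```python
from math import sqrt

def quadratic_funding_match(contributions):
    """Quadratic funding matching (Buterin/Hitzig/Weyl).

    contributions: list of individual contribution amounts to one project.
    Total funding = (sum of sqrt(c_i))^2; the matching pool pays the
    difference between that and what contributors put in directly.
    Many small contributors attract a far larger match than one whale
    giving the same total, so the rule rewards breadth of support.
    """
    total_direct = sum(contributions)
    total_funding = sum(sqrt(c) for c in contributions) ** 2
    return total_funding - total_direct

# 100 people giving 1 each: funding = 100^2 = 10000, so the match is 9900.
print(quadratic_funding_match([1] * 100))
# One person giving 100: funding = 100, so the match is 0.
print(quadratic_funding_match([100]))
```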
We do not yet know which versions of ideas like these, or of entirely new ones, will work well, and we will need many rounds of experimentation to figure out what kinds of rules lead to good outcomes in different contexts. The need for a mechanism’s rules to be open, but at the same time resistant to attack, will be a particular challenge, though cryptographic developments that allow open rules and verifiable execution and outputs together with private inputs will make some things considerably easier.
We know in principle that it is completely possible to make such robust sets of rules - as mentioned above, we’ve basically done it in many cases already. But as the number of software-intermediated marketplaces of different forms that we rely on keeps increasing, it becomes ever more important to make sure that these systems do not end up giving power to a select few - whether the operators of those platforms or even more powerful forces that end up capturing them - and instead create credible systems of rules that we can all get behind.