Suppose that you are a member of a large group with many decisions to make. It might seem that you have two basic choices.
First, allow everyone to vote on every decision. This approach produces high representativeness (at least if everyone votes), but the votes will be based on little information. Second, allow a subset of the group to make each decision. This approach reduces representativeness, but allows for more informed decision makers.
Democratic institutions combine these two basic approaches in elaborate ways to overcome the trade-off between unrepresentative and uninformed decision making. All enfranchised citizens select a few citizens to serve as legislators, for example, and legislators divide into committees. For different types of decisions, we accept different trade-offs. Three-judge panels are unrepresentative but informed, so in theory we allow them to resolve legal questions but not to change national policy.
None of these solutions is perfect, and we face the usual perils of republican decision making: ignorant voters, special interests, legislative inertia, activist judges, and executive policies highly sensitive to the quadrennial preferences of a small number of voters in places like Florida and Ohio. But we may well structure voting regimes reasonably efficiently given the fundamental trade-off.
There is, however, a way of overcoming this basic trade-off using prediction markets rather than votes. We can commit to selecting someone at random from our group, or from a subset of it, to say what the decision should be. We will require this person to listen to detailed arguments and to produce a detailed explanation. But this will not be our decision. Instead, our decision will be based on the forecast generated by a “normative prediction market” predicting what this person will conclude is best.
Moreover, we don’t even need to have someone conduct this evaluation for every decision. We can use a pseudo-random number generator to pick only, say, one-tenth of the decisions for ex post evaluation. Before we make the random selection, we run a conditional normative market, where the condition is that the decision is selected for ex post evaluation. But every time, it is the market’s prediction that we will use as the decision.
A summary of the steps:

1. A subsidized conditional market predicts the decision.
2. This prediction determines the group's actual decision.
3. A random number from 0 to 1 is drawn; if it's greater than 0.10, all money in the market is returned.
4. A person is picked at random from the group and must eventually announce what he or she would have decided.
5. This evaluation determines payouts in the conditional market.
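The steps above can be sketched as a short simulation. This is only an illustrative toy, not a real market design: the "market price" is stood in for by a simple average of trader forecasts, and the evaluators are hypothetical callables I've invented for the example.

```python
import random

def run_normative_market(trader_forecasts, evaluator_pool, eval_prob=0.10, rng=None):
    """One round of a (toy) conditional normative prediction market.

    trader_forecasts: each trader's probability that the randomly drawn
        ex post evaluator would approve the proposal.
    evaluator_pool: callables returning True (approve) or False (reject).
    eval_prob: fraction of decisions selected for ex post evaluation.
    """
    rng = rng or random.Random()

    # (1) The subsidized market aggregates trader forecasts into a price;
    # a plain average stands in for actual price discovery here.
    market_price = sum(trader_forecasts) / len(trader_forecasts)

    # (2) The prediction itself determines the group's actual decision.
    decision = market_price > 0.5

    # (3) Draw a random number; most of the time the decision is not
    # selected for evaluation, and all money in the market is returned.
    if rng.random() >= eval_prob:
        return {"decision": decision, "evaluated": False, "payout_on": None}

    # (4) A person is picked at random and announces what he or she
    # would have decided.
    evaluator = rng.choice(evaluator_pool)
    verdict = evaluator()

    # (5) That evaluation settles the conditional market's payouts.
    return {"decision": decision, "evaluated": True, "payout_on": verdict}
```

Note that the decision in step (2) is fixed before the random draw in step (3), which is what lets the market, rather than the evaluator, govern every decision.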
This is a radically new way of making decisions, and I emphasize in the book that there are strong reasons not to transform radically our democratic institutions. I use dramatic examples (e.g., prediction market legislatures, trial by market) to illustrate the approach colorfully, but I don’t believe we should rewrite the Constitution. Normative markets could serve as useful inputs into more traditional decision making (change step 2 above to “This prediction provides a recommended decision.”), or be used in private settings.
All I want to show here is that this approach helps overcome the trade-off between unrepresentative and uninformed decision making.
If the prediction market is sufficiently subsidized, then the prediction can be highly informed. Since we only have a few decisions that need ex post evaluation, and only one decision maker per decision, we can demand a lot of the ex post decision maker, who will then become informed too. Picking a random citizen might not be the best strategy, since an informed dolt is still a dolt, so we might have the ex post evaluator randomly drawn from a body akin to a judiciary (experts selected by indirectly elected representatives).
Meanwhile, the system provides a kind of virtual representativeness. Traders don’t know who the actual ex post decision maker would be, so they will average the anticipated decisions of a broad ideological range of potential decision makers. We may be able to increase representativeness still further by delaying evaluations a decade or so, so it won’t matter if we happen to have an unbalanced set of ex post decision makers at any one time.
Critically, it doesn’t matter if the actual ex post decision maker makes a foolish or unrepresentative decision. What matters is the average expected decision, because it is only the prediction of the ex post evaluation that determines policy.
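A small numeric sketch of this averaging point, with made-up numbers: suppose the pool of potential evaluators spans a broad ideological range, with each evaluator characterized by a (hypothetical) probability of approving the proposal.

```python
def expected_approval(evaluator_approval_probs):
    # Traders don't know which evaluator will be drawn, so a well-informed
    # market price converges on the average approval probability across
    # the whole pool of potential ex post decision makers.
    return sum(evaluator_approval_probs) / len(evaluator_approval_probs)

# Illustrative pool: some evaluators would almost surely approve,
# some would almost surely reject.
pool = [0.95, 0.90, 0.60, 0.40, 0.10]

price = expected_approval(pool)   # 0.59
decision = price > 0.5            # True
```

Even though the evaluator actually drawn might be the one who approves only 10% of the time, the policy follows the market's prediction of the average, so no single foolish or unrepresentative evaluator can dictate the outcome.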