Ranked Choice Voting

We’re all familiar with how a traditional election works: each person votes for their favorite candidate. The votes are then added up and whoever has the most votes wins. Simple enough.

But there are also obvious problems with this. Consider a race with five candidates. Two of the candidates get 15% of the vote each, two others get 20% each, and the final candidate receives 30%. Under the traditional system (called First Past the Post, or FPTP) the last candidate would win with only 30% of the vote. That doesn’t feel right. Another problem with this system is the ‘spoiler effect’, familiar to any voter who worries that voting for their favorite candidate might ‘steal’ votes from the ‘electable’ one.

Ranked Choice Voting

But there are other methods of voting besides FPTP. One method, known as Ranked Choice Voting (RCV), continues to gain traction. Under RCV, instead of simply voting for your top candidate and electing the person who receives the most votes, each voter ranks the candidates in order of preference. If one candidate is the top choice on more than 50% of ballots, they win. If not, the candidate who got the fewest top-choice votes is eliminated, and the ballots are counted again.

For example, say Jill, Tom, and Alice are running under RCV. Of people’s top-choice picks, Jill got 40% of the votes, Tom 36%, and Alice 24%. Under FPTP, Jill got the most votes, so she would win. Under RCV, however, since no one got more than 50%, Alice (the candidate with the fewest votes) is eliminated from consideration and the top-choice votes are counted again.

To be clear, it’s not the case that Alice’s 24% would simply be handed to Tom or Jill. Instead, you take every ballot that listed Alice as the top choice, look at who that voter ranked second, and count the ballot for that person. For example, say two-thirds of the people who ranked Alice first (i.e. 2/3 of 24%, or 16%) had Jill as their second choice, and one-third (1/3 of 24%, or 8%) had Tom. In that case, Jill would have 40% of the vote (from the first round) plus an additional 16% (from the second), for a total of 56%. Because Jill now has over 50% of the vote, she wins.
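The counting procedure above is short enough to sketch in code. Here’s a minimal instant-runoff tally in Python; the ballots are hypothetical, constructed to reproduce the Jill/Tom/Alice percentages from the example:

```python
from collections import Counter

def instant_runoff(ballots):
    """Run instant-runoff (RCV) counting on ranked ballots.

    Each ballot is a list of candidates, most-preferred first.
    """
    ballots = [list(b) for b in ballots]
    while True:
        # Count each ballot's current top choice.
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:  # strict majority: over 50%
            return leader
        # No majority: eliminate the candidate with the fewest
        # top-choice votes...
        loser = min(tally, key=tally.get)
        # ...and strike them from every ballot, so those voters'
        # next-ranked choice counts in the following round.
        ballots = [[c for c in b if c != loser] for b in ballots]

# 25 hypothetical ballots matching the example: Jill 40%, Tom 36%,
# Alice 24%, with two-thirds of Alice's voters ranking Jill second.
ballots = (
    [["Jill"]] * 10            # 40% top-choice
    + [["Tom"]] * 9            # 36% top-choice
    + [["Alice", "Jill"]] * 4  # 16% transfers to Jill
    + [["Alice", "Tom"]] * 2   # 8% transfers to Tom
)
print(instant_runoff(ballots))  # → Jill (56% after Alice is eliminated)
```

After the first round (Jill 10, Tom 9, Alice 6) no one has a majority, so Alice is struck from every ballot; in the second round Jill holds 14 of 25 ballots (56%) and wins, exactly as in the worked example.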

There are many benefits of RCV over FPTP. It avoids the spoiler effect and gets closer to ensuring that the winning candidate maximizes the satisfaction of the electorate. Several places (like NYC) already use RCV, and non-partisan groups like Fair Vote are advocating for it. For those looking for social proof, it has been endorsed by many Nobel Prize winners, political thought leaders, political scientists, and, yes, politicians from both parties, including John McCain and Barack Obama.

Benefits and Tradeoffs

I certainly think that Ranked Choice Voting is a better system than First Past the Post. But that doesn’t mean there aren’t downsides. There is a great article that runs through the different voting systems and their pros and cons, and the Fair Vote site makes additional arguments for why RCV is better than the other systems. (I’m personally a fan of what’s called score voting, where each candidate is rated on a scale – similar to Amazon ratings – and the candidate with the highest average rating wins. That said, RCV is the alternative system with both the most popular support in the U.S. and the most “real world” use: it’s the primary form of voting in Australia, Ireland, New Zealand, and many other countries.)

If you have a choice to support or petition for Ranked Choice Voting over FPTP, I’d recommend it. No voting system is perfect, yet few procedural changes (perhaps along with moving to open primaries and bipartisan redistricting) have more downstream consequences for our form of government. Choose wisely.

Which Way To Miss?

Policy, as in many areas of life, is about tradeoffs. To take a simple example, consider arguments that some conservatives and progressives might make regarding welfare. I’ve heard friends that lean progressive say things like “How can we let someone who is really trying and down on their luck go hungry? We need to increase the availability of SNAP [food stamps].” On the other side, I’ve heard friends that lean conservative say some version of “I’ve seen people who get food stamps just waste them on things like cookies, cake, soda and chips – we need to reduce their use.” Who is right?

The answer, of course, is that they both are. People come in all shapes and sizes. They also vary in their behaviors, values, and ethics. This is what makes policy so difficult: you have one policy, but how people behave in response to that policy can vary widely.

There are various ways to deal with this. One is to refine the policy. For example, current SNAP policy does not allow recipients to buy alcoholic beverages with those funds. This works well when there is fairly broad agreement that the refinement makes sense. But refine further and further and you can easily end up with a complex mess that is difficult for the consumer to understand and for the regulator to enforce, and where the interaction effects between the various rules cause unintended outcomes. (Tax policy, anyone?)

Error Types

Beyond some basic “common sense” refinements, however, a better approach is simply to acknowledge that any policy is going to have some “error” in it, and to ask which type of error is more acceptable, and by how much. This is essentially the same as thinking about Type I and Type II errors in hypothesis testing – false positives versus false negatives.

Using the example above: would you rather give food stamps to someone who didn’t really need them, or deny food stamps to someone who really did? To be clear, not everyone will agree on the answer to this question, but at least we’re now starting to have a real conversation.

Let’s say you think it’s better to err on the side of being generous, even if it means some abuse of your generosity will happen. What ratio are you willing to accept? If there’s one abuse for every 10,000 people you truly help, that seems reasonable. What if it’s 5 people helped for every 1 abuse? 1 to 1? What if it’s 5 abuses for every 1 person truly helped? What if it’s 10,000?
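One way to make the ratio question concrete is to put rough numbers on the two error types and compare policies by how they trade one off against the other. The figures below are entirely made up for illustration – two hypothetical eligibility rules applied to the same population of 10,000 truly needy people:

```python
def error_ratio(truly_needy_helped, abusers_helped):
    """People truly helped per case of abuse.

    Higher is better if you'd rather err on the generous side.
    """
    return truly_needy_helped / abusers_helped

# Hypothetical outcomes of two eligibility rules (made-up numbers):
strict = {"needy_helped": 8_000, "needy_denied": 2_000, "abusers_helped": 100}
lenient = {"needy_helped": 9_800, "needy_denied": 200, "abusers_helped": 700}

for name, policy in [("strict", strict), ("lenient", lenient)]:
    ratio = error_ratio(policy["needy_helped"], policy["abusers_helped"])
    print(f"{name}: {ratio:.0f} truly helped per abuse, "
          f"{policy['needy_denied']} needy people turned away")
```

The strict rule helps 80 needy people per case of abuse but turns away 2,000 who qualified; the lenient rule drops to 14 per abuse but turns away only 200. Neither number is “correct” – the point is that stating the ratio forces the real disagreement into the open.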

Standards of Proof

Some parts of our legal system are already explicitly like this (or at least try to be). In the justice system there are various ‘standards of proof’ required depending on the situation. For example, a police officer must have ‘reasonable suspicion’ before stopping and questioning an individual. ‘Probable cause’ is required to issue a search warrant or arrest someone. A ‘preponderance of evidence’ or ‘clear and convincing evidence’ is required in civil court (and sometimes in criminal). And ‘proof beyond a reasonable doubt’ is the standard required for a criminal conviction.

Source: DefenseWiki

By setting such a high bar for evidence, we as a society have made the choice that we would rather let a guilty party go free than convict an innocent one. According to Wikipedia, it is estimated that between 2.3 and 5 percent of all U.S. prisoners are innocent. Is that an acceptable error rate? That’s an open question, but at least it’s a tractable one.

I’m not saying that the details of individual policies don’t matter. Clearly they do. And of course there are other real considerations, such as cost. But when there is disagreement, it may help to start the conversation by asking “which type of error are we more willing to make?” and “by how much?”