Posted on June 26, 2009
How ranked voting can enhance election integrity over current election procedures
Even without major changes to voting equipment, ranked voting methods can often be implemented at the local level in a way that is as secure as any non-ranked method. More desirable, though, is for the adoption of ranked voting to actually improve election integrity by allowing ballot-level audits (e.g. the machine reports a ballot that ranks the candidates in order as B, C, A, but the audit reveals that the original paper ballot ranks them D, C, A), not just precinct-level sum audits (e.g. candidate B has seven more votes in the machine total than on the paper ballots). Long-term, transparent and trustworthy elections can best be achieved on a wide scale in tandem with other changes in how we administer elections – such as giving jurisdictions the option to use publicly owned equipment running open source software, and having all counts verified in full with independent software and manual audits.
A key design element underlying the election integrity attainable with ranked voting is the distinction between recording voters' rankings and tallying them. These two tasks are appropriately handled by distinct voting system modules – each subject to its own audit and confirmation procedure.
The first module should simply create an output file, in a non-proprietary and easily read format such as a text file, that is an accurate record of how each voter marked his or her ballot. Some vendors' systems perform some normalization of the ballot data, such as closing up skipped rankings or inserting generic codes for overvotes rather than showing exactly which two candidates received the same ranking. While not ideal, these normalizations do not impair election audit integrity, since the records can still be subjected to the preferred ballot-level manual audit procedure described below. The audit procedure used by San Francisco simply verifies that the number of choices for each candidate at each ranking matches the machine record. This audit method is useful for detecting machine errors, but it is not adequate for detecting extremely sophisticated fraud (since whether a 2nd choice for candidate D appears on a ballot that ranks candidate A or candidate C first can matter). In short, checking precinct-level vote totals, or totals for each ranking, may have some value, but neither is a comprehensive auditing procedure for ranked ballots.
The preferred audit procedure, which can detect both machine error and sophisticated fraud, involves selecting a random sample of voting machines and manually comparing the paper ballots read by each machine with the machine record, to verify that the machines are properly recording the voters' marks. This is more definitive than traditional audits, in that a group seeking to commit fraud would need the ability to alter or substitute both the paper ballots and the machine record. A traditional vote-for-one election audit may not detect fraud that successfully substitutes a portion of the paper ballots (traditional ballot box stuffing). These audited ranked-ballot files should then be made publicly available (as is done in instant runoff voting elections in San Francisco and Burlington by posting them on the Internet).
The second module of the election system performs the algorithm for tallying the election (such as eliminating bottom candidates and transferring votes to next choices). The goal here is to ensure that the tally can be done both by the vendor's system and by independent software – or a standard spreadsheet or database program – to confirm the result. Beyond merely auditing the tally, this allows anyone who wishes to completely re-do the tally from scratch, using any software of their choosing, or even by printing out the ballot rankings and sorting them by hand, accomplishing a complete recount. The reliability of any such recount/re-tally is, of course, dependent on the manual audit of the actual optical scan ballots, showing that the machine record of ballot rankings is correct. Double-checking the IRV tallies with independent software is regularly done for San Francisco and Burlington elections.
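To illustrate how simple this second module is conceptually, the eliminate-and-transfer algorithm can be sketched in a few lines of Python. This is a minimal sketch with names of our own choosing, not any vendor's certified code; it assumes each ballot is a list of candidate names in ranked order and breaks last-place ties arbitrarily.

```python
from collections import Counter

def irv_winner(ballots):
    """Instant runoff tally: count each ballot for its highest-ranked
    continuing candidate; if no candidate has a majority, eliminate
    the bottom candidate and transfer those ballots to next choices."""
    remaining = {c for ballot in ballots for c in ballot}
    while remaining:
        counts = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in remaining:   # skip eliminated candidates
                    counts[choice] += 1
                    break                 # each ballot counts once per round
        leader, leader_votes = counts.most_common(1)[0]
        if leader_votes * 2 > sum(counts.values()):
            return leader                 # majority of continuing ballots
        remaining.discard(min(counts, key=counts.get))

# C is eliminated first; both C ballots transfer to B, giving B a majority.
ballots = [["A", "B"]] * 3 + [["B"]] * 4 + [["C", "B"]] * 2
```

Because the algorithm is this small, an independent re-tally from the published ballot records is well within reach of any campaign or member of the public.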
While the audit procedures are necessarily different than with vote-for-one elections, there is no conflict between supporting election integrity and the use of ranked voting methods. The founder of the election integrity movement that blocked the adoption of DRE voting equipment in Ireland, for example, is also an ardent supporter of ranked voting (read her statement here). As she points out, IRV elections can be run independently of software, as indeed they are in Ireland.
Proposed "best practices" for auditing a ranked-ballot election
Because ranked voting elections on optical scan equipment involve two independent steps – capturing rankings and performing the ranked voting tally – it is possible, and in fact preferable, to manually audit the election using the stored rankings rather than precinct-summable vote totals. We will give a brief explanation of how this can be done, and we’ll provide a more detailed example in the appendix of this paper.
Let’s suppose you are trying to manually audit the ranked voting results from a precinct with 1,000 votes cast, and you have a printout of the 1,000 electronic records of the rankings from that precinct. You simply pick up the first ballot in your stack, find an electronic record that corresponds to that ballot, and put a check mark next to the ranking. For example, if the machine reports that 73 ballots ranked the candidates in order B, D, A, C, then after examining the paper ballots there should be 73 check marks next to that ranking combination on the audit form. Go through all your ballots, and if you’ve got one check mark next to every ranking and you don’t have any extra ballots, you’ve verified the storage of the rankings. If there is a discrepancy and you didn’t make an error during the audit, the voting equipment failed to store the rankings correctly. [More details appear in the appendix, along with an explanation of how to audit the application of the ranked voting tallying method to those records.]
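The check-mark bookkeeping described above amounts to comparing two multisets of rankings: one read off the paper ballots, one reported by the machine. As a rough illustration (the function and variable names are ours, and each ranking is assumed to be recorded as a simple string), the comparison can be written as:

```python
from collections import Counter

def reconcile(paper_rankings, machine_rankings):
    """Compare the rankings read off the paper ballots against the
    machine's electronic record; return any discrepancies as
    {ranking: (paper count, machine count)}."""
    paper = Counter(paper_rankings)
    machine = Counter(machine_rankings)
    return {r: (paper[r], machine[r])
            for r in paper.keys() | machine.keys()
            if paper[r] != machine[r]}

machine = ["B,D,A,C"] * 73 + ["A,B,C,D"] * 27
paper_ok = ["A,B,C,D"] * 27 + ["B,D,A,C"] * 73
assert reconcile(paper_ok, machine) == {}   # audit passes

# One paper ballot actually reads D,C,A,B: the audit flags it.
paper_bad = ["B,D,A,C"] * 72 + ["D,C,A,B"] + ["A,B,C,D"] * 27
assert reconcile(paper_bad, machine) == {"B,D,A,C": (72, 73),
                                         "D,C,A,B": (1, 0)}
```

An empty result means every paper ballot was matched to exactly one machine record, which is precisely the condition the manual check-mark procedure verifies.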
The machine record of rankings should be made publicly available, ideally through Internet posting, both prior to and following the audit, along with instructions for conducting a tally using independent software or manual procedures. The ability of opposing campaigns, the press, and anyone else who wishes to perform their own independent tallies using manually audited ballot records creates confidence in the results.
An even higher level of election integrity and transparency can be achieved following the example of the Election Transparency Project of Humboldt County (CA). As part of their audit procedure they re-scanned all ballots using commercial off-the-shelf scanners, and used software to detect voter marks on each ballot. This allows for the potential of having any "problem" ballots displayed on a screen for election judges to rule on voter intent (as humans can distinguish stray marks, etc. more effectively than computers). The Humboldt County Election Transparency Project proved its value in its first use, discovering a flaw in the Premier GEMS software used in conjunction with the county's Premier voting machines that resulted in the failure to include 197 ballots in the election results. This same model of pixel scanning of ballots will be typical in the next generation of optical scan voting machines, which will further enhance security and the ease of conducting ranked-ballot elections.
In this statement and the accompanying appendices we have attempted to address claims made about election security and ranked-choice voting. We know the commitment to election integrity of the individuals making these claims is rock solid, but they have made a key erroneous assumption – that a manual audit of a ranked voting election requires precinct-summable vote totals for all possible voting combinations – that caused them to reach incorrect conclusions about the implications of ranked voting for election integrity. We have also attempted to give very concrete examples of how one can audit a ranked voting election by verifying the machine record through a manual audit and verifying the tally from published electronic records of each ranking.
The public instant runoff voting elections held in San Francisco, California; Burlington, Vermont; and Aspen, Colorado were the most transparent, verifiable public elections ever held up to that point. They were in fact even more secure than elections counted purely by hand, because they combine paper ballots with electronic records of every single ballot. Burlington even made the ranked voting tally code and program available for the public to use, inspect, test and modify.
The key to election integrity is the combination of redundant paper and electronic records, along with public observation of all processes, and rigorous post-election manual audits. It is this redundancy that enhances error detection and makes fraud so much more improbable, because the perpetrator would need to have the ability to falsify both the machine and the paper record of each ballot, and make them match. All of the San Francisco instant runoff voting elections since 2004 have set new standards for election integrity for both ranked voting and non-ranked voting elections, by making the voters' marks on each individual ballot public. The country would be well-served if other jurisdictions adopted these standards for transparency and verifiability of elections.
APPENDIX: Addressing Specific Claims about Election Integrity
Some election integrity activists have mistakenly claimed that ranked voting methods are incompatible with election integrity measures. We present seven such claims with our brief responses, and then provide more detail, sources, and concrete examples of how to manually audit ranked voting elections.
Summary of claims and responses
Claim #1: “Manual audits of ranked voting elections aren’t possible because of the large number of possible ranking combinations.” Fact: San Francisco demonstrated one way to do this in 2004 and has done so in every subsequent election, and the number of possible combinations is irrelevant.
Claim #2: “San Francisco did not release the election data required to manually audit the election.” Fact: San Francisco releases in a timely manner all of the data needed to manually audit the election.
Claim #3: “Manual counts of ranked voting elections with more than four candidates are ‘incredibly complex.’” Fact: In places like Australia where all elections use ranked voting, people don’t think the counting process is “incredibly complex”; they think it’s the only way to count ballots. In fact, they can’t imagine not counting second choices when there is no majority winner.
Claim #4: “Average members of the public aren’t capable of verifying a ranked voting election by comparing electronic records of rankings with the original paper ballots.” Fact: It can be done with pencil and paper – nothing harder than making check marks on a list of rankings. A detailed example is included below.
Claim #5: “San Francisco does not have public oversight of post-election manual audits, the reconciliation of printed, voted, spoiled and unused ballots, and ballot security.” Fact: All aspects of San Francisco’s election are publicly observable, all the data is public, and the publicly observable official canvass reconciles all voted, unused and spoiled ballots.
Claim #6: “IRV is very destructive to the integrity of US elections.” Fact: When implemented the way it has been done in San Francisco, Burlington or Pierce County, ranked voting in fact boosts the integrity of our elections. If we applied the same provisions to non-ranked voting elections, we’d have much more secure elections.
Claim #7: “Prior to solving the spoiler and two-party domination problems, our first priority should be to ensure the fundamental integrity of election results, regardless of which voting method is used.” Fact: When properly implemented, ranked voting boosts election integrity and verifiability.
Detailed analysis of claims about ranked voting and election integrity
1a. Claim: An activist in the election integrity area with a math background has mistakenly suggested that there is no procedure that is a valid check on the accuracy of any IRV election, because IRV elections are not precinct summable “unless all of the sum from i = 0 to N−1 [where N is the number of candidates] of the N!/i! vote counts in each precinct that are possible for each election contest (and that is a huge number as the number of candidates increases) are all publicly reported PRIOR to beginning the random selection of the precincts.”
The Facts: Because ranked voting elections on optical scan equipment involve two independent steps – capturing rankings and performing the ranked voting tally – it is possible, and in fact preferable, to manually audit the election using the stored rankings rather than precinct-summable vote totals. We will give a brief explanation of how this can be done, and we’ll provide a more detailed example later in this paper.
Let’s suppose you are trying to manually audit the ranked voting results from a precinct with 1,000 votes cast, and you have a printout of the 1,000 electronic records of the rankings from that precinct (these can be sorted and grouped by ranking combination to make finding matching ballots easier). You simply pick up the first ballot in your stack, find an electronic record that corresponds to that ballot, and put a check mark next to the ranking. Go through all your ballots, and if you’ve got one check mark next to every ranking and you don’t have any extra ballots, you’ve verified the storage of the rankings. If there is a discrepancy and you didn’t make an error during the audit, the voting equipment failed to store the rankings correctly. [More details appear below, along with an explanation of how to audit the application of the ranked voting tallying method to those records.]
Furthermore, there are two other major problems with the formula. First, it greatly overstates the number of voting combinations possible in San Francisco or Pierce County, because these jurisdictions only allow three rankings. Since they store codes for undervotes and overvotes, the number of possible combinations is approximately (N+2)^3. For 10 candidates, the formula gives nearly 10 million when the actual number is closer to 1,700. For 18 candidates (the number of candidates in the 2007 San Francisco mayoral election), the formula gives 17 × 10^15 when the true number is around 8 × 10^3 – a difference of about 12 orders of magnitude.
Second, even if you wanted to store precinct-summable vote totals, the number of possible combinations is irrelevant; what matters is the number of combinations actually used. The number of ranking combinations used is obviously never bigger than the number of votes cast in a precinct. In San Francisco in 2007, with 18 candidates, the greatest number of votes cast in a precinct was 461. That’s your worst-case scenario, but the greatest number of ranking combinations actually used in a precinct was only 131. The total number of rankings used citywide, in over 500 precincts with 150,000 ballots cast, was 1,684, which is a great deal smaller than the 8,000 possible combinations. It is certainly possible to manually audit 131 precinct-summable vote totals in a precinct or 1,684 citywide, although it’s easier to use the method we described above.
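Both counts in the comparison above are easy to check directly. The following short Python sketch (the function names are ours) evaluates the cited formula alongside the three-ranking count actually applicable in San Francisco and Pierce County:

```python
from math import factorial

def formula_combinations(n):
    """The cited formula: the sum over i = 0..N-1 of N!/i!,
    i.e. every ordered ranking of any length of N candidates."""
    return sum(factorial(n) // factorial(i) for i in range(n))

def actual_combinations(n):
    """Three allowed rankings, with stored codes for undervotes
    and overvotes: approximately (N+2)^3 distinct records."""
    return (n + 2) ** 3

assert formula_combinations(10) == 9864100   # "nearly 10 million"
assert actual_combinations(10) == 1728       # "closer to 1,700"
assert actual_combinations(18) == 8000       # "around 8 x 10^3"
assert formula_combinations(18) > 10 ** 16   # "17 x 10^15"
```

The assertions reproduce the figures quoted in the text, confirming the roughly twelve-order-of-magnitude gap for 18 candidates.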
[Source: Official SF Department of Elections ballot image data, analyzed by author, data available on request.]
1b. One activist has suggested that it would be extremely difficult or impossible to manually audit an IRV election because the number of possible ranking combinations for each precinct or voting machine is huge (that same formula: the sum from i = 0 to N−1 of N!/i!, where N is the number of candidates). This statement implies that manually auditing a ranked voting election is either not possible or so burdensome that it takes 30 days or longer (several opponents of ranked voting have referenced the month it took for final results in San Francisco in 2007).
The Facts: The number of voting combinations is irrelevant to the manual auditing of a ranked voting election, as demonstrated above, and the assertion that the large number of combinations makes a manual audit burdensome is false, as we show here and below. San Francisco manually audited its ranked voting elections in 2004, 2005 and 2006 in the same amount of time as non-ranked voting tallies. There was no ranked voting tally in 2007 because all affected races were settled by absolute majorities in the first round; the delays in producing final results in 2007 were due to a 100% manual tally of four ballot measure races and had nothing to do with ranked voting. Later in this paper we describe in detail how one can manually audit a ranked voting election, and why manual audits of ranked voting elections that also have electronic records of all rankings are in fact much more thorough than manual audits of non-ranked voting elections.
San Francisco’s post-election audit of ranked voting elections includes two parts. First, the city manually tallies raw first-, second- and third-choice totals as though these were three separate races. Then, the city performs a true manual ranked voting tally of the ballots from the randomly selected precincts.
[Source: http://www.sfgov.org/site/elections_index.asp and Ranked Choice Voting Procedures for the ES&S Optech Eagle equipment, first released June 2002, section 4.5: 1% Manual Recount.]
2. It has been claimed on the Internet that San Francisco never publicly reported ranking numbers, and has thus failed to take the first fundamental step required for all auditing in any field, which is to “commit the data” prior to beginning the audit.
The Facts: San Francisco in fact reports all the key data that allows independent verification of the official counts as well as post-election audits. This data includes:
- Raw first choice totals reported in all the election reports
- The complete set of ranked voting rankings, sortable by precinct
- The round-by-round ranked voting tally (in years when a ranked voting tally actually occurs, unlike 2007), and
- Statement of vote showing precinct and absentee totals for all candidates in all precincts
All of this data is released starting on election night and updated through the counting of absentee and provisional ballots.
3. One election integrity activist has suggested that it would be incredibly complex to do a hand count of an IRV election with more than four candidates in an election administered by more than one jurisdiction.
The Facts: Many election reformers and administrators have participated in or observed such manual counts in Cambridge, MA (pre-1997), the ranked voting count for president of Ireland, local and federal elections in Australia, local elections in Ireland and Northern Ireland, and so on. Indeed, in Australia’s 2007 parliamentary elections, the median number of candidates in the 150 House of Representatives races was more than seven. Hand counting ranked voting elections is not “incredibly complex.” You simply sort ballots by first choice, count each pile, add up the votes for all candidates, determine the candidate to eliminate, redistribute that candidate’s ballots to each voter’s next choice, tally the new totals, and repeat the process. Because most ballots stay with the voter’s first choice in most elections, a ranked voting hand count is only marginally more difficult than a plurality hand count. If, as commonly happens, the two front-runners receive a combined 90% of the vote, you only have to re-tally 10% of the ballots.
In 2004, a team of volunteers counted the ballots in San Francisco’s first public ranked voting election. This election, for student member of the Board of Education, took place in October 2004, just a couple weeks before the Department of Elections implemented ranked voting for the first time in November 2004. Nearly 8,000 students from all of San Francisco’s public and private high schools cast ballots. There were 12 candidates running, and it took 10 rounds of ranked voting to determine the winner. Counting the ballots took 10 volunteers about 8 hours.
In 2009 Burlington conducted a manual recount of its ranked voting election for mayor due to a legal request of the candidate who finished second. After the hand count was half finished, showing that the rankings on the paper ballots matched the machine record (a few votes changed, as in any recount when human eyes can discern voter intent where a scanner could not), the losing candidate withdrew his request, stating that he was convinced the voting machines and the tally were correct. The results of this manual recount are on the city's web site.
4. It has been suggested that average members of the public are not capable of verifying a ranked voting election by comparing electronic records of rankings with paper ballots.
The Facts: Here is how it can be done. Let’s assume that you have 100 paper ballots from a precinct and the corresponding 100 electronic records. The records from Burlington’s 2006 ranked voting election for mayor in Ward 2 illustrate the format: the first record shows that in Ward 2, memory card 1, ballot #1, ballot style 10001, the first choice was candidate C03, the second choice was candidate C04 and the third choice was candidate C02.
Here are the codes corresponding to the candidates:
.CANDIDATE C01, “Louie The Cowman Beaudin”
.CANDIDATE C02, “Kevin J. Curley”
.CANDIDATE C03, “Bob Kiss”
.CANDIDATE C04, “Hinda Miller”
.CANDIDATE C05, “Loyal Ploof”
.CANDIDATE C06, “Write-ins”
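Translating such a record into candidate names is entirely mechanical. Here is a hypothetical sketch in Python; the tuple layout mirrors the description above (ward, memory card, ballot number, ballot style, ranked choice codes), though the real vendor file format may differ:

```python
# Candidate codes from Burlington's 2006 mayoral election, Ward 2.
CANDIDATES = {
    "C01": "Louie The Cowman Beaudin",
    "C02": "Kevin J. Curley",
    "C03": "Bob Kiss",
    "C04": "Hinda Miller",
    "C05": "Loyal Ploof",
    "C06": "Write-ins",
}

def decode(record):
    """Translate one stored ballot record's ranking of candidate
    codes into candidate names, in ranked order."""
    ward, card, number, style, choices = record   # id fields kept for reconciliation
    return [CANDIDATES[c] for c in choices]

# The first record described above: C03 first, C04 second, C02 third.
assert decode((2, 1, 1, 10001, ["C03", "C04", "C02"])) == \
    ["Bob Kiss", "Hinda Miller", "Kevin J. Curley"]
```

Nothing about this step requires special skills: with the code table in hand, an auditor can read the ranking off any record directly.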
So how does your “average American” verify this data? As with any election audit, it is only the individuals authorized by law to observe or handle the ballots who can know for sure what is marked on each ballot. This should be a public process with representatives from all candidates and parties included.
Here’s the easiest way to do it. First, print out the rankings. Then, pick up the first ballot from your stack, find a ranking in the data that corresponds to the marks on the ballot and put a check mark next to that ranking on your print out. Place that ballot face down, pick up the next ballot, and continue the process until you’ve checked every ballot against the records.
At this point, if every record has a check next to it and there are no remaining ballots, you’ve verified the rankings. If not, you need to double-check your work and determine if you made a mistake or if the equipment stored the rankings incorrectly. This is called “reconciling the discrepancies” and must be done anytime you do a manual audit of an election (if there are any discrepancies).
Now, once you have the complete set of rankings for the entire race, you can literally tally them by hand (tick marks on sheets of paper), drop them into a spreadsheet for easy sorting and counting, or use any of a variety of free, open source software packages to do the tally. Steps for doing this are detailed in this spreadsheet.
So here are some quick answers to related ranked voting auditing questions:
“And what exact voting system is required that enables election officials to conveniently publicly post all its ballot records, and exactly how may the average citizen verify the integrity of your alleged ‘audit’ procedure?”
All three of the biggest vendors (ES&S, Diebold/Premier, and Sequoia) have developed ranked voting capability on their existing scanners, although they have been resistant to making it a standard feature in all their machines.
“Does your audit procedure require that most U.S. jurisdictions purchase new voting systems?”
No. As San Francisco did and Pierce County is doing, they just have to upgrade the firmware and software for ranked voting functionality.
“Does your audit procedure require that the average US citizens acquire new computer skills and equipment and programs that they are currently lacking?”
No. Pencil and paper have worked to count and audit ranked voting elections for a hundred years, but spreadsheets are easier and more verifiable.
5. An election integrity activist has stated that no states use all of the following fundamental measures that are required to ensure election integrity, and therefore ranked voting should not be implemented:
- public access to all election records and data necessary to evaluate the integrity of the electoral process,
- observable post-election independent manual audits of vote count accuracy,
- post-election ballot reconciliation of all printed, counted, unused, and spoiled ballots with voter process records, and
- public oversight of ballot security.
The Facts: This is false and misleading. While many states have not put such procedures in place, San Francisco in fact does all four of these things, and the measure on which it is weakest – public oversight of ballot security – has nothing to do with ranked voting. How does San Francisco meet these standards?
- It publicly releases all necessary electronic records as described above that, when combined with the paper ballots, allow a full manual tally of the election as well as an independent audit of all election results and audits.
- Under state law, all election procedures are publicly observable, including the required post-election manual audits and the pre-election logic and accuracy test.
- The official canvass, which is required by law and is publicly observable, includes the reconciliation of all printed, counted, unused and spoiled ballots and voter records.
- All aspects of ballot transport, counting and storage are publicly observable. Access to certain areas inside city hall and staging grounds is limited, but some members of the public (official election observers, grand jury members) are allowed to go everywhere.
Pierce County, Washington’s November 2008 implementation of ranked voting on a different vendor’s equipment (Sequoia) included all of these features, too.
[Source: Email from Pierce County Auditor Pat McCarthy on June 25, 2008]
6. Some election integrity activists worry that ranked voting “incentivizes” touchscreen voting and may lead to paperless voting machines.
The Facts: None of the jurisdictions adopting ranked voting are moving toward touchscreen voting, and none of the leading advocates of ranked voting promote touchscreen technologies. Because ranked voting elections on optical scan equipment can produce an electronic, auditable record of every ballot that can be compared to the paper ballot, such elections become more secure and more verifiable than non-ranked voting elections on optical scan machines that lack that record. The improved security rests on the fact that there is a complete redundant record of every ballot (electronic and paper): to avoid detection, a group attempting to commit fraud would need access and the ability to alter both the machine record and the paper ballots, and make the two match, whereas in a traditional audit, altering the paper ballots alone may be sufficient to commit fraud in a recount.
7. Some have suggested our first priority should be to ensure the fundamental integrity of election results, regardless of whether our current voting methods result in undemocratic results, “spoiler candidacies” and under-representation of large numbers of Americans.
The Facts: Everyone of course should focus on what they think is most urgently needed to protect and expand democracy. Someone with this view should continue to work on ensuring fundamental election integrity. But poor analysis and understanding of ranked voting elections is causing some activists to fight ranked voting on a ground – election integrity – that ranked voting actually improves when properly implemented.