Audit FAQs

FREQUENTLY ASKED QUESTIONS
ABOUT THE DEMOCRACY COUNTS AUDIT SYSTEM

Q. How do you protect secrecy of the ballot in self-reporting by voters of how they voted (in the app)?

A. This pertains only to the exit voting module of the app, not the other functions, as the data collection for the other functions does not involve ballot secrecy.

Ballot secrecy is treated as a dogma in western democracies rather than a practical need; people commonly brag about how they voted without putting themselves in danger. Nevertheless, we accede to the requirement. In countries where vote selling is common, or where local thugs might demand to see a person’s vote, the QR code (or whatever method we end up using) would not be turned on. That would probably be a local client/admin decision.

All electronic systems, including DRE machines, can expose the link between the voter and the vote, even if the sign-in and vote data are stored separately, because their clocks register both at the same or sequential times. Our app stores the voter data and the votes in separate databases, separated widely in the cloud. We also plan to delay the storage of one or the other by a few seconds in order to break the synchrony on the clock.
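As an illustration of the de-synchronization idea, a minimal sketch (hypothetical; the function name and jitter window are ours, not the app’s actual code) might perturb the stored clock time by a random offset before a record is written:

```python
import random
from datetime import datetime, timedelta

def jittered_timestamp(base: datetime, max_jitter_seconds: float = 30.0) -> datetime:
    """Perturb the clock time stored with a record by a random offset,
    so a vote record can no longer be matched to a check-in record by
    comparing timestamps."""
    offset = random.uniform(-max_jitter_seconds, max_jitter_seconds)
    return base + timedelta(seconds=offset)
```

The check-in record and the vote record would each be stamped this way before being written to their separate databases, so neither store carries the true synchronized time.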

We do have one instance in which we may have to keep the voter and vote together temporarily (whether for seconds or hours we don’t know yet): when a “foreign” voter votes in a precinct other than their own. In that case we don’t want to register their vote in the foreign precinct, and we don’t want to discard it, but we haven’t yet developed a way to get their vote over to their home precinct. We are entertaining two ideas at the moment: a) hold the voter and vote together until their precinct is identified, but build security around them (everything is encrypted anyway, so it is a matter of sending the decryption password over and, when the packages arrive, opening them and performing the regular operations); or b) upload all voter registration and precinct data in advance (eventually we’ll be able to do this on a routine basis), provide a way for the voter to identify his or her precinct and/or automatically identify and verify it, and simply deposit the voter and vote in the proper databases.
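A minimal sketch of idea (b), with hypothetical names and data (nothing here is the app’s actual code): if registration data is pre-loaded, a foreign voter’s records can be routed immediately to the home precinct’s separate stores, falling back to the held-package approach of idea (a) when the precinct is unknown:

```python
# Hypothetical pre-loaded registration data: voter ID -> home precinct.
REGISTRATION = {"voter-123": "precinct-7"}

voter_stores: dict = {}   # precinct -> check-in records
vote_stores: dict = {}    # precinct -> anonymous vote records

def deposit(voter_id: str, vote_record: dict) -> str:
    """Look up the voter's home precinct and deposit the check-in and
    the vote in that precinct's separate databases."""
    precinct = REGISTRATION.get(voter_id)
    if precinct is None:
        # Fall back to idea (a): hold the encrypted package together
        # until the home precinct can be identified.
        raise LookupError("home precinct unknown; hold encrypted package")
    voter_stores.setdefault(precinct, []).append(voter_id)
    vote_stores.setdefault(precinct, []).append(vote_record)
    return precinct
```

The point of the sketch is only the routing decision; encryption and the separation of the two stores would work as described elsewhere in this answer.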

We will probably provide a way for the voter to verify that her vote was recorded properly. There are various methods, but the most elegant one so far is to print enough QR-code papers for the voters at a poll, hand one to each voter participating in the exit vote, and have them scan the QR code with the mobile voting device when they log in (anonymously) to the vote part of the app (which is separate from the ID part and requires a separate login). They could then use their own device later to access the image of their vote one time.
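One way to make those QR-code papers one-time and unlinkable, sketched with hypothetical names (the actual implementation may differ): print a random token as the QR code, store only its hash on the server, and delete the hash on first use:

```python
import hashlib
import secrets

def issue_token() -> tuple:
    """Generate a random token to print as a QR-code paper; the server
    keeps only the hash, so the paper cannot be tied to a voter."""
    token = secrets.token_urlsafe(16)
    return token, hashlib.sha256(token.encode()).hexdigest()

class OneTimeVoteViewer:
    """Grant each token exactly one later view of the vote image."""
    def __init__(self):
        self._pending = {}  # token hash -> vote image reference

    def register(self, digest: str, vote_image_ref: str) -> None:
        self._pending[digest] = vote_image_ref

    def redeem(self, token: str):
        digest = hashlib.sha256(token.encode()).hexdigest()
        # pop() makes redemption one-time: a second attempt returns None.
        return self._pending.pop(digest, None)
```

Because the server holds only hashes, even its operators cannot reconstruct which paper, and hence which voter, corresponds to a stored vote image.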

Q. Are reforms, structural or procedural, required to let your system work? If so, would they be expensive?

A. Not in the United States. Activities such as ours fall under the protection of the First Amendment to the US Constitution, so we do not need official permission or cooperation to do our work. (Obviously it will smooth our functioning when officialdom is on board with what we are doing.)

Q. How can self-reporting of votes by voters be called an “audit” when it fails to include so many votes cast?

A. The votes that are cast will fall into separate categories, one for each candidate or measure choice: if 98% of Candidate X’s voters participate but only 40% of Y’s, then our sensitivity will be very high for X and very low for Y. One can readily imagine this happening, especially as candidates and parties learn to politicize the act of participating in this part of the audit.

Audit procedures vary across industries and fields. Audits of public companies, performed by audit firms regulated by the Public Company Accounting Oversight Board (PCAOB), randomly sample transactions to see if any suspicious ones pop up. If none of the sampled transactions are questionable, the rest are assumed to be okay. Risk-limiting election audits recount varying small proportions of randomly selected votes (one to five percent, depending on the closeness of the election); any error found is extrapolated to the entire vote, and a decision is then made as to whether to recount all the votes. Our exit-vote audit is not a sample like these but an actual replication of the vote, like a parallel election. It does not involve extrapolation from a sample because the “sample” will approach 100 percent, making extrapolation unnecessary.

The exit vote function is one part of a larger set of functions; other audit functions work differently.  

Q. Will promoting self-reporting of votes in the name of detecting fraud in how votes are counted increase the lack of confidence that votes matter? In turn, will this lead to lower voter turnout by stoking fears about uncounted ballots?

A. The fear that proving discrepancies (when they occur) will discourage voting is a straw man; any effects can be easily countered in the public mind by saying plainly that finally, finally, there is a way to make sure that every vote counts, so nonvoters have even less reason to stay home. Knowing that their votes can’t be stolen (at least without being discovered) is an incentive to vote – or at least it takes away a disincentive.

Our experience during our June 2016 pilot was that most people, when they heard that we were testing a system to hold the election system accountable, were eager to participate. That is an empirical indication of the opposite of cynicism. Of course these were the voters, not the nonvoters. How the nonvoters will behave is a separate question. Many “apathetic” nonvoters already lack confidence in the system – if not in the count’s accuracy, then in the translation of their votes into policy. “What’s the point?” they say. “The results in my life don’t change irrespective of who is elected.”

It is also true that in an election with a million votes, one vote is mathematically meaningless. Votes matter in aggregate, of course, so collective behaviors are important, which makes collective psychology important, in turn making the political debate about the audit important. We expect the audit’s opponents to be predominantly those interests that are threatened by accountability, and we will be surprised if they don’t do their best to discourage participation and impugn its credibility. One should also expect the same interests to impugn the credibility of risk-limiting audits. Their argument would be that the official system is accurate and that accountability measures are illegitimate, especially as they involve small samples. Whether those arguments result in more or less cynicism or discouragement about voting is an empirical question.

Interestingly, this straw man is raised most frequently by election integrity advocates. Some seem to believe that if people are in any way disabused of their magical thinking about the importance of their individual vote, or if they come to believe that their votes are being stolen, then they will lose their faith in the system and not vote. These questioners are caught in a dilemma: They believe in accuracy and accountability, the obtaining of which requires that the public be assumed to be discerning enough to be trusted with information that the system is vulnerable and needs to be safeguarded. On the other hand they are afraid that proof of theft and significant error will discourage voters, the public being too infantile and undiscerning to be trusted with the truth of our systems’ shortcomings. The way out of the dilemma is to provide proof of error at the same time as offering a solution.

Q. How does reporting of problems using your app differ from voter protection outreach programs? For instance, lawyers and volunteers often observe polls and report on irregularities in election administration, denials of right to cast ballot, etc.

A. The incident report function in our app is designed to help participants in voter protection programs report the irregularities they encounter. Indeed, the lawyers and volunteers could/should use our app to make their reports. Why? Because instead of having them filter in slowly, probably on paper, the reports by lawyers and volunteers will be available on our platform for instant analysis by the voter protection program managers, which will allow attorneys to address problems much faster.

Ordinary citizens can also download the app and file incident reports. These are kept separate from the lawyers’ and volunteers’ reports, but both appear in the same instant reporting, so the voter protection program managers can better assess their reliability.

This system adds speed and data volume, thereby improving the response speed of the lawyers charged with countering the irregularities and allowing them to see emergent problems in real time.

Q. How do voters know to whom they are surrendering their identity and the record of how they voted? Can’t a voter reasonably suspect that the information will be used for reasons other than those advertised and that their choices will not be kept private?

A. Our system is designed to make it very difficult to penetrate the secrecy of the ballot, but, as we know, the ultimate safeguard of any system is the integrity of the operators. We intend to build a reputation and a brand that demonstrate our integrity. No company, organization or agency can do better than that. Even many official systems can be forced to reveal who voted for whom. We at least are independent and without conflicts of interest. The people who manage elections in government are partisan politicians with conflicts of interest; if anyone has a reason to violate ballot secrecy, they do.

Q. What laws would forbid and/or penalize the owners of the information sent via the app from disclosing who voted and for which candidates? If the information can be shared or sold for reasons other than use in assuring the election results are accurate without legal penalty, shouldn’t voters know that it is legally not safe to share the information (i.e. that the secrecy of their ballot cannot be protected)?

A. Laws vary across jurisdictions. In the design of our system we have adhered to the highest standard of secrecy consistent with the realities of software design. If someone believes that it’s not safe to participate in our system they have the right to decline to participate. We will represent what we are doing with accuracy and integrity. There will always be skeptics and cynics, however.

In any event, people share their data all the time knowing that legal penalties are iffy at best, and people are right to be cynical: witness Facebook and Equifax. So people do a rough cost-benefit calculus, weigh what they know of the organization, and decide (if they have a choice) whether or not to use the service.  

“Legally not safe” and “cannot be protected” are very different questions. The first assumes that the absence of legal sanctions determines the weakness of the technical protection. We will provide technical protection, verified by independent experts, so that irrespective of any lack of sanctions the voters’ votes will not be traceable to them.

Q. Will the app involve voters taking pictures of their ballots? Does this violate state laws that forbid taking such pictures?

A. No, it doesn’t. Though the project that became Democracy Counts started with this initial idea, we quickly abandoned it because of concerns about a) the low probability that sufficient numbers of voters would download an app and use it properly to provide adequate sensitivity, b) the possibility that such pictures could be used to prove to vote buyers that the person selling his or her vote had voted as agreed, and c) the difficulty of administering a feature that might be legal in some jurisdictions and illegal in others.

Q. How does this self-reporting amount to a risk limiting audit?

A. These two concepts are not commensurate. The question does not make sense as stated.

In any event, we are not conducting a risk limiting audit, we are conducting a different kind of audit. It is inappropriate to posit an RLA as the proper metric for a system that operates in a wholly different way.

Q. In what way can random samples show failure to count specific votes?

A. Random samples (of anything) never reveal information about specific data occurrences. The best they can do is make it increasingly probable (with increasing sample size, or with differentiated sampling of subpopulations) that rare occurrences will be captured in the sample. For instance, if one in a thousand votes were switched, the odds of a 100-vote random sample capturing at least one switched vote would be about 10%. If the sampling method is not random – e.g., if the authority charged with the audit has a measure of control over the choice of precincts or ballot inventories being audited, or a subpopulation sample is poorly designed or executed – the probability shrinks even further.
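The 10% figure is the standard at-least-one calculation, which can be checked directly (a sketch; the function name is ours):

```python
def detection_probability(switch_rate: float, sample_size: int) -> float:
    """Probability that a simple random sample contains at least one
    switched vote, assuming independent draws."""
    return 1.0 - (1.0 - switch_rate) ** sample_size

# One switched vote per thousand, sample of 100: about 9.5%.
p = detection_probability(1 / 1000, 100)
```

This treats each sampled vote as an independent draw, which slightly overstates the probability for sampling without replacement, but the order of magnitude is the same.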

In order to obtain samples large enough to make it highly probable that we would capture low levels of fraud we would have to have access to the actual ballots and be allowed to sample randomly. In jurisdictions where the authorities are engaged in or aware of fraud we don’t expect that that access will ever be granted, except perhaps under legal duress after fraud was already strongly suspected.

Statistics as a tool was invented to make it feasible, i.e., easier and less expensive, to learn about large populations without having to capture 100% of the data. Confidence intervals were invented to quantify the chance that the true value lies outside the range implied by the sample. Though it is possible, with a sufficiently large (and expensive) sample, to make the confidence interval very small, the possibility of the true result lying outside it never completely goes away. Furthermore, unless the sample is a large proportion of the total population (which is Census-level expensive in populations larger than a few thousand) and the sampling method demonstrably random (which is pretty much impossible to achieve without compulsion in a politicized environment), the sample is subject to charges of response bias, i.e., that critical subpopulations refused to participate and that the extrapolations are therefore suspect.

These considerations are why we have chosen not to use random samples in our system but instead to attempt to capture the entire population and its data.

Q. Is there a threshold of self-reporting (e.g. above 70 per cent of all votes in a precinct) that makes the self-reporting minimally reliable as a way to suggest failure to count votes?

A. In exit polling, a random sample of 1,000 or so respondents is adequate even for very large populations, producing a margin of error of roughly ±3 percent at 95 percent confidence. We are not doing polling with our exit vote function, however, because we are not extrapolating from samples: we are attempting to duplicate the data for the entire voting population. Our exit vote function’s sensitivity to discrepancies depends on the size of the actual discrepancies (which are unknown a priori, of course) and how close we get to 100 percent of the true total vote for the candidate suffering the errors.

We assume that most fraud is calculated to produce narrow margins of victory over the defrauded candidate – wide enough to guarantee that the automatic recount provision of state law won’t be triggered, but not so wide as to provoke suspicion. Our system has to be highly sensitive to these narrow margins. The sensitivity required will vary by locale, according to what the fraud margin is, but it’s safe to say that our exit vote function will have to be in the 95%-plus range, because flipping 2.5% of votes produces a 5% swing. Ergo, if we are going to be sensitive to a 5% swing, we need better than a 95% participation rate. (The other system functions do not require this degree of sensitivity.)
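The margin arithmetic behind the 95 percent figure is simple enough to state explicitly (a sketch with a name of our own choosing):

```python
def margin_swing(flip_fraction: float) -> float:
    """Each flipped vote both subtracts one vote from the victim and
    adds one to the beneficiary, so the margin between the two
    candidates moves by twice the flipped fraction."""
    return 2.0 * flip_fraction

# Flipping 2.5% of the votes swings the margin by 5%.
swing = margin_swing(0.025)
```

This doubling is why even a small flipped fraction demands a very high participation rate from the audit: the discrepancy to be detected is twice the fraction of votes actually tampered with.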

Achieving this level of sensitivity is an expensive and difficult challenge, but it is the only type of independent, direct evidence that courts are likely to accept as sufficiently probative to justify ordering injunctions and investigations.

Q. By what methodology is a prediction made from limited samples that there was a problem counting all votes accurately?

A. This question is inapposite. Limited samples and predictions based on samples are conducted by pollsters, and election authorities might use them in their audits, but our system does not involve the use of limited samples, and we don’t engage in prediction.
