Towards Trustworthy Elections: New Directions in Electronic Voting
E. Gerck
Often in elections, the available choices need to be defined and controlled per voter and per group of voters, which raises privacy concerns. Examples include: different ballot styles with different choices due to jurisdictional, geopolitical, or other differences; ballot rotation, used to assure fairness of option placement in a list of options (different voters see the same options, but in a different top-to-bottom sequence); laws that require each voter's ethnicity to be registered (as in some US states, to assure non-discrimination compliance); and support for more than one language or medium (e.g., audio, Braille printing, large fonts).
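The ballot-rotation idea above can be sketched in a few lines (a hypothetical illustration, not any jurisdiction's actual rotation rule; real rules often rotate per precinct or ballot style rather than per individual voter):

```python
def rotated_ballot(options, voter_index):
    """Return the same options with the top-to-bottom order rotated,
    so that each option appears in the top position equally often
    across voters."""
    k = voter_index % len(options)
    return options[k:] + options[:k]

options = ["Alice", "Bob", "Carol"]
# Voter 0 sees Alice first, voter 1 sees Bob first, voter 2 sees Carol first.
for v in range(3):
    print(rotated_ballot(options, v))
```

Every voter is offered the same set of choices; only the presentation order differs, which is why rotation interacts with ballot-style privacy: the order a voter saw can narrow down which ballot was theirs.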
The choice of voting method is also significant for preventing coercion and vote buying. For example, postal mail voting cannot prevent voter coercion, whereas precinct-based voting creates a protected environment in which coercion may be prevented. Online voting, even if not precinct-based, may also prevent coercion [18].
However, even when voting is private and the voting method allows coercion to be prevented, the voter privacy requirement may have further, subtle consequences. For example, the cast ballots should not be disclosed to anyone (not even during or after tallying); only the tallied results can be disclosed. The reason is that choice patterns that are likely to be statistically unique in an election, and yet include a desired outcome, can be prescribed and then used as a "voter pattern fingerprinting" mechanism to identify a single voter, or all voters of a small group (e.g., one family), which may influence an election by means of coercion and vote buying.29 Even without identifying voters, disclosing the cast ballots can also passively influence future elections, by providing a detailed glimpse into voter demographics that can later be used for gerrymandering (footnote 20) and yet finer methods such as social pressure. Conversely, if the cast ballots are disclosable (e.g., to the election operators), then they should be made public to all, so that all stakeholders can equally benefit from their analysis.
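The "voter pattern fingerprinting" attack can be illustrated with a toy sketch (all ballot data and race names below are hypothetical): a coercer dictates an unusual pattern of choices in low-salience contests alongside the desired outcome, then scans the disclosed ballots for that pattern to check compliance.

```python
# Toy illustration of "voter pattern fingerprinting" (hypothetical data).
# The coercer prescribes a statistically unique pattern in down-ballot
# races; if cast ballots are disclosed, finding that pattern tells the
# coercer whether the coerced voter also cast the desired top-race vote.

prescribed = {"race2": "writein-XQ", "race3": "abstain", "race4": "minor-party"}
desired_outcome = ("race1", "candidate-A")

disclosed_ballots = [
    {"race1": "candidate-B", "race2": "major-party",
     "race3": "major-party", "race4": "major-party"},
    {"race1": "candidate-A", "race2": "writein-XQ",
     "race3": "abstain", "race4": "minor-party"},
]

def matches_fingerprint(ballot):
    """True if the ballot carries the coercer's prescribed pattern."""
    return all(ballot.get(race) == choice for race, choice in prescribed.items())

compliant = [b for b in disclosed_ballots
             if matches_fingerprint(b)
             and b.get(desired_outcome[0]) == desired_outcome[1]]
print(len(compliant))  # a match tells the coercer the voter complied
```

Note that the attack needs no voter names on the ballots: the prescribed pattern itself acts as the identifier, which is why disclosing full cast ballots, even anonymized ones, undermines coercion resistance.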
To clearly define the concept of voter privacy, we previously [40] discerned not only different types of voter privacy but also different "strengths". The list below presents this classification, with additional comments, ranked from lowest to highest privacy strength.
Policy privacy: Exemplified by election systems that depend on election officials and/or separated machines to protect voter privacy. Policy privacy cannot prevent operators or attackers from penetrating both systems and rejoining the information. Nor can it prevent a court order that would mandate rejoining the information in the servers.
Computational privacy: Exemplified by election systems that rely upon a quorum of verifiers or election operators, using blind signatures, mix-servers, or homomorphic encryption, such that no fewer than N people working together (which defines a threshold of collusion) can compromise voter privacy. Such systems rely not only on the absence of design flaws, but also on the absence of a compromise of the computational platform (e.g., a virus that would record all N keys for later use, a
29 This method has been used by organized crime (private communication).
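The threshold-of-collusion property underlying computational privacy can be illustrated with a minimal Shamir secret-sharing sketch (a generic illustration of the threshold idea, not the mechanism of any specific election system named in the text): the secret is recoverable from any N shares, while fewer than N shares reveal nothing about it.

```python
import random

PRIME = 2**61 - 1  # prime modulus; all arithmetic is done mod PRIME

def make_shares(secret, n_shares, threshold):
    """Split `secret` so that any `threshold` shares reconstruct it,
    while fewer shares are information-theoretically useless."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse of den
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(123456, n_shares=5, threshold=3)
assert reconstruct(shares[:3]) == 123456  # any 3 of the 5 shares suffice
```

In the election setting, the "secret" would be (for example) a decryption key held jointly by N trustees, so that voter privacy survives unless at least N of them collude, which is exactly the threshold of collusion described above.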

