Until now we assumed that players always do what they are supposed to. But this is not always a reasonable assumption, since a party may have an interest in doing something different from what he is instructed to do.
There are two fundamentally different ways in which players could deviate from expected behaviour: First, they could choose their inputs in a way different from what was expected when the protocol was designed. Second, while executing the protocol, they could do something different from what the protocol instructs them to do. We will consider the two issues separately:
Choice of Inputs
Consider the matchmaking application from above as an example. Let us assume that Alice is not really interested in Bob. We argued above that since her input should then be 0, the output is always 0 and she does not learn whether Bob was interested. The reader may have noticed that there seems to be a way Alice could cheat Bob, if she is willing to choose an input that does not represent her actual interests: she could pretend to be interested by choosing 1 as her input, in which case the output will be equal to Bob's input bit, so she now learns Bob's input and breaks his privacy.
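To spell the two cases out, write $a$ for the bit Alice actually inputs and $b$ for Bob's bit (notation introduced here only for this illustration); as described above, the output of the matchmaking computation is 1 exactly when both bits are 1, i.e. the logical AND of the two inputs:
\[
0 \wedge b = 0 \quad \text{for both } b = 0 \text{ and } b = 1, \qquad \text{whereas} \qquad 1 \wedge b = b .
\]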
A moment’s thought will convince the reader that there is no way we could possibly solve this issue by designing a more secure protocol: whatever the protocol is, a player can of course always choose to execute it with any input he wants to. Since this book is about protocol design, this issue of choice of inputs is out of scope for us.
Therefore, if Bob is worried that Alice might behave as we just described, the only answer we can give is that then he should not play the game at all. If he goes ahead, this has to be based on an assumption that Alice (as well as he himself) has an interest in choosing inputs in a “reasonable way”. A possible justification for such an assumption might be that if Alice really thinks Bob is a loser, then the prospect of being stuck with him for the rest of the night should be daunting enough to make her choose her input according to her actual preference.
More seriously (and generally): to do secure computation, we have to assume that players have an incentive to provide inputs that will lead to a “meaningful” result they would like to learn. If one can describe and quantify these incentives, it is sometimes possible to analyse what will happen using a different mathematical discipline called game theory.
Deviation from the Protocol
Regardless of how the inputs are chosen, deviating from the protocol might enable a player to learn more information than he is supposed to get, or allow him to force the computation to give a wrong result. For instance, this way he could appoint himself the winner of an auction at a low price. This is of course undesirable, but (in contrast to the issue of input choice) it is indeed an issue we can handle.
One solution is to add mechanisms to the protocol which ensure that any deviation from the protocol will be detected. To exemplify this we look at Secure Addition.
In Secure Addition we first ask each party to distribute shares of their secret. Looking at $P_1$, we ask it to pick shares $x_{11}, x_{12}, x_{13}$ such that $x_1 = x_{11} + x_{12} + x_{13} \bmod p$ and then send $(x_{11}, x_{13})$ to $P_2$ and send $(x_{11}, x_{12})$ to $P_3$, so that $P_2$ never sees $x_{12}$ and $P_3$ never sees $x_{13}$. A small code sketch of this sharing step is given below.
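As a small illustration, here is a minimal Python sketch of this sharing step. The function names, the dictionary representation of the messages and the concrete prime $p$ are illustrative choices made here, not part of the protocol as stated in the text.

import random

p = 2003  # some agreed-upon prime; the concrete value is only for illustration

def share(x_i):
    """Split x_i into additive shares x_i1, x_i2, x_i3 with x_i1 + x_i2 + x_i3 = x_i mod p."""
    x_i1, x_i2 = random.randrange(p), random.randrange(p)
    x_i3 = (x_i - x_i1 - x_i2) % p
    return {1: x_i1, 2: x_i2, 3: x_i3}

def message_to(shares, j):
    """The shares P_i sends to P_j: every x_ik with k != j, so P_j never sees x_ij."""
    return {k: v for k, v in shares.items() if k != j}

x1_shares = share(5)               # P_1 shares its input x_1 = 5
print(message_to(x1_shares, 2))    # sent to P_2: contains x_11 and x_13
print(message_to(x1_shares, 3))    # sent to P_3: contains x_11 and x_12

Note that $x_{11}$ occurs in both messages; this overlap is exactly what the consistency check described next relies on.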
In this sharing step there are two ways to deviate. First, $P_1$ could pick $x_{11}, x_{12}, x_{13}$ such that $x_1 \neq x_{11} + x_{12} + x_{13} \bmod p$. This is not a problem, as it just corresponds to having used the input $x_1' = x_{11} + x_{12} + x_{13} \bmod p$, and as mentioned we cannot (and should not) prevent $P_1$ from being able to pick any input it desires. The second way to deviate is that $P_1$ could send $x_{11}$ to $P_2$ and send $x_{11}'$ to $P_3$ with $x_{11} \neq x_{11}'$. This is more serious, as now the input $x_1$ of $P_1$ is not well-defined. This might or might not lead to an attack, but it is at least a clear deviation from the protocol. There is, however, a simple way to catch this deviation: when $P_2$ and $P_3$ receive their shares from $P_1$, then $P_2$ sends its value of $x_{11}$ to $P_3$ and $P_3$ sends its own value of $x_{11}$ to $P_2$. Then they check that they hold the same value. In a similar way $P_1$ and $P_3$ can check that $P_2$ sends consistent shares and $P_1$ and $P_2$ can check that $P_3$ sends consistent shares. In general, having players reveal more information to each other could make a protocol insecure. However, in this case no new information leaks, because a player only sends information which the receiver should already have.
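To make the check concrete, here is a minimal sketch in the same illustrative style: $P_2$ holds $(x_{11}, x_{13})$ and $P_3$ holds $(x_{11}, x_{12})$, and the two of them compare their copies of the common share $x_{11}$.

import random

p = 2003                                   # the agreed modulus, as above (illustrative)
x_1 = 5
x_11, x_12 = random.randrange(p), random.randrange(p)
x_13 = (x_1 - x_11 - x_12) % p

to_P2 = {1: x_11, 3: x_13}                 # shares P_1 sends to P_2
to_P3 = {1: x_11, 2: x_12}                 # shares P_1 sends to P_3

def consistent(recv_at_P2, recv_at_P3):
    """P_2 and P_3 exchange their copies of the share they have in common, x_11, and compare."""
    return recv_at_P2[1] == recv_at_P3[1]

print(consistent(to_P2, to_P3))            # True: an honest P_1 is accepted

# A cheating P_1 sends a different value x'_11 to P_3.
to_P3_cheat = {1: (x_11 + 1) % p, 2: x_12}
print(consistent(to_P2, to_P3_cheat))      # False: the deviation is detected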
After the sharing phase, we then ask the parties to add their shares and make the sums public. Looking again at $P_1$, we ask it to compute $s_2 = x_{12} + x_{22} + x_{32} \bmod p$ and $s_3 = x_{13} + x_{23} + x_{33} \bmod p$ and make these public. Here $P_1$ might deviate by sending, e.g., some $s_2' \neq s_2$. This could lead to a wrong result, namely $s_1 + s_2' + s_3 \bmod p$ instead of $x_1 + x_2 + x_3 \bmod p$. Note, however, that $P_3$ will compute $s_1 = x_{11} + x_{21} + x_{31} \bmod p$ and $s_2 = x_{12} + x_{22} + x_{32} \bmod p$ and make these public. So, both $P_1$ and $P_3$ are supposed to make $s_2$ public. Hence, the players can simply check that $P_1$ and $P_3$ make the same value of $s_2$ public. And similarly they can check that the two versions of $s_1$ and the two versions of $s_3$ are identical.
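The following sketch, again with illustrative names and an illustrative prime $p$, runs the whole opening phase and checks that the two announced copies of each sum agree before computing the result.

import random

p = 2003
inputs = {1: 5, 2: 17, 3: 1000}                       # x_1, x_2, x_3

# x[i][k] is the share x_ik; party P_j holds x[i][k] for every i and every k != j.
x = {}
for i, x_i in inputs.items():
    a, b = random.randrange(p), random.randrange(p)
    x[i] = {1: a, 2: b, 3: (x_i - a - b) % p}

def announced_by(j):
    """The sums s_k = x_1k + x_2k + x_3k mod p that P_j can compute and makes public (k != j)."""
    return {k: sum(x[i][k] for i in (1, 2, 3)) % p for k in (1, 2, 3) if k != j}

public = {j: announced_by(j) for j in (1, 2, 3)}

s = {}
for k in (1, 2, 3):
    j1, j2 = [j for j in (1, 2, 3) if j != k]         # the two parties announcing s_k
    assert public[j1][k] == public[j2][k], f"inconsistent announcements of s_{k}"
    s[k] = public[j1][k]

result = (s[1] + s[2] + s[3]) % p
print(result == sum(inputs.values()) % p)             # True: the result is x_1 + x_2 + x_3 mod p

If a single party announces a wrong sum, the corresponding assertion fails, which matches the detection property discussed next.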
This means Secure Addition has the following property: if any single party does not do what he is supposed to, the other two players will always be able to detect this. This idea of checking other players to ensure that the protocol is followed is something we will see many times in the following.