What if Players Do Not Follow Instructions?


Until now we assumed that players always do what they are supposed to. But this is not always a reasonable assumption, since a party may have an interest in doing something different from what he is instructed to do.

There are two fundamentally different ways in which players could deviate from expected behaviour: First, they could choose their inputs in a way different from what was expected when the protocol was designed. Second, while executing the protocol, they could do something different from what the protocol instructs them to do. We will consider the two issues separately:

Choice of Inputs

Consider the matchmaking application from above as an example. Let us assume that Alice is not really interested in Bob. We argued above that since her input should then be 0, the output is always 0 and she does not learn whether Bob was interested. The reader may have noticed that there seems to be a way Alice could cheat Bob, if she is willing to choose an input that does not represent her actual interests: she could pretend to be interested by choosing a=1, in which case the output will be ab \bmod p = b, so she learns Bob’s input and breaks his privacy.
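To make the attack concrete, here is a minimal sketch of the matchmaking output rule (just the function a \cdot b \bmod p that the protocol computes, not the protocol itself; the prime p = 101 is chosen arbitrarily for illustration):

```python
p = 101  # a small prime modulus, chosen here only for illustration

def match_output(a, b):
    """The value the matchmaking computation reveals: a*b mod p."""
    return (a * b) % p

# Honest, uninterested Alice (a = 0): the output is 0 no matter what Bob chose,
# so she learns nothing about Bob's input.
assert match_output(0, 0) == 0
assert match_output(0, 1) == 0

# Cheating Alice chooses a = 1 regardless of her true preference:
# the output equals b, so she learns Bob's input exactly.
for b in (0, 1):
    assert match_output(1, b) == b
```

No protocol design can rule this out: the function itself, evaluated on a=1, reveals b by definition.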

A moment’s thought will convince the reader that there is no way we could possibly solve this issue by designing a more secure protocol: whatever the protocol is, a player can of course always choose to execute it with any input he wants to. Since this book is about protocol design, this issue of choice of inputs is out of scope for us.

Therefore, if Bob is worried that Alice might behave as we just described, the only answer we can give is that then he should not play the game at all. If he goes ahead, this has to be based on an assumption that Alice (as well as he himself) has an interest in choosing inputs in a “reasonable way”. A possible justification for such an assumption might be that if Alice really thinks Bob is a loser, then the prospect of being stuck with him for the rest of the night should be daunting enough to make her choose her input according to her actual preference.

More seriously (and generally): to do secure computation, we have to assume that players have an incentive to provide inputs that will lead to a “meaningful” result they would like to learn. If one can describe and quantify these incentives, it is sometimes possible to analyse what will happen using a different mathematical discipline called game theory.

Deviation from the Protocol

Regardless of how the inputs are chosen, it might be that deviating from the protocol could enable a player to learn more information than he was supposed to get, or it could allow him to force the computation to give a wrong result. For instance, this way he could appoint himself the winner of an auction at a low price. This is of course undesirable, but (in contrast to the issue of input choice) this is indeed an issue we can handle.

One solution is to add mechanisms to the protocol which ensure that any deviation from the protocol will be detected. To exemplify this we look at Secure Addition.

In Secure Addition we first ask each party to distribute shares of their secret. Looking at \mathsf{P}_1, we ask it to pick shares r_{1,1}, r_{1,2}, r_{1,3} such that x_1 = r_{1,1} + r_{1,2} + r_{1,3} \bmod p and then send r_{1,1},r_{1,3} to \mathsf{P}_2 and send r_{1,1},r_{1,2} to \mathsf{P}_3. Here there are two ways to deviate:

First, \mathsf{P}_1 could pick r_{1,1}', r_{1,2}', r_{1,3}' such that x_1 \ne r_{1,1}' + r_{1,2}' + r_{1,3}' \bmod p. This is not a problem, as it just corresponds to having used the input x_1' = r_{1,1}' + r_{1,2}' + r_{1,3}' \bmod p, and as mentioned we cannot (and should not) prevent \mathsf{P}_1 from being able to pick any input it desires.

The second way to deviate is that \mathsf{P}_1 could send r_{1,1},r_{1,3} to \mathsf{P}_2 and send r_{1,1}',r_{1,2} to \mathsf{P}_3 with r_{1,1}' \ne r_{1,1}. This is more serious, as now the input x_1 of \mathsf{P}_1 is not well-defined. This might or might not lead to an attack, but it is at least a clear deviation from the protocol. There is, however, a simple way to catch this deviation: when \mathsf{P}_2 and \mathsf{P}_3 receive their shares from \mathsf{P}_1, then \mathsf{P}_2 sends its value of r_{1,1} to \mathsf{P}_3 and \mathsf{P}_3 sends its own value of r_{1,1} to \mathsf{P}_2. Then they check that they hold the same value. In a similar way \mathsf{P}_1 and \mathsf{P}_3 can check that \mathsf{P}_2 sends consistent shares, and \mathsf{P}_1 and \mathsf{P}_2 can check that \mathsf{P}_3 sends consistent shares. In general, having players reveal more information to each other could make a protocol insecure. However, in this case no new information leaks, because a player sends only information which the receiver already has.
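The sharing step and the pairwise consistency check can be sketched as follows (a simplified model, not a networked implementation; the concrete value x_1 = 42 and the prime p = 101 are arbitrary):

```python
import random

p = 101  # small prime modulus, for illustration only

def share(x):
    """Split x into three additive shares r1, r2, r3 with x = r1+r2+r3 mod p."""
    r1 = random.randrange(p)
    r2 = random.randrange(p)
    r3 = (x - r1 - r2) % p
    return r1, r2, r3

# P1 shares its input: P2 receives (r11, r13), P3 receives (r11, r12),
# so the share r11 is held by both receivers.
x1 = 42
r11, r12, r13 = share(x1)
to_P2 = (r11, r13)
to_P3 = (r11, r12)

# Consistency check: P2 and P3 exchange their copies of r11 and compare.
# An honest P1 passes the check...
assert to_P2[0] == to_P3[0]

# ...but a cheating P1 that sends r11' != r11 to P3 is caught:
to_P3_cheating = ((r11 + 1) % p, r12)
assert to_P2[0] != to_P3_cheating[0]  # deviation detected
```

The check leaks nothing new precisely because the exchanged value r_{1,1} was sent to both parties in the first place.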

After the sharing phase, we then ask the parties to add their shares and make the sums public. Looking again at \mathsf{P}_1, we ask it to compute s_2 and s_3 and make these public. Here \mathsf{P}_1 might deviate by sending, e.g., s_2' \ne s_2. This could lead to a wrong result, v = s_1 + s_2' + s_3 \bmod p. Note, however, that \mathsf{P}_3 will compute s_1 and s_2 and make these public. So, both \mathsf{P}_1 and \mathsf{P}_3 are supposed to make s_2 public. Hence, the players can simply check that \mathsf{P}_1 and \mathsf{P}_3 make the same value of s_2 public. And similarly they can check that the two versions of s_1 and the two versions of s_3 are identical.
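The cross-checking of the public sums can be sketched in the same simplified model (all three parties simulated in one place; the inputs 10, 20, 30 and the prime p = 101 are arbitrary; the convention assumed here is that party k holds the shares r_{i,j} for all j \ne k, so each sum s_j can be computed by the two parties other than \mathsf{P}_j):

```python
import random

p = 101  # small prime modulus, for illustration only

def share(x):
    """Split x into three additive shares mod p."""
    r1 = random.randrange(p)
    r2 = random.randrange(p)
    return (r1, r2, (x - r1 - r2) % p)

inputs = {1: 10, 2: 20, 3: 30}
shares = {i: share(x) for i, x in inputs.items()}  # shares[i][j-1] = r_{i,j}

def s(j):
    """The j-th public sum s_j = r_{1,j} + r_{2,j} + r_{3,j} mod p."""
    return sum(shares[i][j - 1] for i in inputs) % p

# Each party publishes the two sums it can compute:
# P1 publishes s_2, s_3; P2 publishes s_1, s_3; P3 publishes s_1, s_2.
published = {
    1: {2: s(2), 3: s(3)},
    2: {1: s(1), 3: s(3)},
    3: {1: s(1), 2: s(2)},
}

# Cross-check: the two published copies of each s_j must agree.
assert published[1][2] == published[3][2]
assert published[2][1] == published[3][1]
assert published[1][3] == published[2][3]

# If P1 instead published a wrong s_2' != s_2, the mismatch with P3's copy
# would expose the deviation:
s2_cheating = (s(2) + 1) % p
assert s2_cheating != published[3][2]

# With honest parties, the result is the sum of the three public sums.
v = (s(1) + s(2) + s(3)) % p
assert v == sum(inputs.values()) % p
```

Each public value is announced by two different parties, so a single deviating party can never get a wrong sum accepted unnoticed.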

This means Secure Addition has the following property: if any single party does not do what it is supposed to, the other two players will always be able to detect this. This idea of having players check each other to ensure that the protocol is followed is something we will see many times in what follows.
