# Dual Execution with Asymmetric Privacy

TLSNotary uses the `DEAP` protocol described below to ensure malicious security of the overall protocol.

When using DEAP in TLSNotary, the `User` plays the role of Alice and has full privacy, and the `Notary` plays the role of Bob and reveals all of his private inputs after the TLS session with the server is over. The Notary's private input is his TLS session key share.

The parties run the `Setup` and `Execution` steps of `DEAP` but defer the `Equality Check`. Since during the `Equality Check` all of the `Notary`'s secrets are revealed to the `User`, it must be deferred until after the TLS session with the server is over; otherwise the `User` would learn the full TLS session keys and be able to forge the TLS transcript.

## Introduction

Malicious secure 2-party computation with garbled circuits typically comes at the expense of dramatically lower efficiency compared to execution in the semi-honest model. One technique, called Dual Execution [MF06] [HKE12], achieves malicious security with a minimal 2x overhead. However, it comes with the concession that a malicious adversary may learn $k$ bits of the other party's input with probability $2^{-k}$.

We present a variant of Dual Execution which provides different trade-offs. Our variant ensures complete privacy *for one party*, by sacrificing privacy entirely for the other. Hence the name, Dual Execution with Asymmetric Privacy (DEAP). During the execution phase of the protocol both parties have private inputs. The party with complete privacy learns the authentic output prior to the final stage of the protocol. In the final stage, prior to the equality check, one party reveals their private input. This allows a series of consistency checks to be performed which guarantees that the equality check can not cause leakage.

Similarly to standard DualEx, our variant ensures output correctness and detects leakage (of the revealing party's input) with probability $1-2^{-k}$, where $k$ is the number of bits leaked.

## Preliminary

The protocol takes place between Alice and Bob who want to compute $f(x,y)$ where $x$ and $y$ are Alice and Bob's inputs respectively. The privacy of Alice's input is ensured, while Bob's input will be revealed in the final steps of the protocol.

### Premature Leakage

Firstly, our protocol assumes a small amount of premature leakage of Bob's input is tolerable. By premature, we mean prior to the phase where Bob is expected to reveal his input.

If Alice is malicious, she has the opportunity to prematurely leak $k$ bits of Bob's input with $2^{-k}$ probability of it going undetected.

### Aborts

We assume that it is acceptable for either party to cause the protocol to abort at any time, with the condition that no information about Alice's inputs is leaked from doing so.

### Committed Oblivious Transfer

In the last phase of our protocol Bob must open all oblivious transfers he sent to Alice. To achieve this, we require a very relaxed flavor of committed oblivious transfer. For more detail on these relaxations see section 2 of Zero-Knowledge Using Garbled Circuits [JKO13].

### Notation

- $x$ and $y$ are Alice and Bob's inputs, respectively.
- $[X]_{A}$ denotes an encoding of $x$ chosen by Alice.
- $[x]$ and $[y]$ are Alice and Bob's encoded *active* inputs, respectively, i.e. $Encode(x,[X])=[x]$.
- $com_{x}$ denotes a binding commitment to $x$.
- $G$ denotes a garbled circuit for computing $f(x,y)=v$, where:
- $Gb([X],[Y])=G$
- $Ev(G,[x],[y])=[v]$.

- $d$ denotes output decoding information where $De(d,[v])=v$
- $Δ$ denotes the global offset of a garbled circuit where $∀i:[x]_{i}^{1}=[x]_{i}^{0}⊕Δ$
- $PRG$ denotes a secure pseudo-random generator
- $H$ denotes a secure hash function
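To make the notation concrete, here is a minimal Python sketch of label encoding and decoding under a global offset. The 16-byte labels, the point-and-permute low bit, and the helper names are illustrative assumptions, not the real TLSNotary wire format:

```python
# Toy wire-label encoding with a free-XOR-style global offset Delta.
# For each wire i the two labels are W_i^0 and W_i^1 = W_i^0 XOR Delta;
# Encode(x, [X]) picks the active label for each input bit.
import secrets

N_BITS = 8
# Force the low bit of Delta to 1 (the point-and-permute bit) so each
# label's low bit can serve as decoding information.
delta = bytes([secrets.randbelow(256) | 1]) + secrets.token_bytes(15)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

# [X]: the full encoding -- one zero-label per bit (the one-label is implicit)
zero_labels = [secrets.token_bytes(16) for _ in range(N_BITS)]

def encode(bits, zeros):
    """Encode(x, [X]) = [x]: the active labels matching the bits of x."""
    return [z if b == 0 else xor(z, delta) for b, z in zip(bits, zeros)]

# decoding information d: the low bit of each zero-label
d = [z[0] & 1 for z in zero_labels]

def decode(d, active):
    """De(d, [v]) = v: recover the plaintext bits from active labels."""
    return [(lab[0] & 1) ^ p for lab, p in zip(active, d)]

x = [1, 0, 1, 1, 0, 0, 1, 0]
assert decode(d, encode(x, zero_labels)) == x
```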

## Protocol

The protocol can be thought of as three distinct phases: setup, execution, and equality check.

### Setup

- Alice creates a garbled circuit $G_{A}$ with corresponding input labels $([X]_{A},[Y]_{A})$, and output label commitment $com_{[V]_{A}}$.
- Bob creates a garbled circuit $G_{B}$ with corresponding input labels $([X]_{B},[Y]_{B})$.
- For committed OT, Bob picks a seed $ρ$ and uses it to generate all random-tape for his OTs with $PRG(ρ)$. Bob sends $com_{ρ}$ to Alice.
- Alice retrieves her active input labels $[x]_{B}$ from Bob using OT.
- Bob retrieves his active input labels $[y]_{A}$ from Alice using OT.
- Alice sends $G_{A}$, $[x]_{A}$, $d_{A}$ and $com_{[V]_{A}}$ to Bob.
- Bob sends $G_{B}$ and $[y]_{B}$ to Alice.
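The commitments exchanged above (e.g. $com_{ρ}$ and $com_{[V]_{A}}$) can be instantiated in several ways; a minimal hash-based sketch, assuming SHA-256 as the hash $H$ (an illustrative choice, not necessarily what TLSNotary uses):

```python
# Minimal hash-based commitment: Com(m, r) = H(r || m), opened by revealing (m, r).
# Hiding comes from the random key r; binding from collision resistance of H.
import hashlib
import secrets

def commit(msg: bytes):
    r = secrets.token_bytes(32)
    return hashlib.sha256(r + msg).digest(), r

def verify(com: bytes, msg: bytes, r: bytes) -> bool:
    return hashlib.sha256(r + msg).digest() == com

com_rho, r = commit(b"bob-ot-seed")     # hypothetical seed value
assert verify(com_rho, b"bob-ot-seed", r)
assert not verify(com_rho, b"different-seed", r)
```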

### Execution

Both Alice and Bob can execute this phase of the protocol in parallel as described below:

#### Alice

- Evaluates $G_{B}$ using $[x]_{B}$ and $[y]_{B}$ to acquire $[v]_{B}$.
- Defines $check_{A}=[v]_{B}$.
- Computes a commitment $Com(check_{A},r)=com_{check_{A}}$ where $r$ is a key only known to Alice. She sends this commitment to Bob.
- Waits to receive $[v]_{A}$ from Bob ^{1}.
- Checks that $[v]_{A}$ is authentic, aborting if not, then decodes $[v]_{A}$ to $v_{A}$ using $d_{A}$.

At this stage, a malicious Bob has learned nothing and Alice has obtained the output $v_{A}$ which she knows to be authentic.

#### Bob

- Evaluates $G_{A}$ using $[x]_{A}$ and $[y]_{A}$ to acquire $[v]_{A}$. He checks $[v]_{A}$ against the commitment $com_{[V]_{A}}$ which Alice sent earlier, aborting if it is invalid.
- Decodes $[v]_{A}$ to $v_{A}$ using $d_{A}$ which he received earlier. He defines $check_{B}=[v_{A}]_{B}$ and stores it for the equality check later.
- Sends $[v]_{A}$ to Alice ^{1}.
- Receives $com_{check_{A}}$ from Alice and stores it for the equality check later.

Bob, even if malicious, has learned nothing except the purported output $v_{A}$ and is not convinced it is correct. In the next phase Alice will attempt to convince Bob that it is.

Alice, if honest, has learned the correct output $v$ thanks to the authenticity property of garbled circuits. Alice, if malicious, has potentially learned Bob's entire input $y$.

^{1} This is a significant deviation from standard DualEx protocols such as [HKE12]. Typically the output labels are *not* returned to the Generator; instead, output authenticity is established during a secure equality check at the end. See the section below for more detail.
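The two deferred check values can be illustrated with a toy label model (assumed 16-byte labels, hypothetical values): Alice's $check_{A}$ is the set of active output labels she obtained by evaluating $G_{B}$, while Bob's $check_{B}$ re-encodes the purported output $v_{A}$ under his own output encoding, so the two agree exactly when $v_{A}$ matches the true output:

```python
# Toy illustration: check_A == check_B iff the purported output v_A matches
# the true output v of Bob's (honestly garbled) circuit.
import secrets

delta_B = secrets.token_bytes(16)                        # Bob's global offset
out_zeros = [secrets.token_bytes(16) for _ in range(4)]  # output encoding [V]_B

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def encode(bits, zeros):
    return [z if b == 0 else xor(z, delta_B) for b, z in zip(bits, zeros)]

v = [1, 0, 0, 1]                 # true output f(x, y)
check_A = encode(v, out_zeros)   # active labels from Alice's honest evaluation
v_A = [1, 0, 0, 1]               # output Bob decoded from G_A
check_B = encode(v_A, out_zeros) # Bob's re-encoding [v_A]_B
assert check_A == check_B
assert encode([1, 0, 0, 0], out_zeros) != check_B  # any wrong bit fails the check
```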

### Equality Check

- Bob opens his garbled circuit and OT by sending $Δ_{B}$, $y$ and $ρ$ to Alice.
- Alice can now derive the *purported* input labels to Bob's garbled circuit $([X]_{B},[Y]_{B})$.
- Alice uses $ρ$ to open all of Bob's OTs for $[x]_{B}$ and verifies that they were performed honestly. Otherwise she aborts.
- Alice verifies that $G_{B}$ was garbled honestly by checking $Gb([X]_{B},[Y]_{B})==G_{B}$. Otherwise she aborts.
- Alice now opens $com_{check_{A}}$ by sending $check_{A}$ and $r$ to Bob.
- Bob verifies $com_{check_{A}}$ then asserts $check_{A}==check_{B}$, aborting otherwise.

Bob is now convinced that $v_{A}$ is correct, i.e. $f(x,y)=v_{A}$. Bob is also assured that Alice only learned up to $k$ bits of his input prior to revealing, with a probability of $2^{-k}$ of it being undetected.
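Alice's verification hinges on Bob's randomness being derived deterministically from seeds, so she can re-derive everything herself. A sketch of the idea, with a toy hash-based PRG and a stand-in for $Gb$ that treats the whole circuit material as seed-derived (both assumptions for illustration):

```python
# Sketch: if Bob's random tape comes from a PRG seed, Alice can re-derive his
# garbled circuit and OT transcript and compare against what she received.
import hashlib

def prg(seed: bytes, n: int) -> bytes:
    """Toy PRG: hash a counter (illustration only, not a production PRG)."""
    out = b""
    i = 0
    while len(out) < n:
        out += hashlib.sha256(seed + i.to_bytes(4, "big")).digest()
        i += 1
    return out[:n]

def garble(seed: bytes) -> bytes:
    """Stand-in for Gb: all circuit material derived from the seed."""
    return prg(seed, 64)

rho = b"bob-prg-seed"               # hypothetical seed Bob reveals
g_b_received = garble(rho)          # what Bob sent during setup
# Alice's check: re-garble from the revealed seed and compare, Gb(...) == G_B.
assert garble(rho) == g_b_received
```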

## Analysis

### Malicious Alice

On the Leakage of Corrupted Garbled Circuits [DPB18] is recommended reading on this topic.

During the first execution, Alice has some degrees of freedom in how she garbles $G_{A}$. According to [DPB18], when using a modern garbling scheme such as [ZRE15], these corruptions can be analyzed as two distinct classes: detectable and undetectable.

Recall that our scheme assumes Bob's input is an ephemeral secret which can be revealed at the end. For this reason, we are entirely unconcerned about the detectable variety. Simply providing Bob with the output labels commitment $com_{[V]_{A}}$ is sufficient to detect these types of corruptions. In this context, our primary concern is regarding the *correctness* of the output of $G_{A}$.

[DPB18] shows that any undetectable corruption made to $G_{A}$ is constrained to the arbitrary insertion or removal of NOT gates in the circuit, such that $G_{A}$ computes $f_{A}$ instead of $f$. Note that any corruption of $d_{A}$ has an equivalent effect. [DPB18] also shows that Alice's ability to exploit this is constrained by the topology of the circuit.

Recall that in the final stage of our protocol Bob checks that the output of $G_{A}$ matches the output of $G_{B}$, or more specifically:

$f_{A}(x_{1},y_{1})==f_{B}(x_{2},y_{2})$

For the moment we'll assume Bob garbles honestly and provides the same inputs for both evaluations.

$f_{A}(x_{1},y)==f(x_{2},y)$

In the scenario where Bob reveals the output of $f_{A}(x_{1},y)$ prior to Alice committing to $x_{2}$ there is a trivial *adaptive attack* available to Alice. As an extreme example, assume Alice could choose $f_{A}$ such that $f_{A}(x_{1},y)=y$. For most practical functions this is not possible to garble without detection, but for the sake of illustration we humor the possibility. In this case she could simply compute $x_{2}$ where $f(x_{2},y)=y$ in order to pass the equality check.

To address this, Alice is forced to choose $f_{A}$, $x_{1}$ and $x_{2}$ prior to Bob revealing the output. In this case it is obvious that any *valid* combination of $(f_{A},x_{1},x_{2})$ must satisfy all constraints on $y$. Thus, for any non-trivial $f$, choosing a valid combination would be equivalent to guessing $y$ correctly. In which case, any attack would be detected by the equality check with probability $1-2^{-k}$, where $k$ is the number of guessed bits of $y$. This result is acceptable within our model as explained earlier.
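The $1-2^{-k}$ detection bound can be sanity-checked numerically. A toy Monte Carlo over uniformly random bits of $y$ (an assumption for illustration, not part of the protocol): Alice passes the equality check only if every one of her $k$ guessed bits is correct.

```python
# Monte Carlo: guessing k uniform bits succeeds with probability 2^{-k},
# so the attack is detected with probability 1 - 2^{-k}.
import random

def detection_rate(k: int, trials: int = 200_000) -> float:
    detected = 0
    for _ in range(trials):
        y = [random.randint(0, 1) for _ in range(k)]
        guess = [random.randint(0, 1) for _ in range(k)]
        if guess != y:          # any wrong bit fails the equality check
            detected += 1
    return detected / trials

for k in (1, 2, 4):
    assert abs(detection_rate(k) - (1 - 2 ** -k)) < 0.01
```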

### Malicious Bob

Zero-Knowledge Using Garbled Circuits [JKO13] is recommended reading on this topic.

The last stage of our variant is functionally equivalent to the protocol described in [JKO13]. After Alice evaluates $G_{B}$ and commits to $[v]_{B}$, Bob opens his garbled circuit and all OTs entirely. Following this, Alice performs a series of consistency checks to detect any malicious behavior. These consistency checks do *not* depend on any of Alice's inputs, so any attempted selective failure attack by Bob would be futile.

Bob's only options are to behave honestly, or cause Alice to abort without leaking any information.

### Malicious Alice & Bob

They deserve whatever they get.