Briefly noted: Gradients Leak Training Data

In their NeurIPS 2019 paper, Zhu et al. describe an attack against Federated Learning (FL): they show how gradients, which are exchanged in both client-server and peer-to-peer FL settings, leak the (private) training data. The attack works as follows: the attacker (e.g., a malicious participant in a peer-to-peer FL setting) creates random “dummy” inputs and outputs. Given the neural network model and its weights, the attacker runs forward and backward passes and derives the corresponding “dummy” gradients. The attacker then minimizes the distance between the “dummy” gradients and the actual gradients by gradually adapting the dummy inputs and outputs via optimization. After a number of iterations, this yields the real inputs and outputs, i.e. the (private) training data. The algorithm is shown in Figure 1, taken from the original paper:


Figure 1: Deep Leakage from Gradients (DLG).
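To make the optimization loop concrete, here is a minimal sketch of the attack in PyTorch. This is our own illustrative code, not the authors’ release; the function name `dlg_attack`, the L-BFGS settings, and the fixed number of steps are assumptions. It tries to reconstruct a single training example from its observed gradients:

```python
import torch
import torch.nn.functional as F

def dlg_attack(model, true_grads, input_shape, num_classes, steps=300):
    # Random "dummy" input and (soft) label, both treated as trainable variables.
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)

    optimizer = torch.optim.LBFGS([dummy_x, dummy_y])

    for _ in range(steps):
        def closure():
            optimizer.zero_grad()
            pred = model(dummy_x)
            # Cross-entropy of the prediction against the soft dummy label.
            loss = torch.mean(torch.sum(
                -F.softmax(dummy_y, dim=-1) * F.log_softmax(pred, dim=-1), dim=-1))
            # "Dummy" gradients from a backward pass through the shared model.
            dummy_grads = torch.autograd.grad(loss, model.parameters(),
                                              create_graph=True)
            # Distance between dummy gradients and the observed (real) gradients.
            grad_diff = sum(((dg - tg) ** 2).sum()
                            for dg, tg in zip(dummy_grads, true_grads))
            grad_diff.backward()  # updates flow into dummy_x and dummy_y
            return grad_diff
        optimizer.step(closure)

    return dummy_x.detach(), dummy_y.detach()
```

As the distance between the two sets of gradients shrinks, the dummy input and label converge toward the real training example that produced `true_grads`.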

The authors show that their attack, which they call Deep Leakage from Gradients (DLG), is successful both for Computer Vision tasks, where they reconstruct images pixel-wise, and for Natural Language Processing tasks, where they reconstruct sentences token-wise, as shown in Figures 2 and 3, taken from the original paper:


Figure 2: Pixel-wise reconstruction of training data images by DLG.


Figure 3: Token-wise reconstruction of training data sentences by DLG.

The authors suggest three defenses against DLG: gradient perturbation, gradient precision lowering, and gradient compression. They find that gradient perturbation using Gaussian or Laplacian noise with a scale higher than 0.01 successfully defends against DLG, but significantly deteriorates model accuracy. Gradient precision lowering, e.g. using half precision, fails to defend against DLG. Gradient compression, i.e. pruning gradients with small magnitudes to zero, successfully defends against DLG if more than 20% of all gradients are pruned. Moreover, the authors note that simply using large batch sizes makes DLG infeasible. Encrypting gradients is also a viable defense against DLG.
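As an illustration of the gradient-compression defense, one could zero out the smallest-magnitude gradient entries before sharing them. This is a sketch under our own naming, not code from the paper; the `prune_ratio=0.2` default mirrors the roughly 20% pruning the paper found sufficient:

```python
import torch

def prune_gradients(grads, prune_ratio=0.2):
    """Zero out the smallest-magnitude entries of each gradient tensor."""
    pruned = []
    for g in grads:
        k = int(g.numel() * prune_ratio)
        if k == 0:
            pruned.append(g.clone())
            continue
        # The k-th smallest absolute value serves as the pruning threshold.
        threshold = g.abs().flatten().kthvalue(k).values
        pruned.append(torch.where(g.abs() > threshold, g, torch.zeros_like(g)))
    return pruned
```

A client would apply this to its gradients right before sending them, trading a small amount of update fidelity for protection against reconstruction.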

In practice this means:

If you rely on FL because of its privacy-preserving properties, you will have to make sure the gradients don’t leak (private) training data, e.g. by pruning or encrypting them.
