[Tech] New results in cache side-channel attacks and countermeasures

In the DRAC project, we aim to propose hardware solutions that protect against side-channel attacks. One such threat is the class of cache side-channel attacks, which exploit cache contention and access latencies to derive information about co-running victim processes. The most effective protection measure to date, randomization-based protected caches (RPCs), shuffles cache addresses so that they are distributed randomly across the cache. However, since RPCs become insecure if attackers are allowed many accesses under the same key, they have to be rekeyed, usually periodically. This rekeying period has traditionally been set heuristically, which has unfortunately led to successful attacks on existing mechanisms.
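To make the randomization and rekeying idea concrete, here is a minimal Python sketch of how an RPC-style cache might map addresses to sets with a keyed function and rekey after a fixed number of accesses. The keyed mapping (BLAKE2b), the access-count trigger and the parameters are illustrative assumptions, not the design of any particular RPC proposal.

```python
import hashlib
import secrets


class RandomizedCacheIndexer:
    """Toy model of randomized set indexing with periodic rekeying."""

    def __init__(self, num_sets: int, rekeying_period: int):
        self.num_sets = num_sets
        self.rekeying_period = rekeying_period  # accesses allowed per key
        self.accesses_under_key = 0
        self.key = secrets.token_bytes(16)

    def set_index(self, address: int) -> int:
        # Keyed pseudorandom mapping from a physical address to a cache set.
        digest = hashlib.blake2b(address.to_bytes(8, "little"),
                                 key=self.key, digest_size=8).digest()
        return int.from_bytes(digest, "little") % self.num_sets

    def access(self, address: int) -> int:
        # Rekey once the current key has served its quota of accesses;
        # this discards any mapping information an attacker has gathered.
        if self.accesses_under_key >= self.rekeying_period:
            self.key = secrets.token_bytes(16)
            self.accesses_under_key = 0
        self.accesses_under_key += 1
        return self.set_index(address)
```

In actual designs, rekeying also entails flushing or gradually remapping cached data, which is why overly short rekeying periods carry a performance cost.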

Figure 1: Simplified outline of an access-based cache side-channel attack. A) An attacker co-resides with a victim process on the same machine. B) The attacker prepares the cache, filling some cache sets with her own data. C) The victim performs some activity, evicting attacker data in the process. D) The attacker re-accesses her data; the access latencies reveal which of her data the victim evicted, leaking side-channel information.
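The steps in Figure 1 can be reproduced with a small simulation. The sketch below models a single W-way cache set with LRU replacement; all names and parameters are hypothetical, and the point is only to show how a miss during the probe phase (a slow reload in real hardware) reveals victim activity in the monitored set.

```python
from collections import OrderedDict


class LRUSet:
    """One W-way cache set with LRU replacement (toy model)."""

    def __init__(self, ways: int):
        self.ways = ways
        self.lines = OrderedDict()  # tag -> None, ordered oldest first

    def access(self, tag) -> bool:
        """Access a line; return True on a hit, False on a miss."""
        hit = tag in self.lines
        if hit:
            self.lines.move_to_end(tag)         # refresh LRU position
        else:
            if len(self.lines) >= self.ways:
                self.lines.popitem(last=False)  # evict the LRU line
            self.lines[tag] = None
        return hit


def prime_probe_round(victim_touches_set: bool, ways: int = 8) -> bool:
    """Return True if the attacker observes victim activity in the set."""
    cache_set = LRUSet(ways)
    attacker_tags = [("attacker", i) for i in range(ways)]

    # B) Prime: the attacker fills the monitored set with her own lines.
    for tag in attacker_tags:
        cache_set.access(tag)

    # C) Victim activity: an access to this set evicts one attacker line.
    if victim_touches_set:
        cache_set.access(("victim", 0))

    # D) Probe: any miss while re-accessing the primed lines tells the
    #    attacker that the victim used this cache set.
    misses = sum(not cache_set.access(tag) for tag in attacker_tags)
    return misses > 0


print(prime_probe_round(victim_touches_set=True))   # True: eviction observed
print(prime_probe_round(victim_touches_set=False))  # False: set left untouched
```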

A previous work of ours [1], presented at the Conference on Cryptographic Hardware and Embedded Systems (CHES 2022), set out to address this issue. In that work, we proposed a cryptographic security model that captures security in this setting. Moreover, using our results, it is possible to choose a rekeying period and a randomization mechanism such that security is provably guaranteed in this model.

In a recent extension of our work [2], we take a different path. Instead of looking for rekeying periods that guarantee security, we study which rekeying periods allow for successful attacks. Specifically, we analyze the success probability of state-of-the-art attacks on RPCs under the Least Recently Used (LRU) and Random replacement policies. An additional takeaway of this result is that attacks on RPCs can break security with arbitrarily high success probability, given enough cache accesses under the same keying material.
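As a rough illustration of this takeaway (a back-of-the-envelope Monte Carlo estimate, not the analysis of [2]), one can check how the probability that some attacker address collides with a target victim set grows with the number of addresses accessed under an idealized random mapping:

```python
import random


def collision_probability(num_sets: int, attacker_addresses: int,
                          trials: int = 10_000) -> float:
    """Estimate the probability that at least one attacker address maps to
    the victim's cache set, assuming each address is placed in an
    independent, uniformly random set (the idealized RPC behaviour)."""
    hits = 0
    for _ in range(trials):
        victim_set = random.randrange(num_sets)
        if any(random.randrange(num_sets) == victim_set
               for _ in range(attacker_addresses)):
            hits += 1
    return hits / trials


# More accesses under the same key push the collision probability,
# and with it the attack success probability, towards 1.
for n in (64, 512, 4096):
    print(n, collision_probability(num_sets=1024, attacker_addresses=n))
```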

Additionally, in this new version we provide a formal proof of a result that extends the security guarantees to multiple epochs, i.e., to the case where different keys are used over time. All results in the article are implemented as Python scripts and made publicly available [3]. Given a set of cache parameters, these scripts produce the largest rekeying period for which we can guarantee security for the largest number of accesses, either within one epoch or across several epochs.
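The sketch below is not the interface of the published scripts [3]; it only illustrates the kind of search they perform. Given cache parameters and a tolerated attacker advantage, it returns the largest rekeying period whose security bound stays below the tolerance, where the bound used here is a placeholder birthday-style collision bound rather than the bound proven in the article.

```python
def advantage_bound(num_sets: int, accesses: int) -> float:
    """Placeholder bound on the attacker advantage after a number of cache
    accesses under one key: a simple birthday-style collision bound, used
    only as a stand-in for the bound derived in the article."""
    return min(1.0, accesses * (accesses - 1) / (2 * num_sets))


def largest_secure_period(num_sets: int, tolerance: float) -> int:
    """Largest rekeying period (accesses per key) whose placeholder bound
    stays below the tolerated attacker advantage."""
    period = 1
    while advantage_bound(num_sets, period + 1) <= tolerance:
        period += 1
    return period


print(largest_secure_period(num_sets=2048, tolerance=0.01))
```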

References

[1] Jordi Ribes-González, Oriol Farràs, Carles Hernández, Vatistas Kostalabros, and Miquel Moretó. A Security Model for Randomization-based Protected Caches. IACR Transactions on Cryptographic Hardware and Embedded Systems, 2022(3), 1–25, 2022. https://doi.org/10.46586/tches.v2022.i3.1-25
[2] Jordi Ribes-González, Oriol Farràs, Carles Hernández, Vatistas Kostalabros, and Miquel Moretó. A Security Model for Randomization-based Protected Caches. In Cryptology ePrint Archive, Paper 2022/440, 2022. https://eprint.iacr.org/2022/440.
[3] Jordi Ribes-González, Oriol Farràs, Carles Hernández, Vatistas Kostalabros, and Miquel Moretó. Artifacts for "A Security Model for Randomization-based Protected Caches". https://doi.org/10.5281/zenodo.6397296.