Project Details
Securing Neural Network Accelerators against Electrical-Level Attacks (SecureNN)
Applicant
Professor Mehdi B. Tahoori, Ph.D., since 7/2024
Subject Area
Computer Architecture, Embedded and Massively Parallel Systems
Security and Dependability, Operating-, Communication- and Distributed Systems
Term
since 2022
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 501300923
Artificial Neural Network (ANN) algorithms are increasingly used across many fields, ranging from data analysis in the natural sciences and applications in mobile phones to autonomous cars, medical evaluations, and marketing. These algorithms benefit greatly from customized hardware accelerators, which increase processing speed and reduce total energy consumption for higher efficiency. Thus far, research on ANN accelerator hardware has focused mainly on computing efficiency, while security against malicious users or external attackers remains far less explored than for traditional computer architectures. A few publications have already provided proof-of-concept demonstrations that the hardware of ANN accelerators can be attacked. Among other attack vectors, they are vulnerable to side-channel and fault attacks at the electrical level. Fault attacks actively influence the device through the power supply or power management logic, while side-channel attacks apply statistical methods to power measurements to extract secret information. In some multi-user systems, these attacks can be performed remotely through software; in other cases, standard electronic-lab measurement equipment suffices. Traditional architectures and cryptosystems typically need to protect secret encryption keys, which are processed within small time frames and involve only a small amount of data. In contrast, neural networks are vulnerable within longer time windows and often process larger data sets in which privacy is a concern. Furthermore, their internal model parameters also need protection from reverse engineering, since they may constitute company secrets. Under these circumstances, novel countermeasures have to be found. The goal of this proposal is to find appropriate countermeasures against side-channel and fault attacks on hardware neural network accelerators, protecting against both known and unknown attack vectors.
We aim to protect privacy and safety, and thus trustworthiness, in such accelerator devices, which is especially demanded in critical application scenarios. First, we will analyze existing attacks and extend them with new attack vectors. We will then develop defenses against all of these attack vectors through circuit-level, system-level, and user-level techniques, applied at design time, deployment time, or runtime. Combining the resulting techniques will diminish the chance of a successful attack and increase the resilience of the whole system. For circuit-level and system-level techniques we will mainly apply knowledge from hardware architecture, while for user-level techniques we will additionally incorporate machine learning expertise. Our results will be a key enabler for the secure deployment of neural network accelerator hardware in critical application domains.
DFG Programme
Research Grants
Former Applicant
Dr.-Ing. Dennis Gnad, until 6/2024