Disrupting the Intelligence of a Machine: Understanding Neural Network Vulnerabilities through Voltage Glitching

Researcher(s)

  • Matthew Ward, Electrical Engineering, University of Delaware

Faculty Mentor(s)

  • Chengmo Yang, Electrical and Computer Engineering, University of Delaware

Abstract

Neural networks (NNs) have experienced remarkable growth over the past decade, and this progress raises concerns about their susceptibility to security threats. Despite varying complexities and safety measures, a vulnerability common to all neural networks is their reliance on hardware, which may suffer from various types of errors. In this project, we investigate the resilience of a neural network to hardware voltage glitch attacks. A voltage glitch attack briefly shorts the power supply of the microprocessor running the neural network, for only nanoseconds at a time. To comprehensively study the impact of these attacks, we varied both the duration of the shorts and their timing. While some attacks caused only minor variations in the NN's outputs, others led to more severe consequences such as runtime errors and complete failures. Notably, certain attacks exhibited high repeatability, consistently triggering resets or producing incorrect outputs, indicating the network's vulnerability to this type of attack. Ultimately, this research sheds light on why neural network accelerators must be built to be resilient to all forms of interference, including physical attacks. Safeguarding against these attacks will be critical as neural networks' real-world applications continue to expand.
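The experimental procedure described above, sweeping glitch duration and timing and tallying the observed outcomes, can be sketched in outline. This is a purely illustrative sketch, not the actual test harness: the `inject_glitch` function below is a deterministic toy stand-in for real hardware, and all thresholds and outcome rules in it are invented for demonstration.

```python
from itertools import product

def inject_glitch(width_ns, offset_ns):
    """Toy stand-in for one glitch trial (hypothetical behavior).

    A real harness would arm a glitch circuit with the given pulse
    width and trigger offset, run one NN inference on the target
    microprocessor, and classify what happened. Here we just return
    a made-up deterministic outcome so the sweep logic is runnable.
    """
    if width_ns >= 80:
        return "reset"    # illustrative: long shorts cause a device reset
    if width_ns >= 40 and offset_ns % 100 < 50:
        return "corrupt"  # illustrative: mid-width glitches corrupt outputs
    return "normal"       # glitch had no observable effect

def sweep(widths, offsets, repeats=3):
    """Run repeated trials at each (width, offset) point and tally outcomes.

    Repeating each point is what reveals the repeatability noted in the
    abstract: a point whose tally is dominated by one failure mode is a
    reliably exploitable glitch parameter.
    """
    results = {}
    for w, o in product(widths, offsets):
        tally = {"normal": 0, "corrupt": 0, "reset": 0}
        for _ in range(repeats):
            tally[inject_glitch(w, o)] += 1
        results[(w, o)] = tally
    return results

if __name__ == "__main__":
    for point, tally in sorted(sweep([20, 40, 80], [0, 50]).items()):
        print(point, tally)
```

In a real setup, the sweep loop would be the only reusable part; the injection step depends entirely on the glitching hardware and target board.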