Researcher(s)
- Carson Alberding, Computer Engineering, University of Delaware
Faculty Mentor(s)
- Chengmo Yang, Electrical & Computer Engineering, University of Delaware
Abstract
Autonomous driving systems rely on precise, pixel-level scene understanding to make intelligent decisions in complex environments. Semantic segmentation, which labels each pixel of an image according to its class (e.g., road, vehicle, person), is typically run on hardware accelerators for real-time visualization. However, experiments have shown that fault-injection attacks can severely corrupt the segmentation map, which could have detrimental consequences if safety-critical classes are not visualized properly. To address this challenge, this work presents a real-time fault detection and correction strategy to be implemented in segmentation models before the final output. The work was conducted across several models (ENet, ERFNet, FPN). First, an analysis was conducted of how faulty values arise both in the raw data and in the segmentation map itself, and several metrics are introduced to identify the presence of faults within a dataset. A fine-grained search, performed by splitting the models at specific encoder and decoder layers, proved useful for fault detection. Next, correction algorithms were developed for the different models to mitigate the effect of any glitch. These algorithms extrapolate from surrounding values to reconstruct the pixel classes. Since only lightweight operations such as median filtering are used, the hardware implementation is expected to incur minimal overhead. The results demonstrate a significant improvement in per-class IoU across several images with varying glitch parameters, highlighting the resilience of the recovery methods. In many cases, safety-critical classes such as pedestrians or traffic signs were completely restored by these methods, demonstrating the need for lightweight, real-time safeguards against such attacks.
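To illustrate the general idea of median-filter-based correction of a glitched segmentation map, the sketch below is a minimal, hypothetical example rather than the authors' implementation: it injects scattered faulty pixels into a toy class-label map, applies a 3x3 median filter, and compares per-class IoU before and after recovery. All names, the corruption model, and the toy data are assumptions made for illustration.

```python
import numpy as np

def median_filter_labels(seg, k=3):
    """Apply a k x k median filter to an integer class-label map.
    Isolated faulty pixels are outvoted by their neighbors, while
    large, consistent regions pass through mostly unchanged."""
    pad = k // 2
    padded = np.pad(seg, pad, mode="edge")
    # Stack all k*k shifted views, then take the median across the window.
    windows = np.stack([padded[i:i + seg.shape[0], j:j + seg.shape[1]]
                        for i in range(k) for j in range(k)])
    return np.median(windows, axis=0).astype(seg.dtype)

def class_iou(pred, truth, cls):
    """Intersection-over-union for a single class label."""
    p, t = pred == cls, truth == cls
    union = np.logical_or(p, t).sum()
    return np.logical_and(p, t).sum() / union if union else 1.0

# Toy ground truth: a horizontal "road" band of class 1 on background 0.
truth = np.zeros((32, 32), dtype=np.int64)
truth[8:24, :] = 1

# Simulated glitch: ~5% of pixels flipped to a spurious class 2.
rng = np.random.default_rng(0)
glitched = truth.copy()
glitched[rng.random(truth.shape) < 0.05] = 2

recovered = median_filter_labels(glitched)
print("IoU before:", class_iou(glitched, truth, 1))
print("IoU after: ", class_iou(recovered, truth, 1))
```

Note that taking a median over categorical labels is only a crude stand-in for a majority (mode) vote; it behaves sensibly here because the toy example uses few, clustered label values, and a real deployment would choose the window operation to match the fault model.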