Security and Interpretability in Automotive Systems

by Shailja Thakur, et al.

CAN (Controller Area Network) lacks any sender authentication mechanism, which leaves it vulnerable to security threats. For instance, an attacker can impersonate an ECU (Electronic Control Unit) on the bus and unobtrusively send spoofed messages under the identifier of the impersonated ECU. To address this insecurity, this thesis demonstrates a sender authentication technique that uses power consumption measurements of the ECUs together with a classification model to determine which ECU is transmitting. Evaluation in real-world settings shows that the technique applies across a broad range of operating conditions and achieves good accuracy.

A key challenge of machine learning-based security controls is the potential for false positives. A false-positive alert may induce panic in operators, lead to incorrect reactions, and, in the long run, cause alarm fatigue. Reliable decision-making in such circumstances requires knowing the cause of unusual model behavior, but the black-box nature of these models makes them uninterpretable. Another contribution of this thesis therefore explores explanation techniques for image and time-series inputs that (1) assign weights to individual inputs based on their sensitivity toward the target class, and (2) quantify the variations in the explanation by reconstructing the sensitive regions of the inputs using a generative model.

In summary, this thesis presents methods for addressing security and interpretability in automotive systems, which can also be applied in other settings where safe, transparent, and reliable decision-making is crucial.
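To illustrate the idea behind power-side-channel sender authentication, the following is a minimal sketch, not the thesis's actual pipeline: it assumes each ECU's transmissions leave a characteristic power fingerprint, simulates synthetic traces, and attributes a frame to its physical sender with a simple nearest-centroid classifier. All names, trace shapes, and noise levels here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ECUS, TRACE_LEN, TRAIN_PER_ECU = 3, 64, 50

# Synthetic per-ECU power profiles (stand-ins for real measurements).
profiles = rng.normal(0.0, 1.0, size=(N_ECUS, TRACE_LEN))

def sample_trace(ecu_id, noise=0.3):
    """One noisy power trace captured while `ecu_id` transmits."""
    return profiles[ecu_id] + rng.normal(0.0, noise, size=TRACE_LEN)

# "Training": average traces per ECU to obtain a centroid fingerprint.
centroids = np.stack([
    np.mean([sample_trace(e) for _ in range(TRAIN_PER_ECU)], axis=0)
    for e in range(N_ECUS)
])

def identify_sender(trace):
    """Attribute a trace to the ECU whose fingerprint is closest."""
    dists = np.linalg.norm(centroids - trace, axis=1)
    return int(np.argmin(dists))

# A spoofed frame claims ECU 0's identifier but is physically sent by ECU 2;
# the power trace still points at the true transmitter.
spoofed = sample_trace(2)
print(identify_sender(spoofed))  # → 2
```

The point of the sketch is that the decision never consults the frame's claimed identifier, only the physical measurement, which is what defeats identifier spoofing.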
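For the first explanation step, assigning weights to inputs by their sensitivity toward the target class, a common concrete realization is occlusion: mask a region of the input and record how much the target-class score drops. The sketch below applies this to a toy time series with a toy scoring function; both are illustrative stand-ins, not the thesis's models.

```python
import numpy as np

def target_score(x):
    """Toy 'model': the score depends only on samples 10..19 of the series."""
    return float(np.sum(x[10:20]))

def occlusion_weights(x, score_fn, window=5):
    """Weight each window by the score drop when that window is zeroed out."""
    base = score_fn(x)
    weights = np.zeros(len(x))
    for start in range(0, len(x), window):
        masked = x.copy()
        masked[start:start + window] = 0.0   # occlude one window
        weights[start:start + window] = base - score_fn(masked)
    return weights

x = np.ones(40)
w = occlusion_weights(x, target_score)
# Only the windows overlapping samples 10..19 receive nonzero weight.
print(np.nonzero(w)[0])  # → [10 11 12 13 14 15 16 17 18 19]
```

The second step described in the abstract, quantifying explanation variation by regenerating the sensitive regions with a generative model, would replace the zero mask above with samples from such a model; that part is omitted here for brevity.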

