Our vision is to develop methodologies for designing intelligent autonomous decision-making systems that are secure and resilient against malicious adversaries and natural failures.
To do so, we examine these systems from a security perspective, under various adversary models. Specifically, we develop techniques to assess the risk (i.e., impact and likelihood) of attacks and failures, and propose methodologies to design and systematically deploy defense measures that prevent, detect, and mitigate malicious attacks and natural disruptive events. In our research, we combine methodologies from cybersecurity, control theory, optimization, machine learning, game theory, and networked systems.
Have a look at some of our recent themes below.
Selected themes
Security metrics for control systems
The aim within this theme is to create novel methodologies addressing cybersecurity problems under uncertainty in learning and control systems. A core element of this research is the development of novel probabilistic risk metrics and optimization-based design methods that jointly consider the impact and the detectability constraints of attacks, as well as model uncertainty and prior beliefs on the adversary model.
Team members: Sribalaji C. Anand, Anh Tung Nguyen, André M. H. Teixeira
“Optimal Detector Placement in Networked Control Systems under Cyber-attacks with Applications to Power Networks”.
A. T. Nguyen, S. C. Anand, A. M. H. Teixeira, and A. Medvedev. IFAC World Congress, 2023.
This paper proposes a game-theoretic method to address the problem of optimal detector placement in a networked control system under cyber-attacks. The networked control system is composed of interconnected agents where each agent is regulated by its local controller over unprotected communication, which leaves the system vulnerable to malicious cyber-attacks. To guarantee a given local performance, the defender optimally selects a single agent on which to place a detector at its local controller with the purpose of detecting cyber-attacks. On the other hand, an adversary optimally chooses a single agent on which to conduct a cyber-attack on its input with the aim of maximally worsening the local performance while remaining stealthy to the defender. First, we present a necessary and sufficient condition to ensure that the maximal attack impact on the local performance is bounded, which restricts the possible actions of the defender to a subset of available agents. Then, by considering the maximal attack impact on the local performance as a game payoff, we cast the problem of finding optimal actions of the defender and the adversary as a zero-sum game. Finally, with the possible action sets of the defender and the adversary, an algorithm is proposed to determine the Nash equilibria of the zero-sum game that yield the optimal detector placement. The proposed method is illustrated on an IEEE benchmark for power systems.
“A Zero-Sum Game Framework for Optimal Sensor Placement in Uncertain Networked Control Systems under Cyber-Attacks”.
A. T. Nguyen, S. C. Anand, and A. M. H. Teixeira. IEEE Conference on Decision and Control (CDC), 2022.
This paper proposes a game-theoretic approach to address the problem of optimal sensor placement against an adversary in uncertain networked control systems. The problem is formulated as a zero-sum game with two players, namely a malicious adversary and a detector. Given a protected performance vertex, we consider a detector, with uncertain system knowledge, that selects another vertex on which to place a sensor and monitors its output with the aim of detecting the presence of the adversary. On the other hand, the adversary, also with uncertain system knowledge, chooses a single vertex and conducts a cyber-attack on its input. The purpose of the adversary is to perturb the attack vertex so as to maximally disrupt the protected performance vertex while remaining undetected by the detector. As our first contribution, the game payoff of the above-defined zero-sum game is formulated in terms of the Value-at-Risk of the adversary’s impact. However, this game payoff corresponds to an intractable optimization problem. To tackle the problem, we adopt the scenario approach to approximately compute the game payoff. Then, the optimal monitor selection is determined by analyzing the equilibrium of the zero-sum game. The proposed approach is illustrated via a numerical example of a 10-vertex networked control system.
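The scenario approach mentioned above replaces the intractable payoff by an empirical estimate computed from sampled realizations of the uncertain system. A minimal sketch of that idea, with a hypothetical lognormal impact distribution standing in for the sampled attack impacts:

```python
import numpy as np

def empirical_var(impacts, alpha=0.95):
    """Empirical Value-at-Risk: the alpha-quantile of sampled attack impacts."""
    return np.quantile(impacts, alpha)

# Hypothetical example: attack impacts for one (attack, monitor) vertex pair,
# sampled over 1000 realizations of the uncertain system model.
rng = np.random.default_rng(0)
impacts = rng.lognormal(mean=0.0, sigma=0.5, size=1000)
payoff = empirical_var(impacts, alpha=0.95)  # scenario-based game payoff
```

In the paper this quantile is computed per pair of attack and monitor vertices, filling the payoff matrix of the zero-sum game.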
“Risk assessment and optimal allocation of security measures under stealthy false data injection attacks”.
S. C. Anand, A. M. H. Teixeira, and A. Ahlén. IEEE Conference on Control Technology and Applications (CCTA), 2022.
This paper first addresses the problem of risk assessment under false data injection attacks on uncertain control systems. We consider an adversary with complete system knowledge, injecting stealthy false data into an uncertain control system. We then use the Value-at-Risk to characterize the risk associated with the attack impact caused by the adversary. The worst-case attack impact is characterized by the recently proposed output-to-output gain. We observe that the risk assessment problem corresponds to an infinite non-convex robust optimization problem. To this end, we use dissipative system theory and the scenario approach to approximate the risk-assessment problem by a convex problem and also provide probabilistic certificates on the approximation. Secondly, we consider the problem of security measure allocation. We consider an operator with a constraint on the security budget. Under this constraint, we propose an algorithm to optimally allocate the security measures using the calculated risk such that the resulting Value-at-Risk is minimized. Finally, we illustrate the results through a numerical example. The numerical example also illustrates that security allocation based on the Value-at-Risk and allocation based on the impact on the nominal system may have different outcomes, thereby depicting the benefit of using risk metrics.
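To make the budget-constrained allocation step concrete, here is a deliberately simplified greedy sketch: each candidate measure has an assumed cost and an assumed risk reduction, and measures are picked by reduction-per-cost until the budget runs out. The names, costs, and reductions are hypothetical, and the paper's actual algorithm is driven by the computed Value-at-Risk rather than this heuristic.

```python
def allocate_security_budget(measures, budget):
    """Greedy illustration of budget-constrained allocation.
    `measures` is a list of (name, cost, risk_reduction) tuples;
    measures with the best risk reduction per unit cost are chosen
    first, until the security budget is exhausted."""
    chosen, spent = [], 0.0
    for name, cost, reduction in sorted(
            measures, key=lambda m: m[2] / m[1], reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

# Hypothetical measures: (name, cost, risk reduction).
plan = allocate_security_budget(
    [("encrypt_link", 1, 5), ("add_sensor", 2, 4), ("audit_log", 1, 1)],
    budget=2)
```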
“A Single-Adversary-Single-Detector Zero-Sum Game in Networked Control Systems”.
A. T. Nguyen, A. M. H. Teixeira, and A. Medvedev. IFAC Conference on Networked Systems (NecSys), 2022.
This paper proposes a game-theoretic approach to address the problem of optimal sensor placement for detecting cyber-attacks in networked control systems. The problem is formulated as a zero-sum game with two players, namely a malicious adversary and a detector. Given a protected target vertex, the detector places a sensor at a single vertex to monitor the system and detect the presence of the adversary. On the other hand, the adversary selects a single vertex through which to conduct a cyber-attack that maximally disrupts the target vertex while remaining undetected by the detector. As our first contribution, for a given pair of attack and monitor vertices and a known target vertex, the game payoff function is defined as the output-to-output gain of the respective system. Then, the paper characterizes the set of feasible actions by the detector that ensures bounded values of the game payoff. Finally, an algebraic sufficient condition is proposed to examine whether a given vertex belongs to the set of feasible monitor vertices. The optimal sensor placement is then determined by computing the mixed-strategy Nash equilibrium of the zero-sum game through linear programming. The approach is illustrated via a numerical example of a 10-vertex networked control system with a given target vertex.
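The final step above, computing a mixed-strategy Nash equilibrium of a zero-sum game by linear programming, is a textbook construction. A generic sketch (using `scipy.optimize.linprog`, with a matching-pennies payoff matrix as a stand-in for the paper's output-to-output-gain payoffs):

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(payoff):
    """Maximin mixed strategy for the row player of a zero-sum game,
    via the standard LP reformulation: maximize v subject to
    payoff[:, j] @ x >= v for every column j, with x a probability vector."""
    m, n = payoff.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # minimize -v == maximize v
    A_ub = np.hstack([-payoff.T, np.ones((n, 1))])  # v - payoff[:, j] @ x <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # sum(x) == 1
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds)
    return res.x[:m], res.x[-1]  # equilibrium mixed strategy, game value

# Matching pennies: the equilibrium is uniform mixing with value 0.
strategy, value = solve_zero_sum(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```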
“Risk-averse controller design against data injection attacks on actuators for uncertain control systems”.
S. C. Anand and A. M. H. Teixeira. American Control Conference, Atlanta, Georgia, USA, 2022.
In this paper, we consider the optimal controller design problem against data injection attacks on actuators for an uncertain control system. We consider attacks that aim at maximizing the attack impact while remaining stealthy in the finite horizon. To this end, we use the Conditional Value-at-Risk to characterize the risk associated with the impact of attacks. The worst-case attack impact is characterized using the recently proposed output-to-output ℓ2-gain (OOG). We formulate the design problem and observe that it is non-convex and hard to solve. Using the framework of scenario-based optimization and a convex proxy for the OOG, we propose a convex optimization problem that approximately solves the design problem with probabilistic certificates. Finally, we illustrate the results through a numerical example.
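Whereas the Value-at-Risk is a quantile, the Conditional Value-at-Risk used here averages over the tail beyond that quantile, so it also accounts for how bad the worst scenarios are. A minimal empirical sketch of the metric itself (not of the paper's controller synthesis):

```python
import numpy as np

def empirical_cvar(impacts, alpha=0.95):
    """Empirical Conditional Value-at-Risk: the mean attack impact over
    the worst (1 - alpha) fraction of sampled scenarios."""
    impacts = np.sort(np.asarray(impacts, dtype=float))
    tail = impacts[int(np.ceil(alpha * len(impacts))):]
    return tail.mean() if len(tail) else impacts[-1]
```

Because it is a coherent risk measure and admits convex reformulations, CVaR is a natural objective for the scenario-based convex design problem described above.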
“Security Metrics for Control Systems”.
A. M. H. Teixeira.
in Safety, Security and Privacy for Cyber-Physical Systems, R. M. G. Ferrari and A. M. H. Teixeira, Eds. Cham: Springer International Publishing, 2021, pp. 1–8.
Secure Federated Machine Learning
Federated machine learning (FedML) has proven to be a suitable approach for privacy-preserving machine learning across a large number of heterogeneous devices. Our group addresses concerns related to security and privacy in federated machine learning against model poisoning and information leakage attacks. The approach is centered around developing new theories and methodologies to achieve two main aims: secure aggregation of local models under poisoning attacks; private distributed aggregation of local models.
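To illustrate why aggregation is the natural defense point against model poisoning, here is a generic sketch of a classic robust aggregation rule, the coordinate-wise median; this is a standard textbook technique, not the group's specific scheme.

```python
import numpy as np

def median_aggregate(client_updates):
    """Coordinate-wise median of client model updates: a classic
    poisoning-robust alternative to plain federated averaging."""
    return np.median(np.stack(client_updates), axis=0)

# Hypothetical round: three honest clients and one poisoned client.
# A single outlier barely moves the median, unlike the mean.
honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
poisoned = np.array([100.0, -100.0])
agg = median_aggregate(honest + [poisoned])
```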
Team members: Usama Zafar, Salman Toor, André M. H. Teixeira
“Scalable federated machine learning with FEDn”.
M. Ekmefjord et al. Symposium on Cluster, Cloud and Internet Computing, Taormina, Italy, 2022.
Federated machine learning promises to overcome the input privacy challenge in machine learning. By iteratively updating a model on private clients and aggregating these local model updates into a global federated model, private data is incorporated in the federated model without needing to share and expose that data. Several open software projects for federated learning have appeared. Most of them focus on supporting flexible experimentation with different model aggregation schemes and with different privacy-enhancing technologies. However, there is a lack of open frameworks that focus on critical distributed computing aspects of the problem such as scalability and resilience. It is a big step for a data scientist to go from an experimental sandbox to testing their federated schemes at scale in real-world geographically distributed settings. To bridge this gap we have designed and developed a production-grade hierarchical federated learning framework, FEDn. The framework is specifically designed to make it easy to go from local development in pseudo-distributed mode to horizontally scalable distributed deployments. FEDn aims both to be production grade for industrial applications and to serve as a flexible research tool for exploring the real-world performance of novel federated algorithms, and the framework has been used in a number of industrial and academic R&D projects. In this paper we present the architecture and implementation of FEDn. We demonstrate the framework’s scalability and efficiency in evaluations based on two case studies, representative of a cross-silo and a cross-device use case respectively.
Secure artificial pancreas
The artificial pancreas is an envisioned medical system whose function is to automatically regulate the blood glucose levels in patients with diabetes, with little to no human intervention. At the core of these systems is an intelligent device autonomously deciding how much synthetic insulin and glucagon to infuse into the body through infusion pumps, based on data received from sensors located throughout the body measuring, for instance, blood glucose levels in real-time. Data exchange among the controlling device, the pumps, and the sensors is critical. The whole system must operate safely, even in the presence of adversaries tampering with the communication or devices.
In this line of research, we develop schemes that monitor the sensor readings to detect anomalies and distinguish them from natural unknown disturbances, such as meal intake and physical exercise.
Team members: Fatih Emre Tosun, André M. H. Teixeira
“Detection of Bias Injection Attacks on the Glucose Sensor in the Artificial Pancreas Under Meal Disturbances”.
F. E. Tosun, A. M. H. Teixeira, A. Ahlén, and S. Dey. American Control Conference, Atlanta, Georgia, USA, 2022.
The artificial pancreas is an emerging concept of closed-loop insulin delivery that aims to tightly regulate the blood glucose levels in patients with type 1 diabetes. This paper considers bias injection attacks on the glucose sensor deployed in an artificial pancreas. Modern glucose sensors transmit measurements through wireless communication that are vulnerable to cyber-attacks, which must be timely detected and mitigated. To this end, we propose a model-based anomaly detection scheme using a Kalman filter and a χ² test. One key challenge is to distinguish cyber-attacks from large unknown disturbances arising from meal intake. This challenge is addressed by an online meal estimator, and a novel time-varying detection threshold. More precisely, we show that the ordinary least squares is the optimal unbiased estimator of the meal size under certain modelling assumptions. Moreover, we derive a novel time-varying threshold for the χ² detector to avoid false alarms during meal ingestion. The results are validated by means of numerical simulations.
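The core of the detector above is the standard normalized-innovation-squared (χ²) test on the Kalman filter residual; the time-varying threshold is what the paper adds to tolerate meals. A minimal sketch of the test itself, with a hypothetical scalar glucose innovation (the paper's meal estimator and threshold derivation are not reproduced here):

```python
import numpy as np

def chi2_alarm(innovation, S, threshold):
    """Normalized innovation squared (NIS) test: raise an alarm when the
    Kalman-filter innovation, weighted by its covariance S, exceeds the
    detection threshold. The threshold can be passed per sample, e.g.
    raised during estimated meal ingestion as in the paper."""
    nis = float(innovation.T @ np.linalg.inv(S) @ innovation)
    return nis > threshold, nis

# Hypothetical scalar example: a 2-sigma glucose innovation against the
# 95% chi-square threshold for one degree of freedom (about 3.84).
alarm, nis = chi2_alarm(np.array([[2.0]]), np.array([[1.0]]), threshold=3.84)
```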