Our vision is to develop methodologies for designing intelligent autonomous decision-making systems that are secure and resilient against malicious adversaries and natural failures.
To do so, we study these systems from a security perspective under various adversary models, develop techniques to assess the risk (i.e., impact and likelihood) of such adversaries and failures, and propose methodologies to design and systematically deploy defense measures that prevent, detect, and mitigate malicious attacks and natural disruptive events. In our research, we combine relevant methodologies from cybersecurity, control theory, optimization, machine learning, reinforcement learning, game theory, and networked/distributed systems.
Here are some themes that we currently work on:
Security metrics for control systems: the aim within this theme is to create novel methodologies addressing cybersecurity problems under uncertainty in learning and control systems. A core element of this research is the development of novel probabilistic risk metrics and optimization-based design methods that jointly consider the impact and detectability constraints of attacks, as well as model uncertainty and prior beliefs about the adversary model.
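As a toy illustration of the idea (all adversary models, numbers, and the detector threshold below are hypothetical, not results of our work), a probabilistic risk metric can weigh each adversary model's worst-case impact by a prior belief, while a detectability constraint restricts attention to attacks stealthy enough to evade a detector:

```python
# Hypothetical sketch: risk = expected impact over a prior on adversary
# models, counting only attacks whose detection statistic stays below
# the detector's alarm threshold (i.e., stealthy attacks).

# Each adversary model: prior belief, worst-case impact, detection statistic.
adversaries = [
    {"prior": 0.5, "impact": 10.0, "detection_stat": 0.8},  # weak, stealthy
    {"prior": 0.3, "impact": 40.0, "detection_stat": 2.5},  # strong, but detectable
    {"prior": 0.2, "impact": 25.0, "detection_stat": 1.2},  # moderate, stealthy
]

DETECTION_THRESHOLD = 1.5  # attacks below this statistic go undetected


def risk(adversaries, threshold):
    """Expected impact over the prior, restricted to stealthy attacks."""
    return sum(a["prior"] * a["impact"]
               for a in adversaries
               if a["detection_stat"] < threshold)


print(risk(adversaries, DETECTION_THRESHOLD))  # 0.5*10 + 0.2*25 = 10.0
```

The detectable adversary contributes no risk here: raising the detector threshold would admit it and increase the metric, which is exactly the impact-versus-detectability trade-off the design methods optimize over.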
Secure artificial pancreas: The artificial pancreas is an envisioned medical system that automatically regulates blood glucose levels in patients with diabetes, with little to no human intervention. At its core, an intelligent device autonomously decides how much synthetic insulin and glucagon to infuse into the body through infusion pumps, based on data received from sensors located throughout the body that measure, for instance, blood glucose levels in real time. Data exchange among the controlling device, the pumps, and the sensors is critical: the whole system must operate safely even in the presence of adversaries tampering with the communication or the devices. In this line of research, we develop schemes that monitor the sensor readings to detect anomalies and distinguish them from natural unknown disturbances such as meal intake and physical exercise.
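A minimal sketch of such residual-based monitoring (all residual values and tuning parameters below are illustrative assumptions, not our deployed scheme) is a one-sided CUSUM statistic on the difference between measured and predicted glucose: a short, bounded disturbance like a meal stays below the alarm threshold, while a persistent sensor-spoofing bias accumulates and triggers an alarm.

```python
# Illustrative sketch (assumed values): CUSUM monitoring of glucose
# residuals, i.e., measured minus model-predicted glucose.

def cusum_alarms(residuals, drift=0.5, threshold=5.0):
    """Return the time indices where the CUSUM statistic crosses the threshold."""
    s, alarms = 0.0, []
    for k, r in enumerate(residuals):
        s = max(0.0, s + r - drift)  # accumulate bias beyond the drift allowance
        if s > threshold:
            alarms.append(k)
            s = 0.0  # reset after raising an alarm
    return alarms

# Natural disturbance: a short residual bump (e.g., a meal) -> no alarm.
meal = [0.0] * 5 + [1.0, 1.5, 1.0] + [0.0] * 10
# Attack: a sustained bias injected into the sensor stream -> alarms.
attack = [0.0] * 5 + [2.0] * 10

print(cusum_alarms(meal))    # []
print(cusum_alarms(attack))  # [8, 12]
```

The drift parameter encodes how much natural, short-lived deviation is tolerated; choosing it so that known disturbances like meals stay silent while sustained tampering does not is the crux of the detection problem.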
Secure Federated Machine Learning: Federated machine learning (FedML) has proven to be a suitable approach for privacy-preserving machine learning across a large number of heterogeneous devices. Our group addresses security and privacy concerns in federated machine learning, in particular model poisoning and information leakage attacks. The approach is centered on developing new theories and methodologies to achieve two main aims: secure aggregation of local models under poisoning attacks, and private distributed aggregation of local models.
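To see why plain averaging is vulnerable to model poisoning, consider the classic coordinate-wise median as one well-known robust aggregation rule (shown here purely as an illustration, not as our group's method, with made-up update values): a single poisoned client can drag the mean arbitrarily far, while the median stays close to the honest updates.

```python
# Illustrative comparison (hypothetical updates): mean vs. coordinate-wise
# median aggregation of client model updates under one poisoning client.
import statistics


def mean_aggregate(updates):
    """Plain federated averaging of per-coordinate client updates."""
    return [statistics.fmean(coord) for coord in zip(*updates)]


def median_aggregate(updates):
    """Coordinate-wise median, a standard robust aggregation rule."""
    return [statistics.median(coord) for coord in zip(*updates)]


honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]  # benign local updates
poisoned = honest + [[100.0, -100.0]]          # one model-poisoning client

print(mean_aggregate(poisoned))    # pulled far off by the attacker
print(median_aggregate(poisoned))  # stays close to the honest updates
```

Robust aggregation alone does not give privacy, however: the second aim, private distributed aggregation, requires that the server learns only the aggregate and not the individual local models, and combining the two goals is what makes the problem challenging.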