Research

I am interested in building fundamental connections between the techniques used in the control and machine learning communities. Specifically, my current research focuses on developing new analysis/design/validation tools that ensure the efficient and safe integration of control and learning techniques for next-generation intelligent systems such as self-driving cars, advanced robotics, and smart buildings. In addition, I have recently started the AGI4Engineering (A4E) initiative, which aims to develop engineering AGI.

Artificial General Intelligence (AGI) for Engineering

The field of large language models (LLMs) is advancing rapidly, and industry pioneers envision developing engineering AGI systems capable of solving the most challenging design problems across diverse disciplines. Although LLMs have shown great promise in textbook-style problem solving, real-world engineering design poses a fundamentally different challenge that goes far beyond textbook knowledge. It demands the synthesis of domain knowledge, navigation of complex trade-offs, and management of the tedious processes that consume much of practicing engineers’ time. Today, a wide gap still separates LLMs from practicing engineers, and our research focuses on closing this gap. Our efforts along this research direction are detailed on the A4E website.

System/Control Tools for Certifiable Robustness of Deep Neural Networks and Large Foundation Models

Deep learning techniques have achieved state-of-the-art performance across a variety of applications, including computer vision, natural language processing, and game playing (e.g., Go). However, the robustness properties of deep learning models are not yet fully understood. This research focuses on tailoring system/control tools to induce strong robustness guarantees for deep neural networks and large foundation models. I am currently investigating how to train large-scale Lipschitz neural networks with certified robustness guarantees through the lens of robust control.
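To make the idea of a certified Lipschitz bound concrete, here is a minimal numpy sketch using one standard route: constraining each weight matrix's spectral norm so the product of layer norms bounds the network's global Lipschitz constant. This is an illustrative baseline, not the robust-control-based training method described above; all function names are for illustration only.

```python
import numpy as np

def spectral_normalize(W, target=1.0):
    """Rescale W so its largest singular value (spectral norm) equals `target`."""
    sigma = np.linalg.svd(W, compute_uv=False)[0]
    return W * (target / sigma)

def lipschitz_net(x, weights):
    """Feedforward net with ReLU activations (ReLU is 1-Lipschitz).
    Its global Lipschitz constant is bounded by the product of the
    layers' spectral norms -- here 1 per layer, so 1 overall."""
    h = x
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)
    return weights[-1] @ h

rng = np.random.default_rng(0)
raw = [rng.standard_normal((8, 4)),
       rng.standard_normal((8, 8)),
       rng.standard_normal((2, 8))]
weights = [spectral_normalize(W) for W in raw]

# Empirical check of the certificate: outputs can never move
# farther apart than the corresponding inputs.
x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
out_gap = np.linalg.norm(lipschitz_net(x1, weights) - lipschitz_net(x2, weights))
in_gap = np.linalg.norm(x1 - x2)
assert out_gap <= in_gap
```

The appeal of such certificates is that they hold for all inputs by construction, rather than being verified empirically after training.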

Generative AI for Decision-Making and Control in Complex Environments

Generative AI techniques such as diffusion models and large language models have the potential to improve decision-making and control in complex environments. I am currently studying how to design trustworthy autonomous agents by integrating generative AI with conventional control methods, with theoretical guarantees on stability, safety, and robustness.

Combining Robust Control and Deep Reinforcement Learning for Safety-Critical AI

I am interested in reconciling robust control with deep reinforcement learning for safety-critical AI applications. I am currently investigating the convergence properties of model-free policy gradient methods on standard robust control tasks. I am also developing new robust reinforcement learning methods by leveraging ideas from robust control and risk-sensitive control. Finally, I am working on tools and principles for designing perception-based control systems with robustness guarantees.
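The model-free policy gradient setting can be illustrated on the simplest control benchmark: a scalar LQR problem where the policy is a static gain and the gradient of the cost is estimated only from cost evaluations (a two-point zeroth-order estimate). This toy sketch is illustrative, assuming arbitrary plant parameters and step sizes; it is not the specific methods under investigation.

```python
import numpy as np

# Scalar LQR: x_{t+1} = a*x_t + b*u_t, policy u_t = -k*x_t,
# cost = sum_t (q*x_t^2 + r*u_t^2). The policy parameter is the gain k.
a, b, q, r = 1.2, 1.0, 1.0, 1.0

def cost(k, x0=1.0, horizon=100):
    """Finite-horizon LQR cost of the static gain k."""
    x, c = x0, 0.0
    for _ in range(horizon):
        u = -k * x
        c += q * x**2 + r * u**2
        x = a * x + b * u
    return c

def policy_gradient_step(k, rng, lr=1e-3, delta=1e-2):
    """Model-free update: estimate the gradient from two cost
    evaluations along a random perturbation direction."""
    s = rng.choice([-1.0, 1.0])
    g = (cost(k + delta * s) - cost(k - delta * s)) / (2 * delta) * s
    return k - lr * g

rng = np.random.default_rng(0)
k = 0.5  # initial stabilizing gain (|a - b*k| < 1)
for _ in range(2000):
    k = policy_gradient_step(k, rng)
```

Despite the nonconvexity of the cost in the policy parameter, gradient descent from a stabilizing gain drives k toward the optimal LQR gain, which is the kind of convergence behavior studied in this research direction.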

Optimization is Control

Optimization can be viewed as a control problem: treating the gradient of the cost function as the output of a plant that one wants to drive to zero, the optimization problem becomes an output regulation problem. Consequently, first-order optimization methods can be viewed as controllers regulating the plant. For example, gradient descent, Nesterov’s method, and the heavy-ball method can all be viewed as special cases of proportional-integral-derivative (PID) control. My current research focuses on translating “controllers” into “optimization methods” for large-scale unconstrained and constrained optimization problems in machine learning, control, robotics, smart buildings, and power systems.
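This plant/controller view can be sketched in a few lines of numpy on a quadratic cost: the "measurement" is the gradient, gradient descent is pure integral feedback on that measurement, and heavy-ball adds a momentum term. The specific step sizes and the test problem below are illustrative choices, not prescriptions.

```python
import numpy as np

# Plant: given the iterate x, the measured output is y = grad f(x).
# Regulating y to zero solves min f(x).
def grad(x, A, b):
    """Gradient of f(x) = 0.5*x'Ax - b'x; the plant output to regulate."""
    return A @ x - b

A = np.diag([1.0, 10.0])       # ill-conditioned quadratic
b = np.array([1.0, 1.0])
xstar = np.linalg.solve(A, b)  # equilibrium where the output y is zero

def integral_controller(x0, lr=0.09, steps=200):
    """Gradient descent = pure integral feedback on y = grad f(x):
    the iterate accumulates (integrates) the negative output."""
    x = x0.copy()
    for _ in range(steps):
        x = x - lr * grad(x, A, b)
    return x

def heavy_ball(x0, lr=0.09, beta=0.5, steps=200):
    """Heavy-ball = integral feedback plus a momentum term,
    analogous to adding dynamics to the controller."""
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(steps):
        x, x_prev = x - lr * grad(x, A, b) + beta * (x - x_prev), x
    return x

x0 = np.zeros(2)
```

Both "controllers" steer the output y = grad f(x) to zero, i.e., both iterations converge to the minimizer xstar; the control-theoretic payoff is that tools for analyzing feedback loops (e.g., quadratic constraints on the gradient "plant") give convergence-rate certificates for such methods.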