Bin Hu

Assistant Professor
Department of Electrical and Computer Engineering
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign

binhu7@illinois.edu
145 CSL
1308 W Main St
Urbana, IL 61801

About Me

I am an assistant professor in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign and affiliated with the Coordinated Science Laboratory. My research focuses on building fundamental connections between control and machine learning. Currently, I am most interested in:

  • System/control tools for certifiable robustness of deep neural networks and large foundation models

  • Generative AI for decision-making and control in complex environments

  • Connections between robust control and reinforcement learning

  • Control-theoretic tools for analysis and design of iterative algorithms in optimization and learning

I received my B.Sc. in Theoretical and Applied Mechanics from the University of Science and Technology of China in 2008, my M.S. in Computational Mechanics from Carnegie Mellon University in 2010, and my Ph.D. in Aerospace Engineering and Mechanics from the University of Minnesota in 2016, under the supervision of Peter Seiler. Between July 2016 and July 2018, I was a postdoctoral researcher in the Wisconsin Institute for Discovery at the University of Wisconsin-Madison, where I worked with Laurent Lessard and collaborated closely with Stephen Wright. In 2021, I received the NSF CAREER Award and an Amazon Research Award.

News

  • 02/2024: Our paper "COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability" has been posted on arXiv (joint work with Xingang Guo, Fangxu Yu, Huan Zhang, and Lianhui Qin). In this paper, we adapt Energy-based Constrained Decoding with Langevin Dynamics (COLD), a state-of-the-art, highly efficient algorithm for controllable text generation, to develop the COLD-Attack framework, which unifies and automates the search for adversarial LLM attacks under a variety of control requirements such as fluency, stealthiness, sentiment, and left-right-coherence.

  • 01/2024: Two papers have been accepted to ICLR 2024.

  • 09/2023: Big thanks to AFOSR for funding our project on "Built-in Certificates and Automated Verification of Learning-Based Control"!

  • 09/2023: Three papers have been accepted to NeurIPS 2023.

  • 01/2023: Our paper "A Unified Algebraic Perspective on Lipschitz Neural Networks" has been accepted as a Spotlight at the International Conference on Learning Representations (ICLR) 2023. In this paper, we leverage the quadratic constraint approach from control theory to design novel Lipschitz neural network structures, leading to improved certified robustness on classification tasks such as CIFAR-10/100 and TinyImageNet.

  • 10/2022: Our article "Towards a Theoretical Foundation of Policy Optimization for Learning Control Policies" has been posted on arXiv. This is an invited article that will appear in the next issue of the Annual Review of Control, Robotics, and Autonomous Systems.

  • 09/2022: My student Xingang Guo and I proved that direct policy search is guaranteed to achieve global convergence on the H-infinity state-feedback synthesis problem! Our results are summarized in a new paper entitled "Global Convergence of Direct Policy Search for State-Feedback H-Infinity Robust Control: A Revisit of Nonsmooth Synthesis with Goldstein Subdifferential", which has been accepted to NeurIPS 2022 and posted on arXiv.

  • 05/2022: Our paper "Provable Acceleration of Heavy Ball beyond Quadratics for a Class of Polyak-Łojasiewicz Functions when the Non-Convexity is Averaged-Out" has been accepted to International Conference on Machine Learning (ICML) 2022.

  • 05/2022: Our paper "Policy Optimization for Markovian Jump Linear Quadratic Control: Gradient Method and Global Convergence" has been accepted to IEEE Transactions on Automatic Control.

  • 03/2022: Our paper "Connectivity of the Feasible and Sublevel Sets of Dynamic Output Feedback Control with Robustness Constraints" has been posted on arXiv. This paper offers new insights into policy optimization in the output-feedback setting.

    Update: The above paper has been accepted to IEEE Control Systems Letters.

  • 01/2022: Our paper "Revisiting PGD Attacks for Stability Analysis of High-Dimensional Nonlinear Systems and Perception-Based Control" has been posted on arXiv. This paper repurposes PGD attacks from the adversarial learning literature as scalable analysis tools for approximating the region of attraction (ROA) of nonlinear control systems with large-scale neural network policies and/or high-dimensional image observations.

    Update: The above paper has been accepted to IEEE Control Systems Letters.

  • 11/2021: Our paper "Model-Free μ Synthesis via Adversarial Reinforcement Learning" has been posted on arXiv. This paper builds a connection between adversarial reinforcement learning and the famous DK iteration algorithm from robust control.

    Update: The above paper has been accepted to American Control Conference (ACC) 2022.

  • 09/2021: Our paper "Derivative-Free Policy Optimization for Linear Risk-Sensitive and Robust Control Design: Implicit Regularization and Sample Complexity" has been accepted to NeurIPS 2021.

  • 03/2021: Delighted to receive the 2020 Amazon Research Award. Big thanks to Amazon!

  • 02/2021: Thrilled to receive the NSF CAREER award on "Interplay between Control Theory and Machine Learning." Big thanks to NSF!

  • 09/2020: Our paper "On the Stability and Convergence of Robust Adversarial Reinforcement Learning: A Case Study on Linear Quadratic Systems" has been accepted to NeurIPS 2020.

  • 03/2020: Our paper "Analysis of Biased Stochastic Gradient Descent Using Sequential Semidefinite Programs" has been accepted to Mathematical Programming. A full-text view-only version of the final paper can be found here.

  • 10/2019: Our paper "Policy Optimization for H2 Linear Control with H-infinity Robustness Guarantee: Implicit Regularization and Global Convergence" has been posted on arXiv. This paper studies the implicit regularization mechanism in policy-based reinforcement learning for robust control design.

    Update I: A conference version of the above paper has been accepted to L4DC 2020 (one of 14 out of 131 papers selected for oral presentation).

    Update II: The journal version of the above paper has been accepted to SIAM Journal on Control and Optimization (SICON).

  • 06/2019: Our paper "Characterizing the Exact Behaviors of Temporal Difference Learning Algorithms Using Markov Jump Linear System Theory" has been posted on arXiv. This is my first paper on analyzing reinforcement learning algorithms using control theory!

    Update: The above paper has been accepted to NeurIPS 2019. The arXiv version of the paper has been revised.

  • 08/2018: I started as an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign.