Safety for learned control systems: data-driven proofs via neural certificates

Four environments where we can apply neural certificates for safe multi-agent control: a narrow alleyway, a large city, a coastal waterway, and a busy logistics hub.

For almost as long as people have been designing control systems, they have been asking “how can we prove that our controller will be safe?” For simple systems (e.g. linear or low-dimensional polynomial dynamics), we might be able to hand-analyze the system and prove that it will be safe, but this manual analysis quickly becomes untenable as the system’s dynamics and controller architecture become more complex. In particular, recent years have seen a wide range of control architectures based on deep learning, either for learning from demonstrations (imitation learning) or for solving nonlinear optimal control problems (reinforcement learning). Although these learning-based controllers have demonstrated empirical success on a wide range of problems, they can be prone to difficult-to-diagnose failures and vulnerable to adversarial examples. These issues raise the question: how might we verify the safety of a complex, learning-based controller, or, better yet, how can we design these controllers with built-in safety constraints?

The problem of verifying the safety of a controller has a long history in control theory. One of the earliest (and still most common) approaches is the use of a Lyapunov function. Originally developed in the 19th century by the Russian mathematician Aleksandr Lyapunov, a Lyapunov function generalizes the notion of energy dissipation. Just as a mechanical system that continuously dissipates energy must eventually come to rest, any dynamical system that admits a Lyapunov function (a non-negative scalar function of the state that strictly decreases along trajectories) must eventually reach an equilibrium. As a result, to prove that a control system (of arbitrary complexity) is stable, all we need to do is find a valid Lyapunov function certifying that stability.
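Concretely (this is the textbook statement, not tied to any particular paper), for a closed-loop system with dynamics \(\dot{x} = f(x)\) and equilibrium \(x^*\), a function \(V\) certifies asymptotic stability if

```latex
V(x^*) = 0, \qquad V(x) > 0 \;\; \forall x \neq x^*, \qquad
\dot{V}(x) = \nabla V(x)^\top f(x) < 0 \;\; \forall x \neq x^*.
```

If these conditions hold on a region around \(x^*\), every trajectory starting in a sublevel set of \(V\) contained in that region converges to \(x^*\).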

Unfortunately, Lyapunov functions (and other types of certificates, such as barrier functions, which certify that a system will avoid an unsafe set) are notoriously difficult to find by hand, requiring a mixture of intuition and lucky guesswork to land on the right functional form and parameters. Although various methods have been proposed to automate the search for a valid Lyapunov function (simulation-guided synthesis, sum-of-squares optimization, etc.), these methods scale poorly and often do not apply to systems with more complicated dynamics. To provide a more scalable, general-purpose certificate synthesis framework, researchers in robotics, control theory, and machine learning have developed a new suite of techniques that learn the certificate using a neural network. In this so-called neural certificate paradigm, we simultaneously learn a control policy and a neural certificate that serves as a data-driven proof of that controller’s correctness. These methods can also be applied to systems with non-neural control policies, in which case the certificate is learned after the controller has been designed; in either case, the goal is to use the certificate network as a statistical proof of the controller’s correctness, or even to guide the learning process itself and ensure the safety of the underlying controller.
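To make the learning loop concrete, here is a minimal PyTorch-style sketch of the general recipe (an illustration, not the implementation from the survey): sample states, push them through the candidate policy and a model of the dynamics, and penalize violations of the Lyapunov conditions so that the policy and the certificate network are trained jointly. The `dynamics` function, network sizes, and dimensions below are placeholder assumptions.

```python
# Minimal sketch of jointly training a policy and a neural Lyapunov certificate.
# All names, dimensions, and the placeholder dynamics are illustrative only.
import torch
import torch.nn as nn

state_dim, control_dim = 4, 2
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, control_dim))
V = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, 1))

B = 0.5 * torch.randn(control_dim, state_dim)  # placeholder input matrix

def dynamics(x, u):
    # Placeholder dynamics f(x, u); replace with the real system model.
    return -x + u @ B

opt = torch.optim.Adam(list(policy.parameters()) + list(V.parameters()), lr=1e-3)
eps = 0.1  # required decrease margin along trajectories

for step in range(1000):
    x = torch.randn(256, state_dim, requires_grad=True)  # sample training states
    x_dot = dynamics(x, policy(x))

    v = V(x)
    # dV/dt = grad V(x) . f(x, pi(x)), computed with autograd
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    v_dot = (grad_v * x_dot).sum(dim=1, keepdim=True)

    # Penalize violations of the certificate conditions:
    #   V >= 0 away from the goal, V(0) = 0, and dV/dt <= -eps elsewhere.
    loss = (torch.relu(-v).mean()
            + torch.relu(v_dot + eps).mean()
            + V(torch.zeros(1, state_dim)).pow(2).mean())
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that a sketch like this only enforces the conditions on sampled states, which is exactly why these certificates are described as statistical, data-driven proofs rather than exhaustive ones.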

To provide an introduction to this emerging area, we recently published a survey of neural certificate methods in IEEE Transactions on Robotics: “Safe Control With Learned Certificates: A Survey of Neural Lyapunov, Barrier, and Contraction Methods for Robotics and Control”. Our survey provides a brief introduction to certificate theory, demonstrates how neural certificates can be applied to a wide range of challenging control problems (including an open-source implementation of the framework), and discusses interesting open questions for future research.

In addition to this survey, our lab at MIT has developed neural-certificate-based controllers for a range of challenging problems in controls and robotics. Here, we highlight two recent papers.

Scalable certification of large networked systems

Since neural certificates are a fundamentally data-driven technique, they rely on the ability to sample from the state space of the system you wish to certify. As a result, scaling to large distributed or networked systems is an open challenge. One recent publication from our group, the Reliable Autonomous Systems Lab at MIT (REALM, led by Prof. Chuchu Fan), helps address this scalability issue by decomposing a large networked system into several smaller subsystems, learning a stability certificate for each of these subsystems, and then composing those certificates into a proof for the overall system. This approach, developed by REALM PhD student Songyuan Zhang and collaborators, was demonstrated on examples ranging from aerial robot swarms to power grids. This paper will be presented at the 2023 Learning for Dynamics and Control Conference (L4DC), and more information can be found on the project webpage.
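As a rough illustration of how such a composition can work (a generic small-gain-style argument, not the exact conditions from the paper), suppose each subsystem i has state x_i and a local certificate V_i whose decrease condition tolerates a bounded influence from its neighbors N(i). Then the sum of the local certificates decreases for the full network:

```latex
\dot{V}_i(x_i) \;\le\; -\alpha_i V_i(x_i) + \sum_{j \in \mathcal{N}(i)} \gamma_{ij} V_j(x_j)
\;\;\text{for all } i
\qquad\Longrightarrow\qquad
\dot{V}(x) \;\le\; -\Big(\min_i \big(\alpha_i - \textstyle\sum_j \gamma_{ji}\big)\Big) V(x),
\quad\text{where } V(x) = \sum_i V_i(x_i).
```

Whenever the local decay rates dominate the coupling gains (\(\alpha_i > \sum_j \gamma_{ji}\) for every i), the composed function V certifies stability of the whole network, even though each V_i was only learned on the much smaller state space of its own subsystem.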

Certification and planning for hybrid systems

Although hybrid systems (systems that switch between discrete modes, each with its own continuous dynamics) are prevalent in robotics and industrial control, they pose unique challenges for certifying the safety of a control system; for example, it is a well-known fact that even if the individual modes of a hybrid system are stable, the overall system may still be unstable! In a separate 2023 L4DC paper, REALM PhD student Yue Meng has proposed a method that first uses neural Lyapunov functions to certify regions of attraction for each mode of a hybrid system, and then plans a trajectory that connects these regions of attraction to ensure long-term safety. This has allowed him to demonstrate safe learned control for a range of hybrid control problems in locomotion and autonomous driving on changing terrain. More information on this work can be found on the project webpage.
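To give a flavor of the idea (an illustrative sketch with made-up quadratic certificates, not the method from the paper): if each mode m comes with a learned Lyapunov function V_m and a certified level c_m, so that the sublevel set {x : V_m(x) <= c_m} is a verified region of attraction, then the planner only needs to ensure that the state at every mode switch lands inside the next mode's certified region.

```python
# Illustrative sketch only: chaining modes via certified regions of attraction.
# The quadratic certificates and levels below are hypothetical placeholders.
import numpy as np

def in_certified_roa(x, V, c):
    """True if state x lies in the certified sublevel set {x : V(x) <= c}."""
    return V(x) <= c

# Hypothetical certificates for two modes of a legged robot (for illustration).
V_walk = lambda x: float(x @ np.diag([1.0, 2.0]) @ x)
V_climb = lambda x: float(x @ np.diag([3.0, 1.0]) @ x)
c_walk, c_climb = 1.0, 0.5

x_switch = np.array([0.3, 0.2])  # state at the planned walk-to-climb switch
if in_certified_roa(x_switch, V_climb, c_climb):
    print("Safe to switch: the climbing mode's certificate covers this state.")
else:
    print("Unsafe switch: replan so the handoff lands inside the certified region.")
```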

Additional work on certified control

In addition to these two recent papers, our lab has applied neural certificates to a range of other problems in control. If you find any of these problems interesting, check out the open-source implementations and get in touch!

Authors

Charles Dawson is a PhD student in the Reliable Autonomous Systems Lab at MIT. His work focuses on scalable, practical verification and design algorithms for autonomous systems, ranging from neural certificates to adversarial optimization and probabilistic programming.

Chuchu Fan is an Assistant Professor in the Department of Aeronautics and Astronautics (AeroAstro) and Laboratory for Information and Decision Systems (LIDS) at MIT. Her research group REALM at MIT works on using rigorous mathematics, including formal methods, machine learning, and control theory, for the design, analysis, and verification of safe autonomous systems.

Sources:

C. Dawson, S. Gao and C. Fan, “Safe Control With Learned Certificates: A Survey of Neural Lyapunov, Barrier, and Contraction Methods for Robotics and Control,” in IEEE Transactions on Robotics.

C. Dawson, B. Lowenkamp, D. Goff and C. Fan, “Learning Safe, Generalizable Perception-Based Hybrid Control With Certificates,” in IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 1904-1911, April 2022.

Y. Meng and C. Fan, “Hybrid Systems Neural Control with Region-of-Attraction Planner,” in Learning for Dynamics and Control (L4DC), 2023.

Y. Meng, Z. Qin and C. Fan, “Reactive and Safe Road User Simulations using Neural Barrier Certificates,” 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 2021.

S. Zhang, Y. Xiu, G. Qu, and C. Fan, “Compositional Neural Certificates for Networked Dynamical Systems,” in Learning for Dynamics and Control (L4DC), 2023.

Disclaimer

Any views or opinions represented in this blog are personal, belong solely to the blog post authors and do not represent those of ACM SIGBED or its parent organization, ACM.