[HSCC 23 Best Paper] Distributionally Robust Strategy Synthesis for Switched Stochastic Systems
Cyber-physical systems (CPS), such as automated vehicles or robotic systems, are often characterized by two common features: i) they are safety-critical, hence formal guarantees of correctness are required, and ii) they are affected by uncertainty, which is generally modeled through stochastic noise. Various formal verification and synthesis algorithms have been developed for these systems, with approaches including stochastic barrier functions and abstractions to finite Markov models. However, these approaches assume that both the dynamics and the noise distribution of the system are well known, which is often an unrealistic assumption due to, e.g., unmodeled dynamics, distributional shifts, or data-driven components. To close this gap, recent works have started to employ machine learning algorithms, including neural networks and Gaussian processes, to devise formal control strategies in the case where the dynamics are (partially) unknown or simply too complex to be modeled. Nevertheless, these works do not consider the case where the noise distribution of the system is itself uncertain and may only be captured through a set of plausible distributions that have certain properties in common. This leads to the fundamental research question that we address with our contribution: how can we derive formal guarantees of safety for stochastic systems whose noise distribution is uncertain?
In order to address this question, we consider a general model of a discrete-time switched stochastic process
x[k+1] = f(x[k], u[k]) + v[k]
where u[k] is the switching (control) input and v[k] is a random variable modeling an additive noise term distributed according to an unknown probability distribution that is only known to lie within an ambiguity set of plausible distributions. This ambiguity set is determined in terms of closeness to a nominal distribution for the noise, where closeness is quantified via an appropriate metric; in our case, we consider the Wasserstein distance due to its widespread use in statistics and distributionally robust decision-making. Intuitively, x[k] is a controlled stochastic process driven by an additive noise process v[k], whose distribution is uncertain but close to a nominal one. Systems with such uncertain distributions arise, for instance, in i) data-driven settings, where one can use tools from concentration of measure to build ambiguity sets that contain the noise distribution with high confidence; and ii) general distributionally robust settings, where there is only approximate knowledge of the distribution and one wants to synthesize control strategies that are robust against distributional shifts.
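To make the notion of closeness concrete, here is a small illustrative sketch (ours, not from the paper): for two equally sized 1-D empirical samples, the 1-Wasserstein distance has a well-known closed form, the mean absolute difference of the sorted samples. Shifting a nominal noise sample by a constant produces a distribution whose Wasserstein distance to the nominal one is exactly that shift.

```python
import numpy as np

def wasserstein_1d(a, b):
    """1-Wasserstein distance between two empirical distributions
    given by equally many 1-D samples: mean absolute difference of
    the sorted samples (a standard closed form for this case)."""
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

rng = np.random.default_rng(0)
nominal = rng.normal(0.0, 0.1, size=10_000)  # nominal noise model
shifted = nominal + 0.05                     # a distributional shift

print(wasserstein_1d(nominal, shifted))      # ≈ 0.05, the shift magnitude
```

An ambiguity set of radius ε around the nominal distribution then contains, e.g., every translation of the noise by up to ε, as well as many other perturbations.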
Having defined the model dynamics, we can now precisely state the problem we address in our paper: for a time horizon K, an initial state x, a safe set X, and a target region Xtgt, select a feedback strategy such that Preach(X, Xtgt, K | x, πx), the probability of reaching Xtgt within K time steps while remaining in X, is maximized. While our focus is on reach-avoid specifications, this is not limiting. In fact, probabilistic reach-avoid specifications are the key building block of model-checking algorithms for various temporal logics, such as PCTL or LTL, which provide a formal way to specify performance requirements in complex CPS.
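To build intuition for the quantity Preach, note that for any fixed strategy it can be estimated empirically by Monte Carlo simulation. The sketch below does this for a toy 1-D system; the dynamics, sets, and strategy are invented for illustration, whereas the paper computes formal bounds rather than sampled estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

SAFE = (-2.0, 2.0)      # safe set X (illustrative)
TARGET = (0.9, 1.1)     # target region Xtgt (illustrative)
K = 20                  # time horizon

def step(x, u):
    # toy instance of f(x, u) + v[k] with Gaussian noise
    return x + 0.1 * u + rng.normal(0.0, 0.02)

def pi(x):
    # toy feedback strategy: push the state toward x = 1
    return np.sign(1.0 - x)

def reach_avoid(x0, n_trials=2000):
    """Monte Carlo estimate of Preach(X, Xtgt, K | x0, pi)."""
    hits = 0
    for _ in range(n_trials):
        x = x0
        for _ in range(K):
            if not SAFE[0] <= x <= SAFE[1]:
                break                       # left the safe set: failure
            if TARGET[0] <= x <= TARGET[1]:
                hits += 1                   # reached the target safely
                break
            x = step(x, pi(x))
    return hits / n_trials

print(reach_avoid(0.0))  # close to 1 for this benign toy setting
```

Such a sampled estimate carries no guarantee under distributional uncertainty, which is precisely why the paper bounds Preach formally instead.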
Planning Against Uncertain Uncertainty
Our approach to compute Preach(X, Xtgt, K | x, πx) is illustrated in Figure 1. We abstract the original system into a finite-state uncertain Markov decision process (MDP), which is an MDP whose transition probabilities between states are not known exactly but lie in some set. In particular, we exploit this uncertainty in the transition probabilities to formally account both for the error introduced by the abstraction process and for the uncertainty in the noise distribution. Consequently, once the abstraction is computed, synthesizing a strategy for the original system becomes equivalent to synthesizing a strategy for the uncertain MDP abstraction.
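As a simplified 1-D sketch of the abstraction step (our own illustration, not the paper's construction), consider dynamics x' = x + u + v with Gaussian noise v under the nominal distribution. The probability of jumping from a source cell [a, b] to a destination cell [c, d] is a unimodal function of x, peaking where x + u is the midpoint of [c, d], so interval bounds over the whole source cell are attained at its endpoints or at that interior peak.

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def trans_prob(x, u, c, d, sigma):
    """P(x' in [c, d]) for x' = x + u + v with v ~ N(0, sigma)."""
    m = x + u
    return Phi((d - m) / sigma) - Phi((c - m) / sigma)

def interval_bounds(a, b, u, c, d, sigma):
    """Lower/upper bounds on the transition probability from any state
    in cell [a, b] to cell [c, d] under input u.  The probability is
    unimodal in x, so its extrema over [a, b] lie at the endpoints or
    at the interior peak (where x + u is the midpoint of [c, d])."""
    candidates = [a, b]
    peak = (c + d) / 2.0 - u
    if a <= peak <= b:
        candidates.append(peak)
    vals = [trans_prob(x, u, c, d, sigma) for x in candidates]
    return min(vals), max(vals)

lo, hi = interval_bounds(a=0.0, b=0.2, u=0.5, c=0.4, d=0.8, sigma=0.1)
print(lo, hi)  # lo ≈ 0.84, hi ≈ 0.95
```

In the paper these bounds must additionally absorb the Wasserstein ambiguity around the nominal noise distribution, not only the discretization of the state space as in this sketch.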
The main technical contribution of the paper is to show that an optimal strategy for the resulting uncertain MDP can be computed by solving a set of linear programs, thus guaranteeing computational efficiency.
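To give a feel for the inner problem these programs solve, here is a minimal sketch (not the paper's implementation): minimize the expected value of the successor states over all transition distributions satisfying given interval bounds. For independent interval bounds this box-constrained LP over the simplex admits a greedy closed-form solution; the paper's LPs are more general, since the Wasserstein ambiguity set couples the bounds across states.

```python
def worst_case_value(V, lo, hi):
    """Minimize sum_j p_j * V_j subject to lo_j <= p_j <= hi_j and
    sum_j p_j = 1.  Greedy solution: start every p_j at its lower
    bound, then push the remaining mass toward the smallest V_j."""
    assert sum(lo) <= 1.0 <= sum(hi), "bounds must admit a distribution"
    p = list(lo)
    slack = 1.0 - sum(lo)
    for j in sorted(range(len(V)), key=lambda j: V[j]):
        give = min(slack, hi[j] - lo[j])
        p[j] += give
        slack -= give
    return sum(pj * vj for pj, vj in zip(p, V)), p

# three abstract states with value estimates and interval bounds
val, p = worst_case_value(V=[0.2, 0.9, 0.5],
                          lo=[0.1, 0.2, 0.1],
                          hi=[0.6, 0.7, 0.5])
print(val, p)  # adversary shifts mass toward the lowest-value state
```

Maximizing instead of minimizing (the controller's side of the game) works symmetrically by pushing mass toward the largest V_j.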
The synthesis algorithm is illustrated in the second and third blocks of Figure 1. The second block shows an example of an uncertain MDP. Although, for visual clarity, this block only illustrates upper and lower bounds on the transition probabilities between two states, a key aspect of our approach is that it also captures couplings among the transition probabilities across different states, which are induced by the optimal transport constraints of the Wasserstein ambiguity set. The result of the synthesis process is shown in the third block: by solving a max-min game, we synthesize a strategy that chooses action a at state q while accounting for the worst-case transition probabilities with respect to the reach-avoid property. This strategy is guaranteed to be robust to the uncertainty in the system, as at each step it maximizes the probability of satisfying the specification under the worst-case distribution. Consequently, when we refine the resulting strategy into a feedback strategy for x[k], we also obtain upper and lower bounds on the probability that the original system satisfies the reach-avoid specification.
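As a toy illustration of this max-min game (with invented numbers and independent interval bounds, so a simplification of the paper's coupled LPs), the sketch below runs robust value iteration on a three-state uncertain MDP: at each step the controller picks the action that maximizes the value obtained under the adversary's worst-case feasible transition distribution.

```python
def worst_exp(V, lo, hi):
    # inner adversary: transition probabilities within the interval
    # bounds, summing to one, that minimize the expected value
    p, slack = list(lo), 1.0 - sum(lo)
    for j in sorted(range(len(V)), key=lambda j: V[j]):
        give = min(slack, hi[j] - lo[j])
        p[j] += give
        slack -= give
    return sum(pj * vj for pj, vj in zip(p, V))

# one decision state q0 with hypothetical bounds over the successors
# [q0 (keep trying), q1 (target, absorbing), q2 (unsafe, absorbing)]
bounds = {"a": ([0.1, 0.5, 0.0], [0.4, 0.9, 0.2]),
          "b": ([0.5, 0.1, 0.0], [0.9, 0.4, 0.1])}

V = [0.0, 1.0, 0.0]                    # value is 1 on the target
policy = {}
for _ in range(10):                    # max-min value iteration
    worst = {a: worst_exp(V, lo, hi) for a, (lo, hi) in bounds.items()}
    policy[0] = max(worst, key=worst.get)   # controller: best action
    V = [worst[policy[0]], 1.0, 0.0]        # absorbing states unchanged

print(policy[0], V[0])  # robust choice and its worst-case value
```

The value V[0] computed this way is a guaranteed lower bound on the reach-avoid probability from q0 against any transition probabilities consistent with the bounds, which mirrors how the synthesized strategy yields bounds for the original system.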
An Illustrative Example
To illustrate our framework, we tested it on various systems with uncertain noise distributions. In what follows, we consider a 2-D kinematic model of a unicycle in which we control its heading. We consider four different settings: 1) the noise distribution is known exactly; 2)-4) the noise distribution lies within an ambiguity set of radius 5e-3, 1e-2, and 1.5e-2, respectively, around a nominal distribution.
For each setting, we synthesize an optimal strategy that, from each state, drives the system to the target region while avoiding obstacles. In Figure 2, the contours illustrate the resulting lower bound on the probability of satisfying the specification. We observe that when the noise distribution is known exactly, the reach-avoid specification can be satisfied from each state with probability ≈ 1. However, when the noise distribution becomes uncertain, the guarantee of satisfying the specification weakens, and the larger the ambiguity set, the harder it becomes to obtain strong guarantees of correctness. Once again, this highlights the importance of accounting for the uncertainty in the noise distribution.
Current efforts focus on developing faster, tailored synthesis algorithms that increase the scalability of our approach. Preliminary results already show a speedup of two to three orders of magnitude compared to off-the-shelf linear-programming routines. Additionally, we are exploring how to obtain formal performance guarantees in data-driven settings by exploiting concentration-of-measure results for Wasserstein ambiguity sets.
Ibon Gracia is a PhD student in the ARIA Systems Lab at the University of Colorado Boulder. His work focuses on formal, data-driven control of autonomous systems using tools from optimal transport and Markovian abstractions.
Dimitris Boskos is an Assistant Professor at the Delft Center for Systems and Control (DCSC), TU Delft. His main research interests include distributionally robust decision-making and multi-agent control.
Luca Laurenti is an Assistant Professor at the Delft Center for Systems and Control (DCSC), TU Delft. His main research interests lie at the intersection of control theory, artificial intelligence, and stochastic systems.
Manuel Mazo Jr is an Associate Professor at the Delft Center for Systems and Control (DCSC), TU Delft. His main research interests are networked and symbolic control.
Disclaimer: Any views or opinions represented in this blog are personal, belong solely to the blog post authors and do not represent those of ACM SIGBED or its parent organization, ACM.