# [HSCC 23 Test of Time Award] Verification of Linear Hybrid Automata through Set Propagation

In the domain of cyber-physical systems, linear hybrid automata have emerged as a pivotal tool for modeling and verifying complex behaviors that involve both event-driven (discrete) and time-driven (continuous) changes. This blog post highlights the significant strides made in symbolic analysis techniques, particularly in the efficiency and scalability of these methods. The concept of Hybrid Automata emerged in the early ’90s at the intersection of computer science and control theory. Tom Henzinger’s seminal work in 1996 laid the foundation for Linear Hybrid Automata (LHA), presenting them as an ideal candidate for symbolic, set-based analysis.

Over the past three decades, linear hybrid automata analysis has seen significant advancements in algorithmic verification and efficiency. On the occasion of the HSCC Test of Time Award for the paper “*Algorithmic Verification of Hybrid Systems Past HyTech*“, published at HSCC 2005, this blog post summarizes some of these advancements, offering insights into how the area has evolved.

**Linear Hybrid Automata (LHA)**

Hybrid Automata are a modeling paradigm that combines finite state machines with differential equations. This is crucial for capturing processes that involve both discrete event-based and continuous time-based behavior. Linear Hybrid Automata (LHA) are characterized by linear predicates over continuous variables and differential inclusions dependent only on discrete states, not continuous states. They are effective abstractions for complex natural and technical processes.
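To make this definition concrete, here is a deliberately minimal Python sketch of an LHA data structure; the encoding and the toy "leaking tank" model are purely illustrative and not taken from any particular tool. Note how the flow bounds depend only on the discrete location, never on the continuous state itself.

```python
from dataclasses import dataclass, field

# A deliberately minimal LHA encoding (illustrative names, not from any
# particular tool). Flows are differential inclusions lo <= x' <= hi
# that depend only on the discrete location, never on x itself.

@dataclass
class Location:
    invariant: list   # linear predicates (a, b), meaning a·x <= b
    flow: tuple       # (lo, hi) bounds on the derivatives of the variables

@dataclass
class Transition:
    source: str
    target: str
    guard: list       # linear predicates enabling the jump
    reset: dict       # variable name -> value assigned on the jump

@dataclass
class LHA:
    variables: list
    locations: dict = field(default_factory=dict)
    transitions: list = field(default_factory=list)

# Toy model: a tank level x drains with x' in [-2, -1] while x >= 0,
# and a refill jump resets x to 10 whenever x <= 1.
tank = LHA(variables=["x"])
tank.locations["draining"] = Location(
    invariant=[((-1.0,), 0.0)],   # -x <= 0, i.e. x >= 0
    flow=((-2.0,), (-1.0,)),      # x' in [-2, -1]
)
tank.transitions.append(
    Transition("draining", "draining",
               guard=[((1.0,), 1.0)],   # x <= 1
               reset={"x": 10.0}))
```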

The following figure shows the LHA model of an **adaptive cruise controller** for a platoon of 4 cars, where the relative car positions are modeled by variables x_1, …, x_4 and evolve nondeterministically within given constraints on their speed. When two cars get too close, the controller switches to a recovery state, where the speeds of all cars are adjusted. If the distance falls below a minimal threshold, i.e., a collision is possible, the controller enters an error state. With the help of symbolic analysis, it can be decided whether a given set of parameters can lead to a collision, i.e., whether the system is safe:

*Figure from: Jha, Sumit, Krogh, Bruce, Weimer, James & Clarke, Edmund (2007). Reachability for Linear Hybrid Automata Using Iterative Relaxation Abstraction.*

We are interested in safety properties, which guarantee that states designated as bad are not reachable from a given set of initial states. This problem is also simply referred to as reachability, and it can be solved by computing the set of reachable states, or an abstraction of that set, as long as the abstraction is precise enough to separate the reachable from the bad states.
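The reachability computation behind this can be pictured as a waiting-list fixpoint loop. The sketch below is a schematic Python version in which abstract states are plain frozensets over a toy transition system; real tools propagate polyhedra and use polyhedral containment in place of the subset test, and all names here are our own illustration.

```python
# Schematic waiting-list reachability: pop a state, stop if it is bad,
# skip it if it is subsumed by an already-processed state, otherwise
# record it and push its successor. Real tools use polyhedra here.

def reachable_is_safe(init, post, is_bad):
    """Return False as soon as a bad abstract state is reached, else True."""
    passed = set()
    waiting = [init]
    while waiting:
        s = waiting.pop()
        if is_bad(s):
            return False
        if any(s <= p for p in passed):   # subsumption (redundancy) check
            continue
        passed.add(s)
        waiting.append(post(s))
    return True

# Toy dynamics: each step adds 2 modulo 8 to every concrete state.
def post_plus2(s):
    return frozenset((x + 2) % 8 for x in s)

# Starting from 0, the bad state 5 is never hit (only even values occur).
print(reachable_is_safe(frozenset({0}), post_plus2, lambda s: 5 in s))  # True
# Starting from 1, the bad state 5 is reached after two steps.
print(reachable_is_safe(frozenset({1}), post_plus2, lambda s: 5 in s))  # False
```

The subsumption check against `passed` is also what guarantees termination here, and it is exactly the operation whose cost the redundancy-detection techniques below attack.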

**Implementing Symbolic Analysis**

The symbolic analysis techniques we describe aim at representing and computing with sets in ways that optimize efficiency and scalability. Over the years, the efficiency of symbolic analysis of LHA has significantly improved, thanks to new representation methods and computational advances. One critical aspect is the representation of continuous sets, where convex polyhedra have played a central role.

### Efficient handling of polyhedra in symbolic analysis

Convex polyhedra are geometrical objects that can be represented and manipulated efficiently for many operations of interest in symbolic analysis. A convex polyhedron can be represented as a so-called H-polyhedron: the set of points in the n-dimensional space of reals that satisfies a finite conjunction of linear inequalities. Alternatively, it can be represented as the convex hull of a finite set of vertices and rays, a so-called V-polyhedron. Convex polyhedra are often handled using the Double Description (DD) approach, which combines the constraint (H-polyhedron) and generator (V-polyhedron) representations to enable simple implementations of many semantic operators; the conversion between the two representations is rather expensive, but it can benefit from clever use of incremental computation techniques. Some significant efficiency improvements, described below, have been obtained in recent years.
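To make the two representations concrete, here is a minimal, self-contained Python sketch of the same triangle in both forms; the encoding is ours, purely for illustration. A DD-based library keeps the two views in sync, which is what makes operations like the membership test below cheap.

```python
# Two standard representations of the same 2D polytope (a triangle):
# H-rep as inequalities a·x <= b, and V-rep as its vertices.

H = [((-1.0, 0.0), 0.0),   # -x <= 0  (i.e. x >= 0)
     ((0.0, -1.0), 0.0),   # -y <= 0  (i.e. y >= 0)
     ((1.0, 1.0), 1.0)]    #  x + y <= 1

V = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]

def contains(H, p, eps=1e-9):
    """Point membership is cheap in the constraint (H) representation."""
    return all(sum(ai * xi for ai, xi in zip(a, p)) <= b + eps
               for a, b in H)

# Every V-rep vertex satisfies the H-rep, and outside points fail:
assert all(contains(H, v) for v in V)
assert not contains(H, (0.8, 0.8))
```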

- The traditional DD approach only targets topologically closed polyhedra: strict inequality constraints are supported indirectly by the addition of a slack variable; while effective, this approach doubles the size of the V-polyhedron representation and hence significantly degrades the overall efficiency of all the semantic operators. A novel DD representation and a conversion algorithm have been proposed recently (and implemented in the PPLite library) that can encode strict inequalities without slack variables, leading to much smaller representations and consequent efficiency gains (Becchi and Zaffanella, 2020).
- Cartesian factoring is another technique that, in some interesting and rather common cases, can greatly reduce the space needed to represent a polyhedron using generators. Informally, the space dimensions of the polyhedron P are partitioned into a sequence of k blocks (B1, …, Bk) so that each linear constraint in the H-polyhedron representation mentions the dimensions of a single block Bi; then, P is factored into a corresponding sequence of k polyhedra (P1, …, Pk). This factoring allows for exponential savings in the size of the corresponding V-polyhedron representations. While the idea was initially proposed by Halbwachs *et al.* in 2003, suitable implementations have been made available only more recently, in the software libraries ELINA and PPLite.
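The extreme case makes the exponential gap plain: an n-dimensional box needs 2^n vertices as a single V-polyhedron, but only two interval endpoints per block once factored into n one-dimensional blocks. A back-of-the-envelope sketch:

```python
# Generator counts for an n-dimensional box: unfactored versus factored
# into n one-dimensional blocks (the best case for Cartesian factoring).

def vrep_size_unfactored(n):
    return 2 ** n   # every vertex of the n-cube must be listed

def vrep_size_factored(n):
    return 2 * n    # two interval endpoints per one-dimensional block

for n in (4, 16, 64):
    print(n, vrep_size_unfactored(n), vrep_size_factored(n))
```

Already at n = 16 the unfactored representation needs 65536 vertices against 32 generators for the factored one; polyhedra arising from loosely coupled system components often factor similarly well.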

### Efficient detection of redundancies

A naive implementation of a fixpoint-based reachability analysis can involve a very high number of redundant computations, whose results are necessarily *subsumed* by previously computed results: the detection and removal of these redundancies is a key factor for scalability. When working at the symbolic level, redundancy is typically detected by checking containment between convex polyhedra, which is generally a rather expensive operation.

To reduce these costs, an effective heuristic is to systematically pair each polyhedron P with lightweight abstractions (e.g., its bounding box B and the volume of B), which allow for the implementation of semi-decision algorithms that can quickly yield a negative answer to a containment check. As a simple example, knowing that the polytopes P1 and P2 are paired with bounding boxes B1 and B2, having volumes v1 and v2, we can efficiently derive that P1 is not a subset of P2 by simply checking that v1 > v2. If this (constant time) check fails, we can check whether box B2 does not contain box B1, incurring a cost that is linear in the number of space dimensions; if this second check fails too, we fall back to the more costly inclusion test on P1 and P2. This technique, named *boxed polyhedra*, proved to be effective: in one of the considered benchmarks, ~95% of all the containment checks are decided by only looking at the lightweight abstractions.

The availability of a quick test for inclusion also enables more aggressive, speculative heuristics for the detection of redundancies in the fixpoint algorithm, such as *waiting list filtering*: whenever adding a new polyhedron P to the list of abstract states that are waiting to be processed, it is possible to remove from the same list all polyhedra Q that are included in P. Note that in this case we detect *future redundancies*: both P and Q still have to be processed, but we can anticipate that any abstract state obtained when processing Q will be subsumed by the abstract states obtained when processing P (due to the monotonicity of the computation).
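The three-stage cascade can be sketched in a few lines of Python. Polytopes are plain vertex lists here, and the final exact inclusion test is only indicated, since in a real tool it is delegated to the polyhedra library; the function names are our own.

```python
# Boxed-polyhedra containment cascade: volume check (constant time),
# then bounding-box check (linear in the dimension), then — only if
# neither gives a negative answer — the expensive exact inclusion test.

def bbox(vertices):
    """Axis-aligned bounding box of a vertex list, as (lo, hi) tuples."""
    dim = len(vertices[0])
    lo = tuple(min(v[i] for v in vertices) for i in range(dim))
    hi = tuple(max(v[i] for v in vertices) for i in range(dim))
    return lo, hi

def volume(box):
    lo, hi = box
    prod = 1.0
    for l, h in zip(lo, hi):
        prod *= (h - l)
    return prod

def box_contains(b1, b2):
    """Does box b2 contain box b1? Linear in the number of dimensions."""
    return all(l2 <= l1 and h1 <= h2
               for l1, h1, l2, h2 in zip(b1[0], b1[1], b2[0], b2[1]))

def maybe_subset(P1, P2):
    """Semi-decision: False means 'certainly not P1 ⊆ P2';
    True means the exact (costly) inclusion test is still needed."""
    b1, b2 = bbox(P1), bbox(P2)
    if volume(b1) > volume(b2):   # constant-time negative answer
        return False
    if not box_contains(b1, b2):  # linear-time negative answer
        return False
    return True                   # fall back to exact polyhedral inclusion

square = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
big    = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
print(maybe_subset(square, big))   # True: exact test still needed
print(maybe_subset(big, square))   # False: volume check already decides
```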

### Overapproximations for scalability: using less precise operators

Despite the efficiency improvements mentioned above, the computation of the reachable set may face the state explosion problem. Hence, we sometimes need to resort to overapproximations, trading some precision to achieve scalability. To illustrate one of the possible abstraction techniques, consider the problem of overapproximating the union of two convex sets with a single convex set. Depending on the context, several approaches can be adopted: the following figure shows the convex hull (middle), which is precise but still suffers from exponential complexity, and the constraint hull (con-hull, right), which is less precise but scales linearly in the number of faces.
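For the simplest template, axis-aligned boxes, the constraint hull amounts to keeping, for each face direction, the looser of the two bounds, which is linear in the number of faces. A minimal sketch, using our own encoding (for general polyhedra, each constraint of one argument is relaxed just enough to also enclose the other):

```python
# Constraint hull (con-hull) of two axis-aligned boxes, each given as
# (lo, hi) corner tuples: per face direction, keep the looser bound.
# For boxes this is just the componentwise min/max of the corners.

def con_hull_boxes(b1, b2):
    lo = tuple(min(l1, l2) for l1, l2 in zip(b1[0], b2[0]))
    hi = tuple(max(h1, h2) for h1, h2 in zip(b1[1], b2[1]))
    return lo, hi

b1 = ((0.0, 0.0), (1.0, 1.0))   # unit square
b2 = ((2.0, 0.5), (3.0, 2.0))   # a disjoint box to its upper right
print(con_hull_boxes(b1, b2))   # ((0.0, 0.0), (3.0, 2.0))
```

The result encloses both arguments (and in general strictly more than their convex hull would), but it never needs more constraints than the template provides.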

## Experiments

We illustrate the influence of the different heuristics using an instance of a benchmark called Distributed Controller. The benchmark models a distributed controller for a robot, reading data from three sensors and processing them according to multiple priorities. As the following table indicates, some heuristics reduce the cost of each iteration (symbolic successor computation), while others reduce the total number of iterations by converging faster to a fixed point.

| Techniques | Iterations | Time/Iteration (ms) | Total Runtime (s) |
|---|---|---|---|
| baseline | 63805 | 21.6 | 1379.4 |
| waitlist filtering | 9652 | 9.6 | 93.1 |
| waitlist filtering + boxing | 9652 | 1.1 | 10.3 |
| con-hull | 189 | 3.7 | 0.7 |
| con-hull + factoring | 189 | 2.1 | 0.4 |

The following table illustrates the advances made over different years by showing the total runtime for different instances of the Adaptive Cruise Controller benchmark. Further details and results are available in a report of the ARCH-COMP verification competition and on the website of the ARCH workshop, which hosts a number of relevant benchmarks.

Total runtime by number of discrete states:

| year | Tool | 32 states | 64 states | 128 states | 256 states |
|---|---|---|---|---|---|
| 2011 | PHAVer/SX | 9.4 | 461.0 | – | – |
| 2018 | PHAVer-lite | 1.0 | 38.1 | – | – |
| 2019 | PHAVerLite | 0.1 | 0.6 | 4.3 | 47.1 |

**Conclusion**

The above summary highlights the critical role of polyhedral computations, algorithmic improvements, and efficient abstractions in speeding up the verification of safety properties in LHAs. *For more details, readers are encouraged to refer to the paper “Symbolic Analysis of Linear Hybrid Automata – 25 Years Later,” on which this post is based.*

As we reflect on the journey of Linear Hybrid Automata over the past quarter-century, it’s clear that the field is not just about complex computations but also about making these systems more accessible and understandable. The advances discussed above are a step toward more efficient and reliable system design, a critical aspect in our increasingly automated world.

**Authors**:

Goran Frehse is a professor in Computer Science at ENSTA Paris, Institut Polytechnique de Paris.

Enea Zaffanella is an associate professor in Computer Science at the University of Parma, working at the Department of Mathematical, Physical and Computer Sciences.

## Sources:

G. Frehse. PHAVer: Algorithmic Verification of Hybrid Systems Past HyTech. HSCC 2005: 258–273.

G. Frehse, M. Giacobbe, E. Zaffanella. Symbolic Analysis of Linear Hybrid Automata – 25 Years Later. In J.-F. Raskin, K. Chatterjee, L. Doyen & R. Majumdar (eds.), Principles of Systems Design: Essays Dedicated to Thomas A. Henzinger on the Occasion of His 60th Birthday. LNCS vol. 13660, Springer, 2022.

A. Becchi, E. Zaffanella. PPLite: Zero-overhead encoding of NNC polyhedra. Inf. Comput. 275: 104620 (2020).

**Disclaimer**: *Any views or opinions represented in this blog are personal, belong solely to the blog post authors and do not represent those of ACM SIGBED or its parent organization, ACM.*