Provably-Safe Neural Network Training Using Hybrid Zonotope Reachability Analysis

Long Kiu Chung, Shreyas Kousik

Train your neural network with constraints!


Novel Loss Function that Quantifies Collision

[Figure: Input Set; Output Set at training iterations 0, 2, and 4.]

We design a loss function that quantifies how much the input set must be scaled before the exact output set (the image of the input set under the neural network) just touches the unsafe set.
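
To make the idea concrete, below is a minimal PyTorch sketch of training against a set-based collision loss. It is not the paper's method: the paper propagates the exact image of a non-convex input set with scaled hybrid zonotopes and evaluates collision via MILPs, whereas this toy substitutes conservative interval bound propagation (IBP) over a box input set and a half-space unsafe set. All names and numbers are illustrative.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny ReLU network: 2 inputs -> 16 hidden -> 1 output.
layers = nn.ModuleList([nn.Linear(2, 16), nn.Linear(16, 1)])

y_unsafe = 1.0                        # unsafe set: {y : y >= 1.0} (made up)
center = torch.zeros(2)               # input set: box center +/- radius
radius = torch.full((2,), 0.5)

def ibp_bounds(lo, hi):
    """Conservatively propagate an axis-aligned box through the network."""
    for i, lin in enumerate(layers):
        mid, rad = (lo + hi) / 2, (hi - lo) / 2
        mid = lin(mid)                      # affine image of the box center
        rad = rad @ lin.weight.abs().T      # radius grows by |W|
        lo, hi = mid - rad, mid + rad
        if i < len(layers) - 1:             # ReLU on hidden layers only
            lo, hi = lo.clamp(min=0), hi.clamp(min=0)
    return lo, hi

opt = torch.optim.Adam(layers.parameters(), lr=1e-2)
for step in range(200):
    lo, hi = ibp_bounds(center - radius, center + radius)
    # Collision loss: positive iff the over-approximated output set still
    # overlaps the unsafe half-space; the usual task loss would be added here.
    loss = torch.relu(hi[0] - y_unsafe)
    opt.zero_grad()
    loss.backward()
    opt.step()

Under this conservative surrogate, a loss of zero certifies that the over-approximated output set avoids the unsafe half-space; the paper's scaled-hybrid-zonotope loss instead works with the exact output set and non-convex unsafe regions.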


Synthesize Forward-Invariant Controllers

[Figure: Input Set; Output Set (4× speedup).]

Our method also handles set-containment constraints by treating the complement of the containment set as the unsafe set, as formalized below.
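
In symbols (notation ours, not the paper's): writing f(X) for the exact output set and S for the set it must stay inside,

f(\mathcal{X}) \subseteq \mathcal{S}
\quad\Longleftrightarrow\quad
f(\mathcal{X}) \cap \mathcal{S}^{\mathsf{c}} = \emptyset,

so running the collision loss against the "unsafe set" S^c enforces containment; taking S to be the input set itself yields discrete-time forward invariance.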


Reach-Avoid for Black-Box Dynamical Systems

[Figure: Before training; training with our loss function (2× speedup in training time).]

Compared to PARC and NeuralPARC, the complexity of our forward reachable tube formulation does not grow with the number of timesteps, nor do we assume translation invariance.


Abstract

Even though neural networks are increasingly being deployed in safety-critical control applications, it remains difficult to enforce constraints on their output, meaning that it is hard to guarantee safety in such settings. While many existing methods seek to verify a neural network’s satisfaction of safety constraints, few address how to correct an unsafe network. The handful of works that extract a training signal from verification cannot handle non-convex sets, and are either conservative or slow. To begin addressing these challenges, this work proposes a neural network training method that can encourage the exact image of a non-convex input set for a neural network with rectified linear unit (ReLU) nonlinearities to avoid a non-convex unsafe region. This is accomplished by reachability analysis with scaled hybrid zonotopes, a modification of the existing hybrid zonotope set representation that enables parameterized scaling of non-convex polytopic sets with a differentiable collision check via mixed-integer linear programs (MILPs). The proposed method was shown to be effective and fast for networks with up to 240 neurons, with the computational complexity dominated by inverse operations on matrices whose size scales linearly with the number of neurons and with the complexity of the input and unsafe sets. We demonstrate the practicality of our method by training a forward-invariant neural network controller for a non-convex input set to an affine system, as well as by generating safe reach-avoid plans for a black-box dynamical system.
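
For readers unfamiliar with the set representation: a hybrid zonotope in its standard form is the set

\mathcal{Z} \;=\; \bigl\{\, c + G^{c}\xi^{c} + G^{b}\xi^{b} \;:\; \xi^{c}\in[-1,1]^{n_{g}},\ \xi^{b}\in\{-1,1\}^{n_{b}},\ A^{c}\xi^{c}+A^{b}\xi^{b}=b \,\bigr\},

whose binary factors \xi^{b} let it encode non-convex, even disconnected, polytopic sets. The "scaled" variant adds a scaling parameter; purely as an illustration (the paper defines its own parameterization), scaling about the center looks like

\mathcal{Z}(\alpha) \;=\; c + \alpha\,(\mathcal{Z}-c) \;=\; \bigl\{\, c + \alpha\bigl(G^{c}\xi^{c}+G^{b}\xi^{b}\bigr) \;:\; \text{same constraints} \,\bigr\}, \qquad \alpha \ge 0.

Checking whether two such sets intersect reduces to a mixed-integer program, which is the form of the MILP-based collision check mentioned in the abstract.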


BibTeX

@article{chung2025provably,
  title={Provably-Safe Neural Network Training Using Hybrid Zonotope Reachability Analysis},
  author={Chung, Long Kiu and Kousik, Shreyas},
  journal={arXiv preprint arXiv:2501.13023},
  year={2025}
}