Barrier Certificates: Don't Move Unless You Absolutely Have To!


The big picture

Barrier certificates are intimately connected to the following “safety” control problem. Given a system $\dot{x}=f(x,u)$ where $f$ is Lipschitz continuous and a connected set $\mathcal{S}$, how do we make sure that the system stays in $\mathcal{S}$ forever? Mathematically, we want $\mathcal{S}$ to be forward-invariant, i.e.,

\begin{align} x(0)\in\mathcal{S}\Rightarrow \forall t\geq 0 \quad x(t)\in\mathcal{S} \end{align}

Note that the Lipschitz continuity of $f$ is a technical condition ensuring that solutions exist over a sufficiently long time interval $[0;T]$; in that case the condition $\forall t\geq 0$ can be replaced by $\forall t\in[0;T]$. Forward invariance puts a constraint on the state trajectory $x(t)$, which is typically not easy to handle directly.

Concrete problems in which forward invariance appears are numerous, e.g., controlling the sugar concentration in an artificial pancreas, preventing collisions in a robotic swarm, etc.

Barrier certificates

Barrier certificates, which were developed in Russia but have come back in force over the last 5 years, are a way to address this problem. The key tool behind barrier certificates is the deceptively innocuous-looking Comparison Lemma.

\begin{lemma}[Comparison Lemma] Let $\dot{z}=F(z)$, $z\in\mathbb{R}$, $z(0)=z_0$, with $F$ locally Lipschitz, so that a unique solution exists for all $t\geq 0$. If $\dot{z}'\geq F(z')$ and $z'(0)=z_0$, then $z'(t)\geq z(t)$ $\forall t\geq 0$. \end{lemma}
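As a quick numerical sanity check of the lemma (not a proof), here is a sketch with the made-up choice $F(z)=-z$ and a perturbed system satisfying the differential inequality.

```python
# Numerical illustration of the Comparison Lemma (a sketch, not part of the proof).
# Take F(z) = -z with z(0) = z'(0) = 1, and let z' satisfy dz'/dt = -z' + 0.5 >= F(z').
# The lemma predicts z'(t) >= z(t) for all t >= 0.
import numpy as np

def euler(rhs, z0, dt=1e-3, T=5.0):
    """Forward-Euler integration of a scalar ODE dz/dt = rhs(z)."""
    zs = [z0]
    for _ in range(int(T / dt)):
        zs.append(zs[-1] + dt * rhs(zs[-1]))
    return np.array(zs)

F = lambda z: -z                            # nominal dynamics dz/dt = F(z)
z_traj  = euler(F, 1.0)                     # solution of dz/dt = F(z), z(0) = 1
zp_traj = euler(lambda z: -z + 0.5, 1.0)    # dz'/dt = -z' + 0.5 >= F(z'), same initial condition

print("min of z'(t) - z(t):", np.min(zp_traj - z_traj))   # should be >= 0
```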

A short digression: Lyapunov functions and the comparison lemma

Given $\dot{x}=f(x,u)$, $x\in\mathbb{R}^n$, say we have to design $u(x)$ that asymptotically stabilizes the system to $x=0$. One solution is to consider a Lyapunov function $V:\mathbb{R}^n\to\mathbb{R}$ such that

\begin{align} V(0)=0, \qquad V(x)>0 \quad \forall x\neq 0, \end{align}

and to pick $u(x)$ such that $\dot{V}<0$ along trajectories. One can think of $V(x)$ as the energy in the system; the control $u(x)$ ensures that it only decreases. Since $\dot{V}=\frac{\partial V}{\partial x}\dot{x}$, we would effectively like to design $u$ so that $\frac{\partial V}{\partial x}f(x,u)<0$. Often, we would actually like $V\to 0$ faster than $e^{-\alpha t}$ for some $\alpha>0$.

Assume (by magic) that we have picked $V$ such that $\dot{V}=-\alpha V$. Then $V(t) = e^{-\alpha t}V(0)$. By the comparison lemma, it is in fact sufficient to pick a control $u(x)$ such that $\dot{V}\leq -\alpha V\ (\leq 0)$, which guarantees $V(t)\leq e^{-\alpha t}V(0)$.

Example of application: Consider the system with the following dynamics (an airplane): $\dot{x}=-x^3+u$, $x\in\mathbb{R}$. Consider the Lyapunov function $V(x)=\frac{1}{2}x^2$. Assume that we are looking for a controller that spends as little energy as possible but ensures that convergence is fast enough. Formally, we are trying to solve the optimization

\begin{align} \min_u \; u^2 \quad \text{s.t.} \quad \dot{V}\leq -\alpha V. \end{align}

Note that $\dot{V} = x\dot{x} = x(-x^3+u)$, so it must hold that

\begin{align} -x^4+xu \leq -\frac{\alpha}{2}x^2, \end{align}

or equivalently, $xu \leq x^4-\frac{\alpha}{2}x^2$. If $x^4-\frac{\alpha}{2}x^2>0$, the optimal $u$ is zero. If $x^4-\frac{\alpha}{2}x^2\leq 0$, then the optimal $u$ must satisfy $xu=x^4-\frac{\alpha}{2}x^2$ to minimize the energy. Hence our controller is a switching controller:

\begin{align} u(x) = \begin{cases} 0 & \text{if } x^4-\frac{\alpha}{2}x^2>0,\\ x^3-\frac{\alpha}{2}x & \text{otherwise.} \end{cases} \end{align}

This is a switching controller that does nothing most of the time. The controller only kicks in when $x$ gets small and the dynamics alone are not sufficient to drive the system to zero fast enough.
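A minimal simulation sketch of this switching controller, under the dynamics $\dot{x}=-x^3+u$ and $V(x)=\frac{1}{2}x^2$ from the example above (the step size and horizon are arbitrary); it checks numerically that $V(t)\leq e^{-\alpha t}V(0)$.

```python
# Sketch of the minimum-energy switching controller derived above,
# for the dynamics xdot = -x**3 + u and V(x) = 0.5 * x**2.
import numpy as np

alpha = 1.0

def u_switch(x):
    """Zero control when the drift -x**3 already gives Vdot <= -alpha*V,
    otherwise the smallest-magnitude u that makes the constraint tight."""
    if x**4 - 0.5 * alpha * x**2 > 0 or x == 0.0:
        return 0.0
    return x**3 - 0.5 * alpha * x      # solves x*u = x**4 - (alpha/2)*x**2

dt, x = 1e-3, 0.2                      # start where the controller is active (|x| small)
xs = [x]
for _ in range(5000):
    x = x + dt * (-x**3 + u_switch(x))
    xs.append(x)

xs = np.array(xs)
t = dt * np.arange(len(xs))
V = 0.5 * xs**2
# V should decay at least as fast as exp(-alpha*t) * V(0) (up to integration error)
print(np.all(V <= np.exp(-alpha * t) * V[0] + 1e-9))
```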

Side remark: Lyapunov functions are magnificent in that they let us make asymptotic, for-all-time claims from purely local (pointwise) constraints.

Back to the safety problem

Consider a set $\mathcal{S}$ such that $x\in\mathcal{S}\Leftrightarrow h(x)\geq 0$, where $h:\mathbb{R}^n\to\mathbb{R}$, $h\in\mathcal{C}^1$ (and $h(x)=0\Leftrightarrow x\in\partial\mathcal{S}$). $h$ is called the barrier function.

We want to pick $u(x)$ such that $h(x(t))\geq 0$ $\forall t\geq 0$. What if (by magic) we had $\dot{h} = -\gamma h$ for some $\gamma>0$? Then $h(x(t)) = e^{-\gamma t}h(x(0))\geq 0$ whenever $h(x(0))\geq 0$.

By the comparison lemma, if our controller ensures that $\dot{h}\geq -\gamma h$ for all $t\geq 0$, i.e., we stay “above” this exponential solution, then $h(x(t))\geq e^{-\gamma t}h(x(0))\geq 0$ and $\mathcal{S}$ is rendered forward-invariant.

Assume from now on that $\dot{x} = f(x)+g(x)u$ (control-affine form). Then

\begin{align} \dot{h} = \frac{\partial h}{\partial x}\left(f(x)+g(x)u\right) = L_fh(x)+L_gh(x)u, \end{align}

where we have introduced the Lie derivatives $L_fh$ and $L_gh$ to simplify notation. The barrier certificate becomes

\begin{align} L_fh(x)+L_gh(x)u \geq -\gamma h(x). \end{align}

This looks like $A(x)u\geq -b(x)$, with $A(x)=L_gh(x)$ and $b(x)=L_fh(x)+\gamma h(x)$, where $u$ is the decision variable. This is just a linear inequality constraint that we can add to our previous minimization problem.
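To make this concrete, here is a sketch of the resulting minimum-energy problem for a hypothetical system ($\dot{x}=f+u$ with a constant drift $f$ and $h(x)=1-\Vert x\Vert^2$, i.e., “stay in the unit disk”); with a single linear inequality the optimal $u$ is available in closed form.

```python
# A sketch of how the certificate A(x) u >= -b(x) enters a minimum-energy
# problem min ||u||^2 s.t. A(x) u >= -b(x).  The example system below
# (xdot = f + u with constant drift f, h(x) = 1 - ||x||^2) is hypothetical.
import numpy as np

def min_energy_safe_input(A, b):
    """Solve min ||u||^2 s.t. A @ u >= -b for a single linear constraint."""
    if -b <= 0.0:
        return np.zeros_like(A)        # u = 0 already satisfies the certificate
    return (-b / (A @ A)) * A          # otherwise make the constraint tight

gamma = 1.0
f = np.array([1.0, 0.0])               # constant drift pushing the state outward
x = np.array([0.9, 0.0])               # close to the boundary of the unit disk
# h(x) = 1 - ||x||^2, so dh/dx = -2 x^T;  Lfh = -2 x^T f,  Lgh = -2 x^T (g = I)
A = -2.0 * x                           # A(x) = Lgh(x)
b = -2.0 * (x @ f) + gamma * (1.0 - x @ x)   # b(x) = Lfh(x) + gamma*h(x)
print(min_energy_safe_input(A, b))     # nonzero u: the drift must be counteracted
```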

Example: Robot collision avoidance in the Robotarium

Robot 1 is driven by $\dot{x}_1=u_1\in\mathbb{R}^2$ and Robot 2 by $\dot{x}_2=u_2\in\mathbb{R}^2$ (single-integrator dynamics).

Assume that the user controller is $u_d = [u_{1,d}, u_{2,d}]^T$. We try to minimize the disturbance added to this controller while adding the constraint that the robots don't collide, i.e., we need to solve

\begin{align} \min_u \; \Vert u-u_d\Vert^2 \quad \text{s.t.} \quad \dot{h}\geq -\gamma h. \end{align}

Let's pick a simple $h$: $h(x) = \Vert x_1-x_2 \Vert^2-\Delta^2$, where $\Delta$ is the minimum allowed distance between the robots.

Note that $\dot{h}=2(x_1-x_2)^T(\dot{x}_1-\dot{x}_2)=2(x_1-x_2)^T(u_1-u_2)$. The barrier certificate becomes

\begin{align} 2(x_1-x_2)^T(u_1-u_2) \geq -\gamma\left(\Vert x_1-x_2\Vert^2-\Delta^2\right). \end{align}
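Here is a minimal sketch of the resulting safety filter for the two robots; the numbers and the safety distance $\Delta$ are made up for illustration, and since there is a single constraint the QP is again solved in closed form by projecting $u_d$ onto a half-space.

```python
# Minimal sketch of the two-robot safety filter, with made-up numbers.
# Decision variable u = [u1; u2] in R^4, desired input u_d, and the single
# certificate 2*(x1 - x2)^T (u1 - u2) >= -gamma * h(x).
import numpy as np

def collision_filter(x1, x2, u_d, Delta=0.2, gamma=1.0):
    """Minimally modify u_d = [u1_d; u2_d] so the robots keep distance Delta."""
    d = x1 - x2
    h = d @ d - Delta**2                       # barrier function
    A = 2.0 * np.concatenate([d, -d])          # so that A @ u = 2 d^T (u1 - u2)
    b = gamma * h
    violation = -b - A @ u_d                   # > 0 means u_d violates the certificate
    if violation <= 0:
        return u_d                             # desired input is already safe
    return u_d + (violation / (A @ A)) * A     # closed-form projection onto the half-space

x1 = np.array([0.0, 0.0])
x2 = np.array([0.5, 0.0])
u_d = np.concatenate([np.array([1.0, 0.0]),    # robot 1 drives toward robot 2
                      np.array([-1.0, 0.0])])  # robot 2 drives toward robot 1
print(collision_filter(x1, x2, u_d))           # both robots slow down symmetrically
```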

It is of course possible to choose more complicated $h$. In particular, note that $\dot{h}$ should contain the control $u$, which can make things a bit tricky with more complex dynamics. As an example, the Robotarium at Georgia Tech often uses the barrier certificate $\dot{h}\geq -\gamma h^3$.

Final remarks: This does not work in all situations, since the quadratic program might not have a solution. Often the challenge is to ensure that solutions exist. For example, one may try to show that a nominal collision-avoidance controller always works.