Reactive and Safe Simulations using Neural Barrier Certificates
We propose a reactive agent model that ensures safety without compromising the original goals, by learning high-level policies from expert data together with a low-level controller guided by a jointly learned control barrier function.
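The low-level safety mechanism can be illustrated with a minimal control barrier function (CBF) filter. This is my own toy sketch, not the paper's learned controller: a 1-D single integrator x' = u with safe set h(x) = x - x_min >= 0, where the CBF condition dh/dt + alpha*h >= 0 reduces to u >= -alpha*h and the minimally invasive filtered input has a closed form.

```python
def cbf_filter(u_nom, x, x_min=0.0, alpha=1.0):
    """Project the nominal input onto the CBF-safe set (closed-form QP)."""
    h = x - x_min                  # barrier value: h >= 0 means safe
    return max(u_nom, -alpha * h)  # enforce the CBF condition u >= -alpha*h

def simulate(x0, u_nom=-2.0, dt=0.01, steps=500):
    """Drive toward the boundary with a constant, unsafe nominal input."""
    x = x0
    traj = [x]
    for _ in range(steps):
        u = cbf_filter(u_nom, x)
        x = x + dt * u             # forward-Euler integration
        traj.append(x)
    return traj

traj = simulate(x0=1.0)
# The filter overrides the nominal input near the boundary,
# keeping the state inside the safe set x >= 0.
```

In the paper's setting the barrier function is neural and jointly learned; here a hand-written linear barrier plays its role to show the filtering principle.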
Density Constrained Reinforcement Learning
We leverage the duality between density functions and Q functions to develop an effective algorithm that solves the density-constrained RL problem optimally while guaranteeing that the constraints are satisfied.
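The duality being exploited can be seen in a toy example of my own (not the paper's algorithm): for a fixed policy with transition matrix P and reward r, the discounted return equals the inner product of the unnormalized discounted state occupancy (density) with r, so constraints on the density translate into constraints on achievable returns.

```python
gamma = 0.9
P = [[0.8, 0.2],   # P[s][s']: 2-state Markov chain under a fixed policy
     [0.3, 0.7]]
r = [0.0, 1.0]     # reward collected in each state

def value(s0, horizon=2000):
    """Discounted return V(s0) = sum_t gamma^t * E[r(s_t)]."""
    dist = [0.0, 0.0]
    dist[s0] = 1.0
    v = 0.0
    for _ in range(horizon):
        v += (gamma ** _) * sum(dist[s] * r[s] for s in range(2))
        dist = [sum(dist[s] * P[s][sp] for s in range(2)) for sp in range(2)]
    return v

def occupancy(s0, horizon=2000):
    """Discounted occupancy rho(s) = sum_t gamma^t * P(s_t = s)."""
    dist = [0.0, 0.0]
    dist[s0] = 1.0
    rho = [0.0, 0.0]
    for t in range(horizon):
        for s in range(2):
            rho[s] += (gamma ** t) * dist[s]
        dist = [sum(dist[s] * P[s][sp] for s in range(2)) for sp in range(2)]
    return rho

rho = occupancy(0)
# Duality check: V(s0) == <rho, r>, and rho sums to 1/(1-gamma).
```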
Large-Scale Multi-Agent Control with Neural Contraction Metric
We study the multi-agent reach-avoid control problem for large-scale systems, where each agent must avoid collisions with other agents and obstacles while reaching its goal.
Optimal Discrete-Continuous Planning For Linear Hybrid Systems
We present a hybrid automaton planning formalism and propose an optimal approach that encodes this planning problem as a Mixed Integer Linear Program (MILP) by fixing the number of actions in automaton runs.
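The key idea of fixing the number of actions can be sketched on a toy 1-D linear hybrid system of my own devising (not the paper's encoding or solver): with the run length N fixed, the discrete mode choices become binary decision variables; here a brute-force enumeration over them stands in for the MILP solver, and each fixed mode sequence leaves a tiny continuous LP over durations.

```python
from itertools import product

# Hypothetical 1-D hybrid system: each mode has a constant rate dx/dt.
rates = {"fast_up": 2.0, "down": -1.0}
x0, goal, N = 0.0, 3.0, 2   # fixed number of actions per run: N

def solve_sequence(seq):
    """Min total duration for a fixed mode sequence (tiny LP solved by
    vertex enumeration: with one equality constraint, an optimal basic
    solution puts all the time into a single mode)."""
    best = None
    for i, mode in enumerate(seq):
        a = rates[mode]
        if a != 0 and (goal - x0) / a >= 0:
            d = [0.0] * len(seq)
            d[i] = (goal - x0) / a       # reach the goal using mode i only
            t = sum(d)
            if best is None or t < best[0]:
                best = (t, seq, d)
    return best

best = None
for seq in product(rates, repeat=N):     # enumerate the binary mode choices
    cand = solve_sequence(seq)
    if cand and (best is None or cand[0] < best[0]):
        best = cand
# Optimal plan: spend 1.5 time units in the "fast_up" mode (rate 2).
```

A real MILP encoding would hand the same binaries and duration variables to a solver rather than enumerating, which is what makes the approach scale.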
Uncertainty-Aware Safe Planning using Neural Contraction Metric
We consider the problem of a robot exploring an environment with forbidden areas and an unknown, state-dependent disturbance to its dynamics. Results show that our method achieves shorter exploration times with fewer collisions.
Learning Certified Control Using Contraction Metric
We solve the problem of finding a certified control policy that drives a robot from any given initial state, under any bounded disturbance, to a desired reference trajectory with guaranteed bounds on the tracking error.
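The kind of guarantee a contraction certificate provides can be checked numerically on a hand-picked toy system (my illustration, not the paper's learned metric or controller): x' = -x + u(t) is contracting with rate 1 under the identity metric, so any two trajectories driven by the same input converge, and the tracking error obeys |e(t)| <= e^{-t} |e(0)|.

```python
import math

def step(x, t, dt, u):
    """One forward-Euler step of the contracting system x' = -x + u(t)."""
    return x + dt * (-x + u(t))

u = math.sin              # shared input / reference signal
dt, T = 0.001, 5.0
x_ref, x = 0.0, 2.0       # reference trajectory and a perturbed tracker
errs = []
t = 0.0
for _ in range(int(T / dt)):
    x_ref = step(x_ref, t, dt, u)
    x = step(x, t, dt, u)
    t += dt
    errs.append(abs(x - x_ref))
# Certified exponential bound on the tracking error: |e(t)| <= 2 * e^{-t}.
```

In the paper the metric is neural and the controller is synthesized from it; here the identity metric suffices because the toy dynamics are already contracting.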
Safe Multi-Agent Control via Decentralized Neural Barrier Functions
We study the multi-agent safe control problem, where agents must avoid collisions with static obstacles and with each other while reaching their goals.