Diverse controllable diffusion policy with signal temporal logic

Researchers in REALM are improving simulations for self-driving cars by developing road-participant behaviors that are more realistic and varied while still following traffic rules. Their approach combines mathematical logic, specifically Signal Temporal Logic (STL), with diffusion-based machine learning.

Authors: Yue Meng and Chuchu Fan
Citation: IEEE Robotics and Automation Letters, vol. 9, no. 10, October 2024

Abstract:
Generating realistic simulations is critical for autonomous system applications such as self-driving and human-robot interaction. However, today's driving simulators still have difficulty generating controllable, diverse, and rule-compliant behaviors for road participants: rule-based models cannot produce diverse behaviors and require careful tuning, whereas learning-based methods imitate policies from data but are not designed to follow the rules explicitly. Moreover, real-world datasets are by nature “single-outcome,” making it hard for learning-based methods to generate diverse behaviors. In this letter, we leverage Signal Temporal Logic (STL) and Diffusion Models to learn a controllable, diverse, and rule-aware policy. We first calibrate the STL on real-world data, then generate diverse synthetic data using trajectory optimization, and finally learn a rectified diffusion policy on the augmented dataset. We test on the NuScenes dataset, and our approach achieves the most diverse rule-compliant trajectories among the baselines, with a runtime 1/17X that of the second-best approach. In closed-loop testing, our approach reaches the highest diversity, the highest rule satisfaction rate, and the lowest collision rate. At test time, our method can generate varied behavior characteristics conditioned on different STL parameters. A case study on human-robot encounter scenarios shows our approach can generate diverse and close-to-oracle trajectories.
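
The pipeline described in the abstract relies on scoring how well a trajectory satisfies an STL rule (its robustness). The sketch below is an illustrative example, not the authors' implementation: it evaluates a toy "always keep a safe distance" rule and an "eventually reach the goal" rule on a sampled trajectory using the standard min/max robustness semantics for the globally (G) and eventually (F) operators. The rule choices, thresholds (d_safe, d_goal), and function names are assumptions made for illustration; in the paper, STL parameters are calibrated from real-world data rather than fixed by hand.

import numpy as np

def always_safe_distance_robustness(distances: np.ndarray, d_safe: float) -> float:
    """Robustness of G_[0,T] (distance >= d_safe) on a finite trajectory.

    For the 'always' (globally) operator, robustness is the minimum over time
    of the per-step margin: positive means the rule holds with that much slack,
    negative measures the worst violation.
    """
    margins = distances - d_safe           # per-step signed margin to the threshold
    return float(np.min(margins))          # 'always' -> min over the horizon

def eventually_reach_goal_robustness(dist_to_goal: np.ndarray, d_goal: float) -> float:
    """Robustness of F_[0,T] (dist_to_goal <= d_goal): max over time of the margin."""
    margins = d_goal - dist_to_goal
    return float(np.max(margins))          # 'eventually' -> max over the horizon

if __name__ == "__main__":
    # Toy trajectory: distance to a lead vehicle over 10 time steps.
    dist = np.array([12.0, 10.5, 9.0, 8.2, 7.9, 8.5, 9.3, 10.0, 11.2, 12.4])
    rho = always_safe_distance_robustness(dist, d_safe=8.0)
    print(f"robustness = {rho:.2f} ({'satisfied' if rho >= 0 else 'violated'})")

Such a differentiable robustness score is what makes it possible both to check rule compliance on real data and to push synthetic trajectories toward rule satisfaction during trajectory optimization.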