
BEAVR: Bimanual, multi-embodiment, accessible, virtual reality teleoperation system for robots
Researchers at ARCLab have created robotic teleoperation and learning software for controlling robots, recording datasets, and training physical AI models. The work addresses two gaps in robotic manipulation: the lack of a well-developed, open-source, accessible teleoperation system that works out of the box, and the absence of a performant, end-to-end control, recording, and learning platform for robots that is completely hardware agnostic.
Authors: Alejandro Posadas-Nava, Alejandro Carrasco, Richard Linares, and Victor Rodriguez-Fernandez
Citation: Accepted for presentation at ICCR Kyoto 2025
Abstract:
BEAVR is an open-source, bimanual, multi-embodiment Virtual Reality (VR) teleoperation system for robots, designed to unify real-time control, data recording, and policy learning across heterogeneous robotic platforms. BEAVR enables real-time, dexterous teleoperation using commodity VR hardware, supports modular integration with robots ranging from 7-DoF manipulators to full-body humanoids, and records synchronized multi-modal demonstrations directly in the LeRobot dataset schema. Our system features a zero-copy streaming architecture achieving ≤35 ms latency, an asynchronous "think-act" control loop for scalable inference, and a flexible network API optimized for real-time, multi-robot operation. We benchmark BEAVR across diverse manipulation tasks and demonstrate its compatibility with leading visuomotor policies such as ACT, DiffusionPolicy, and SmolVLA. All code is publicly available, and datasets are released on Hugging Face.
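The asynchronous "think-act" pattern mentioned in the abstract can be illustrated with a minimal sketch: slow policy inference ("think") produces chunks of future actions in the background, while a fast control loop ("act") consumes them at a fixed rate without blocking on inference. This is a hypothetical illustration using Python's asyncio, not BEAVR's actual implementation; the class and method names are invented for the example.

```python
import asyncio
from collections import deque

class ThinkActController:
    """Toy asynchronous think-act loop (illustrative only, not BEAVR's code).

    A slow 'think' coroutine queues chunks of actions from a policy, while
    a fast 'act' coroutine drains the queue at the control rate.
    """

    def __init__(self, policy):
        self.policy = policy       # callable: observation -> list of actions
        self.actions = deque()     # action buffer shared between the two loops
        self.executed = []         # actions actually sent to the (toy) robot

    async def think(self, observations):
        # Slow inference: each observation yields a chunk of future actions.
        for obs in observations:
            await asyncio.sleep(0.02)  # simulated inference latency
            self.actions.extend(self.policy(obs))

    async def act(self, steps, period=0.005):
        # Fast control loop: pop one action per tick when one is available.
        for _ in range(steps):
            if self.actions:
                self.executed.append(self.actions.popleft())
            await asyncio.sleep(period)

async def main():
    # Toy policy: each observation expands into a chunk of four actions.
    policy = lambda obs: [obs + 0.1 * i for i in range(4)]
    ctrl = ThinkActController(policy)
    # 'think' and 'act' run concurrently; inference never stalls the ticks.
    await asyncio.gather(ctrl.think([0.0, 1.0]), ctrl.act(steps=40))
    return ctrl.executed

executed = asyncio.run(main())
print(len(executed))  # all queued actions were consumed by the control loop
```

Decoupling inference from actuation this way is what lets a control loop keep a steady rate even when policy inference is slower than a single control period.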