Active Velocity Estimation using Light Curtains
via Self-Supervised Multi-Armed Bandits

Siddharth Ancha, Gaurav Pathak, Ji Zhang, Srinivasa Narasimhan, and David Held
Carnegie Mellon University

RSS 2023

Short Talk (~5 min)
Long Talk (~25 min)

Velocity estimation on a moving robot
Velocity estimation on a moving robot with a dynamic (human) obstacle. Left: velocity estimates of dynamic objects (colored) and static objects (white). Middle: RGB camera image. Right: robot pose estimated using SLAM.

Multi-armed bandits
Fast walking

Click here for interactive visualization
Velocity estimation using multi-armed bandits during fast walking. Top right: the strategy selected by the multi-armed bandit. Bottom right: the value of each strategy (higher is better). During exploitation, the strategy with the highest value is selected; during exploration, all strategies are selected with equal probability.
Relaxed walking

Click here for interactive visualization
Velocity estimation using multi-armed bandits during relaxed walking. Top right: the strategy selected by the multi-armed bandit. Bottom right: the value of each strategy (higher is better). During exploitation, the strategy with the highest value is selected; during exploration, all strategies are selected with equal probability.
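
The selection rule described in the captions above is essentially an epsilon-greedy bandit: with a small probability the bandit explores by picking a strategy uniformly at random, and otherwise it exploits the strategy with the highest estimated value. The Python sketch below illustrates this rule in isolation. It is a simplified stand-in, not the implementation in lc_ve; the class name, the fixed exploration rate, and the running-average value update are illustrative assumptions.

import random

class EpsilonGreedyBandit:
    # Minimal epsilon-greedy bandit over a fixed set of curtain-placement
    # strategies. Values are running averages of the observed rewards.
    def __init__(self, num_strategies, epsilon=0.1):
        self.epsilon = epsilon                 # probability of exploring (assumed fixed)
        self.values = [0.0] * num_strategies   # estimated value per strategy
        self.counts = [0] * num_strategies     # times each strategy was tried

    def select(self):
        if random.random() < self.epsilon:
            # Exploration: all strategies are equally likely.
            return random.randrange(len(self.values))
        # Exploitation: pick the strategy with the highest estimated value.
        return max(range(len(self.values)), key=lambda k: self.values[k])

    def update(self, k, reward):
        # Incremental running-average update of strategy k's value.
        self.counts[k] += 1
        self.values[k] += (reward - self.values[k]) / self.counts[k]

In the paper, the reward used to update these values is computed in a self-supervised manner, so no ground-truth labels are needed: the demos above show the bandit cycling through all strategies while exploring and locking onto the highest-value strategy while exploiting.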

Obstacle avoidance
Obstacle avoidance using light curtains only. ORB-SLAM estimates the pose of the robot from light curtain measurements. Dynamic occupancy grids estimate the occupancy and velocities of obstacles in the scene from light curtains. These estimates are input to a planning algorithm that produces safe paths around the obstacles. The robot runs fully autonomously, using light curtains for SLAM, estimation, and planning. All components interact with each other continuously, each running at its own independent rate.
Our method handles both static and dynamic obstacles. It successfully plans a safe and efficient path that avoids both.
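
As a rough illustration of what a dynamic occupancy grid computes, the sketch below runs one predict/update cycle on a 2D grid whose cells carry an occupancy probability and a velocity estimate. This is a generic, simplified version of the technique, not the dyn_map implementation; the grid size, cell resolution, and inverse sensor model probabilities are assumptions, and the velocity estimation step itself (associating occupancy across frames) is omitted.

import numpy as np

RES = 0.1                        # cell size in meters (assumed)
occ = np.zeros((200, 200))       # per-cell occupancy probability in [0, 1]
vel = np.zeros((200, 200, 2))    # per-cell velocity estimate in m/s

def predict(occ, vel, dt):
    # Forward-propagate occupancy along each cell's estimated velocity.
    pred = np.zeros_like(occ)
    ys, xs = np.nonzero(occ > 1e-3)
    for y, x in zip(ys, xs):
        ny = y + int(round(vel[y, x, 1] * dt / RES))
        nx = x + int(round(vel[y, x, 0] * dt / RES))
        if 0 <= ny < occ.shape[0] and 0 <= nx < occ.shape[1]:
            pred[ny, nx] = max(pred[ny, nx], occ[y, x])
    return pred

def update(occ, hits, misses, p_hit=0.7, p_miss=0.4):
    # Bayesian log-odds update: 'hits' and 'misses' are boolean masks of
    # cells observed as occupied / observed as free by the latest sweep.
    l = np.log(np.clip(occ, 1e-6, 1 - 1e-6)) - np.log(np.clip(1 - occ, 1e-6, 1))
    l[hits] += np.log(p_hit / (1 - p_hit))
    l[misses] += np.log(p_miss / (1 - p_miss))
    return 1.0 / (1.0 + np.exp(-l))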

Mapping and Reconstruction
SLAM using the light curtain device on a robot moving inside a building. Light curtain placements provide depth, which is input to ORB-SLAM to estimate the pose of the robot at each timestep and reconstruct the scene.
Reconstructed map from the top-down view.
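
For concreteness, the snippet below shows one way that light-curtain returns could be rasterized into a depth image for an RGB-D SLAM frontend such as ORB-SLAM. It is only a sketch: the pinhole intrinsics, image size, and function interface are hypothetical placeholders, not the calibration or pipeline used in our system.

import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 320.0, 240.0   # hypothetical pinhole intrinsics
W, H = 640, 480                                # hypothetical image size

def curtain_points_to_depth(points):
    # points: (N, 3) array of (x, y, z) light-curtain returns in the camera
    # frame. Returns an HxW depth image; 0 marks pixels with no reading.
    depth = np.full((H, W), np.inf, dtype=np.float32)
    z = points[:, 2]
    valid = z > 0
    u = np.round(FX * points[valid, 0] / z[valid] + CX).astype(int)
    v = np.round(FY * points[valid, 1] / z[valid] + CY).astype(int)
    inb = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    # Keep the nearest return when several points project to the same pixel.
    np.minimum.at(depth, (v[inb], u[inb]), z[valid][inb].astype(np.float32))
    depth[np.isinf(depth)] = 0.0
    return depth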

Paper

Citation
 
Siddharth Ancha, Gaurav Pathak, Ji Zhang, Srinivasa Narasimhan,
and David Held. "Active Velocity Estimation using Light Curtains via Self-Supervised Multi-Armed Bandits."
In Proceedings of Robotics: Science and Systems (RSS), July 2023.

Paper
BibTeX

@inproceedings{ancha2023rss,
    title     = {Active Velocity Estimation using Light Curtains via Self-Supervised Multi-Armed Bandits},
    author    = {Siddharth Ancha and Gaurav Pathak and Ji Zhang and Srinivasa Narasimhan and David Held},
    booktitle = {Proceedings of Robotics: Science and Systems},
    year      = {2023},
    address   = {Daegu, Republic of Korea},
    month     = {July},
}

Code
Code contribution #1: dyn_map: standalone package for dynamic occupancy grids

We modularized our code so that dyn_map is a standalone package for representing, visualizing, and updating dynamic occupancy grids. It has no dependencies on light curtains! This means the package can be used to run dynamic occupancy grids with any sensor (e.g. LiDAR, depth cameras) and for any application, independent of light curtains.

It is a ROS-based package that creates and publishes the custom messages dyn_map/DynamicOccupancyGrid and dyn_map/VisibilityGrid for dynamic occupancy grids and visibility grids, respectively. We extended a modern ROS visualizer, Foxglove Studio, to visualize our custom messages. Our custom visualizer can be downloaded here.
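
For example, another ROS node can consume these messages with a standard rospy (ROS 1) subscriber, as in the sketch below. The topic name is an assumed placeholder; consult the package's launch files and message definitions for the actual names.

#!/usr/bin/env python
import rospy
from dyn_map.msg import DynamicOccupancyGrid   # custom message from dyn_map

def callback(msg):
    # The fields of DynamicOccupancyGrid are not shown here; see the
    # package's msg/ directory for the actual definition.
    rospy.loginfo("received a dynamic occupancy grid update")

if __name__ == "__main__":
    rospy.init_node("dyn_map_listener")
    # "/dyn_map/grid" is an assumed placeholder topic name.
    rospy.Subscriber("/dyn_map/grid", DynamicOccupancyGrid, callback)
    rospy.spin()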

Code contribution #2: lc_ve: velocity estimation using light curtains

The lc_ve package implements velocity and occupancy estimation using light curtains and dynamic occupancy grids (via the dyn_map package). The main results in the paper were generated using this package.




Acknowledgements

We thank Pulkit Grover for discussions on information-theoretic measures of mixed discrete-continuous random variables. This material is based upon work supported by the National Science Foundation under Grant Nos. IIS-1849154 and IIS-1900821, the United States Air Force and DARPA under Contract No. FA8750-18-C-0092, and a grant from the Manufacturing Futures Institute at Carnegie Mellon University.

Disclaimer: Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, the United States Air Force, or DARPA.