Projects

Current Projects

RAIL: Reachability-Aided Imitation Learning for Safe Policy Execution

Wonsuhk Jung, Dennis Anthony, Utkarsh Mishra, Nadun Ranawaka, Matthew Bronars, Danfei Xu *, Shreyas Kousik *

Guaranteed Reach-Avoid for Black-Box Systems through Narrow Gaps via Neural Network Reachability

Long Kiu Chung, Wonsuhk Jung, Srivatsank Pullabhotla, Parth Shinde, Yadu Sunil, Saihari Kota, Luis Felipe Wolf Batista, Cédric Pradalier, Shreyas Kousik

Towards Closing the Loop in Robotic Pollination for Indoor Farming via Autonomous Microscopic Inspection

Chuizheng Kong, Alex Qiu *, Idris Wibowo *, Marvin Ren, Aishik Dhori, Kai-Shu Ling, Ai-Ping Hu, Shreyas Kousik

Goal-Reaching Trajectory Design Near Danger with Piecewise Affine Reach-avoid Computation

Long Kiu Chung *, Wonsuhk Jung *, Chuizheng Kong, Shreyas Kousik

Robotics: Science and Systems (RSS), 2024


Current Research Directions

Our overall goal is to ensure full-stack safety by studying each component of the autonomy stack. We seek to build shared representations of uncertainty across these components so that, in the long term, a robot can teach itself to be safe.


Past Work

Reachability-Based Trajectory Design

Reachability-Based Trajectory Design, or RTD, is a receding-horizon planning method that generates dynamically feasible, collision-free trajectories for autonomous mobile robots. Check out the tutorial for a walkthrough.
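For intuition only, here is a toy Python sketch of the receding-horizon idea on a 1-D robot: it picks the fastest speed command whose one-step-plus-braking reach interval misses every obstacle. The dynamics, the braking model, and the function names are hypothetical simplifications; the actual RTD method uses forward reachable sets computed offline over a parameterized family of trajectories.

```python
# Toy 1-D illustration of receding-horizon, reachability-style planning
# (hypothetical dynamics and helpers, not the actual RTD algorithm).
def plan_step(x, v_options, obstacles, dt, brake_dist):
    """Pick the fastest speed whose reach interval avoids every obstacle."""
    for v in sorted(v_options, reverse=True):
        # Interval swept over one planning step plus a worst-case braking maneuver.
        reach_lo, reach_hi = x, x + v * dt + brake_dist(v)
        if all(reach_hi < lo or reach_lo > hi for (lo, hi) in obstacles):
            return v
    return None  # no safe plan found: keep executing the previous (braking) plan

# Example: robot at x = 0 moving right, obstacle occupying [3.0, 4.0].
v = plan_step(x=0.0, v_options=[0.5, 1.0, 2.0], obstacles=[(3.0, 4.0)],
              dt=1.0, brake_dist=lambda v: 0.5 * v**2)
print(v)  # 1.0 -- reaches at most 1.5 < 3.0, while v = 2.0 could reach 4.0
```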


Modelling Uncertainty in Estimators and Learned Models

We developed a computationally efficient method for shadow matching, in which a 3-D urban map is used to identify GNSS (Global Navigation Satellite System) shadows, i.e., areas where satellite signals are blocked, and to create artificial set-valued measurements that represent the uncertain possible receiver positions. To represent the curved convex sets that arise in this method, such as the Gaussian confidence ellipsoids commonly associated with measurement uncertainty, we also created ellipsotopes, a novel set representation that fuses the benefits of polytopes and ellipsoids.
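As a minimal sketch of the parameterization, the hypothetical Python class below implements an unconstrained p-ellipsotope, i.e., the set {c + G·β : ||β||_p ≤ 1}; the published definition also supports linear constraints on the coefficients and an index set that groups them into separate norm balls, both omitted here.

```python
import numpy as np

class Ellipsotope:
    """Unconstrained p-ellipsotope {c + G @ beta : ||beta||_p <= 1} (sketch only)."""

    def __init__(self, p, center, generators):
        self.p = p                                    # p = 2 recovers an ellipsoid, p = inf a zonotope
        self.c = np.asarray(center, dtype=float)      # center, shape (n,)
        self.G = np.asarray(generators, dtype=float)  # generator matrix, shape (n, m)

    def sample(self, n_samples=1, seed=None):
        """Return points inside the set by drawing coefficients in the unit p-norm ball."""
        rng = np.random.default_rng(seed)
        beta = rng.uniform(-1.0, 1.0, size=(n_samples, self.G.shape[1]))
        norms = np.linalg.norm(beta, ord=self.p, axis=1, keepdims=True)
        beta /= np.maximum(norms, 1.0)                # scale any point outside the ball back inside
        return self.c + beta @ self.G.T

# The same parameterization covers both classical set types:
ellipsoid = Ellipsotope(2,      center=[0.0, 0.0], generators=[[2.0, 0.0], [0.0, 1.0]])
zonotope  = Ellipsotope(np.inf, center=[0.0, 0.0], generators=[[2.0, 0.0], [0.0, 1.0]])
print(ellipsoid.sample(3, seed=0))
```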

To compute reachability for learned models, we used RTD to build a safety layer for a reinforcement learning (RL) agent; the resulting safe agent can outperform vanilla RTD. We also computed the exact forward reachable sets of feedforward neural networks, bridging the gap between neural network verification and training.
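To illustrate the exact-reachability idea in its simplest setting, the hypothetical sketch below computes the exact image of an interval under a one-hidden-layer ReLU network with a scalar input: such a network is piecewise linear, with kinks only where a hidden neuron's pre-activation crosses zero, so it suffices to evaluate the network at the interval endpoints and at those kink locations. The cited work addresses general feedforward networks and higher-dimensional input sets, which this toy example does not attempt.

```python
import numpy as np

# Exact image of an interval under a 1-D-input, one-hidden-layer ReLU network
# (toy example with random weights; not the method from the cited work).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)   # hidden layer
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)   # output layer

def net(x):
    h = np.maximum(W1 @ np.atleast_1d(x) + b1, 0.0)    # ReLU hidden layer
    return (W2 @ h + b2).item()

lo, hi = -1.0, 1.0
# Kinks of the piecewise-linear map: inputs where a hidden pre-activation is zero.
kinks = [-b / w for w, b in zip(W1[:, 0], b1) if w != 0 and lo <= -b / w <= hi]
values = [net(x) for x in [lo, hi] + kinks]
print(f"exact image of [{lo}, {hi}]: [{min(values):.3f}, {max(values):.3f}]")
```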