Yihang Yao

yihangya[at]andrew.cmu.edu

Hi, welcome to my website! I am a third-year Ph.D. candidate in the Safe AI Lab at Carnegie Mellon University, advised by Prof. Ding Zhao. Before CMU, I received my Bachelor's degree from Zhiyuan College, Shanghai Jiao Tong University (SJTU) in 2022. In 2021, I had a wonderful time working as a research intern in the Intelligent Control Lab led by Prof. Changliu Liu at CMU.

News

Publications
(* indicates equal contribution)
OASIS: Conditional Distribution Shaping for Offline Safe Reinforcement Learning

Yihang Yao*, Zhepeng Cen*, Wenhao Ding, Haohong Lin, Shiqi Liu, Tingnan Zhang, Wenhao Yu, Ding Zhao

NeurIPS 2024

TL;DR: We investigate offline safe RL from a data-centric perspective and propose a diffusion model-based data generator that curates training datasets aligned with user preferences.

Paper / Website / Code

Feasibility Consistent Representation Learning for Safe Reinforcement Learning

Zhepeng Cen, Yihang Yao, Zuxin Liu, Ding Zhao

ICML 2024

TL;DR: We introduce FCSRL, a framework that improves safety constraint estimation in RL through representation learning and self-supervised techniques.

Paper / Website / Code

Gradient Shaping for Multi-Constraint Safe Reinforcement Learning

Yihang Yao, Zuxin Liu, Zhepeng Cen, Peide Huang, Tingnan Zhang, Wenhao Yu, Ding Zhao

L4DC 2024

TL;DR: We introduce GradS, a gradient-shaping method that improves training efficiency in multi-constraint safe RL by manipulating policy gradients to optimize both reward and constraint satisfaction.

Paper / Website / Code

Constraint-Conditioned Policy Optimization for Versatile Safe Reinforcement Learning

Yihang Yao*, Zuxin Liu*, Zhepeng Cen, Jiacheng Zhu, Wenhao Yu, Tingnan Zhang, Ding Zhao

NeurIPS 2023

TL;DR: We introduce CCPO, a framework for versatile safe RL that enables efficient training and zero-shot adaptation to varying safety constraints.

Paper

Learning from Sparse Offline Datasets via Conservative Density Estimation

Zhepeng Cen, Zuxin Liu, Zitong Wang, Yihang Yao, Henry Lam, Ding Zhao

ICLR 2024

TL;DR: We introduce CDE, a DICE-based method that addresses out-of-distribution (OOD) errors in offline RL, achieving SOTA results on the D4RL benchmark, particularly in sparse-reward and low-data scenarios.

Paper / Code

Datasets and Benchmarks for Offline Safe Reinforcement Learning

Zuxin Liu*, Zijian Guo*, Haohong Lin, Yihang Yao, Jiacheng Zhu, Zhepeng Cen, Hanjiang Hu, Wenhao Yu, Tingnan Zhang, Jie Tan, Ding Zhao

Journal of Data-centric Machine Learning Research (DMLR); RSS 2023 Safe Autonomy Workshop (Spotlight)

TL;DR: We present a comprehensive benchmarking suite for offline safe RL, featuring expertly crafted safe policies, diverse datasets, and baseline implementations across 38 tasks, designed to accelerate the development and evaluation of safe RL algorithms in both training and deployment phases.

Paper / Website / Code (OSRL) / Code (DSRL) / Code (FSRL)

Constrained Decision Transformer for Offline Safe Reinforcement Learning

Zuxin Liu*, Zijian Guo*, Yihang Yao, Zhepeng Cen, Wenhao Yu, Tingnan Zhang, Ding Zhao

ICML 2023

TL;DR: We propose CDT for offline safe RL, which leverages multi-objective optimization to balance safety and task performance, producing robust, high-reward policies with zero-shot adaptation capabilities.

Paper / Code (DSRL)

Towards Robust and Safe Reinforcement Learning with Benign Off-policy Data

Zuxin Liu*, Zijian Guo*, Zhepeng Cen, Huan Zhang, Yihang Yao, Hanjiang Hu, Ding Zhao

ICML 2023

TL;DR: We propose SAFER, a robust off-policy learning approach for safe RL that improves policy robustness without adversarial training.

Paper

Safe Control of Arbitrary Nonlinear Systems Using Dynamic Extension

Yihang Yao, Tianhao Wei, Changliu Liu

TL;DR: We present a computationally efficient method for safe control of non-control-affine systems that uses energy-function extensions and hyperparameter optimization to ensure safety and efficiency, with theoretical guarantees and numerical validation.

Paper



Services
Talks