I am a PhD candidate in Mechanical Engineering working with Prof. Masayoshi Tomizuka at the University of California, Berkeley. My research focuses on building trustworthy planning algorithms for autonomous agents such as vehicles and robots. I received my Master of Science in 2022, en route during my PhD studies at UC Berkeley. Previously, I received my Bachelor of Engineering from Harbin Institute of Technology, where I worked with Prof. Huijun Gao and Prof. Weichao Sun on systems control and fault diagnosis.
Download my resumé.
Ph.D. in Mechanical Engineering, 2024 (Expected)
University of California, Berkeley
M.S. in Engineering, 2022
University of California, Berkeley
B.Eng. in Automation, 2019
Harbin Institute of Technology, China
Designed a guided online distillation algorithm (website) for safe reinforcement learning (RL): extracted skills from human demonstrations with a Decision Transformer and distilled them into a lightweight network through online interactive fine-tuning for enhanced safety
Proposed a metric to quantify interaction intensity in multi-agent RL, which guides resource allocation for training diverse policies under a constrained budget
Developed a diffusion-based generative simulator that produces human-like interactions, can be trained concurrently with the planner, and accepts feedback from planning modules for better sample efficiency and final safety performance
Resulting Publications:
J. Li et al., “Guided Online Distillation: Promoting Safe Reinforcement Learning by Offline Demonstration,” in arXiv:2309.09408 (submitted to ICRA 2024), 2023.
Y. Chen, C. Tang, R. Tian, C. Li, J. Li et al., “Quantifying Agent Interaction in Multi-agent Reinforcement Learning for Cost-efficient Generalization,” in arXiv preprint arXiv:2310.07218, 2023.
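As an illustrative sketch only (not the implementation from the paper), the core distillation objective — matching a lightweight student policy to a stronger teacher over discrete actions — can be written as a KL divergence between the two action distributions; all names here are hypothetical:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits):
    """KL(teacher || student) over action distributions, averaged over the batch.

    Minimizing this pushes the lightweight student toward the teacher's
    (e.g., Decision Transformer's) action choices.
    """
    p = softmax(teacher_logits)                 # teacher action distribution
    log_q = np.log(softmax(student_logits) + 1e-12)
    log_p = np.log(p + 1e-12)
    return float(np.mean(np.sum(p * (log_p - log_q), axis=-1)))
```

In a guided online setting, this loss would typically be combined with an RL objective so the student can eventually surpass the teacher on safety metrics.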
Designed a spatio-temporal graph dual-attention network for multi-agent prediction, considering context information, trajectories of interactive agents, and physical feasibility constraints
Proposed a pessimistic offline reinforcement learning algorithm that mitigates the distributional-shift problem by explicitly handling out-of-distribution states
Built a hierarchical planning framework for long-horizon tasks, in which a high-level module reasons about long-term strategies and plans sub-goals, and low-level goal-conditioned offline reinforcement learning policies accomplish the sub-tasks
Resulting Publications:
J. Li et al., “Hierarchical Planning Through Goal-Conditioned Offline Reinforcement Learning,” in IEEE Robotics and Automation Letters (RA-L), 2022.
J. Li et al., “Spatio-Temporal Graph Dual-Attention Network for Multi-Agent Prediction and Tracking,” in IEEE Transactions on Intelligent Transportation Systems, 2021.
J. Li et al., “Dealing with the Unknown: Pessimistic Offline Reinforcement Learning,” in 2021 Conference on Robot Learning (CoRL), 2021.
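To give a flavor of the pessimism idea (a toy sketch, not the method from the CoRL paper): out-of-distribution states can be assigned conservative value estimates by penalizing them with an uncertainty measure. Here nearest-neighbor distance to the offline dataset stands in for a learned uncertainty model, and all names are illustrative:

```python
import numpy as np

def pessimistic_value(q_value, state, dataset_states, beta=1.0):
    """Penalize a Q estimate by the state's distance to the offline dataset.

    States far from the data (out-of-distribution) receive lower values,
    discouraging the policy from drifting into poorly supported regions.
    """
    dists = np.linalg.norm(dataset_states - state, axis=1)
    uncertainty = dists.min()      # crude OOD measure: nearest-neighbor distance
    return q_value - beta * uncertainty
```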
Built an interaction-aware behavior planning algorithm that predicts the cooperativeness of surrounding vehicles and solves a POMDP via Monte Carlo tree search (MCTS)
Proposed a general hierarchical planning framework, which safely handles various complex urban traffic conditions
Built a simulator that reproduces real traffic scenarios; the proposed algorithms achieved both a high completion rate and a low collision rate
Resulting Publications:
J. Li et al., “A Safe Hierarchical Planning Framework for Complex Driving Scenarios Based on Reinforcement Learning,” in 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021.
J. Li et al., “Interaction-Aware Behavior Planning for Autonomous Vehicles Validated with Real Traffic Data,” in Dynamic Systems and Control Conference (DSCC), American Society of Mechanical Engineers, 2020.
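The belief tracking underlying such interaction-aware planning can be sketched as a Bayesian update over a latent driver type (cooperative vs. aggressive) from observed yield/no-yield behavior. This is an illustrative toy, not the paper's model; the likelihood values are made up:

```python
def update_cooperativeness(prior_coop, observed_yield,
                           p_yield_given_coop=0.9, p_yield_given_agg=0.2):
    """Bayes update of P(cooperative) after observing one yield / no-yield action.

    A POMDP planner (e.g., MCTS over belief states) would maintain this belief
    per surrounding vehicle and condition its rollouts on it.
    """
    if observed_yield:
        like_c, like_a = p_yield_given_coop, p_yield_given_agg
    else:
        like_c, like_a = 1 - p_yield_given_coop, 1 - p_yield_given_agg
    return like_c * prior_coop / (like_c * prior_coop + like_a * (1 - prior_coop))
```

Observing a yield raises the cooperativeness belief; observing a refusal to yield lowers it.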
Built an integrated SVM model with kernel PCA (KPCA) to extract and compress features and a genetic algorithm (GA) to optimize the model hyperparameters
Evaluated the algorithm on the Tennessee Eastman process benchmark; ablation studies showed that both KPCA and GA boost the performance of the SVM
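A minimal sketch of the GA hyperparameter search (illustrative only): a population of candidate (log C, log gamma) pairs evolves by elitist selection, averaging crossover, and Gaussian mutation. The `fitness` function here is a made-up stand-in; in the actual project it would be cross-validated accuracy of the KPCA + SVM pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    # Stand-in for cross-validated SVM accuracy as a function of
    # (log C, log gamma); a real run would train KPCA + SVM here.
    c, g = params
    return -((c - 1.0) ** 2 + (g + 2.0) ** 2)   # toy optimum at (1, -2)

def genetic_search(pop_size=20, generations=40, mutation_scale=0.3):
    pop = rng.normal(0.0, 3.0, size=(pop_size, 2))          # initial population
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        elite = pop[np.argsort(scores)[-pop_size // 2:]]    # elitist selection
        i = rng.integers(len(elite), size=pop_size - len(elite))
        j = rng.integers(len(elite), size=pop_size - len(elite))
        kids = 0.5 * (elite[i] + elite[j])                  # averaging crossover
        kids += rng.normal(0.0, mutation_scale, kids.shape) # Gaussian mutation
        pop = np.vstack([elite, kids])
    scores = np.array([fitness(p) for p in pop])
    return pop[scores.argmax()]
```

Because the elite survive unchanged each generation, the best fitness is non-decreasing, and the search homes in on the toy optimum within a few dozen generations.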
Resulting Publications: