Speaker: Dr. Yunzhu Li, Columbia University
Time: 10:00 a.m., Jan 6, 2025, GMT+8
Venue: Room 204, Courtyard No. 5, Jingyuan
Abstract:
Humans possess a strong intuitive understanding of the physical world. Through observation of and interaction with our environment, we build mental models that predict how the world changes when we apply specific actions (i.e., intuitive physics). My research builds on these insights to develop model-based reinforcement learning (RL) agents that, through interaction, construct neural-network-based predictive models capable of generalizing across a range of objects made from diverse materials. The core idea behind my work is to introduce novel representations and integrate structural priors into learning systems to model dynamics at various levels of abstraction. I will discuss how such structures enhance model-based planning algorithms, enabling robots to accomplish complex manipulation tasks (e.g., manipulating object piles, shaping deformable foam to match target configurations, and crafting dumplings from dough using various tools).

Furthermore, I will present our recent progress in developing purely learning-based, 3D interactable neural digital twins, and how we combine neural dynamics models with a GPU-accelerated branch-and-bound framework to enable more effective long-horizon trajectory optimization in challenging, contact-rich manipulation tasks (e.g., non-prehensile planar pushing with obstacles, object sorting, and rope routing).
Source: Center on Frontiers of Computing Studies, PKU