[Lecture] Multisensory Machine Intelligence
May 13, 2024


Speaker: Ruohan Gao (高若涵) from University of Maryland, College Park (UMD)


Time: 18:30–20:00, May 13, 2024 (GMT+8)

Venue: Room 207, Teaching Building #2 (Yanyuan Campus)

Abstract: 

The future of Artificial Intelligence demands a paradigm shift towards multisensory perception: systems that can digest ongoing multisensory observations, discover structure in unlabeled raw sensory data, and intelligently fuse useful information from different sensory modalities for decision making. While we humans perceive the world by looking, listening, touching, smelling, and tasting, traditional forms of machine intelligence mostly focus on a single sensory modality, particularly vision. My research aims to teach machines to see, hear, and feel like humans, so that they can perceive, understand, and interact with the multisensory world. In this talk, I will present my research on multisensory machine intelligence, which studies two important aspects of the multisensory world: 1) multisensory objects, and 2) multisensory space. For both aspects, I will discuss how we design systems to reliably capture multisensory data, how we effectively model the data with new differentiable simulation algorithms and deep learning models, and how we explore creative cross-modal and multi-modal applications with sight, sound, and touch.

Source: School of Computer Science, PKU