Bio

Yelin Kim is a final-year Ph.D. candidate working with Professor Emily Mower Provost in the CHAI (Computational Human-Centered Analysis and Integration) Lab at the University of Michigan. Her research interests lie in computational human behavior analysis and affective computing, using multimodal (speech and video) signal processing and machine learning techniques. She computationally measures, represents, and analyzes human behavior data (vocal and facial expressions, body gestures, etc.) to illuminate the fundamental dynamics and structure of emotion expression and to develop natural human-machine interfaces.

Yelin has received several best paper/poster awards and fellowships, including the Qualcomm Scholarship in 2011, the Korean Government Scholarship for Study Abroad in 2011-2013, and the Best Student Paper Award ([news]) at the ACM International Conference on Multimedia (ACM MM) in 2014.

Research Overview

Can machines sense and identify human emotion? My main research interest is the automatic analysis of human behavior during real-world human-human and human-machine interactions. In particular, my aim is to create an interdisciplinary research platform that develops systems and devices for the automatic sensing, quantification, and interpretation of affective and social signals during interactive communication. Human-human and human-machine interactions often evoke and involve affective and social cues, such as emotion, social attitude, engagement, conflict, and persuasion. These signals can be inferred from both verbal and nonverbal human behaviors, such as words, head and body movements, and facial and vocal expressions. They profoundly influence the overall outcome of an interaction; hence, understanding these signals will enable us to build human-centered interactive technology tailored to an individual user’s needs, preferences, and capabilities.

Computational human behavior research is at a tipping point. A variety of applications would benefit from the outcomes of this research, ranging from personalized assistive systems to surveillance and monitoring systems. This line of research builds upon multimodal signal processing and machine learning techniques, which provide the technical foundation for extracting meaningful information from audio and video recordings. However, the complexities inherent in human behavior necessitate innovation on and adaptation of traditional techniques based on behavioral and social context. In my previous work, I proposed methods to accurately estimate the affective cues of individuals during conversations.
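For readers curious what a multimodal emotion-recognition pipeline looks like in practice, the snippet below is a minimal sketch of feature-level ("early") fusion, not the specific method from my papers. The audio and video feature vectors here are synthetic stand-ins for real descriptors (e.g., MFCC statistics or facial-landmark displacements), and the four-class labels and dimensionalities are illustrative assumptions.

```python
# Minimal sketch of feature-level multimodal fusion for emotion
# classification. All features and labels are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips = 200

# Hypothetical per-clip descriptors: e.g., statistics over frame-level
# MFCCs (audio) and facial-landmark displacements (video).
audio_feats = rng.normal(size=(n_clips, 39))   # assumed 39-dim audio
video_feats = rng.normal(size=(n_clips, 20))   # assumed 20-dim video
labels = rng.integers(0, 4, size=n_clips)      # 4 illustrative emotion classes

# Feature-level fusion: concatenate modalities into one vector per clip.
fused = np.hstack([audio_feats, video_feats])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0)

# Standardize, then classify with an off-the-shelf SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Feature-level fusion is only one design choice; decision-level ("late") fusion instead trains a classifier per modality and combines their outputs, which can be more robust when one modality is noisy or missing.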

My research statement: YelinKim_research.pdf

Areas of Interest

Human behavioral signal processing, human-centered computing, affective computing, machine learning, multimodal signal processing, artificial intelligence, computer vision