Dima Damen

Draft title: Opportunities in Wearable (Egocentric) Computer Vision

Draft abstract: Anticipating the rise of wearable computing with video feeds, this talk presents opportunities for research in video understanding when footage is captured from such devices. The unscripted, unedited footage from a camera that travels with the wearer through their daily activities poses challenges to current video understanding models but, more importantly, offers opportunities in multi-modal fusion (video, audio and language) for the tasks of recognition and retrieval. In this talk, I will present new tasks, new approaches to supervision, and new models in egocentric vision. Details of all projects are available at: Projects

Speaker Bio: Dima Damen is a Professor of Computer Vision at the University of Bristol. She is currently an EPSRC Fellow (2020-2025), focusing her research on the automatic understanding of object interactions, actions and activities using wearable visual (and depth) sensors. She has contributed novel research questions including assessing action completion, determining skill/expertise from video sequences, discovering task-relevant objects, dual-domain and dual-time learning, as well as multi-modal fusion using vision, audio and language. She is the project lead for EPIC-KITCHENS, the largest dataset in egocentric vision, with accompanying open challenges. She also leads the annual EPIC workshop series alongside major conferences (CVPR/ICCV/ECCV). Dima is an Associate Editor of IJCV, IEEE TPAMI and Pattern Recognition, and was a Program Chair for ICCV 2021. She was selected as a Nokia Research collaborator in 2016, and as an Outstanding Reviewer at CVPR 2021, CVPR 2020, ICCV 2017, CVPR 2013 and CVPR 2012. Dima received her PhD from the University of Leeds (2009), joined the University of Bristol as a Postdoctoral Researcher (2010-2012), became Assistant Professor (2013-2018) and Associate Professor (2018-2021), and was appointed to a chair in August 2021. She supervises 8 PhD students and 3 postdoctoral researchers.