The goal of this project is to create a dataset for multimodal activity recognition and common-sense modelling in the domain of autonomous robots. As a first step, we collect and annotate videos of users performing tasks of increasing complexity, recorded from the perspective of a robot (a Wheelphone with an attached Intel RealSense camera). Using this dataset, we will investigate methods for incrementally improving the annotations on the fly to increase recognition performance.

Contact

Michael Barz