DFKI - Interactive Machine Learning Lab

Daniel Sonntag, Invited talk at Oxford University

Published by Max Biwersi on March 14, 2019

Daniel Sonntag gave an invited talk at the Oxford Robotics Institute about interactive machine learning in human-robot interaction (HRI).

