DFKI - Interactive Machine Learning Lab


Multimodal Dementia Tests with Embodied Conversational Agents in VR

Published by Alexander Prange on January 1, 2019

Embodied conversational agents will be used to explore multimodal active-input approaches in virtual reality (a combination of speech and pen input) for determining the cognitive status of the user.
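One common way to combine speech and pen input for a downstream assessment is late fusion: extract features from each modality separately and concatenate them into a single vector for a classifier. The sketch below is a minimal, hypothetical illustration of that idea; the feature names (`mean_speed`, `speech_rate`, etc.) and the stroke/transcript formats are assumptions for illustration, not the project's actual pipeline.

```python
# Hypothetical late-fusion sketch: pen kinematics + speech statistics.
# Feature choices here are illustrative, not the project's method.

def pen_features(strokes):
    """Simple kinematic features from pen strokes.
    Each stroke is a list of (x, y, t) samples."""
    speeds = []
    for stroke in strokes:
        for (x0, y0, t0), (x1, y1, t1) in zip(stroke, stroke[1:]):
            dt = t1 - t0
            if dt > 0:
                dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
                speeds.append(dist / dt)
    mean_speed = sum(speeds) / len(speeds) if speeds else 0.0
    return {"mean_speed": mean_speed, "n_strokes": len(strokes)}

def speech_features(transcript, duration_s):
    """Coarse speech statistics from a transcript and its duration."""
    words = transcript.split()
    rate = len(words) / duration_s if duration_s > 0 else 0.0
    return {"speech_rate": rate, "n_words": len(words)}

def fuse(pen, speech):
    """Late fusion: concatenate per-modality features into one vector."""
    return [pen["mean_speed"], pen["n_strokes"],
            speech["speech_rate"], speech["n_words"]]
```

The fused vector would then be fed to a classifier trained on labeled assessment data; late fusion keeps the per-modality feature extractors independent, so either channel can be replaced or dropped without retraining the other.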

Contact

Fabrizio Nunnari

Categories: Multimodality


Related Posts

Multimodality

Digital Pens in Education

Digital pen signals have been shown to be predictive of cognitive states, cognitive load, and emotion in educational settings. We investigate whether low-level pen-based features can predict the difficulty of tasks in a cognitive test…

Multimodality

Multimodal Interactive Knowledge Exploration

In this project, we consider scenarios in which one or more expert users interact with a multimodal-multisensor interface. In particular, we focus on applications of eye tracking and speech-based input for semantic search…

Multimodality

Cognitive Assessments using Speech-based Dialogue, Smartpen and ML

Cognitive assessments have been the subject of recent debate because of the limitations of conducting them with pen and paper. For example, the collected material is monomodal (written form) and there is no direct…
