Abdulrahman Mohamed Selim, from the Interactive Machine Learning department, in collaboration with colleagues from the Cognitive Assistants department, presented a short paper at the 17th International Conference on Advanced Visual Interfaces (AVI 2024). AVI is a biennial conference, rooted in Italy, that draws Human-Computer Interaction (HCI) researchers from around the world. The 2024 edition was held in Arenzano, Italy, from June 3rd to 7th, and had an overall acceptance rate of 26%.
The paper, titled “Speech Imagery BCI Training Using Game with a Purpose”, focuses on a novel application of games with a purpose (GWAPs) in Brain-Computer Interface (BCI) research, specifically Speech Imagery BCIs. Speech imagery is the ability to internally generate speech without producing audible sounds or active muscle movements. The paper addresses the challenges of collecting imagined-speech electroencephalogram (EEG) data, a process that is mentally exhausting and time-consuming for participants. To improve engagement and enjoyment during data collection, the researchers developed a maze-like game in which participants navigated a virtual robot capable of performing actions that represented the words of interest, while their EEG data was simultaneously recorded. The game-based approach was evaluated with 15 participants. The findings showed that the game not only improved participant engagement and enjoyment but also achieved an average classification accuracy of 69.10% using a random forest classifier. The paper highlights the potential benefits of incorporating GWAPs into Speech Imagery BCI research.
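To illustrate the classification step mentioned above, the following is a minimal sketch using scikit-learn's RandomForestClassifier. The paper's actual EEG preprocessing, feature extraction, and classifier settings are not detailed here, so synthetic stand-in features (e.g. band-power values per channel) and all dimensions below are hypothetical:

```python
# Illustrative sketch only: synthetic features stand in for real
# imagined-speech EEG data; dimensions and settings are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical setup: 5 imagined words, 40 trials each, 64 features
# per trial (e.g. band-power values across EEG channels).
n_classes, trials_per_class, n_features = 5, 40, 64
X = rng.normal(size=(n_classes * trials_per_class, n_features))
y = np.repeat(np.arange(n_classes), trials_per_class)

# Inject a small class-dependent offset so the synthetic classes
# are separable; real EEG features would carry this structure.
X += y[:, None] * 0.5

# Estimate classification accuracy with 5-fold cross-validation.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

In a real imagined-speech pipeline, `X` would hold features extracted from recorded EEG epochs, and chance level for five classes would be 20%, which is the baseline against which the reported 69.10% should be read.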
Reference
Abdulrahman Mohamed Selim, Maurice Rekrut, Michael Barz, and Daniel Sonntag. 2024. Speech Imagery BCI Training Using Game with a Purpose. In Proceedings of the 2024 International Conference on Advanced Visual Interfaces (AVI ’24). Association for Computing Machinery, New York, NY, USA, Article 43, 1–5. https://doi.org/10.1145/3656650.3656654