(last update of the page: 22/08/2016)
Active Brain Computer Interfaces (BCIs) allow people to exert direct voluntary control over a computer system: their brain signals are captured and the system recognizes specific imagined actions (movements, images, concepts). Both active BCIs and their users must undergo training, which makes the signals easier for the system to recognize. BCIs can thus be applied to many control and interaction scenarios of our everyday lives, especially in relation to entertainment.
BCIs are mostly used by disabled people in medical settings and seldom leave the lab. For one thing, high-grade equipment is expensive and not portable. Although commercial ventures offer BCI acquisition equipment to the general public, its quality is still insufficient to build accurate and robust BCIs.
BCIs also suffer from numerous limitations:
- Variability of the signals: signals differ across people and within the same individual at different times.
- Long and repetitive training sessions: lasting anywhere from ten minutes to several months, they disengage and bore users.
- Limited feedback: simple, unimodal feedback is ill adapted for many users. Feedback is unidirectional and the user merely follows instructions.

All these issues limit the adoption of BCIs, explain their lack of widespread commercial success, and keep them out of human-computer interaction applications.

The objective of my PhD thesis was to propose solutions to these problems, resulting in a consistent architecture that makes BCIs better suited to Human-Computer Interaction (HCI) applications. The idea is to implement co-learning in the BCI loop and to explore how the user and the system can give feedback to each other in order to improve BCI usability.
Project 1. Co-learning for Brain-Computer Interfaces.

We first propose a supporting architecture and implementation of a BCI system centered on co-learning. Co-learning makes BCI training bidirectional: we do not only train the system and then give feedback to the user; the user can also give feedback to the system in response to the system's feedback. The objective is to let the system adapt to each user, but also to promote learning in the user so that the skills related to BCI use (signal modulation, focus, etc.) also improve.

Co-learning implies that the training phase is no longer monolithic: it should be possible to add training examples during online use and to update the classification model in real time to take the new training information into account. Another implication is that feedback should be provided continuously and that the detection of brain states should be asynchronous. Ideally, the initial training time before the online phase should be minimal. We therefore produced an architecture and system that minimize initial training time by introducing unsupervised pre-filtering and that allow incremental training of the classifier. We validate our system through a drone-piloting application and evaluate its potential in a long-term study against training based on operant conditioning.
Related publications of project 1: all the publications from the Publications page.
Project 2. Visualization for Brain-Computer Interfaces.
If we want the user to be able to give feedback to the system, we must first improve BCI classification feedback so that users receive the relevant information in an easily understandable form, allowing them to give feedback to the system and to interactively adapt the classification process. Project 2 therefore follows project 1: it is dedicated to feedback and makes a twofold contribution:
- We propose a new and more intuitive visual feedback modality that makes it easy for end-users to understand the classification performed by the BCI.
- We introduce explicit feedback from the user to the system in order to obtain a co-learning system, where the user and the system learn from each other and help each other gradually.
We evaluate both contributions together on a simple shooter game based on Little Red Riding Hood, using two BCI paradigms: Motor Imagery and Steady State Visually Evoked Potentials (SSVEP).
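For readers unfamiliar with SSVEP: each on-screen target flickers at a known frequency, and the BCI identifies which target the user attends to by finding the dominant frequency in the EEG spectrum. A minimal, hypothetical sketch of such a detector (the sampling rate, candidate frequencies, and simulated signal are assumptions, not the game's actual pipeline):

```python
import numpy as np

FS = 250  # sampling rate in Hz (assumed)
CANDIDATES = [10.0, 12.0, 15.0]  # flicker frequencies of the on-screen targets

def detect_ssvep(signal, fs=FS, candidates=CANDIDATES):
    """Return the candidate frequency with the highest spectral power."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidates]
    return candidates[int(np.argmax(powers))]

# Simulate 2 s of EEG dominated by a 12 Hz SSVEP response plus noise.
t = np.arange(0, 2.0, 1.0 / FS)
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.5 * rng.standard_normal(t.size)
print(detect_ssvep(eeg))  # should report 12.0
```

Real systems typically use more robust detectors (e.g. canonical correlation analysis over multiple channels), but the principle of matching spectral content against known stimulation frequencies is the same.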
Related publications of project 2: all the publications from the Publications page.
Check the video of this project presented during CHI2016: https://www.youtube.com/watch?v=onSpIRGEEsw
Project 3. Everyday Applications for Brain-Computer Interfaces. Conceptual Imagery.
Given that our overarching objective is to make BCIs more suitable for everyday use, it is important to study application areas where BCIs could play an everyday role. This study is twofold: on the one hand, we must look at existing BCI paradigms and propose improvements that make them better adapted to everyday use; on the other hand, we must look for new BCI paradigms that open up new niche application areas. The scope of this last objective is very wide, and an exhaustive study is infeasible in the short or medium term. For this reason we chose to focus on one particular aspect: we propose a new BCI paradigm based on Conceptual Imagery and evaluate its potential in one particular application area. We then propose improvements to BCI training protocols that are made possible by the nature of this new paradigm and show how they make new application areas possible.
Project 3 is thus dedicated to Conceptual Imagery, which allows the BCI to detect when users imagine concepts within a semantic category. Aside from direct motion-control applications with Motor Imagery, the semantics of the BCI rarely match the semantics of the task. Conceptual Imagery eliminates this problem by providing more natural interactions for many applications. We investigate and contribute an application of Conceptual Imagery:
smart home control through natural conceptual representations (e.g. imagining a “light bulb” turns the light on or off), tested on both healthy and disabled subjects. The videos are coming soon.
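The point that the semantics of the imagined concept directly match the semantics of the action can be sketched as a simple dispatch from the classifier's output label to a home command. The labels and device names below are illustrative, not those of the actual system:

```python
# Map each decoded conceptual category directly to a smart-home action.
# Labels and device names are illustrative.
ACTIONS = {
    "light bulb": lambda state: {**state, "light": not state["light"]},
    "television": lambda state: {**state, "tv": not state["tv"]},
    "door": lambda state: {**state, "door_locked": not state["door_locked"]},
}

def apply_concept(concept, state):
    """Toggle the device whose semantics match the imagined concept."""
    if concept not in ACTIONS:
        return state  # unrecognized concept: no action taken
    return ACTIONS[concept](state)

home = {"light": False, "tv": False, "door_locked": True}
home = apply_concept("light bulb", home)  # imagining "light bulb" toggles the light
print(home["light"])  # True
```

No arbitrary mapping (e.g. "imagine left-hand movement to turn on the light") is needed: the decoded concept is itself the command, which is what makes the interaction natural.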
Related publications of project 3: a paper about the feasibility of BCI control in a realistic smart home environment has been accepted to Frontiers in Human Neuroscience.
Project 4. Conceptual Priming for Brain Computer Interfaces.
In project 4, we leverage the natural semantics of Conceptual Imagery to propose a BCI that controls a video game and whose training phase is integrated in the game itself. We first propose an explicit interaction, where the instructions for training are conveyed within the game environment. We then propose a completely seamless training phase, fully integrated into the game narrative through semantic priming, a conditioning technique from psychology.
Check the video of this project presented during CHI2016: https://www.youtube.com/watch?v=BkAtT9pjhQ8
Related publications of project 4:
- N. Kosmyna, F. Tarpin-Bernard and B. Rivet. Conceptual Priming for In-game BCI Training. ACM Trans. Comput.-Hum. Interact. 2015. 5-year Impact Factor: 1.37. Presented at CHI 2016.
Co-learning for Brain Computer Interfaces
My current work focuses on Co-learning for Brain-Computer Interfaces; you can read a detailed account of my current progress on the dedicated page:
Co-learning for Brain Computer Interfaces
Brain Computer Interfaces for games: my master thesis
Another of my research interests, started during my master thesis, is the multimodal combination of Brain-Computer Interfaces with eye tracking for gaming. More specifically, I want to evaluate the potential of affordable consumer-grade hardware with different popular BCI modalities and explore ways to improve the user experience, both in terms of the accuracy of the interactions and in terms of ergonomics and usability.
In the context of my master thesis, I devised a small-scale experiment involving a simple puzzle game, with the purpose of finding different possible avenues of improvement for consumer-grade systems.
- The EEG acquisition device was the Emotiv EPOC, an inexpensive yet well-designed device for gaming purposes
- For the software component of the experiments I used OpenViBE, an open-source BCI and signal-processing framework developed at Inria in France
- The Eye-tracker used was a Tobii T60
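A common way to combine the two modalities is to use the eye-tracker for pointing (which it does quickly and accurately) and the BCI for selection or confirmation. A hypothetical sketch of such fusion logic, with made-up target names and coordinates:

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    x: float
    y: float
    radius: float

def gaze_target(gaze, targets):
    """Return the target under the gaze point, if any (pointing via eye-tracker)."""
    gx, gy = gaze
    for t in targets:
        if (gx - t.x) ** 2 + (gy - t.y) ** 2 <= t.radius ** 2:
            return t
    return None

def fuse(gaze, bci_confirm, targets):
    """Select the gazed-at target only when the BCI signals confirmation."""
    t = gaze_target(gaze, targets)
    return t.name if (t is not None and bci_confirm) else None

pieces = [Target("piece_a", 100, 100, 30), Target("piece_b", 300, 120, 30)]
print(fuse((105, 95), bci_confirm=True, targets=pieces))   # piece_a
print(fuse((105, 95), bci_confirm=False, targets=pieces))  # None
```

Splitting the roles this way plays to each device's strength: gaze disambiguates *where*, while the BCI disambiguates *whether*, avoiding the "Midas touch" problem of gaze-only selection.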
The following video gives a glimpse of the experiments and of the modalities involved.
You can download the full video here [.mov h224 233MB]
© Copyright 2015 Nataliya Kosmyna. All Rights Reserved.