Publications

(last update: 08/10/2019)
For any additional information or to request a copy of the articles, please contact me directly.

  • Nataliya Kosmyna, Caitlin Morris, Thanh Nguyen, Sebastian Zepf, Javier Hernandez, and Pattie Maes. 2019. AttentivU: Designing EEG and EOG Compatible Glasses for Physiological Sensing and Feedback in the Car. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’19). ACM, New York, NY, USA, 355-368. 

          Get this article here: https://dl.acm.org/citation.cfm?id=3344516

Abstract: Several research projects have recently explored the use of physiological sensors such as electroencephalography (EEG) or electrooculography (EOG) to measure the engagement and vigilance of a user in the context of car driving. However, these systems still suffer from limitations such as the absence of a socially acceptable form factor and the use of impractical, gel-based electrodes. We present AttentivU, a device using both EEG and EOG for real-time monitoring of physiological data. The device is designed as a socially acceptable pair of glasses and employs silver electrodes. It also supports real-time delivery of feedback in the form of an auditory signal via a bone conduction speaker embedded in the glasses. A detailed description of the hardware design and proof-of-concept prototype is provided, as well as preliminary data collected from 20 users performing a driving task in a simulator in order to evaluate the signal quality of the physiological data.

This paper received an Honorable Mention award at the AutomotiveUI conference in Utrecht, September 2019.

  • Nataliya Kosmyna, Pattie Maes. 2019. AttentivU: a Biofeedback Device to Monitor and Improve Engagement in the Workplace. In the 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, July 23-27. To appear.

Abstract: Everyday work is becoming increasingly complex and cognitively demanding. A person’s level of attention influences how effectively their brain prepares itself for action, and how much effort they apply to a task. However, the various distractions of the modern work environment often make it hard to pay and sustain attention. To address this issue, we present AttentivU – a system that uses wearable electroencephalography (EEG) to measure the attention of a person in real-time. When the user’s attention level is low, the system provides real-time, subtle feedback to nudge the person to become attentive again. Users can choose to turn the device on or off based on whether their current task requires focused attention. We tested the system on 12 adults in a real workplace setting. The preliminary results show that the biofeedback redirects the attention of the participants to the task at hand and improves their performance. 
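
The abstract above does not specify how attention is computed. A widely used proxy in the EEG literature is the engagement index beta / (alpha + theta) over band powers, so here is a minimal, hypothetical sketch of such a closed-loop monitor; the sampling rate, window length, threshold, and nudge callback are all placeholder assumptions, not AttentivU's published pipeline.

```python
# Hypothetical sketch of a closed-loop attention monitor (not AttentivU's
# actual pipeline). Uses the classic engagement index beta / (alpha + theta).
import numpy as np
from scipy.signal import welch

FS = 256      # EEG sampling rate in Hz (assumed)
WIN_S = 2     # analysis window length in seconds (assumed)

def band_power(x, fs, lo, hi):
    """Integrated Welch PSD of a 1-D signal x over the [lo, hi] Hz band."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), fs * WIN_S))
    mask = (f >= lo) & (f <= hi)
    return np.trapz(pxx[mask], f[mask])

def engagement_index(window, fs=FS):
    """Engagement index: beta power over alpha + theta power."""
    theta = band_power(window, fs, 4, 8)
    alpha = band_power(window, fs, 8, 12)
    beta = band_power(window, fs, 13, 30)
    return beta / (alpha + theta)

def feedback_loop(windows, threshold=0.3, nudge=lambda: print("nudge")):
    """windows yields successive 1-D EEG windows; fire a subtle cue
    (audio or haptic in the paper) whenever engagement drops below threshold."""
    for w in windows:
        if engagement_index(w) < threshold:
            nudge()
```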

  • Nataliya Kosmyna, Caitlin Morris, Utkarsh Sarawgi, Pattie Maes. 2019. AttentivU: a Wearable Pair of EEG and EOG Glasses for Real-Time Physiological Processing. In the 16th IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN).

Get this article here: https://ieeexplore.ieee.org/document/8771080 

Abstract: Recently several research projects have explored using physiological sensors such as electroencephalography (EEG) or electrooculography (EOG) electrodes to measure the engagement of a user in different contexts and augment learning activities. However, these systems still suffer from limitations such as the absence of a socially acceptable design, or the use of impractical gel-based electrodes. We present AttentivU, a device using both EEG and EOG for real-time monitoring of physiological data. The device is designed as a socially acceptable pair of glasses and employs silver electrodes as an alternative to the commonly used silver/silver chloride (Ag/AgCl) “wet” electrodes. A detailed description of the hardware design and proof-of-concept prototype is provided, as well as a side-by-side comparison with conventional wet electrodes.

  • Nataliya Kosmyna, Caitlin Morris, Utkarsh Sarawgi, Pattie Maes. 2019. AttentivU: a Biofeedback System for Real-time Monitoring and Improvement of Engagement. In CHI’19 Extended Abstracts on Human Factors in Computing Systems.

Check the PDF and the video here: https://dl.acm.org/citation.cfm?id=3311768

Abstract: It is increasingly hard for adults and children alike to be attentive given the increasing amounts of information and distractions surrounding us. We have developed AttentivU: a device, in the socially acceptable form factor of a pair of glasses, that a person can put on in moments when they want or need to be attentive. The AttentivU glasses use electroencephalography (EEG) as well as electrooculography (EOG) sensors to measure the attention of a person in real-time and provide either audio or haptic feedback to the user when their attention is low, thereby nudging them to become engaged again. We have tested this device in workplace and classroom settings with over 80 subjects. We have performed experiments with people studying or working by themselves, viewing online lectures, as well as listening to classroom lectures. The obtained results show that our device makes a person more attentive and produces improved learning and work performance outcomes.

  • Nataliya Kosmyna, Utkarsh Sarawgi, and Pattie Maes. 2018. AttentivU: Evaluating the Feasibility of Biofeedback Glasses to Monitor and Improve Attention. In Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers (UbiComp ’18). ACM, New York, NY, USA, 999-1005.

Get the article here: https://doi.org/10.1145/3267305.3274124

Abstract: Our everyday work is becoming increasingly complex and cognitively demanding. What we pay attention to during our day influences how effectively our brain prepares itself for action, and how much effort we apply to a task. To address this issue, we present AttentivU, a system that uses wearable electroencephalography (EEG) to measure the attention of a person in real-time. When the user’s attention level is low, the system provides real-time, subtle, haptic or audio feedback to nudge the person to become attentive again. We tested a first version of the system, which uses an EEG headband, on 48 adults over several sessions in both a lab and a classroom setting. The results show that the biofeedback redirects the attention of the participants to the task at hand and improves their performance on comprehension tests. We next tested the same approach in the form of glasses on 6 adults in a lab setting, as the glasses form factor may be more acceptable in the long run. We conclude with a discussion of the improved third version of AttentivU, currently under development, which combines a custom-made solution in the glasses form factor with built-in electrooculography (EOG) and EEG electrodes as well as auditory feedback.

This article covers everything you ever wanted to know about Brain-Computer Interfaces: how they work, what they can be used for, and what their limitations are.

Abstract: Brain-Computer Interfaces (BCIs) have become more and more popular in recent years. Researchers use this technology for several types of applications, including attention and workload measures, but also for the direct control of objects by means of BCIs. In this work we present a first, multidimensional feature space for EEG-based BCI applications to help practitioners characterize, compare and design systems which use EEG-based BCIs. Our feature space contains 4 axes and 9 sub-axes and consists of 41 options in total, as well as their different combinations. We present the axes of our feature space, position it with respect to existing BCI and HCI taxonomies, and show how our work integrates and/or complements past works.

  • N. Kosmyna, J. Lindgren, A. Lécuyer (2018). Attending to Visual Stimuli versus Performing Visual Imagery as a Control Strategy for EEG-based Brain-Computer Interfaces. Scientific Reports, Nature, volume 8, Article number: 13222.

Abstract: Currently the most common imagery task used in Brain-Computer Interfaces (BCIs) is motor imagery, asking a user to imagine moving a part of the body. This study investigates the possibility of building BCIs based on another kind of mental imagery, namely “visual imagery”. We study to what extent we can distinguish the alternative mental processes of observing a visual stimulus and imagining it to obtain EEG-based BCIs. Per trial, we instructed each of the 26 users who participated in the study to observe a visual cue of one of two predefined images (a flower or a hammer) and then imagine the same cue, followed by rest. We investigated whether we can differentiate between the different subtrial types from the EEG alone, as well as detect which image was shown in the trial. We obtained the following classifier performances: (i) visual imagery vs. visual observation task (71% classification accuracy), (ii) visual observation task towards different visual stimuli (classifying one observation cue versus another observation cue with an accuracy of 61%) and (iii) resting vs. observation/imagery (77% accuracy for the imagery task versus resting state, and 75% for the observation task versus resting state). Our results show that the presence of visual imagery, and specifically the related alpha power changes, is useful to broaden the range of BCI control strategies.
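
Since the abstract singles out alpha power changes as the discriminative signal, a plausible baseline pipeline is per-channel alpha band power fed to a linear classifier. The sketch below is an assumption-laden illustration (the band edges, log transform, LDA classifier, and 5-fold cross-validation are my choices, not the paper's published method).

```python
# Hypothetical baseline: per-channel alpha (8-12 Hz) log power as features
# for imagery-vs-observation classification; not the paper's exact pipeline.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def alpha_power_features(trials, fs=256):
    """trials: array (n_trials, n_channels, n_samples) -> log alpha power
    per channel, shape (n_trials, n_channels)."""
    f, pxx = welch(trials, fs=fs, axis=-1)
    mask = (f >= 8) & (f <= 12)
    return np.log(pxx[..., mask].mean(axis=-1))

def imagery_vs_observation_score(trials, labels, fs=256):
    """Mean 5-fold cross-validated accuracy of an LDA on alpha features."""
    X = alpha_power_features(trials, fs)
    return cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()
```
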
  • X. Zhang, N. Kosmyna, P. Maes, J. Rekimoto. Investigating Bodily Responses to Unknown Words: A Focus on Facial Expressions and EEG. In 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Honolulu, HI, USA, July 17-21, 2018.
  • N. Kosmyna, A. Lécuyer (2017). Designing Guiding Systems for Brain-Computer Interfaces. Front. Hum. Neurosci. https://doi.org/10.3389/fnhum.2017.00396. Impact Factor: 3.209.

Abstract: The Brain-Computer Interface (BCI) community has focused the majority of its research efforts on signal processing and machine learning, mostly neglecting the human in the loop. Guiding users on how to use a BCI is crucial in order to teach them to produce stable brain patterns. In this work, we explore instructions and feedback for BCIs in order to provide a systematic taxonomy to describe BCI guiding systems. The purpose of our work is to give the necessary clues to researchers and designers in Human-Computer Interaction (HCI) to make the fusion between BCIs and HCI more fruitful, but also to better understand the possibilities BCIs can provide to them.

Download this article here.
  • Kosmyna N, Tarpin-Bernard F, Bonnefond N and Rivet B (2016). Feasibility of BCI Control in a Realistic Smart Home Environment. Front. Hum. Neurosci. 10:416. doi: 10.3389/fnhum.2016.00416. Impact Factor: 3.209.

Abstract: Smart homes have been an active area of research, however despite considerable investment, they are not yet a reality for end-users. Moreover, there are still accessibility challenges for the elderly or the disabled, two of the main potential targets of home automation. In this exploratory study we design a control mechanism for smart homes based on Brain-Computer Interfaces (BCI) and apply it in the “Domus” smart home platform in order to evaluate the potential interest of users in BCIs at home. We enable users to control lighting, a TV set, a coffee machine and the shutters of the smart home. We evaluate the performance (accuracy, interaction time), usability and feasibility (USE questionnaire) on 12 healthy subjects and 2 disabled subjects. We find that healthy subjects achieve 77% task accuracy, while the disabled subjects achieve a higher accuracy of 81%.

Download this article here.

  • N. Kosmyna, F. Tarpin-Bernard and B. Rivet. Conceptual Priming for In-game BCI Training. ACM Trans. Comput.-Hum. Interact. 2015. 5-year Impact Factor: 1.37. Presented at CHI 2016.

Abstract: Using Brain Computer Interfaces (BCIs) as a control modality for games is popular. However, BCIs require prior training before playing, which hurts immersion and the player experience in the game. For this reason, we propose an explicit integration of the training protocol in the game, by modifying the game environment to enforce synchronicity with the BCI system and to provide appropriate instructions to the user. We then conceal the synchronicity in the game mechanics by using priming to mask the training instruction (implicit stimuli). We conduct an evaluation of the effects on game experience compared to standard BCI training on 36 subjects. We use the Game Experience Questionnaire (GEQ) coupled with reliability analysis (Cronbach’s alpha). The integration does not change the feeling of competence (3/4). However, flow and immersion increase sizably with explicit training integration (2.78/4 and 2.67/4, up from 1.79/4 and 1.52/4) and even more with implicit training integration (3.27/4 and 3.12/4).

Download this article here.
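
Cronbach's alpha, used in the paper above to check the reliability of the GEQ scales, has a simple closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of the summed score) for a scale of k items. A minimal sketch of this standard formula (generic statistics, not code from the paper):

```python
# Cronbach's alpha for a questionnaire scale: rows are respondents,
# columns are the items of the scale. Standard formula, not paper-specific.
import numpy as np

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)     # shape (n_subjects, n_items)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum() # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return k / (k - 1) * (1 - item_vars / total_var)
```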

  • N. Kosmyna, F. Tarpin-Bernard and B. Rivet. Operationalization of Conceptual Imagery for BCIs. In Proceedings of the 23rd European Signal Processing Conference (EUSIPCO 2015), Aug. 2015. Presented.

Abstract: We present a Brain Computer Interface (BCI) system in an asynchronous setting that allows classifying objects into their semantic categories (e.g. a hammer is a tool). For training, we use visual cues that are representative of the concepts (e.g. a hammer image for the concept of hammer). We evaluate the system in an offline synchronous setting and in an online asynchronous setting. We consider two scenarios: the first one, where concepts are from close semantic families (10 subjects), and the second, where concepts are from distinctly different categories (10 subjects). We find that both achieve classification accuracies of 70% and above, although more distant conceptual categories lead to about 5% higher classification accuracy.

Download this article here.

  • Nataliya Kosmyna, Franck Tarpin-Bernard, and Bertrand Rivet. 2015. Brains, computers, and drones: think and control! ACM Interactions 22, 4 (June 2015), 44-47. DOI=10.1145/2782758 http://doi.acm.org/10.1145/2782758

Abstract: Imagine you could control the world with your thoughts. Sounds appealing, doesn’t it? There is a technology that can capture your brain activity and issue commands to computer systems, such as robots, prosthetics, and games. Indeed, brain-computer interfaces (BCIs) have been around since the 1970s, and have improved with each passing decade. You might wonder: “Wait! If this technology has been around all this time, how come we’re not all using it? I mean, we hear about great applications sometimes in the press—controlling a drone, for instance—but then nothing seems to come of it. Why is that?”

Download this article here.

  • N. Kosmyna, F. Tarpin-Bernard and B. Rivet. Towards Brain Computer Interfaces for Recreational Activities: Piloting a Drone. In 15th IFIP TC.13 International Conference on Human-Computer Interaction – INTERACT 2015. Springer Berlin Heidelberg. 2015.
    Conference of A level (CORE Ranking) / 2014 Acceptance Rate: 29%. Presented.

Abstract: Active Brain Computer Interfaces (BCIs) allow people to exert voluntary control over a computer system: brain signals are captured and imagined actions (movements, concepts) are recognized after a training phase (from 10 minutes to 2 months). BCIs are confined to labs, with only a few dozen people using them outside regularly (e.g. assistance for impairments). We propose a “Co-learning BCI” (CLBCI) that reduces the amount of training and makes BCIs more suitable for recreational applications. We replicate an existing experiment where the BCI controls a drone and compare CLBCI to their Operant Conditioning (OC) protocol over three durations of practice (1 day, 1 week, 1 month). We find that OC works at 80% after a month of practice, but performance is only between 60 and 70% any earlier. Within a week of practice, CLBCI reaches a performance of around 75%. We conclude that CLBCI is better suited for recreational use. OC should be reserved for users for whom performance is the main concern.

Download this article here.

  • N. Kosmyna, F. Tarpin-Bernard, and B. Rivet. 2015. Adding Human Learning in Brain–Computer Interfaces (BCIs): Towards a Practical Control Modality. ACM Trans. Comput.-Hum. Interact. 22, 3, Article 12 (May 2015), 37 pages. DOI=10.1145/2723162 http://doi.acm.org/10.1145/2723162
    5-year Impact Factor: 1.37. Presented at CHI 2016.

Abstract: In this article we introduce CLBCI (Co-Learning for Brain Computer Interfaces), a BCI architecture based on co-learning, where users can give explicit feedback to the system rather than just receiving feedback. CLBCI is based on minimum distance classification with Independent Component Analysis (ICA) and allows for shorter training times compared to classical BCIs, as well as faster learning by users and a good performance progression. We further propose a new scheme for real-time two-dimensional visualization of classification outcomes using Wachspress coordinate interpolation. It allows us to represent classification outcomes for n classes in simple regular polygons. Our objective is to devise a BCI system that constitutes a practical interaction modality that can be deployed rapidly and used on a regular basis. We apply our system to an event-based control task in the form of a simple shooter game where we evaluate the learning effect induced by our architecture compared to a classical approach. We also evaluate how much user feedback and our visualization method contribute to the performance of the system.

Download this article here.
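
Wachspress coordinates are generalized barycentric coordinates for convex polygons, so one plausible reading of the visualization described above is: assign each of the n classes a vertex of a regular n-gon and place the feedback point at the barycentric combination of the vertices, weighted by the normalized classifier outputs. The sketch below illustrates that reading; it is my reconstruction, not the paper's code.

```python
# Hypothetical sketch: map n classifier scores to a 2-D point inside a
# regular n-gon whose vertices represent the classes (a reconstruction
# of the visualization idea, not the paper's implementation).
import numpy as np

def polygon_vertices(n):
    """Vertices of a regular n-gon on the unit circle, first vertex on top."""
    angles = 2 * np.pi * np.arange(n) / n + np.pi / 2
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (n, 2)

def classifier_point(scores):
    """Barycentric placement: normalized scores weight the class vertices."""
    w = np.asarray(scores, dtype=float)
    w = w / w.sum()                       # generalized barycentric weights
    return w @ polygon_vertices(len(w))   # (2,) point inside the polygon

# Example: 4 classes, classifier leaning towards class 0
print(classifier_point([0.6, 0.2, 0.1, 0.1]))
```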

  • N. Kosmyna, F. Tarpin-Bernard and B. Rivet. Drone, Your Brain, Ring Course: Accept the Challenge and Prevail! UBICOMP’14 ADJUNCT. 2014. 243-246. DOI 10.1145/2638728.2638785
    Conference of A+ level (CORE Ranking). Presented.

Abstract: Brain Computer Interface systems (BCIs) rely on lengthy training phases that can last up to months due to the inherent variability in brainwave activity between users. We propose a BCI architecture based on the co-learning between the user and the system through different feedback strategies. Thus, we achieve an operational BCI within minutes. We apply our system to the piloting of an AR.Drone 2.0 quadricopter with a series of hoops delimiting an exciting circuit. We show that our architecture provides better task performance than traditional BCI paradigms within a shorter time frame. We further demonstrate the enthusiasm of users towards our BCI-based interaction modality and how they find it much more enjoyable than traditional interaction modalities.

Download this article here.

  • N. Kosmyna, F. Tarpin-Bernard and B. Rivet. Bidirectional Feedback in Motor Imagery BCIs: Learn to Control a Drone within 5 Minutes. CHI’14 Extended Abstracts on Human Factors in Computing Systems. 2014. 479-482. DOI 10.1145/2559206.2574820
    Conference of A+ level (CORE Ranking). Presented over three days.
    Abstract: Brain Computer Interface systems rely on lengthy training phases that can last up to months due to the inherent variability in brainwave activity between users. We propose a BCI architecture based on the co-learning between the user and the system through different feedback strategies. Thus, we achieve an operational BCI within minutes. We apply our system to the piloting of an AR.Drone 2.0 quadricopter. We show that our architecture provides better task performance than traditional BCI paradigms within a shorter time frame. We further demonstrate the enthusiasm of users towards our BCI-based interaction modality and how they find it much more enjoyable than traditional interaction modalities.
    Download this article here.
  • N. Kos’myna, F. Tarpin-Bernard and B. Rivet. Towards a General Architecture for a Co-Learning of Brain Computer Interfaces. In Proceedings of the 6th International IEEE EMBS Conference on Neural Engineering, San Diego, USA, November 2013. DOI 10.1109/NER.2013.6696118

    Conference of A level (CORE Ranking). Presented.

Abstract: In this article we propose a software architecture for asynchronous BCIs based on co-learning, where both the system and the user jointly learn by providing feedback to one another. We propose the use of recent filtering techniques such as Riemannian geometry and ICA, followed by multiple classifications by both incremental supervised classifiers and minimally supervised classifiers. The classifier outputs are then combined adaptively according to the feedback, using recursive neural networks.

Download this article here.
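
The abstract above names Riemannian geometry among its filtering building blocks. A common concrete instance in the BCI literature is minimum-distance-to-mean (MDM) classification on trial covariance matrices; the sketch below uses the log-Euclidean metric for simplicity and is a generic illustration, not the co-learning architecture proposed in the paper.

```python
# Generic illustration of Riemannian minimum-distance-to-mean (MDM)
# classification of EEG trials via covariance matrices; not the paper's
# architecture. Log-Euclidean metric chosen for simplicity.
import numpy as np
from scipy.linalg import logm, expm

def covariance(trial):
    """Spatial covariance of one trial, shape (n_channels, n_samples)."""
    X = trial - trial.mean(axis=1, keepdims=True)
    return X @ X.T / X.shape[1]

def log_euclidean_mean(covs):
    """Mean of SPD matrices under the log-Euclidean metric."""
    return expm(np.mean([np.real(logm(C)) for C in covs], axis=0))

def distance(A, B):
    """Log-Euclidean distance between two SPD matrices."""
    return np.linalg.norm(np.real(logm(A)) - np.real(logm(B)), "fro")

def fit(trials, labels):
    """One mean covariance matrix per class."""
    return {c: log_euclidean_mean([covariance(t)
                                   for t, l in zip(trials, labels) if l == c])
            for c in set(labels)}

def predict(trial, class_means):
    """Assign the class whose mean covariance is closest to the trial's."""
    C = covariance(trial)
    return min(class_means, key=lambda c: distance(C, class_means[c]))
```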

 

  • N. Kos’myna and F. Tarpin-Bernard. Evaluation and comparison of a multimodal combination of BCI paradigms and Eye tracking with affordable consumer-grade hardware in a gaming context. 2013. In IEEE Transactions on Computational Intelligence and AI in Games. Volume PP, Issue 99. DOI http://dx.doi.org/10.1109/TCIAIG.2012.2230003

    5-year Impact Factor: 1.167.

Abstract: This paper evaluates the usability and efficiency of three multimodal combinations of brain-computer interface (BCI) and eye tracking in the context of a simple puzzle game involving tile selection and rotations using affordable consumer-grade hardware. It presents preliminary results indicating that the BCI interaction is interesting but very tiring and imprecise, and may be better suited as an optional and complementary modality to other interaction techniques.

Download this article here.

 

  • N. Kos’myna and F. Tarpin-Bernard. Une combinaison de paradigmes d’interaction cerveau-ordinateur et suivi du regard pour des interactions multimodales. In Ergonomie et Interaction Homme-Machine (ErgoIHM 2012). 2012. DOI 10.1145/2671470.2671486

Abstract: This study evaluates the usability and efficiency of three multimodal combinations of Brain-Computer Interfaces (BCI) and eye tracking in the context of a simple puzzle game involving tile selections and rotations. Results indicate that although BCI interaction raises interest, it is still very tiring and imprecise. However, BCIs based on SSVEP are efficient in cooperation with gaze.

Download this article here.

  • N. Kos’myna. A Multimodal Combination of Brain Computer Interfaces and Eye Tracking. Master’s Thesis. 2012.
