Signal processing to drive human-computer interaction: EEG and eye-controlled interfaces
Format: Electronic eBook
Language: English
Published: London, UK: The Institution of Engineering and Technology, 2020. ©2020
Series: IET control, robotics and sensors series; 129
Online Access: Connect to this title online (unlimited simultaneous users allowed; 325 uses per year)
Table of Contents:
- 1. Introduction / Ioannis Kompatsiaris
- 1.1. Background
- 1.2. Rationale
- 1.3. Book objectives
- pt. I Reviewing existing literature on the benefits of BCIs, studying the computer use requirements and modeling the (dis)abilities of people with motor impairment
- 2. The added value of EEG-based BCIs for communication and rehabilitation of people with motor impairment / Ioannis Kompatsiaris
- 2.1. Introduction
- 2.2. BCI systems
- 2.3. Review question
- 2.4. Methods
- 2.4.1. Search strategy
- 2.4.2. Types of participants and model systems
- 2.4.3. Data synthesis - description of studies - target population characteristics
- 2.5. EEG-based BCI systems for people with motor impairment
- 2.5.1. EEG-based BCIs for communication and control
- 2.5.2. EEG-based BCIs for rehabilitation and training
- 2.6. Discussion
- 2.7. Summary
- References
- 3. Brain-computer interfaces in a home environment for patients with motor impairment - the MAMEM use case / Ioannis Danglis
- 3.1. Introduction
- 3.1.1. Parkinson's disease
- 3.1.2. Patients with cervical spinal cord injury
- 3.1.3. Patients with neuromuscular diseases
- 3.2. Computer habits and difficulties in computer use
- 3.2.1. Patients with PD
- 3.2.2. Patients with cervical spinal cord injuries
- 3.2.3. Patients with NMDs
- 3.3. MAMEM platform use in home environment
- 3.3.1. Subject selection
- 3.3.2. Method
- 3.3.3. Results
- 3.4. Summary
- References
- 4. Persuasive design principles and user models for people with motor disabilities / Cees Midden
- 4.1. Methods for creating user models for the assistive technology
- 4.1.1. User profiles
- 4.1.2. Personas
- 4.2. Persuasive strategies to improve user acceptance and use of an assistive device
- 4.2.1. Selection of persuasive strategies
- 4.2.2. Developing persuasive strategies for Phase I: User acceptance and training
- 4.2.3. Developing persuasive strategies for Phase II: Social inclusion
- 4.2.4. Conclusions
- 4.3. Effectiveness of the proposed persuasive and personalization design elements
- 4.3.1. Evaluation of Phase I field trials
- 4.3.2. Evaluation of the assistive technology in a lab study
- 4.4. Implications for persuasive design requirements
- 4.4.1. Implication for user profiles and personas
- 4.4.2. Updated cognitive user profile
- 4.4.3. Updated requirements for personalization
- 4.4.4. Updated requirements for persuasive design
- 4.4.5. Implications for Phase II persuasive design strategies
- 4.4.6. Conclusions
- 4.5. Summary
- References
- pt. II Algorithms and interfaces for interaction control through eyes and mind
- 5. Eye tracking for interaction: adapting multimedia interfaces / Steffen Staab
- 5.1. Tracking of eye movements
- 5.1.1. Anatomy of the eye
- 5.1.2. Techniques to track eye movements
- 5.1.3. Gaze signal processing
- 5.2. Eye-controlled interaction
- 5.2.1. Selection methods
- 5.2.2. Unimodal interaction
- 5.2.3. Multimodal interaction
- 5.2.4. Emulation software
- 5.3. Adapted multimedia interfaces
- 5.3.1. Adapted single-purpose interfaces
- 5.3.2. Framework for eye-controlled interaction
- 5.3.3. Adaptation of interaction with multimedia in the web
- 5.4. Contextualized integration of gaze signals
- 5.4.1. Multimedia browsing
- 5.4.2. Multimedia search
- 5.4.3. Multimedia editing
- 5.5. Summary
- References
- 6. Eye tracking for interaction: evaluation methods / Steffen Staab
- 6.1. Background and terminology
- 6.1.1. Study design
- 6.1.2. Participants
- 6.1.3. Experimental variables
- 6.1.4. Measurements
- 6.2. Evaluation of atomic interactions
- 6.2.1. Evaluation of gaze-based pointing and selection
- 6.2.2. Evaluation of gaze-based text entry
- 6.3. Evaluation of application interfaces
- 6.3.1. Comparative evaluation
- 6.3.2. Feasibility evaluation
- 6.4. Summary
- References
- 7. Machine-learning techniques for EEG data / Ioannis Kompatsiaris
- 7.1. Introduction
- 7.1.1. What is the EEG signal?
- 7.1.2. EEG-based BCI paradigms
- 7.1.3. What is machine learning?
- 7.1.4. What do you want to learn in EEG analysis for BCI application?
- 7.2. Basic tools of supervised learning in EEG analysis
- 7.2.1. Generalized Rayleigh quotient function
- 7.2.2. Linear regression modeling
- 7.2.3. Maximum likelihood (ML) parameter estimation
- 7.2.4. Bayesian modeling of ML
- 7.3. Learning of spatial filters
- 7.3.1. Canonical correlation analysis
- 7.3.2. Common spatial patterns
- 7.4. Classification algorithms
- 7.4.1. Linear discriminant analysis
- 7.4.2. Least squares classifier
- 7.4.3. Bayesian LDA
- 7.4.4. Support vector machines
- 7.4.5. Kernel-based classifier
- 7.5. Future directions and other issues
- 7.5.1. Adaptive learning
- 7.5.2. Transfer learning and multitask learning
- 7.5.3. Deep learning
- 7.6. Summary
- References
- 8. BCIs using steady-state visual-evoked potentials / Ioannis Kompatsiaris
- 8.1. Introduction
- 8.2. Regression-based SSVEP recognition systems
- 8.2.1. Multivariate linear regression (MLR) for SSVEP
- 8.2.2. Sparse Bayesian LDA for SSVEP
- 8.2.3. Kernel-based BLDA for SSVEP (linear kernel)
- 8.2.4. Kernels for SSVEP
- 8.2.5. Multiple kernel approach
- 8.3. Results
- 8.4. Summary
- References
- 9. BCIs using motor imagery and sensorimotor rhythms / Ioannis Kompatsiaris
- 9.1. Introduction to sensorimotor rhythm (SMR)
- 9.2. Common processing practices
- 9.3. MI BCIs for patients with motor disabilities
- 9.3.1. MI BCIs for patients with sudden loss of motor functions
- 9.3.2. MI BCIs for patients with gradual loss of motor functions
- 9.4. MI BCIs for NMD patients
- 9.4.1. Condition description
- 9.4.2. Experimental design
- 9.5. Toward a self-paced implementation
- 9.5.1. Related work
- 9.5.2. SVM-ensemble for self-paced MI decoding
- 9.5.3. In quest of self-paced MI decoding
- 9.6. Summary
- References
- 10. Graph signal processing analysis of NIRS signals for brain-computer interfaces / Ioannis Kompatsiaris
- 10.1. Introduction
- 10.2. NIRS dataset
- 10.3. Materials and methods
- 10.3.1. Graph signal processing basics
- 10.3.2. Dirichlet energy over a graph
- 10.3.3. Graph construction algorithm
- 10.3.4. Feature extraction
- 10.3.5. Classification
- 10.3.6. Implementation issues
- 10.4. Results
- 10.5. Discussion
- 10.6. Summary
- References
- pt. III Multimodal prototype interfaces that can be operated through eyes and mind
- 11. Error-aware BCIs / Ioannis Kompatsiaris
- 11.1. Introduction to error-related potentials
- 11.2. Spatial filtering
- 11.2.1. Subspace learning
- 11.2.2. Increasing signal-to-noise ratio
- 11.3. Measuring the efficiency - ICRT
- 11.4. Error-aware SSVEP-based BCI
- 11.4.1. Experimental protocol
- 11.4.2. Dataset
- 11.4.3. Implementation details - preprocessing
- 11.4.4. Results
- 11.5. Error-aware gaze-based keyboard
- 11.5.1. Methodology
- 11.5.2. Typing task and physiological recordings
- 11.5.3. Pragmatic typing protocol
- 11.5.4. Data analysis
- 11.5.5. System adjustment and evaluation
- 11.5.6. Results
- 11.6. Summary
- References
- 12. Multimodal BCIs - the hands-free Tetris paradigm / Ioannis Kompatsiaris
- 12.1. Introduction
- 12.2. Gameplay design
- 12.3. Algorithms and associated challenges
- 12.3.1. Navigating with the eyes
- 12.3.2. Rotating with the mind
- 12.3.3. Regulating drop speed with stress
- 12.4. Experimental design and game setup
- 12.4.1. Apparatus
- 12.4.2. Events, sampling and synchronisation
- 12.4.3. EEG sensors
- 12.4.4. Calibration
- 12.5. Data processing and experimental results
- 12.5.1. Data segmentation
- 12.5.2. Offline classification
- 12.5.3. Online classification framework
- 12.6. Summary
- References
- 13. Conclusions / Ioannis Kompatsiaris
- 13.1. Wrap-up
- 13.2. Open questions
- 13.3. Future perspectives.