MIndGrasp: A New Training and Testing Framework for Motor Imagery Based 3-Dimensional Assistive Robotic Control

03/01/2020
by Daniel Freer, et al.

With increasing global age and disability, assistive robots are becoming more necessary, and brain-computer interfaces (BCIs) are often proposed as a means of understanding the intent of a disabled person who needs assistance. Most frameworks for electroencephalography (EEG)-based motor imagery (MI) BCI control rely on direct control of the robot in Cartesian space. For 3-dimensional movement, however, this requires six motor imagery classes, a distinction that is difficult even for experienced BCI users. In this paper, we present a simulated training and testing framework which reduces the number of motor imagery classes to four while still grasping objects in three-dimensional space. This is achieved through semi-autonomous eye-in-hand vision-based control of the robotic arm, while the user-controlled BCI commands movement to the left and right, as well as movement toward and away from the object of interest. Additionally, the framework includes a method of training a BCI directly on the assistive robotic system, which should transfer more readily to a real-world assistive robot than a standard training protocol such as Graz-BCI. The presented results do not include real human EEG data; they are instead intended as a baseline for comparison with future human data and further improvements to the system.
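To make the shared-control idea concrete, the sketch below shows one way four decoded MI classes could drive lateral and approach motion while a vision loop autonomously corrects the remaining axis. This is a minimal illustration only: the class labels, sign conventions, gains, and the blending rule are all assumptions for this sketch, not the controller described in the paper.

```python
import numpy as np

# Hypothetical labels for the paper's four user-controlled MI classes:
# left/right motion plus motion toward/away from the target object.
MI_CLASSES = ("left", "right", "toward", "away")

def blended_velocity(mi_class, ee_pos, obj_pos, user_speed=0.05, gain=0.5):
    """Sketch of a shared-control end-effector velocity command (m/s).

    The decoded MI class drives lateral and approach motion, while an
    object position estimated by vision (e.g. from an eye-in-hand
    camera) autonomously servos the remaining (vertical) axis so the
    gripper stays aligned with the object for grasping.
    """
    to_obj = obj_pos - ee_pos
    approach = to_obj / (np.linalg.norm(to_obj) + 1e-9)  # unit vector toward object

    v = np.zeros(3)
    if mi_class == "left":
        v[1] = +user_speed           # lateral motion in the base frame
    elif mi_class == "right":
        v[1] = -user_speed
    elif mi_class == "toward":
        v = user_speed * approach    # move along the approach direction
    elif mi_class == "away":
        v = -user_speed * approach

    # Autonomous correction: servo the gripper height toward the
    # object's height regardless of the decoded class.
    v[2] += gain * to_obj[2]
    return v

# Example: decoded "toward" class, object ahead of and slightly above the gripper.
cmd = blended_velocity("toward",
                       ee_pos=np.array([0.3, 0.0, 0.4]),
                       obj_pos=np.array([0.6, 0.1, 0.45]))
print(cmd)
```

The point of the sketch is the division of labor: only two axes ever depend on the (error-prone) BCI output, which is why four classes suffice for a three-dimensional grasp.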
