INTRODUCTION
Brain-machine interfaces (also called brain-computer interfaces) constitute a growing field of research that has witnessed rapid progress over recent years. One of the most challenging areas is neuroprosthetics, the control of robotic and prosthetic devices directly from brain signals, where, in addition to high accuracy in decoding mental commands, fast decision-making is critical [1], [2]. Demonstrations of such brain-controlled robots and prostheses range from robot arms [3], [4] and hand orthoses [5], [6] to mobile robots [1], [7] and wheelchairs [2], [8], [9], [10]. Most of these works are based on asynchronous spontaneous approaches, where the subject voluntarily modulates sensorimotor brain activity, which seems to be the most natural and suitable way to control neuroprosthetic devices.
Noninvasive electroencephalography (EEG) is a convenient, safe, and inexpensive recording method, which makes it ideal for bringing brain-machine interface (BMI) technology to a large population. However, because of the inherent properties of EEG, BMIs based on such signals are limited by a low information transfer rate. Nonetheless, researchers have demonstrated the feasibility of mentally controlling complex robotic devices from EEG. A key factor in doing so is the use of smart interaction designs, which in the field of robotics corresponds to shared control [11], [12], [13]. In the case of neuroprosthetics, Millán's group has pioneered the use of shared control that takes the continuous estimation of the operator's mental intent and provides assistance to achieve tasks [1], [2], [7], [9]. A critical aspect of shared control for BMI is coherent feedback: the behavior of the robot should be intuitive to the user, and the robot should unambiguously understand the user's mental commands. Otherwise, people find it difficult to form mental models of the neuroprosthetic device. Furthermore, thanks to the principle of mutual learning, where the user and the BMI are coupled together and adapt to each other, humans learn to operate the brain-actuated device very rapidly, typically in a few hours split across a few days [14].
Although the whole field of neuroprosthetics targets disabled people with motor impairments as end-users, all the successful demonstrations of brain-controlled robots mentioned above, except [5], have actually been carried out with either healthy human subjects or monkeys. In this paper, we report results with two patients who mentally drove a telepresence robot from their clinic, more than 100 km away, and compare their performance to that of a set of healthy users carrying out the same tasks. Remarkably, although the patients had never visited the location where the telepresence robot was operating, they achieved performances similar to those of healthy users who were familiar with the environment. The results with some of the healthy users were previously reported in [7].
In the following sections we describe our BMI approach, the mobile robot, and our shared control implementation. We then present the experimental setup and the results achieved.
METHODS
A. Asynchronous BMI Approach
To drive our telepresence robot, subjects use an asynchronous spontaneous BMI where mental commands can be delivered at any moment, without the need for any external stimulation or cue [1], [14]. To do so, users learn to voluntarily modulate EEG oscillatory rhythms by executing two motor imagery tasks (i.e., imagination of movements such as right hand vs. left hand, or feet vs. left hand). Each of these mental tasks is associated with a steering command, either right or left. Furthermore, the robot executes a third driving command, forward, when no mental command is delivered.
For our experiments, EEG was recorded with a portable 16-channel g.tec amplifier at a sampling rate of 512 Hz and band-pass filtered between 0.1 Hz and 100 Hz. Each channel was then spatially filtered with a Laplacian derivation before estimating its power spectral density (PSD) in the 4-48 Hz band with 2 Hz resolution over the last second. The PSD was computed every 62.5 ms (i.e., 16 times per second) using the Welch method with five overlapping 500 ms Hanning windows, each shifted by 25% of the window length.
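To make this pipeline concrete, the following is a minimal Python sketch of the feature-extraction step under stated assumptions; it is an illustration, not the authors' code. The Laplacian neighbor map and the buffering are not specified in the paper, scipy.signal.welch stands in for the PSD implementation, and noverlap=192 is chosen because a 64-sample shift (25% of the window length, i.e., 75% overlap) is what makes five 500 ms segments tile a one-second buffer exactly.

```python
import numpy as np
from scipy.signal import welch

FS = 512          # sampling rate (Hz)
SEG = FS // 2     # 500 ms Welch segments (256 samples)
NOVERLAP = 192    # 64-sample shift -> five segments per 1 s buffer

def laplacian(eeg, neighbors):
    """Surface Laplacian: subtract from each channel the mean of its
    neighbors. eeg: (16, n_samples); neighbors: dict mapping a channel
    index to a list of neighbor indices (montage assumed, not given)."""
    out = eeg.copy()
    for ch, nbs in neighbors.items():
        out[ch] -= eeg[nbs].mean(axis=0)
    return out

def psd_features(window):
    """PSD features for the last second of Laplacian-filtered EEG.

    window: (16, 512) array. nfft=256 gives 512/256 = 2 Hz bins;
    keeping 4-48 Hz yields 23 frequencies per channel."""
    freqs, pxx = welch(window, fs=FS, window='hann',
                       nperseg=SEG, noverlap=NOVERLAP, nfft=SEG)
    band = (freqs >= 4) & (freqs <= 48)
    return pxx[:, band]    # shape (16, 23)
```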
The input to the classifier embedded in the BMI is a subset of those features (16 channels × 23 frequencies). We use the algorithm described in [9] to estimate the relevance of the features for discriminating the mental commands delivered to the robot. This algorithm is run separately on EEG data recorded during several calibration sessions (3 sessions for the experiments reported here), and we then select the features whose discriminant values are consistently high in all sessions. This ensures that we choose features that are both discriminant and stable over time. These initial features are those that the user can naturally modulate, which facilitates and accelerates user training.
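The relevance estimate of [9] is not reproduced here; as a rough illustration of the session-wise selection logic only, the sketch below uses a per-feature Fisher score as a stand-in for that discriminant-power estimate and keeps the features ranked highly in every calibration session.

```python
import numpy as np

def fisher_score(X, y):
    """Per-feature Fisher score for two classes: squared difference of
    class means over the sum of class variances. X: (trials, features)
    flattened PSD samples; y: (trials,) binary labels (0 or 1)."""
    c0, c1 = X[y == 0], X[y == 1]
    return (c0.mean(0) - c1.mean(0)) ** 2 / (c0.var(0) + c1.var(0) + 1e-12)

def stable_features(sessions, k=20):
    """Keep the features that rank among the top k most discriminant
    in *all* calibration sessions (3 sessions in these experiments);
    k is an illustrative choice."""
    top_k = [set(np.argsort(fisher_score(X, y))[-k:]) for X, y in sessions]
    return sorted(set.intersection(*top_k))
```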
The classifier is a statistical Gaussian classifier that computes a probability distribution over the mental commands for each EEG sample [1]. The BMI integrates the outputs of the classifier over time until it accumulates enough evidence about the user's mental intent. To do so, the BMI first rejects classifier decisions that fall below a confidence probability threshold. It then accumulates the surviving decisions using an exponential smoothing probability integration framework, Eq. 1:
p(y_t) = α · p(y_{t−1}) + (1 − α) · p(y_t | x_t)        (1)
where p(y_t | x_t) is the probability distribution estimated by the classifier for the current EEG sample, p(y_{t−1}) the previous integrated distribution, and α the integration parameter. That is, probabilities are integrated until one class reaches a certainty threshold, signaling the subject's intent to change the robot's direction. At this moment the mental command is delivered and the probabilities are reset to a uniform distribution. Such an evidence accumulation (or integration) framework yields smooth and predictable feedback, thus helping user training by avoiding confusing and frustrating fluctuations.
This evidence accumulation framework also plays a crucial role in preventing users from delivering arbitrary commands when their attention shifts temporarily to some other task, thanks to the smooth convergence to the user's intention. This property has another benefit, namely supporting intentional non-control, i.e., the ability to intentionally not deliver any mental command if the user does not want to change the behavior of the neuroprosthesis. For our telepresence robot, this means that if no mental command is delivered, it will continue moving forward or stay still (in case it is stopped in front of a target). As mentioned above, intentional non-control translates into a third driving command that requires no extra cognitive effort from the subject.
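The loop below sketches this rejection-and-integration scheme in Python. The rejection threshold, certainty threshold, and α are illustrative placeholders, as the paper does not report the values used in the experiments.

```python
import numpy as np

ALPHA = 0.9      # integration parameter of Eq. 1 (value assumed)
REJECT = 0.55    # confidence threshold for single samples (assumed)
DELIVER = 0.8    # certainty threshold to deliver a command (assumed)
COMMANDS = ('left', 'right')

class EvidenceAccumulator:
    def __init__(self):
        self.p = np.full(2, 0.5)          # uniform over the two commands

    def step(self, p_sample):
        """p_sample: classifier posterior p(y_t|x_t) for one EEG sample.
        Returns a steering command, or None (intentional non-control:
        the robot keeps its current behavior, e.g. moving forward)."""
        if p_sample.max() < REJECT:       # reject low-confidence samples
            return None
        self.p = ALPHA * self.p + (1 - ALPHA) * p_sample   # Eq. 1
        if self.p.max() >= DELIVER:       # enough accumulated evidence
            cmd = COMMANDS[int(self.p.argmax())]
            self.p = np.full(2, 0.5)      # reset to uniform distribution
            return cmd                    # deliver the mental command
        return None
```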
B. Telepresence Robot
Our telepresence robot is a Robotino™ by FESTO, a small circular mobile platform (diameter 38 cm, height 23 cm) with three holonomic wheels (Fig. 1). It is equipped with nine infrared sensors capable of detecting obstacles up to ∼15 cm away (depending on light conditions) and a webcam that can also be used for obstacle detection, although for the experiments reported in this paper we rely only on the infrared sensors. For telepresence purposes, we have added a notebook with an integrated camera: the BMI user can see the environment through the notebook camera and can be seen by others on the notebook screen. The video/audio communication between the telepresence robot and the subject's PC is done by means of commercial VoIP software (Skype). This configuration allows the BMI user to interact remotely with people. For the sake of safety, the robot stops automatically in case it loses the network connection with the BMI.
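A watchdog on the robot side could implement this safety stop as sketched below; this is an assumption-laden illustration, since neither the actual Robotino interface calls nor the timeout value are given in the paper.

```python
import time

LINK_TIMEOUT = 1.0   # seconds of silence before stopping (value assumed)

class SafetyWatchdog:
    """Stops the robot when the connection to the BMI side goes quiet."""

    def __init__(self, robot):
        self.robot = robot                # hypothetical Robotino wrapper
        self.last_seen = time.monotonic()

    def on_bmi_message(self):
        """Call on every packet received from the BMI side."""
        self.last_seen = time.monotonic()

    def poll(self):
        """Call periodically from the robot's control loop."""
        if time.monotonic() - self.last_seen > LINK_TIMEOUT:
            self.robot.set_velocity(0.0, 0.0)   # hypothetical stop call
```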
C. Shared Control
Driving a mobile platform remotely in a natural environment can be a complex and frustrating task. The user has to deal with many difficulties, from the variability of an unknown remote environment to the reduced field of view of the control camera. In this scenario, shared control facilitates navigation in two ways: on the one hand, by taking care of low-level details (i.e., obstacle detection and avoidance for safety reasons); on the other hand, by trying to interpret the user's intentions to reach possible targets. In no case does shared control autonomously decide the direction of travel; in this way, the subject keeps full control of the driving of the robot.
However, the concept of an obstacle or a target is not absolute; it changes according to the user's will. For instance, a chair in the path has to be considered an obstacle if the user manifests the intent to avoid it. Conversely, it might be the target if the goal is to talk to somebody sitting in it. It is the task of shared control to deal with these kinds of situations by weighting possible targets and obstacles.
Our implementation of shared control is based on the dynamical systems concept from the fields of robotics and control theory [15]. Two dynamical systems control two independent motion parameters: the angular and translational velocities of the robot. The systems can be perturbed by adding attractors or repellors in order to generate the desired behaviors. The dynamical system implements the following navigation modality. The default behavior of the device is to move forward at a constant speed. If repellors or attractors are added to the system, the motion of the device changes so as to avoid the obstacles or reach the targets. At the same time, the velocity is determined according to the proximity of the repellors surrounding the robot. In this framework, if shared control is disabled, no repellor or attractor is added, and the robot changes direction only according to the user's commands; if an obstacle is detected in the close vicinity of the robot, the device stops in front of it, waiting for the next mental command. If shared control is enabled, its role is to decide what has to be considered an attractor or a repellor according to the commands delivered by the user.
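The sketch below illustrates heading and speed dynamics of this kind, in the spirit of [15]: attractors pull the heading toward target directions, repellors push it away from obstacle directions, and the forward speed shrinks with the proximity of the nearest repellor. All gains, ranges, and the stop distance are illustrative assumptions; the paper does not give the parameterization it used.

```python
import numpy as np

LAMBDA_ATT = 2.0           # attractor strength (assumed)
LAMBDA_REP = 3.0           # repellor strength (assumed)
SIGMA = np.deg2rad(30.0)   # angular range of a repellor (assumed)
V_MAX = 0.15               # default forward speed in m/s (assumed)
D_STOP = 0.15              # stop distance, matching the ~15 cm IR range

def heading_rate(phi, attractors, repellors):
    """Angular velocity d(phi)/dt from attractor/repellor contributions.
    attractors/repellors: lists of (direction psi, distance d) pairs
    in robot-centric coordinates."""
    dphi = 0.0
    for psi, _ in attractors:
        dphi += -LAMBDA_ATT * np.sin(phi - psi)            # pull toward psi
    for psi, d in repellors:
        angular = (phi - psi) * np.exp(-(phi - psi) ** 2 / (2 * SIGMA ** 2))
        dphi += LAMBDA_REP * angular / max(d, 1e-3)        # push away from psi
    return dphi

def forward_speed(repellors):
    """Translational velocity: slow down near repellors, stop at contact."""
    if not repellors:
        return V_MAX                       # default: constant forward motion
    d_min = min(d for _, d in repellors)
    return float(V_MAX * np.clip((d_min - D_STOP) / D_STOP, 0.0, 1.0))
```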
D. Subjects and Task
Two users with motor disabilities (d1, female, and d2, male, both suffering from myopathy) and four healthy subjects (s1-s4, all male) volunteered to participate in our experiments. All subjects had previously been trained with the BMI, although only subjects s2 and s3 were BMI experts. However, none of the subjects had previously worked with the telepresence robot. Unfortunately, subject d2 could not finish all the experiments.
The experimental environment was a natural working space with different rooms and corridors (Fig. 2). The robot started from position R, and there were four target positions, T1 to T4. The subject's task was to drive the robot along one of three possible paths, P1, P2, or P3, each consisting of two targets and a drive back to the start position. The experimental space contained natural obstacles (i.e., desks, chairs, furniture, and people) and six additional objects placed in the middle of the paths (shown in Fig. 2 as small squares with a circle).
Subjects d1 and d2 mentally drove the telepresence robot from their clinic, more than 100 km away from the experimental environment. The healthy subjects, in contrast, were seated at position S with their backs to the environment. During a trial, the subject had to drive the robot along one of the paths, and subjects were asked to perform the task as fast as possible. A trial was considered successful if the robot traveled to the two target positions and back to the start position within a limited amount of time (12 min).
Since in a previous study with a pool of these healthy subjects we had already determined the beneficial role of shared control for a brain-controlled telepresence robot [7], where it yielded significant improvements in performance, here we focus on only two experimental conditions: BMI with shared control and manual control without shared control. In the manual condition the subject drove the robot by delivering commands through a keyboard or buttons and traveled each path once. In the BMI condition the subjects drove the robot along each path twice. Paths were chosen in pseudorandom order, and BMI control always preceded manual control to avoid any learning effect. For each trial we recorded the total time and the number of commands sent by the user (manual or mental). Subjects were instructed to generate paths as fast and as short as possible.
EXPERIMENTAL RESULTS
The first striking result of our experiments is that all subjects succeeded in all the trials in both conditions, even those with short BMI experience. As an indication of the challenge of driving the telepresence robot along the desired paths, the average duration of a trajectory over all subjects while delivering manual commands was 264 s. This time can be considered the reference baseline. But the most important result of our experiments is that shared control allowed all the subjects to mentally drive the telepresence robot almost as fast as when they performed the task delivering manual commands without shared control. Figure 3(a) shows the time needed by all six subjects to drive the robot along the three paths using BMI with shared control, as a percentage increase (or decrease) with respect to manual control without shared control. The ratio of the average time over all paths for BMI (with shared control) vs. manual commands (without shared control) is: s1, 1.22; s2, 1.11; s3, 1.08; s4, 1.19; d1, 1.57; d2, 0.94. The only exception to the similarity of performance between BMI and manual control is subject d1 in the second path. During that experiment the subject delivered some wrong mental commands, believing that the target was elsewhere, and it took some time and additional commands to bring the robot to the correct target. If that second path is not considered, the ratio of the average time for subject d1 is 1.10, in line with all other subjects.
Shared control also helped subjects reduce their cognitive workload, as measured by the number of commands (manual or mental) they needed to deliver to achieve the task (Fig. 3(b)). In the case of BMI, shared control led to significant decreases for some users. Again, subject d1 in the second path is the exception, for the reasons mentioned above. On average, and excluding this second path of subject d1, the required number of mental commands is essentially the same as the number of manual commands, the ratio being 1.01.
Finally, when comparing the performance of the users with disabilities against the healthy users, the ratio of average BMI to manual time to complete the paths is 1.07 for the former and 1.12 for the latter; again, very similar performances for both kinds of subjects. In this calculation, we have not considered the second path of subject d1.
CONCLUSIONS
In this paper we present the first results of users with disabilities mentally controlling a telepresence robot, a rather complex task as the robot is continuously moving and the user must control it for a long period of time (well over 6 minutes) to cover the whole path. These two users drove the telepresence robot from their clinic, more than 100 km away. Remarkably, although the patients had never visited the location where the telepresence robot was operating, they achieved performances similar to those of a group of four healthy users who were familiar with the environment.

In particular, the experimental results reported in this paper demonstrate the benefits of shared control for brain-controlled telepresence robots. It allowed all subjects (including BMI novices such as our users with disabilities) to complete a rather complex task in a time, and with a number of commands, similar to those required when delivering manual commands without shared control. Thus, we argue that shared control reduces subjects' cognitive workload as it (i) assists them in coping with low-level navigation issues (such as obstacle avoidance) and (ii) helps BMI users to keep their attention for longer periods of time.
We also observed that, to drive a brain-controlled robot, subjects not only need rather good BMI performance, but they also need to be fast in delivering the appropriate mental command at the correct time; otherwise they will miss key maneuvers needed to achieve the task efficiently. In our experience, fast decision-making is critical, and it depends on the proficiency of the subject as well as on his/her attention level. Along the same line, another critical ability that BMI subjects must exhibit is intentional non-control, which allows them to rest while the neuroprosthesis is in a state they do not want to change (e.g., moving straight along a corridor). Our evidence accumulation framework implicitly supports it. Nevertheless, we will continue working on principled approaches to handle intentional non-control in combination with shared control so that users can deliver commands only when they wish to do so, thus enriching subjects' telepresence experience.
Figure 1. The telepresence robot: the FESTO Robotino platform with the notebook used for video/audio communication.
Figure 2. The experimental environment, with start position R, target positions T1-T4, paths P1-P3, and subject position S.
Figure 3. (a) Time and (b) number of commands needed to complete the three paths with BMI plus shared control, relative to manual control without shared control.
REFERENCES

[1] J. d. R. Millán, F. Renkens, J. Mouriño, and W. Gerstner, "Noninvasive brain-actuated control of a mobile robot by human EEG," IEEE Trans. Biomed. Eng., vol. 51, pp. 1026–1033, 2004.
[2] J. d. R. Millán, F. Galán, D. Vanhooydonck, E. Lew, J. Philips, and M. Nuttin, "Asynchronous non-invasive brain-actuated control of an intelligent wheelchair," in Proc. 31st Annual Int. Conf. IEEE Eng. Med. Biol. Soc., 2009, pp. 3361–3364.
[3] J. M. Carmena, M. A. Lebedev, R. E. Crist, J. E. O'Doherty, D. M. Santucci, D. F. Dimitrov, P. G. Patil, C. S. Henriquez, and M. A. L. Nicolelis, "Learning to control a brain-machine interface for reaching and grasping by primates," PLoS Biol., vol. 1, pp. 193–208, 2003.
[4] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz, "Cortical control of a prosthetic arm for self-feeding," Nature, vol. 453, pp. 1098–1101, 2008.
[5] G. R. Müller-Putz, R. Scherer, G. Pfurtscheller, and R. Rupp, "EEG-based neuroprosthesis control: A step towards clinical practice," Neurosci. Lett., vol. 382, pp. 169–174, 2005.
[6] M. Tavella, R. Leeb, R. Rupp, and J. d. R. Millán, "Towards natural non-invasive hand neuroprostheses for daily living," in Proc. 32nd Annual Int. Conf. IEEE Eng. Med. Biol. Soc., 2010.
[7] L. Tonin, R. Leeb, M. Tavella, S. Perdikis, and J. d. R. Millán, "The role of shared-control in BCI-based telepresence," in Proc. 29th Annual Int. Conf. IEEE Syst. Man Cybern. Soc., 2010.
[8] B. Rebsamen, E. Burdet, C. Guan, H. Zhang, C. L. Teo, Q. Zeng, C. Laugier, and M. H. Ang, "Controlling a wheelchair indoors using thought," IEEE Intelligent Systems, vol. 22, pp. 18–24, 2007.
[9] F. Galán, M. Nuttin, E. Lew, P. W. Ferrez, G. Vanacker, J. Philips, and J. d. R. Millán, "A brain-actuated wheelchair: Asynchronous and noninvasive brain-computer interfaces for continuous control of robots," Clin. Neurophysiol., vol. 119, pp. 2159–2169, 2008.
[10] I. Iturrate, J. Antelis, A. Kübler, and J. Minguez, "Non-invasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation," IEEE Trans. Robot., vol. 25, pp. 614–627, 2009.
[11] O. Flemisch, A. Adams, S. R. Conway, K. H. Goodrich, M. T. Palmer, and P. C. Schutte, "The H-Metaphor as a guideline for vehicle automation and interaction," NASA, Tech. Rep. NASA/TM-2003-212672, 2003.
[12] D. Vanhooydonck, E. Demeester, M. Nuttin, and H. Van Brussel, "Shared control for intelligent wheelchairs: An implicit estimation of the user intention," in Proc. 1st Int. Workshop Advances in Service Robot., 2003, pp. 176–182.
[13] T. Carlson and Y. Demiris, "Human-wheelchair collaboration through prediction of intention and adaptive assistance," in Proc. IEEE Int. Conf. Robot. Autom., 2008, pp. 3926–3931.
[14] J. d. R. Millán, P. W. Ferrez, F. Galán, E. Lew, and R. Chavarriaga, "Non-invasive brain-machine interaction," Int. J. Pattern Recognit. Artif. Intell., vol. 22, pp. 959–972, 2008.
[15] G. Schöner, M. Dose, and C. Engels, "Dynamics of behavior: Theory and applications for autonomous robot architectures," Robot. Autonomous Syst., vol. 16, pp. 213–245, 1995.