Home Robot Control for People With Disabilities

One of the users involved in the Georgia Tech research is Henry Evans, who has been working with a PR2 (and other robotic systems) for many years through the Robots for Humanity project. Henry suffered a brain stem stroke in 2002, and is almost entirely paralyzed and unable to speak. Henry describes his condition in this way:

I had always been fiercely independent, probably to a fault. With one stroke I became completely dependent for everything—eating, drinking, going to the bathroom, scratching itches, etc. I would, to this day, literally die if someone weren’t around to help me, 24 hours a day. Most of us are able to take control over our own bodies for granted. Not me. Every single thing I want done, I have to ask someone else to do and depend on them to do it. They get tired of it. So do I, but whereas they can walk out of the room or pretend not to see my gestures, I cannot escape. People say I am very patient, and I am. It is only partly due to my nature. The basic truth is, I have no choice. 

Henry can move his eyes and click a button with his thumb, which allows him to use an eye-tracking mouse. With just this simple input device, he’s been able to control the PR2, a two-armed mobile manipulator, to do some things for himself, including scratching itches.

What demonstrations like this don’t show is what most of this research is actually about: giving Henry, and other people, the ability to control the robot to get it to do all of this stuff. The PR2 is a very complicated robot, with an intimidating 20+ degrees of freedom, and even for people with two hands on a game controller and a lot of experience, it’s not easy to remotely control the robot through manipulation tasks. It becomes even more difficult if you’re restricted to controlling a very 3D robot through a very 2D computer screen. The key is a carefully designed low-level web interface that relies on multiple interface modes and augmented reality for intuitive control of even complex robots.

Our approach is to provide an augmented-reality (AR) interface running in a standard web browser with only low-level robot autonomy. Many commercially-available assistive input devices, such as head trackers, eye gaze trackers, or voice controls, can provide single-button mouse-type input to a web browser. The standard web browser enables people with profound motor deficits to use the same methods they already use to access the Internet to control the robot. The AR interface uses state-of-the-art visualization to present the robot’s sensor information and options for controlling the robot in a way that people with profound motor deficits have found easy to use. 
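
The article doesn’t go into implementation details, but the basic shape of the click-to-command path is easy to sketch. The following is a minimal, hypothetical example, assuming the robot exposes its topics through a rosbridge websocket and using the roslibjs browser client; the URL and topic name are placeholders, and a robot-side node would then have to interpret the clicked pixel using depth data from the head camera:

```typescript
import * as ROSLIB from 'roslib';

// Connect to the robot over a rosbridge websocket (placeholder URL).
const ros = new ROSLIB.Ros({ url: 'ws://robot.example.org:9090' });

// Hypothetical topic: a node on the robot would turn the clicked pixel into a
// head-pointing or reaching command using the camera's depth data.
const clickTopic = new ROSLIB.Topic({
  ros,
  name: '/web_teleop/clicked_pixel',
  messageType: 'geometry_msgs/PointStamped',
});

// Any assistive device that can produce a single click on the camera view
// (eye-gaze tracker, head tracker, switch) drives this same handler.
const cameraView = document.getElementById('robot-camera')!;
cameraView.addEventListener('click', (ev: MouseEvent) => {
  const rect = cameraView.getBoundingClientRect();
  clickTopic.publish(new ROSLIB.Message({
    header: { frame_id: 'head_camera' },
    point: {
      x: (ev.clientX - rect.left) / rect.width,   // normalized image x
      y: (ev.clientY - rect.top) / rect.height,   // normalized image y
      z: 0,
    },
  }));
});
```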

Because the robot’s autonomy is limited to low-level operations, such as tactile-sensor-driven grasping and moving an arm via inverse kinematics to reach commanded end-effector poses, it performs consistently across diverse situations, which lets the user attempt to use it in diverse and novel ways.
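
That division of labor keeps the browser side simple: it mostly publishes goals for low-level controllers rather than high-level task commands. Here’s a hedged sketch of what two such primitives might look like over rosbridge; the topic names, URL, and the 5 cm step size are assumptions for illustration, not the actual system’s API:

```typescript
import * as ROSLIB from 'roslib';

// Connect to the robot's rosbridge websocket (placeholder URL).
const ros = new ROSLIB.Ros({ url: 'ws://robot.example.org:9090' });

// Hypothetical topic consumed by a robot-side Cartesian controller that runs
// inverse kinematics and moves the arm toward the requested gripper pose.
const rightGripperGoal = new ROSLIB.Topic({
  ros,
  name: '/web_teleop/r_gripper_pose_goal',
  messageType: 'geometry_msgs/PoseStamped',
});

// Nudge the right gripper 5 cm forward in the base frame. The user composes
// larger motions out of many small, predictable steps like this one.
function nudgeForward(current: { x: number; y: number; z: number }): void {
  rightGripperGoal.publish(new ROSLIB.Message({
    header: { frame_id: 'base_link' },
    pose: {
      position: { x: current.x + 0.05, y: current.y, z: current.z },
      orientation: { x: 0, y: 0, z: 0, w: 1 },  // identity orientation, for brevity
    },
  }));
}

// Hypothetical trigger for a tactile-sensor-driven grasp: the robot closes the
// gripper until its fingertip sensors detect contact, then holds.
const graspTrigger = new ROSLIB.Topic({
  ros,
  name: '/web_teleop/r_gripper_grasp',
  messageType: 'std_msgs/Empty',
});

function grasp(): void {
  graspTrigger.publish(new ROSLIB.Message({}));
}
```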

The interface is based around a first-person perspective, with a video feed streaming from the PR2’s head camera. Augmented-reality markers show controls in 3D space, provide visual estimates of how the robot will move when commands are executed, and relay feedback from non-visual sensors, like tactile sensors and obstacle detection. One of the biggest challenges is adequately representing the robot’s 3D workspace on a 2D screen; a “3D peek” feature helps by overlaying a low-resolution, Kinect-based 3D model of the environment around the robot’s gripper and then simulating a camera rotation. To keep the interface accessible to users limited to a mouse cursor and single clicks, there are many different operation modes that can be selected, including:

  • Looking mode: Displays the mouse cursor as a pair of eyeballs, and the robot looks toward any point where the user clicks on the video.
  • Driving mode: Allows users to drive the robot in any direction without rotating, or to rotate the robot in place in either direction. The robot drives toward the location on the ground indicated by the cursor over the video when the user holds down the mouse button, and three overlaid traces show the selected movement direction, updating in real time (a rough sketch of this interaction appears after this list). “Turn Left” and “Turn Right” buttons over the bottom corners of the camera view turn the robot in place.
  • Spine mode: Displays a vertical slider over the right edge of the image. The slider handle indicates the relative height of the robot’s spine, and moving the handle raises or lowers the spine accordingly. These direct manipulation features use the context provided by the video feed to allow the user to specify their commands with respect to the world, rather than the robot, simplifying operation.
  • Left Hand and Right Hand modes: Allow control of the position and orientation of the grippers in separate sub-modes, as well as opening and closing the gripper. In either mode, the head automatically tracks the robot’s fingertips, keeping the gripper centered in the video feed and eliminating the need to switch modes to keep the gripper in the camera view.
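
For a flavor of how one of these modes might be wired up, here is the rough sketch of the driving-mode interaction promised above: a hold-to-drive loop that publishes base velocities while the mouse button is down, again assuming a rosbridge connection. The topic name, speeds, and cursor-to-velocity mapping are simplifications for illustration, not the interface’s actual implementation:

```typescript
import * as ROSLIB from 'roslib';

const ros = new ROSLIB.Ros({ url: 'ws://robot.example.org:9090' });

// Velocity commands for the PR2's omnidirectional base; treat the topic name
// as an assumption for this sketch.
const cmdVel = new ROSLIB.Topic({
  ros,
  name: '/base_controller/command',
  messageType: 'geometry_msgs/Twist',
});

const cameraView = document.getElementById('robot-camera')!;
let cursorX = 0.5;                    // normalized cursor position, updated live
let driveTimer: number | undefined;   // interval handle while the button is held

cameraView.addEventListener('mousemove', (ev: MouseEvent) => {
  const rect = cameraView.getBoundingClientRect();
  cursorX = (ev.clientX - rect.left) / rect.width;
});

// Hold to drive: while the button is down, keep publishing a velocity roughly
// toward the point under the cursor. The linear mapping below is a crude
// stand-in for projecting the cursor onto the ground plane, and the speeds
// are placeholder values.
cameraView.addEventListener('mousedown', () => {
  driveTimer = window.setInterval(() => {
    cmdVel.publish(new ROSLIB.Message({
      linear: { x: 0.2, y: (0.5 - cursorX) * 0.4, z: 0 },  // forward plus sideways drift
      angular: { x: 0, y: 0, z: 0 },                        // no rotation: turning uses the buttons
    }));
  }, 100);
});

// Releasing the button (or leaving the view) stops the robot immediately.
const stopDriving = () => {
  if (driveTimer !== undefined) window.clearInterval(driveTimer);
  cmdVel.publish(new ROSLIB.Message({
    linear: { x: 0, y: 0, z: 0 },
    angular: { x: 0, y: 0, z: 0 },
  }));
};
cameraView.addEventListener('mouseup', stopDriving);
cameraView.addEventListener('mouseleave', stopDriving);
```

Publishing a zero velocity on release, and again if the cursor leaves the view, is the kind of small safety detail that matters when the operator may not be able to react quickly.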

The grippers also have sub-modes for position control, orientation control, and grasping. This kind of interface is not going to be the fastest way to control a robot, but for some, it’s the only way. And as Henry says, he’s patient.

In a study in which 15 participants with profound motor deficits took control of Georgia Tech’s PR2 over the Internet after very little training (a bit over an hour), this software interface proved both easy to use and effective. It’s certainly not fast: simple tasks like picking up objects took most participants 5 minutes when they would take an able-bodied person 5 seconds. But as Kemp and Phillip Grice, a recent Georgia Tech PhD graduate, point out in a recent PLOS ONE paper, “for individuals with profound motor deficits, slow task performance would still increase independence by enabling people to perform tasks for themselves that would not be possible without assistance.”

A separate study with Henry, considered to be an “expert user,” showed how much potential there is with a system like this:

Henry also discovered an unanticipated use for the robot. He controlled the robot to simultaneously hold out a hairbrush to scratch his head and a towel to wipe his mouth. This allowed him to remain comfortable for extended periods of time in bed without requesting human assistance (two sessions approximately 2.5 hours and 1 hour in length). Henry stated that “it completely obviated the need for a human caregiver once the robot was turned on (always the goal),” and that “once set up, it worked well for hours and kept me comfortable for hours.” This was a task which designers had not anticipated, and was the most successful use of the robot in terms of task performance and user satisfaction, as the deployed research system provided a clear, consistent benefit to the user and reduced the need for caregiver assistance during these times.

Obviously, a PR2 is probably overkill for many of these tasks, and it’s not likely to be available to most people who could use an assistive robot. But the interface that Georgia Tech has created here could be applied to many different kinds of robots, including lower-cost arms (like UC Berkeley’s Blue) that wouldn’t necessarily need a mobile base to be effective. And if a robot arm could keep someone independent and comfortable for hours at a time without a human caregiver, it’s possible that the technology could even pay for itself.

[ Georgia Tech ]

Source: IEEE Spectrum Robotics