MIT's 'Conduct-A-Bot' uses human muscle signals to pilot a robot's movement


A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a system called “Conduct-A-Bot,” which pilots a robot’s movement using human muscle signals from wearable sensors.

The team says that Conduct-A-Bot could potentially be used for various scenarios, including navigating menus on electronic devices or supervising autonomous robots. 

“We envision a world in which machines help people with cognitive and physical work, and to do so, they adapt to people rather than the other way around,” explains Daniela Rus, MIT professor and director of CSAIL, and co-author on a paper about the system.

Muscle signals and movements are measured by electromyography (EMG) and motion sensors worn on the biceps, triceps, and forearm, enabling seamless teamwork between people and machines. Algorithms process the signals to detect gestures in real time, without any offline calibration or per-user training data. Because the system needs only two or three wearable sensors and nothing in the environment, it greatly lowers the barrier for casual users to interact with robots.
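As a rough illustration of that pipeline, the Python sketch below shows how windows of EMG and motion data might be reduced to simple features and fed to a gesture classifier in a streaming loop. The sensor names, window size, feature choices, and classifier object here are assumptions for illustration only, not the team's published code.

```python
# Illustrative sketch only: sensor names, window size, and features are assumptions.
import numpy as np

WINDOW_SIZE = 200  # assumed number of samples per sliding window
SENSORS = ["biceps_emg", "triceps_emg", "forearm_emg", "forearm_imu"]

def featurize(window: dict) -> np.ndarray:
    """Reduce each sensor's window of samples to simple summary features."""
    feats = []
    for name in SENSORS:
        samples = np.asarray(window[name])
        # Mean absolute value and variance are common, lightweight EMG/IMU features.
        feats.extend([np.mean(np.abs(samples)), np.var(samples)])
    return np.array(feats)

def stream_gestures(sensor_stream, classifier):
    """Slide over incoming sensor windows and yield any detected gestures."""
    for window in sensor_stream:  # each item: dict mapping sensor name -> samples
        gesture = classifier.predict(featurize(window))
        if gesture != "no_gesture":
            yield gesture
```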

While any commercial drone could be used, the team used a Parrot Bebop 2 for this research. Conduct-A-Bot detects gestures to move the drone left, right, up, down, and forward, as well as to rotate it and stop it.

During tests, the drone correctly responded to 82 percent of 1,500-plus human gestures when it was remotely controlled to fly through hoops. The system also correctly identified approximately 94 percent of cued gestures when the drone was not being controlled.

“Understanding our gestures could help robots interpret more of the nonverbal cues that we naturally use in everyday life,” says Joseph DelPreto, lead author on a new paper about Conduct-A-Bot.

“This type of system could help make interacting with a robot more similar to interacting with another person, and make it easier for someone to start using robots without prior experience or external sensors.”

Eventually, this type of system could serve a wide range of human-robot collaboration applications, including remote exploration, assistive personal robots, and manufacturing tasks such as delivering objects or lifting materials. It could also open the door to future contactless work.

How Conduct-A-Bot works

The team notes that muscle signals can often provide information about states that are hard to observe from vision, such as joint stiffness or fatigue. For example, a person watching a video of someone holding a large box might have difficulty guessing how much effort or force was needed, and a machine would struggle to gauge that from vision alone. Muscle sensors, however, open up the possibility of estimating not only motion, but also the force and torque required to execute that physical trajectory.
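To make the intuition concrete, the snippet below computes a standard EMG amplitude envelope: rectify the signal and smooth it, so higher values roughly track how hard the muscle is working. This is a generic signal-processing illustration with assumed parameter values, not the method used in the paper.

```python
# Generic EMG-envelope illustration (not the paper's method).
import numpy as np

def emg_effort_envelope(raw_emg: np.ndarray, fs: float = 1000.0,
                        smooth_ms: float = 150.0) -> np.ndarray:
    """Return a smoothed amplitude envelope of a raw EMG trace.

    raw_emg   : 1-D array of EMG samples (assumed already band-pass filtered)
    fs        : sampling rate in Hz (assumed value)
    smooth_ms : moving-average window length in milliseconds (assumed value)
    """
    rectified = np.abs(raw_emg - np.mean(raw_emg))   # remove offset, then rectify
    win = max(1, int(fs * smooth_ms / 1000.0))
    kernel = np.ones(win) / win
    # Higher envelope values roughly correspond to greater muscle effort.
    return np.convolve(rectified, kernel, mode="same")
```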

For the gesture vocabulary currently used to control the robot, the movements are detected as follows:

- Stiffening the upper arm to stop the robot (similar to briefly cringing when seeing something going wrong): biceps and triceps muscle signals
- Waving the hand left/right or up/down to move the robot sideways or vertically: forearm muscle signals (with the forearm accelerometer indicating hand orientation)
- Clenching the fist to move the robot forward: forearm muscle signals
- Rotating the hand clockwise/counterclockwise to turn the robot: forearm gyroscope
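In code, a vocabulary like this boils down to a lookup from recognized gesture labels to robot commands, as in the sketch below. The gesture names and the send_command interface are hypothetical placeholders, not the Parrot Bebop 2 API or the team's implementation.

```python
# Hypothetical mapping from detected gestures to drone commands.
GESTURE_TO_COMMAND = {
    "arm_stiffen": "stop",
    "wave_left":   "move_left",
    "wave_right":  "move_right",
    "wave_up":     "move_up",
    "wave_down":   "move_down",
    "fist_clench": "move_forward",
    "rotate_cw":   "turn_clockwise",
    "rotate_ccw":  "turn_counterclockwise",
}

def dispatch(gesture: str, send_command) -> None:
    """Forward a recognized gesture to the drone, ignoring unknown labels."""
    command = GESTURE_TO_COMMAND.get(gesture)
    if command is not None:
        send_command(command)
```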

Machine learning classifiers then detect the gestures from the wearable sensor data. Unsupervised classifiers process the muscle and motion data and cluster it in real time to learn how to separate gestures from other motions, while a neural network predicts wrist flexion or extension from forearm muscle signals.
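A minimal sketch of that idea, assuming scikit-learn, is shown below: a streaming clustering model is updated with each batch of feature windows while a separately trained network labels wrist motion. The feature layout and the pretrained wrist model are stand-ins, not the authors' classifiers.

```python
# Minimal sketch, assuming scikit-learn; not the authors' code.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Assumed cluster roles, e.g. "gesture", "other motion", "rest".
clusterer = MiniBatchKMeans(n_clusters=3)

def update_and_classify(feature_windows: np.ndarray, wrist_model):
    """Update the clusters with a batch of windows and label each window.

    feature_windows : array of shape (n_windows, n_features); the first batch
                      should contain at least n_clusters windows.
    wrist_model     : pretrained stand-in that predicts wrist flexion/extension
                      from forearm EMG features.
    """
    clusterer.partial_fit(feature_windows)          # unsupervised, streaming update
    cluster_ids = clusterer.predict(feature_windows)
    wrist_labels = wrist_model.predict(feature_windows)
    return cluster_ids, wrist_labels
```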

According to the researchers, the system essentially calibrates itself to each person's signals while they're making gestures that control the robot, which makes it faster and easier for casual users to start interacting with robots.
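One way such on-the-fly adaptation could look, purely as an assumed illustration rather than the paper's approach, is to keep running statistics of each feature and normalize incoming windows against them, so the same downstream classifier works across users with different signal amplitudes.

```python
# Illustrative per-user adaptation sketch (Welford running statistics); an
# assumption about how self-calibration could look, not the paper's method.
import numpy as np

class RunningNormalizer:
    def __init__(self, n_features: int):
        self.count = 0
        self.mean = np.zeros(n_features)
        self.m2 = np.zeros(n_features)   # sum of squared deviations

    def update(self, x: np.ndarray) -> np.ndarray:
        """Fold one feature vector into the running stats and return it normalized."""
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)
        std = np.sqrt(self.m2 / max(self.count - 1, 1)) + 1e-8
        return (x - self.mean) / std
```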

In the future, the team hopes to expand the tests to include more subjects. Researchers also want to extend the vocabulary to include more continuous or user-defined gestures, with the eventual hope of having the robots learn from these interactions to get a better understanding of the tasks and provide more predictive assistance or increase their autonomy.

“This system moves one step closer to letting us work seamlessly with robots so they can become more effective and intelligent tools for everyday tasks,” DelPreto says.

“As such collaborations continue to become more accessible and pervasive, the possibilities for synergistic benefit continue to deepen.”