Scientists have used machine learning to help robots learn how to do new tasks even in changing conditions. (Malte Mueller | Getty Images)

Imagine a robot that could do your laundry, make your bed, cook your dinner, or stock the dairy section at your local grocery store. Humans have long been able to teach robots how to do individual tasks, but instructing them on these more sophisticated jobs has been an elusive goal, despite billions of dollars invested into robotics.

Now, a team of scientists in Switzerland has made progress in the quest to invent helpful robots that can act on complex instructions from humans. The development raises questions about whether this kind of technology could someday learn not only to help humans but also to harm them.

Inventing the personal barista 

For years, robotics scientist Sthithpragya Gupta has been dreaming about the things his robot might do. “I personally want the robot to make me a coffee,” says Gupta. He and his colleagues at École Polytechnique Fédérale de Lausanne — an engineering school in Switzerland — keep late hours at their lab in the Swiss Alps. “There’s a lot of coffee consumption,” says Gupta.

“If I could just say ‘a little bit of sugar, a bit more creamer,’ stuff like that,” he says. “That would be a dream come true.”

A problem that robotics scientists and engineers like Gupta have long battled is that robots cannot do tasks beyond those they are specifically programmed for. Gupta uses a tennis example to explain the issue. Robots may be able to learn how to hit a backhand shot, he explains. They can backhand that ball perfectly again, and again, and again. But if conditions change — say, their opponent moves, or the light changes — it all falls apart. Humans have no problem adjusting to these kinds of changes. Teaching a robot how to adapt, though, is much more difficult.

“It’s very difficult to transfer this behavior from humans to robots,” says Gupta.

Until now, he hopes. Gupta and his colleagues have published a paper in the journal Science Robotics demonstrating a new way of teaching robots using machine learning, a type of artificial intelligence. The approach relies on kinematic intelligence — a robot’s built-in awareness of how its own body can move safely through space.

In a video demonstrating their technology, robots with a single arm attached to a base watch as a human instructor tosses a ball into a small container. The robots then pick up the ball and copy the instructor’s behavior, adjusting for their own position and accommodating their non-human bodies. The robots can then transfer these skills to other robots.

Self-aware robots

“It could be a turning point,” says Robert Platt, who studies engineering and robotics at Northeastern University. Platt, who called the work a “breakthrough,” noted that the field of robotics is not in widespread agreement about the path forward to creating effective robots through machine learning — but that most agree the problem these researchers are tackling is a critical one. “More people may do this in the future,” he said.

Platt was hesitant to forecast any particular timeline for robots becoming widespread household accessories. “We’re at a point of very fast change,” he notes. “Part of the reason why I hesitate to make predictions — look what happened with large language models,” he says, referring to generative AI chatbots such as ChatGPT or Claude that have been widely adopted. “We were a long way away and then all of a sudden — we weren’t.”

A fine line between self-awareness and consciousness 

If a robot can self-correct and teach others, does that make it self-aware?

“It looks like this robot is capable of doing some very impressive feats of learning,” says Susan Schneider, who studies artificial intelligence at Florida Atlantic University. “But that doesn’t mean something has full-blown consciousness or inner awareness in the sense that biological beings have it.”

Schneider points out that a critical distinction between robots and humans is feeling. “Consciousness is the felt quality of experience,” she says. “When you sip your morning espresso shot, when you see the richness of the sunset, when you have a headache, it feels like something from the inside to be you.”

But this lack of consciousness poses new questions about ethics. “It immediately raises alarm bells in any AI safety researcher’s mind,” says Schneider. Later versions of this kind of technology, she says, could potentially be weaponized against humans.

The researchers have taken care to include safety protocols to ensure that robots aren’t able to hurt people. Even they acknowledge, however, that future development of this technology will need guardrails. “I think really soon we should have regulatory frameworks on who operates a robot and how,” says Gupta.

Humans are at an inflection point with robotics, says Schneider. “It’s a very exciting time,” she says, “and we just don’t know where it’s headed.”

Transcript:

SCOTT DETROW, HOST:

Imagine having a robot around your house to do things like laundry or make your bed. Some scientists say they have made a key breakthrough that would allow robots to figure out these tasks on their own. But this raises big questions about how much risk comes with letting robots be in charge of their own learning. NPR’s Katia Riddle has more.

KATIA RIDDLE, BYLINE: For years, robotic scientist Sthithpragya Gupta has been dreaming up the perfect robot.

STHITHPRAGYA GUPTA: I personally want the robot to make me a coffee.

RIDDLE: Gupta says he and his engineering and science colleagues stay at the lab at all hours. They drink a lot of coffee. He’d like to be able to give his robot specific directions.

GUPTA: Little bit of sugar, a bit more creamer, stuff like that, then oh, that would be a dream come true.

RIDDLE: It’s not just his dream. There’s a slew of startups full of engineers trying to develop robots that could do things like unload a dishwasher or sort packages at a warehouse. Humans have been able to teach robots to do all kinds of specific tasks, but it’s hard to teach them how to learn things. Gupta likes to explain this difference with a tennis metaphor.

GUPTA: Imagine that, you know, like, there’s a tennis academy and there’s the instructor with some people who are there to learn tennis.

RIDDLE: The instructor demonstrates how to hit a backhand. Most people can learn it eventually. Robots could also potentially learn to hit a backhand, but then something changes. The opponent moves. The light shifts. A person can adjust their backhand to accommodate these changes. The robots, not so much – they can’t adapt like humans.

GUPTA: It’s very difficult to transfer this behavior from human to all the robots.

RIDDLE: That is until now, says Gupta. He and his colleagues have just published a paper in the journal Science Robotics about a new way of teaching robots. They’ve used machine learning, which is a kind of AI, to help robots adjust their movements based on their own physical abilities and limitations. They demonstrate their work in this video. It shows robots with a single arm attached to a base.

(SOUNDBITE OF ARCHIVED RECORDING)

UNIDENTIFIED NARRATOR: The way forward is robots learning from humans and transferring those skills across different designs.

RIDDLE: In the video, an instructor picks up a ball and throws it into a small container. Robots watch, then they pick up the ball and throw the ball into the container as well, adjusting for their own position and very nonhuman bodies.

(SOUNDBITE OF ARCHIVED RECORDING)

UNIDENTIFIED NARRATOR: This opens the door to flexible, easily upgradable robot fleets.

RIDDLE: Gupta says we’re likely years away from our own personal baristas, but this development introduces some thorny questions like, if this robot can adjust and improve upon its own behavior, does that make it self-aware or even conscious?

SUSAN SCHNEIDER: It looks like this robot is capable of doing some very impressive feats of learning, but that doesn’t mean something has full-blown consciousness or inner awareness or is a self in the sense that biological beings have it.

RIDDLE: Susan Schneider studies AI at Florida Atlantic University. She points out that this robot doesn’t feel things. That’s a critical distinction between humans and robots.

SCHNEIDER: Consciousness is the felt quality of experience. So when you sip your morning espresso shot, when you see the rich hues of the sunset, when you have a headache, it feels like something from the inside to be you.

RIDDLE: The robot is feeling none of these things, even if it knows how to teach itself to improve. But this lack of feeling introduces a new issue. If there’s no consciousness, is there morality? What’s to keep someone from teaching the robot how to hurt someone?

SCHNEIDER: It immediately raises alarm bells in, you know, any AI safety researcher’s mind.

RIDDLE: The researchers on this project are including safety protocols to ensure that robots can’t hurt people. But even they acknowledge it’s a risk. Susan Schneider says this moment is an inflection point in how robots will evolve with AI.

SCHNEIDER: It’s a very exciting time, and we just don’t know where it’s headed.

RIDDLE: The robots that emerge in the next five to 10 years, says Susan Schneider, could be life-changing for good or not. Katia Riddle, NPR News.