Robots Taught to Use Gestures to Communicate with Humans

Tuesday, 20 November, 2018 - 06:15
Researchers in South Korea develop a neural network model that can generate sequences of co-speech gestures. (AP illustrative photo)
London - Asharq Al-Awsat
Researchers at the Electronics and Telecommunications Research Institute (ETRI) in South Korea have developed a neural network model that can generate sequences of co-speech gestures, the German news agency reported.

Youngwoo Yoon, one of the researchers who carried out the study, said: "Smart devices we are interacting with have evolved from personal computers to mobile phones and smart speakers, and we think that social robots could be the next interaction platform."

Yoon explained: "Physical motion is one of the key differences between social robots and other smart devices, opening new possibilities for emulating human- or animal-like behaviors, which can increase intimacy."

"We wanted to generate natural and human-like social behaviors, especially hand gestures while speaking. Observing others is a very natural way of learning a new behavior, so we proposed a learning-based gesture generation model that was trained on a dataset of TED talks," Youngwoo noted.

The Tech Xplore website reported that the model was trained on a dataset containing 52 hours of video footage from TED talks. The neural network developed by Yoon and his colleagues successfully generated continuous sequences of gestures for speech texts of any length.
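The article does not describe the architecture itself, but text-to-gesture generation of this kind is commonly framed as a sequence-to-sequence problem: an encoder reads the words of the speech and a decoder emits a sequence of pose frames. The PyTorch sketch below illustrates that framing only; the layer sizes, pose dimension, and the class name TextToGesture are assumptions for illustration, not the ETRI team's actual model.

```python
# Illustrative sketch: speech text in, a sequence of pose vectors out.
# All hyperparameters and names here are hypothetical.
import torch
import torch.nn as nn


class TextToGesture(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, pose_dim=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Encoder summarizes the word sequence into a hidden state.
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Decoder unrolls one pose frame at a time from that state.
        self.decoder = nn.GRUCell(pose_dim, hidden_dim)
        self.to_pose = nn.Linear(hidden_dim, pose_dim)
        self.pose_dim = pose_dim

    def forward(self, word_ids, n_frames):
        # word_ids: (batch, n_words) integer token indices
        _, h = self.encoder(self.embed(word_ids))   # h: (1, batch, hidden)
        h = h.squeeze(0)
        # Start from a neutral pose and decode autoregressively.
        pose = torch.zeros(word_ids.size(0), self.pose_dim)
        frames = []
        for _ in range(n_frames):
            h = self.decoder(pose, h)
            pose = self.to_pose(h)
            frames.append(pose)
        return torch.stack(frames, dim=1)           # (batch, n_frames, pose_dim)


if __name__ == "__main__":
    model = TextToGesture(vocab_size=5000)
    words = torch.randint(0, 5000, (2, 12))   # two dummy 12-word utterances
    gestures = model(words, n_frames=30)      # 30 pose frames per utterance
    print(gestures.shape)                     # torch.Size([2, 30, 10])
```

A model along these lines would be trained by minimizing the error (for example, mean squared error) between the generated pose sequences and poses extracted from the talk videos, which matches the learning-by-observation idea Yoon describes.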

"We found that robots can learn social skills, and we think this approach can be applied to other social skills, as well as to characters in video games and VR worlds," Youngwoo concluded.
