This article was produced and financed by the University of Bergen.

Vadim Kimmelman from UiB has just published a study showing how facial expressions convey both grammatical information and emotions in sign language.

Will a computer be able to read the complex system of facial expressions in sign language?

In the near future, deaf and hearing people may be able to communicate in real time using automatic translation systems based on computer vision technology. Researchers are now carefully studying the facial expressions used in sign language to express both grammatical information and emotions.

We have already seen examples of robot projects that can convert text into sign language, but translating from sign language into spoken languages has proven more challenging.

"In sign language, facial expressions are used to express both linguistic information and emotions", says Vadim Kimmelman.

"For example: eyebrow raise is necessary to mark general questions in most sign languages. At the same time, signers use the face to express emotions – either their own, or when quoting someone else. What we need to know more about, is what happens when grammar and emotions needs to be expressed at the same time. Will a computer software be able to capture the correct meaning?" Kimmelman asks.

Kimmelman is an Associate Professor at the University of Bergen, where he works as a linguist, primarily on the grammar of sign languages, specifically Russian Sign Language (RSL) and Sign Language of the Netherlands (NGT).

In his latest study, published in PLOS ONE, Kimmelman looks at what happens when different emotions are combined with different sentence types, using 2D video analysis software to track eyebrow position in video recordings.

Kimmelman and his colleagues at Nazarbayev University in Kazakhstan investigated these questions for Kazakh-Russian Sign Language, which is the sign language used in Kazakhstan.

COMPLEX FACIAL EXPRESSIONS: Vadim Kimmelman is looking at how facial expressions are used in sign language to express both grammatical information and emotions.

Tracking hand, body and facial features

"In our study we asked nine native signers to produce the same sentences as statements and two types of questions (e.g. The girl fell – Did the girl fall? – Where did the girl fall?). The signer posed the questions three times, adding three different emotions (neutral, angry, surprised). Based on research on other sign languages, we had expected that both emotions and grammar would affect eyebrow position, and that we might find some interactions", Kimmelman explains.

The main novelty in this study was the use of the OpenPose software, which allows automatic tracking of hands, body, and facial features in 2D videos.

"Using this software, we were able to precisely and objectively measure eyebrow positions across different conditions, and conduct quantitative analysis. The study showed as expected, that both emotions and grammar affected eyebrow position in Kazakh-Russian Sign Language. For instance, general questions are marked with raised eyebrows, surprise is marked with raised eyebrows, and anger is marked with lowered eyebrows".

Complex facial expressions

Kimmelman also found that emotional and grammatical marking can be combined: "We found that with surprised general questions the eyebrows raise even higher than with neutral questions. In addition, we found some complex interactions between factors influencing eyebrow positions, indicating the need for future research. In the future, we will also investigate how eyebrow movement aligns with specific signs in the sentence, in addition to how average eyebrow position is affected, and we will investigate other facial features as well as head and body position".


The results of this study have real practical implications. The evolution of new technologies has clearly contributed to improving and extending the communication opportunities of hearing-impaired people, Kimmelman says:

"First, students learning Kazakh-Russian Sign Language (e.g. future interpreters) should be aware of how both emotions and grammar are affecting facial expressions. Second, our findings will have an impact on projects on automatic recognition of sign language, as it is clear that both grammatical information and emotions should be considered by recognition models when applied to facial expressions".

"Last but not least, our study is a showcase of the possibilities that new technologies, such as computer vision, offer to scientific research of sign languages" Kimmelman says.

Facts

  • In sign language, facial expressions are used to express both linguistic information and emotions.
  • Vadim Kimmelman from UiB has just published a study showing what happens when different emotions are combined with different sentence types in Kazakh-Russian Sign Language (KRSL), using 2D video analysis software to track eyebrow position in video recordings.
  • The findings will have an impact on projects on automatic recognition of sign language: recognition models applied to facial expressions should take both grammatical information and emotions into account.
  • The study shows that students learning Kazakh-Russian Sign Language (e.g. future interpreters) should be aware of how both emotions and grammar affect facial expressions.
  • The study is a showcase of possibilities that new technologies, such as computer vision, offer to scientific research of sign languages.

Reference:

Vadim Kimmelman et al.: Eyebrow position in grammatical and emotional expressions in Kazakh-Russian Sign Language: A quantitative study, PLoS ONE, 2020. https://doi.org/10.1371/journal.pone.0233731
