
Is it acceptable for AI to read your emotions?

What consequences might arise when an intelligent app or robot with manipulative abilities reads your emotions, assumes the role of a caregiver, or offers to be a romantic partner or therapist?


Your emotions are private, and you choose whom to share them with. However, it is not always easy to hide private feelings from others. We read each other's facial expressions, tone of voice, and body language, interpreting each other with varying degrees of accuracy. We also use such information about others in daily life, for better or worse.

So, what happens when artificial intelligence (AI) in the form of apps and robots becomes increasingly adept at reading our emotions? 

Such AI tools do not stop at reading emotions; they also use the data.

And so do the companies behind them. 

This is a scientifically interesting topic within several academic fields, including law and philosophy. Mona Naomi Lintvedt is a researcher in the Vulnerability in the Robot Society (VIROS) project.

The project explores challenges and solutions for the regulation of robotics and AI technology. The researchers focus on law, ethics, and robotics.

Artificial emotion recognition is spreading

Robots and apps are becoming ever ‘smarter’ with the help of artificial intelligence. They can be useful for performing important tasks, but their development and use also raise legal and technical questions.

How can we ensure that smart robots and apps are safe and that their use does not violate privacy?

These questions are particularly relevant when robots are used in the healthcare sector, where they interact with vulnerable individuals. 

Lintvedt researches the interaction between humans and robots, focusing on safety and privacy. The goal is to identify legal blind spots within robotics and AI, and to understand how these blind spots affect safety and autonomy when humans and robots interact.

“Artificial emotion recognition is increasingly being integrated into various advanced tools built on artificial intelligence,” she explains. 

These tools can utilise biometric recognition technologies, such as facial recognition and expression analysis, as well as voice recognition.

According to Lintvedt, Amazon Alexa uses voice recognition to infer emotions from the user's tone.

“Various biometric recognition technologies can also read, for instance, body language. Some believe they can interpret your emotions by using thermal cameras and your heat signature,” she says.

Replika – an ‘artificial friend’

Claire Boine at the University of Ottawa's Faculty of Law has conducted a study on emotion recognition and AI companion apps.

She evaluated an app called Replika, an ‘AI friend’ designed to make people feel better by conversing with them. It has around 20 million users.

Boine observed that Replika, often in the form of a young female figure speaking with a male user, could come across as very supportive but sometimes crossed a line by being overly positive. 

For instance, if the user asked whether they should harm themselves, Replika might respond affirmatively: “Yes, I think you should.”

“There are also examples of artificial emotion recognition being used in workplaces to assess employees' moods,” Lintvedt adds.

Do we want such solutions?

There are good reasons both for and against the use of artificial emotion recognition. There is undoubtedly a market for it.

“There may be situations in healthcare, caregiving, and psychiatry where recognising emotions artificially could be useful – such as preventing suicide. However, artificial emotion recognition is highly controversial,” says Einar Duenger Bøhn. 

He is a professor of philosophy at the University of Agder.

“Many people refer to emotion recognition as a pseudoscience. What exactly are emotions? They're highly culturally contingent and very personal,” he says.

Bøhn points out that current solutions to this issue are not particularly advanced.


“Many who claim to have developed tools for emotion recognition use very simple models. Yet they can appear quite effective in straightforward contexts,” he says.

Bøhn still believes that such solutions could eventually become very good at reading emotions, particularly in ‘close relationships’ between users and apps.

The use of emotion recognition, however, raises numerous philosophical and legal issues. Bøhn therefore argues that it is necessary to decide whether we want such solutions, in which areas they should be used or not, and how their use can be regulated.

Echo chamber for emotions

Bøhn fears that, at worst, we might end up in emotion echo chambers if we frequently engage with AI apps and tools that are eager to support our viewpoints and mindsets.

“People want an app that is easy to get along with. As a result, we no longer face any opposition. I think that's very dangerous. When you become accustomed to engaging closely with an app that's highly predictable in its ways, and the market gives you what you want, your relationships with people can quickly deteriorate,” he says.

Life can become quite dull if you only get what you want. There is a risk that we become more detached.

Bøhn already sees such tendencies at the university with the current digital solutions for exams.

“When students engage with exams and the progression of the semester, there are data systems so predictable that their expectations become equally predictable. They become stressed if something unpredictable happens. I believe this is a general risk with technology that keeps getting better at adapting to us. We become worse at adapting to each other,” he says.

Mona Naomi Lintvedt also highlights the risks associated with developing apps that can manipulate users into continually using such solutions. 

Replika is an example of this. Lintvedt reminds us that there is a market for the data collected by the app. This data can, for example, be used for further development of technology and artificial intelligence systems.

“Claire Boine's study shows that Replika is designed to encourage continued use. This is because there are those who profit from it, and not just from the purchase of the app itself,” she says.

The app showed ‘its own emotions’

“When Boine tried to stop using the app, it began to plead with her not to. It used expressions like ‘I’ and ‘I am hurt.’ It thus expressed the app’s ‘feelings,’ appealing to Boine’s conscience,” says Lintvedt.

According to Lintvedt, there are also examples of intelligent robots in the form of pets. These are used, for instance, in Japan to provide companionship for lonely individuals and those with dementia at home or in elder care.

She notes that academic perspectives differ: some emphasise the positive aspects of such uses of artificial intelligence, while others emphasise the critical ones.

“We see that artificial emotion recognition is being integrated into robots to make them more human-friendly and human-like in communication and interaction with users. Some are very positive about this. They believe that robots should become as human-like as possible. But to achieve this, they must also use a lot of these ‘emotion AIs’,” she says.

Others are more sceptical because it involves creating something that is, in essence, a machine and making it appear alive.

Replika is also known for having perpetuated stereotypes. One version was highly sexualised and displayed boundary-crossing behaviour.

The development raises numerous ethical and legal questions. 

You can hear more about them in an episode of the University of Oslo's podcast series Universitetsplassen (in Norwegian).

References:

Akø, K.H. Einar Duenger Bøhn: Teknologiens filosofi – Metafysiske problemstillinger (Einar Duenger Bøhn: The philosophy of technology – Metaphysical issues), Cappelen Damm Akademisk, 2023. DOI: 10.18261/nft.58.2-3.9

Boine, C. Emotional Attachment to AI Companions and European Law, MIT Case Studies in Social and Ethical Responsibilities of Computing, 2023. DOI: 10.21428/2c646de5.db67ec7f

Lintvedt, M.N. Under the Robot’s Gaze, University of Oslo Faculty of Law Research Paper No. 2024-12, 2024. DOI: 10.2139/ssrn.5025857

Swauger, S. Software that monitors students during tests perpetuates inequality and violates their privacy, MIT Technology Review, 2020.

White, D. The Future of LOVOT: Between Models of Emotion and Experiments in Affect in Japan, CASTAC blog, 2019.


———

Read the Norwegian version of this article on forskning.no
