
International Women's Day:
Why AI performs worse for women

Artificial intelligence may treat men and women differently. How does this happen? 

From health-promoting technologies to personal assistants, there is little doubt that artificial intelligence (AI) can help us in many different ways.

But does it help everyone equally?

“AI has a tendency to treat men and women differently,” says Elisabeth Wetzer, an associate professor at the UiT Machine Learning Group and SFI Visual Intelligence.

Wetzer, an AI expert, sees this as a significant challenge. She points to examples like job application systems favouring men, credit limits being lower for women, and AI struggling to recognise female faces. 

What makes an AI algorithm treat some groups unfairly, like favouring men over women?

How does AI become biased?

AI is trained on enormous amounts of data. Chatbots like ChatGPT, DeepSeek, and Elon Musk’s Grok are built on millions of pictures, videos, and texts from the internet. This ‘big data’ is what enables an AI system to perform a given task.

But data reflects history, including outdated stereotypes and biases, such as those pertaining to gender.

“If you look through a set of data from the last decade, you will quickly find groups who have been discriminated against based on their gender, sexuality, or skin colour. Since AI is made to find patterns and correlations in data, there's a risk that the systems may pick up and reinforce biases from the dataset,” explains Wetzer.

This can have significant consequences, especially for marginalised and underrepresented groups.

“Let's say you have a credit score system designed to determine the size of the loan someone should be granted. If the system is based on salary statistics from the past 60 years, it will pick up that there's a significant wage gap between women and men,” she says.

Wetzer explains that the system will then assume that women are less economically responsible than men, and therefore less suited to receive a loan.

"This means that it has learned a skewed and incorrect connection between gender and income,” she says. 

Should AI be gender-blind?

If there is a risk of AI misusing gender information to treat people differently, does this mean that AI systems should be developed to be gender-blind?

It depends on what the AI system is designed for, says Wetzer. In some cases, gender can be an important factor.

“For example, some diseases occur more frequently in women than men. For those cases, you don't want AI to consciously ignore the person’s gender when detecting such diseases,” says Wetzer.

She explains that if the information is relevant to the AI’s task, it is important that gender is not ignored. 

"But an algorithm should never use this information to determine how suited someone is to be granted a loan,” she says.

Developing systems that avoid this bias is not always easy. AI is good at detecting gender-related proxies, even ones developers may not realise exist in the data. For example, a recruiting tool developed at Amazon learned to downgrade job applications that mentioned all-women's colleges.
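
The Amazon case is an example of a proxy feature. A hedged sketch of the effect (the feature names and numbers below are hypothetical): even a 'gender-blind' model that never sees the gender column can rediscover it through a correlated feature.

```python
# Hypothetical sketch: dropping the gender column does not remove bias
# when another feature acts as a proxy for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

gender = rng.integers(0, 2, size=n)                     # 0 = man, 1 = woman
womens_college = (gender == 1) & (rng.random(n) < 0.3)  # gender-correlated proxy
skill = rng.normal(0, 1, size=n)

# Biased historical labels: past recruiters favoured men of equal skill.
hired = (skill + 0.8 * (gender == 0) + rng.normal(0, 1, n) > 1).astype(int)

# 'Gender-blind' feature set: gender itself is excluded.
X = np.column_stack([skill, womens_college])
model = LogisticRegression().fit(X, hired)
print("coefficients [skill, womens_college]:", model.coef_[0])
# The proxy gets a negative weight: the bias survives the removed column.
```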

Lacking representation in data

In today’s global society, it is important for everyone to be equally represented. This also applies to the data that AI systems are based on. Who is included – or left out – affects how well the technology performs for different groups.

“If a specific group of people is not represented in the data as well as other groups, the system will perform worse on that particular group,” Wetzer explains.

If AI is trained on images of male professors, it may assume that only men hold this job. AI developers need to be mindful of representation in a dataset, especially when developing systems that make decisions affecting people’s health and well-being.

“For example, an AI cancer diagnostics system may have been created in a wealthy country that can afford to develop it. People will assume it performs well for everyone, but some marginalised groups may never have been part of the training data. The system will most likely not work well for those people," she says.
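
One simple safeguard, sketched below with toy data (the groups, sizes, and error rates are invented for illustration), is to report a model's performance per group rather than as a single overall number, since overall accuracy can hide failures on a small, underrepresented group.

```python
# Toy sketch: overall accuracy hides poor performance on a rare group.
import numpy as np

def accuracy_by_group(y_true, y_pred, group):
    """Print accuracy separately for each group label."""
    for g in np.unique(group):
        mask = group == g
        acc = np.mean(y_true[mask] == y_pred[mask])
        print(f"group {g}: n={mask.sum():5d}, accuracy={acc:.2f}")

rng = np.random.default_rng(2)
n = 5_000
group = rng.choice([0, 1], size=n, p=[0.95, 0.05])   # group 1 is rare
y_true = rng.integers(0, 2, size=n)

# Simulated predictions: the model errs far more often on the rare group.
error_rate = np.where(group == 1, 0.30, 0.05)
y_pred = np.where(rng.random(n) < error_rate, 1 - y_true, y_true)

print(f"overall accuracy: {np.mean(y_true == y_pred):.2f}")
accuracy_by_group(y_true, y_pred, group)
```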

‘Woke’ AI

The pursuit of equal representation in AI can sometimes go too far. 

Last year, Gemini, Google’s AI image generator, faced criticism for being ‘woke’. It generated images of German soldiers from 1943 with African and Asian appearances.

Wetzer explains that when AI is asked what Germany was like at a particular point in time, it should not incorrectly assume that the population was more diverse than it actually was.

"You can clearly see that it has actively tried to make a more diverse set and produced something which does not make much sense,” she says. 

Skewed gender balance

Bias in AI does not come from training data alone. The AI workforce itself is imbalanced: only 30 per cent of today’s global AI workforce are women, meaning that most systems are developed by men. This can significantly impact how such systems are built.

“There are a lot of things to consider when developing AI. For example, which training data, neural network, and parameters to use. These decisions are made by someone, and today’s workforce is not particularly diverse,” says Wetzer.

If AI is developed by just one group, it may reflect only their perspectives. The system might be shaped by how they see and experience the world. This is rarely intentional and usually happens without the developers themselves being aware of it.

“Several studies show that technology is shaped by those who create it. This means that there is a chance that a single group may forget to include others’ perspectives and experiences around gender discrimination and racism,” the researcher says.

A need for female role models

The AI workforce and academia should reflect the diversity in society, says Wetzer. Increasing diversity among AI developers and researchers is crucial for developing AI technology that works well for everyone.

“It's absolutely essential to include different perspectives and experiences in the development of these solutions,” says Wetzer.

She believes the field will become more diverse. However, several measures are still needed to motivate girls and women to study, research, and develop AI.

“We must continue to spark interest in STEM subjects from an early age and put STEM careers on the map for girls. We also need strong role models who can inspire them to study and work with AI," she says.

According to Wetzer, more attention should be given to female researchers and their contributions to the field. 

AI regulation is necessary

Last year, the AI Act – the world’s first AI law – was passed in the EU. 

It imposes strict requirements for the responsible development and use of AI in Europe and Norway. The full legislation is set to be implemented in Norway by 2026.

Without such guidelines, AI could reinforce social and economic inequalities, including those based on gender. Wetzer welcomes the AI Act and sees it as an important step toward developing safer and fairer AI.

“I believe it will provide thorough guidelines on how AI systems should be developed and tested before being implemented, similar to how medications are tested. There are clear guidelines and multiple stages that must be followed before the drugs can be used and sold, and the same should apply to AI,” says Wetzer.

She explains that the regulation will encourage developers and researchers to consider how AI systems should be designed according to fundamental ethical principles. 

"It's important that AI systems serve more than just corporate interests,” she concludes.
