Why does Norway need its own AI law?
An AI expert explains that the law is intended to make AI technology safe for everyone.

Imagine if you lived in a society where artificial intelligence (AI) is used to spy on you during class or work. What would you think about that?
Or what if it was used to rank your value as a human being based on how you behave at school, the workplace, or in your free time?
In 2024, the EU adopted the world's first comprehensive law for AI. It sets out how AI should be developed and used responsibly in the EU and EEA countries, thereby avoiding these nightmare scenarios. The legislation will be fully in effect in 2026.

Today, the Norwegian government is working on making its own AI law. It will be similar to the EU’s, but adapted to Norwegian conditions and regulations.
Meant to keep us safe
Eirik Agnalt Østmo is a researcher at the AI centre SFI Visual Intelligence at UiT The Arctic University of Norway.
He says the law is important for ensuring that we feel safe in a society where AI is being used in more areas.
It's about taking control of a technology that is developing at a rapid pace – and preventing it from being abused.
"AI is developing at an extremely fast rate, and we still haven't seen the technology's full potential. That's why we need rules for how AI systems should be developed and used," he explains.
Holding AI developers accountable
While AI technology can help us in many different ways, some systems may pose a risk to people's rights, health, and well-being. For example, they can be used to manipulate people into doing things or exploit individuals in vulnerable situations.
Fortunately, not all AI systems are equally dangerous – the kind that suggests which film you should watch on Netflix poses little risk. That is why the AI law divides systems into four risk categories: minimal, limited, high, and unacceptable risk.
The greater the risk an AI system poses, the stricter the requirements developers must follow.
"This helps to hold those who provide AI solutions accountable, while also determining how the systems should be used depending on their risk category," Østmo explains.
Unwanted AI systems
Some AI systems are considered unacceptable. Those that can be used to rank individuals as good or bad citizens based on their behaviour will be banned in Norway. The ban ensures that the technology cannot be used to violate our privacy.
"These are systems that have such a high potential for harm that we don't want them in our society," says Østmo.
The ban also prevents the technology from being used to limit our freedom of speech, for example by monitoring people who attend political rallies or demonstrations.
Earlier in 2025, the Hungarian government planned to use AI to identify and fine individuals who took part in Pride celebrations.
"People must have the right to say and believe what they want without the fear of being monitored or recorded. That's why AI should not be used this way," he says.

Preventing discrimination
Today, AI is used to decide who gets a mortgage or to find the best job candidate. These are examples of high-risk AI systems.
While these systems can make such tasks quicker and simpler, they have been known to discriminate against people based on gender, skin colour, or sexual orientation.
This happens when the AI is trained on historical data – meaning text, images, or videos that may contain outdated attitudes or stereotypes.
A recruitment system from Amazon favoured male applicants because it was trained on past applications, most of which came from men in ICT-related jobs. An AI-based credit card system from Apple was investigated in 2019 for giving women lower credit limits than men.
The AI law will be important to ensure that the systems do not discriminate against anyone.
"The AI systems must treat everyone equally. The law serves as a tool to comply with this principle" says Østmo.
Limiting fake news and misinformation
AI chatbots imitate human intelligence in a very convincing way. Social media platforms like TikTok can show fake videos that look surprisingly real.
That's why it's not always easy to know whether you are talking to a machine or a human – or to separate false information from reality.
'Transparency' is a key principle in the AI law. It means that developers must ensure that we always know when we are interacting with AI. Clear labelling of AI-generated content is therefore important.
"If we know that we're talking to a machine, or that an image was created by AI, it can help us be more critical of the information. This can help combat fake news and misleading content on social media and the internet," Østmo explains.
Safe AI development
The AI law is meant to make AI technology safe for everyone, but some fear that strict regulations might slow down innovation – even AI systems that could help doctors detect diseases.
Østmo emphasises that AI development must be safe. The most important thing is that the technology benefits people, and that unfinished or harmful AI systems are not released into society.
"If we don't have clear rules for what is acceptable use of AI and what is not, technology companies will decide that for themselves. That's why it's important to have a law that ensures safe AI development," he says.

This content is paid for and presented by UiT The Arctic University of Norway
This content is created by UiT's communication staff, who use this platform to communicate science and share results from research with the public. UiT The Arctic University of Norway is one of more than 80 owners of ScienceNorway.no. Read more here.
More content from UiT:
- Researchers reveal a fascinating catch from the depths of the sea
- How can we protect newborn babies from dangerous germs?
- This is how AI can contribute to faster treatment of lung cancer
- Newly identified bacterium named after the Northern Lights is resistant to antibiotics
- International Women's Day: Why AI performs worse for women
- New study: Streptococcal vaccines are both safe and effective