"Artificial intelligence should help people, not replace them"
Norwegians trust that public authorities will use artificial intelligence responsibly. Now researchers have developed 12 principles to ensure that this trust is not abused.

“Artificial intelligence should help and augment what people do, not replace them. Technology should be a support that gives us new opportunities,” says Stefan Schmager, a researcher at the University of Agder.

He wanted to see how citizens and employees would respond to a system that used artificial intelligence (AI) to follow up individuals on sick leave. He collaborated closely with the Norwegian Labour and Welfare Administration (NAV).
What surprised him most was how trusting Norwegians were towards the state and governmental organisations.
“When I presented the results at conferences in the US, people were astonished. They couldn't believe that people trusted the authorities. In Norway, people understand that the public sector serves an important role,” says Schmager.
Trust facilitates innovation
Norwegians' trust in public authorities helps society function well, says Schmager, who is originally from Germany.
“Trust facilitates innovation. Well-intentioned initiatives aren't as easily shelved due to a lack of understanding or obstinacy. Still, I think it's healthy not to be naïve and to question the decisions being made,” he says.
Participants in the study were positive about NAV using artificial intelligence to handle their data. They appreciated the transparency around how the data would be used and understood that AI could help save time and resources in a way that benefits everyone.
For example, if you are on sick leave, AI can help your caseworker decide whether a follow-up meeting is necessary.
“People understood they were contributing to something bigger. By letting NAV use their data, they free up resources that can be used for others in more need of help,” explains Schmager, adding:
“In short, human-centred AI is about using AI as a tool that augments our abilities to perform our tasks, rather than technology taking those tasks from us.”
Employees are positive
Schmager also interviewed 19 NAV caseworkers. Most of them were positive about AI but had clear views on how it should be used.
“They saw great potential in AI handling routine tasks, freeing up more time for the people they are meant to help,” he says.
As one NAV employee told Schmager: “We could spend our time on the most important cases, those who truly need it. People with few resources who cannot take care of themselves.”
Caseworkers wanted AI that could:
- Find important information faster
- Help prioritise cases
- Handle time-consuming administration
- Give them more time for the most challenging cases
12 rules for safe AI use
“In the private sector, we want AI to adapt to us. But in public services, everyone must be treated equally. The system shouldn't learn and copy one caseworker's habits,” says Schmager.
For instance, if a caseworker often rejects applications from young men, the AI should not start doing the same. Every case should be assessed fairly based on the regulations.
Schmager created 12 principles for how AI should be used in the public sector – 6 focused on the needs of citizens and 6 on the needs of the employees.
Few guidelines for the public sector
“When approached by the researchers, NAV was very open to and interested in the opportunity. They said, ‘We plan to use AI, but we know there are risks, and we would appreciate it if you could help us do it right,’” says Schmager.
His research fills a clear gap. While many major tech companies have developed their own AI rules, there are almost no guidelines for the public sector.
“Private companies have to make money. The public sector serves the people. That's why different rules are needed,” says Schmager.
"Must be done responsibly"
“NAV is rapidly advancing in digital development, and the collaboration with the University of Agder and Stefan has provided our organisation with valuable knowledge and insights,” says Arve Haug, senior adviser at NAV.
He says NAV depends on trust in its digital services, which makes the collaboration with the university crucial for ensuring the services are perceived as safe and fair.
“There are obviously many opportunities to use AI in our services, but this must be done responsibly,” says Haug.
A word of caution
Schmager's study shows that Norway and the Nordic countries are leading the way in responsible AI use. Still, he warns against moving too fast.
“Don't use AI just because everyone else is doing it. First, understand the problem you wish to solve, then determine if AI is the right tool,” he advises.
The principles he developed can be applied by any public entity looking to implement AI. They can also be adapted for other countries, even where trust in the authorities differs from the level found in Norway.
Reference:
Schmager, S. Human-Centered Artificial Intelligence: Design Principles for Public Services. Doctoral dissertation, University of Agder, 2025.

This content is paid for and presented by the University of Agder
This content is created by the University of Agder's communication staff, who use this platform to communicate science and share results from research with the public. The University of Agder is one of more than 80 owners of ScienceNorway.no. Read more here.