
"Artificial intelligence should help people, not replace them"

Norwegians trust that public authorities will use artificial intelligence responsibly. Now researchers have developed 12 principles to ensure that this trust is not abused.

A man outside a building with a NAV sign.
Stefan Schmager investigated how citizens and employees responded to a system that used artificial intelligence for following up individuals on sick leave.

“Artificial intelligence should help and augment what people do, not replace them. Technology should be a support that gives us new opportunities,” says Stefan Schmager, a researcher at the University of Agder.

He wanted to see how citizens and employees would respond to a system that used artificial intelligence (AI) to follow up individuals on sick leave. He collaborated closely with the Norwegian Labour and Welfare Administration (NAV). 

What surprised him most was how trusting Norwegians were towards the state and governmental organisations.

“When I presented the results at conferences in the US, people were astonished. They couldn't believe that people trusted the authorities. In Norway, people understand that the public sector serves an important role,” says Schmager.

Trust facilitates innovation

Norwegians' trust in public authorities helps society function well, says Schmager, who is originally from Germany.

“Trust facilitates innovation. Well-intentioned initiatives aren't as easily shelved due to a lack of understanding or obstinacy. Still, I think it's healthy not to be naïve and to question the decisions being made,” he says.

Participants in the study were positive about NAV using artificial intelligence to handle their data. They appreciated the transparency around how the data would be used and understood that AI could help save time and resources in a way that benefits everyone.

For example, if you are on sick leave, AI can help your caseworker decide whether a follow-up meeting is necessary.

“People understood they were contributing to something bigger. By letting NAV use their data, they free up resources that can be used for others in more need of help,” explains Schmager, adding:

“In short, human-centred AI is about using AI as a tool that augments our abilities to perform our tasks, rather than technology taking those tasks from us.”

Employees are positive

Schmager also interviewed 19 NAV caseworkers. Most of them were positive about AI but had clear views on how it should be used.

“They saw great potential in AI handling routine tasks, freeing up more time for the people they are meant to help,” he says.

As one NAV employee told Schmager: “We could spend our time on the most important cases, those who truly need it. People with few resources who cannot take care of themselves.”

Caseworkers wanted AI that could:

  • Find important information faster
  • Help prioritise cases
  • Handle time-consuming administration
  • Give them more time for the most challenging cases

12 rules for safe AI use

“In the private sector, we want AI to adapt to us. But in public services, everyone must be treated equally. The system shouldn't learn and copy one caseworker's habits,” says Schmager.

For instance, if a caseworker often rejects applications from young men, the AI should not start doing the same. Every case should be assessed fairly based on the regulations.

Schmager created 12 principles for how AI should be used in the public sector – six focused on the needs of citizens and six on the needs of employees.

12 principles for public use of AI

Citizen-focused design principles:

  1. Balancing AI benefits and individual freedom
    AI systems should clearly explain how they benefit society whilst respecting individual rights and freedoms.
  2. Clear and necessary use of data
    Only collect and use the personal data that's actually needed, and explain what data is being used and why.
  3. Fit with government mandates and processes
    Make it clear how AI fits into government processes and legal requirements, using simple language.
  4. Gradual information provision
    Start with a basic overview, and let people access more details if they want them.
  5. Easy feedback options
    Provide easy ways for citizens to give feedback and explain how that feedback will be used to improve services.
  6. Appropriate consent practices
    Get proper permission from citizens when using their personal data, especially when data ownership isn't clear.

Employee-focused design principles:

  1. Sensible resource allocation
    Help public service workers focus their time on people who need the most support.
  2. Automating repetitive administrative tasks
    Let AI handle repetitive work so employees can focus on more important tasks.
  3. Consistency over personalisation
    Make sure AI treats everyone equally, rather than learning one employee’s way of doing things.
  4. Providing legal reassurance
    Give employees clear information about data sources and legal compliance to help them do their jobs confidently.
  5. Easy feedback options
    Provide easy ways for employees to give feedback about AI systems and explain how improvements will be made.
  6. Human control over decisions
    Keep people in charge of key decisions, with AI serving as a helpful tool rather than making choices independently.

Source: Human-Centered Artificial Intelligence: Design Principles for Public Services

Few guidelines for the public sector

“When approached by the researchers, NAV was very open and interested in the opportunity. They said ‘we plan to use AI, but we know there are risks, and we would appreciate it if you could help us do it right’,” says Schmager.

His research fills a clear gap. While many major tech companies have developed their own AI rules, there are almost no guidelines for the public sector.

“Private companies have to make money. The public sector serves the people. That's why different rules are needed,” says Schmager.

"Must be done responsibly"

“NAV is rapidly advancing in digital development, and the collaboration with the University of Agder and Stefan has provided our organisation with valuable knowledge and insights,” says Arve Haug, senior adviser at NAV.

He says NAV depends on trust in its digital services, which makes the collaboration with the university crucial for ensuring the services are perceived as safe and fair.

“There are obviously many opportunities to use AI in our services, but this must be done responsibly,” says Haug.

A word of caution

Schmager's study shows that Norway and the Nordic countries are leading the way in responsible AI use. Still, he warns against moving too fast.

“Don't use AI just because everyone else is doing it. First, understand the problem you wish to solve, then determine if AI is the right tool,” he advises.

The principles he developed can be applied by any public entity looking to implement AI. They can also be adapted for other countries, even where trust in public authorities is lower than in Norway.

Reference:

Schmager, S. Human-Centered Artificial Intelligence: Design Principles for Public Services. Doctoral dissertation, University of Agder, 2025.

About the research

The doctorate is part of the AI4Users project, funded by the Research Council of Norway (NFR), in the Human-Centered AI research group, and is also part of an extensive collaboration between NAV and the University of Agder.
