Researcher: "AI weakens our judgement"

AI's superior ability to formulate thoughts and statements for us weakens our judgement and ability to think critically, says media professor Petter Bae Brandtzæg.

Studies show that even though we like to say we're critical, we still follow the advice of AI.

No one knew about ChatGPT just three years ago. Today, 800 million people use the technology.

Artificial intelligence (AI) is rolling out at record speed and has become the new normal.

We cannot opt out of AI

Many AI researchers, Petter Bae Brandtzæg among them, are sceptical: AI is a technology that interferes with our ability to think, read, and write.

“We can largely avoid social media, but not AI. It's integrated into social media, Word, online newspapers, email programs, and the like. We all become partners with AI – whether we want to or not,” says Brandtzæg.

The professor of media innovations at the University of Oslo has examined how AI affects us in the recently completed research project An AI-Powered Society.

The Freedom of Expression Commission overlooked AI

The project has been conducted in collaboration with the research institute SINTEF. It is the first of its kind in Norway to research generative AI – meaning AI that creates content – and how it affects both users and the public.

The background was that Brandtzæg reacted to the report from the Norwegian Commission for Freedom of Expression, presented in 2022, which did not sufficiently address the impact of AI on society.

At least not generative AI.

AI affects how we think and understand the world

“There are studies that show that AI can weaken critical thinking. It affects our language, how we think, understand the world, and our moral judgment,” says Brandtzæg.

A few months after the Commission for Freedom of Expression report, ChatGPT was launched, making his research even more relevant.

“We wanted to understand how such generative AI affects society, and especially how AI changes social structures and relationships,” he says.

The boundaries between humans and systems are blurring

The social implications of generative AI are a relatively new field of study that still lacks theory and concepts. The researchers have therefore launched the concept of ‘AI-individualism.’ It builds on ‘networked individualism,’ a framework introduced in the early 2000s.

Back then, the concept captured how smartphones, the internet, and social media enabled people to create and tailor their social networks beyond family, friends, and neighbours.

Networked individualism showed how technology weakened the old limits of time and place. 

But with AI, something new is happening: the boundaries between humans and systems also begin to blur, since AI takes on roles that used to belong to humans.

AI can meet our social needs – what about community?

“AI can also meet personal, social, and emotional needs,” says Brandtzæg.

He has a background in psychology and has previously researched the personal bonds people form with the chatbot Replika. ChatGPT and similar social AIs can provide immediate, personal support for just about anything.

“It strengthens individualism by enabling more autonomous behaviour and reducing our dependence on people around us. While it can enhance personal autonomy, it may also weaken community ties. A shift towards AI-individualism could therefore reshape core social structures,” he says.

Brandtzæg argues that the concept of AI-individualism offers a new perspective for understanding and explaining how relationships change in society with AI.

“We use it as a relational partner, a collaborative partner at work, and to make decisions,” he says.

Students choose chatbots

The project is based on several investigations, including a questionnaire sent to 166 upper secondary school students about how they use AI.

“ChatGPT and MyAI get straight to the point with what we ask, so we don't have to search endlessly in books or online,” one student said about the benefits of AI.

“ChatGPT helps me with problems, I can open up and talk about difficult things, get comfort, and good advice,” another student answered.

In a blind test, many preferred answers from a chatbot over a professional when they had questions about mental health. 

More than half preferred the chatbot, fewer than 20 per cent said they preferred a professional, while 30 per cent answered ‘both.’

“This shows how powerful this technology is, and that we sometimes prefer AI-generated content over human-generated content,” says Brandtzæg.

‘Model power’ – which can hallucinate

‘Model power’ is another concept the researchers have introduced. It builds on a theory of power relations developed by sociologist Stein Bråten 50 years ago.

Model power is the influence one gains from having a model of reality that carries weight, and which others must accept in the absence of equivalent models of their own, according to this article (link in Norwegian).

In the 1970s, it was about how the media, science, and various groups with authority held model power and could influence people.

Now it's AI.

Brandtzæg's point is that AI-generated content no longer operates in a vacuum. It spreads everywhere – into public reports, research, and encyclopedias. 

AI spreads everywhere

When we perform Google searches, we first get an AI-generated summary.

“A kind of AI layer is covering everything. We suggest that the model power of social AI can lead to model monopolies, significantly affecting human beliefs and behaviour,” says Brandtzæg.

Because AI models like ChatGPT are based on dialogue, the researchers call them social AI. But how genuine is a dialogue with a machine fed with enormous amounts of text?

“Social AI can promote an illusion of real conversation and independence – a pseudo-autonomy through pseudo-dialogue,” he says.

AI had invented sources

Ninety-one per cent of Norwegians are concerned about the spread of false information from AI services like Copilot, ChatGPT, and Gemini, according to an August 2025 survey by the Norwegian Communications Authority.

AI can hallucinate. A well-known example is a report the municipality of Tromsø used as a basis for a proposal to close eight schools. It was based on sources that AI had fabricated. 

AI may thus contribute to misinformation and undermine user trust in AI, service providers, and public institutions.

How much misinformation is actually out there?

Brandtzæg wonders how many other, smaller municipalities and public institutions have done the same, and he worries about the spread of misinformation.

He and his research colleagues have reviewed various studies indicating that although we like to say we are critical, we nevertheless follow AI's advice, which highlights the model power of such AI systems.

“It's perhaps not surprising that we follow the advice that we get. It's the first time in history that we're talking to a kind of almighty entity that has read so much. But it gives a model power that is scary. We believe we are in a dialogue, that it's cooperation, but it's one-way communication,” he says.

American monoculture spreads

Another aspect of this model power is that the AI companies are based in the USA and built on vast amounts of American data.

“We estimate that as little as 0.1 per cent is Norwegian in AI models like ChatGPT. This means that it's American information we relate to, which can affect our values, norms, and decisions,” he says.

What does this mean for diversity? The principle is that ‘the winner takes all.’ AI does not consider minority interests.

Brandtzæg points out that the world has never before faced such an intrusive technology, one that must be regulated and balanced against real human needs and values.

“We must not forget that AI is not a public, democratic project. It's commercial, and behind it are a few American companies and billionaires,” says Brandtzæg.

References:

Brandtzaeg, P.B. 'Transforming Social Structures in the Age of Social Artificial Intelligence.' In P. Hacker (Ed.), Oxford Intersections: AI in Society, 2025. DOI: 10.1093/9780198945215.003.0099

Skjuve et al. 'Unge og helseinformasjon: ChatGPT vs. fagpersoner' (Young people and health information: ChatGPT vs. health professionals), Tidsskrift for velferdsforskning, 2025. DOI: 10.18261/tfv.27.4.2
