THIS CONTENT IS BROUGHT TO YOU BY SINTEF
Deepfakes are threatening trust in society
Will we be able to trust text and images in the future? Deepfake technology is being used not just for innocent fun, but also to influence voters in the world’s most powerful countries.

Hollywood star Brad Pitt recently opened SINTEF’s conference on digital security.
Well, not really.
“I cloned his voice in less than three minutes,” says Viggo Tellefsen Wivestad, a researcher at SINTEF Digital.
Wivestad began his talk on deepfakes with a video of himself – except it wasn’t him. Instead, he appeared as Brad Pitt, speaking in the actor’s signature voice: “Deepfake. Scary stuff, right?”
And that is precisely Wivestad’s message.
A growing threat to society
“Deepfakes will become a growing threat to us as both private individuals and employees, and to society at large. The technology is still in its infancy. Artificial intelligence is opening up unimaginable opportunities, and the fakes it produces are becoming harder and harder to detect,” says Wivestad.
The word ‘deep’ in deepfake comes from ‘deep learning,’ a field of artificial intelligence. In practice, deepfake refers to any AI-generated fake media content.

We have seen many examples of this in the US election campaign. Even those who have made deepfake videos themselves have issued warnings.
Very credible hoaxes
The researcher explains that cloning someone’s voice is easy – all you need is a recording to use as a basis. The same applies to still images. Creating deepfake videos is more complex, but the technology has reached a level where fake videos appear highly convincing.
“Microsoft has a technology that produces fake videos based on just a single still image. You decide what the person should say, and the lips and face move in a natural way,” says Wivestad.
The ease of face-swapping
Many enjoy using Snapchat filters that give us rabbit ears or distort facial features. However, when the technology is used to swap people’s faces in photos and videos, the consequences can be anything but fun.
“In South Korea, deepfake pornography is a major problem, with students superimposing the faces of their fellow students onto porn videos. In six out of ten cases, the cyberbullying involves minors. Pop stars and actors around the world have been subjected to the same,” says Wivestad.
More and more apps offer face-swap technology, and they are becoming increasingly advanced. Now, with just a short text description of a well-known person, AI can realistically generate both their voice and live images.
Embraced by criminals
Deepfake and AI are powerful tools in the hands of criminals.
Wivestad shares the story of a Hong Kong financier who was tricked into transferring USD 25 million after a video call with someone he thought was the company’s CFO. The man was sceptical at first but became convinced when he recognised several colleagues attending the same meeting, all agreeing with the CFO’s request to execute a fast-track money transfer.
“The problem was that everyone he met during the video conference was a deepfake. The scammers had cloned both the faces and the voices of the man’s colleagues,” says Wivestad.
He explains that this was a sophisticated operation, but it is becoming easier to commit advanced fraud due to increasingly powerful digital tools.
“You can get anyone to say anything at any time,” he says.
Undermining trust in society
Wivestad fears that deepfake technology could weaken confidence in society.
“During the US election campaign, Biden’s voice was cloned and used in robocalls where the president asked people not to vote. Later, Trump reposted fake photos apparently showing Taylor Swift fans who were Trump supporters,” he says.

When trust is weakened, it is easy to create uncertainty. Trump claimed that news articles showing images from the crowd at a Kamala Harris gathering were generated by AI, despite overwhelming evidence to the contrary.
“We also have what is known as the Liar’s Dividend – it’s easier for people to claim that real events are fake if they don’t like what the images depict. One famous example is Trump’s claim that Kamala Harris had exaggerated the size of the crowd at her gathering,” says Wivestad.
‘Has anyone noticed that Kamala CHEATED at the airport? There was nobody at the plane, and she “A.I.’d” it, and showed a massive “crowd” of so-called followers, BUT THEY DIDN’T EXIST!’ Trump claimed on Truth Social.
“Insurance companies are seeing an increase in the number of fraudulent claims they receive. This is particularly worrying given that we know that fake news spreads ten times faster than the truth,” says Wivestad.
How to detect deepfakes
Wivestad offers some helpful tips for detecting deepfake videos and images. However, he notes that this advice may soon be outdated.
- Look closely at the details in an image or video. Is there anything that doesn’t seem quite right – the number of fingers, an unusual shape to something?
- What about shadows and reflections? It can be easier to detect fakes by looking at things in the background. Is there any text in the picture, and does it make sense?
- Do the images come from a credible source? Does the image stir up strong feelings? Could someone have malicious intentions in spreading it?
“As regards the human voice, we have the advantage that Norwegian is a small language, especially in the case of dialects. It's easier to create deepfakes from Jonas Gahr Støre’s Eastern Norway dialect than Erna Solberg’s Bergen dialect,” the researcher says.
A simple security measure: create codewords
If you have any doubts about whether the person you are talking to in a video call is real, Wivestad suggests having an agreed codeword, or asking about something that the fraudsters are unlikely to know.
If you are put under time pressure or the messages stir up strong emotions, you may want to think twice and check whether the person you are talking to is real.
“You can still avoid being fooled if you practise critical thinking and keep up to date on media literacy and netiquette. Even if deepfakes become perfect one day, they will still have some weaknesses: just because someone pretends to be someone you know does not mean that the fraudster has an in-depth knowledge of your relationship with them,” Wivestad explains.
If someone reaches out unexpectedly, Wivestad suggests ending the conversation and contacting the person through the trusted channel you would normally use.
"It's also not possible to deepfake reality, so you might want to consider having sensitive conversations face-to-face,” he suggests.
Tech giants are working on the problem
The general population, companies, institutions, and authorities all need to improve their understanding and awareness of the threats that deepfakes pose.
"The more we know, the easier it is to detect fakes. Companies like Microsoft, Google, and OpenAI are working on the problem. However, there is a race between technology for detecting deepfake and for finding ways around the security barriers," says Wivestad.
He explains that it is not the tools themselves that define our future, but how we regulate and use them.
“We need institutions, organisations, journalists, and researchers that we can trust. For research institutions such as SINTEF, credibility is crucial,” the researcher says.