This content is brought to you by NTNU Norwegian University of Science and Technology

An AI-generated image of a woman with bionic elements.
This image of Inga Strümke was created with the help of artificial intelligence. Can you spot the error? Find out further down in the article.

AI expert: We should not have AI that exploits people's weaknesses

While Inga Strümke does not believe artificial intelligence will take over the world with killer robots, the field still requires regulations. And now, Europe is about to get them.


Her office isn’t necessarily boring, but the cell-like room behind the glass door at NTNU does not exactly reveal that this is the base of one of Norway’s most renowned researchers either.

Inga Strümke has become something of a guru for those interested in artificial intelligence in Norway, partly because of her book Machines That Think, but even more so because she enjoys sharing what she knows with the rest of us.

The EU is now introducing new and stricter rules for the use of artificial intelligence, the so-called AI Act. As an EEA member, Norway will likely adopt the same regulations.

The rules are meant to protect us as consumers. They should help ensure that we are safe and can trust the use of AI, according to Inga Strümke.

“I’m not a fan of regulation for regulation’s sake,” says Strümke.

However, she sees several reasons why Norway should participate.

“Norway is part of Europe and the world. It’s politically important that we continue to work closely with the EU on these issues,” she says.

Beyond the purely political aspects, the AI Act is important for you and me as well. 

“We shouldn’t have AI that exploits people’s weaknesses,” she says.

The use of AI that can be dangerous or unfair will be subject to much stricter regulations than uses that appear to be more innocent. Some uses will simply be prohibited in the EEA and EU. 


A summary of the EU's AI Act

The EU's AI Act divides AI applications into different risk categories:

  1. Unacceptable risk: These uses will be banned in the EU. This includes AI that manipulates human behaviour, real-time remote biometric identification such as facial recognition in public spaces, and social scoring systems that rate people's behaviour in ways that can give them advantages or disadvantages in society.
  2. High risk: These are AI systems that threaten health, safety, or fundamental rights, such as those used in healthcare, education, recruitment, critical infrastructure management, law enforcement, or the judiciary. These will be subject to very strict regulations.
  3. General-purpose AI (GPAI): This includes programs like ChatGPT. These must meet certain transparency requirements and be thoroughly evaluated.
  4. Limited risk: AI systems that are subject to transparency requirements, including those that generate or manipulate images, audio, or videos. Free and open-source models with publicly available parameters are generally not regulated.
  5. Minimal risk: Includes, for example, spell checkers and AI systems for video games or spam filters. Not regulated at the EU level.

The information in this fact box is taken from Wikipedia.

What if we do not get any regulations?

Would it really be such a big deal if these regulations were not put in place? Everyone knows that rules can be annoying.

“We could also ignore traffic rules,” says Strümke. Then everyone could drive as fast as they wanted.

For a short time, that is. Until something goes horribly wrong. 

“AI is such a strong commercial force. When ChatGPT was launched, it took just five days before it reached 1 million users,” she says.

ChatGPT is currently the most famous chatbot. Many students have already taken shortcuts by using it.

“We’re not yet finished dealing with the use of AI in the education sector,” she says.

Because when you let AI do almost all the work, you don't learn much either.

Regulations already in place

However, it is not the case that the rest of the world will necessarily adhere to the same strict regulations as the EU and Norway. 

For example, what is to stop China or the USA from having more relaxed regulations and thereby possibly gaining a competitive advantage?

“Nothing. We can’t prevent other countries from having different rules. But it’s a myth that only the EU regulates the use of artificial intelligence,” says Strümke.

As of earlier this year, the USA already had 58 different regulations for the use of artificial intelligence, spread across various states. So it is not a free-for-all elsewhere either. 

“We have to choose the type of society we want to live in. The EU has had a tendency to set a precedent for others,” she says.

The EU often leads the way

Other parts of the world often follow the path taken by the EU. Strümke illustrates this by holding up a USB-C cable, the new standard for charging cables in much of the world. The standard was recently adopted despite initial protests and complaints from tech companies.

But soon, you might not need heaps of different charging cables lying around; one type will suffice. As an additional benefit, USB-C transfers more data and power than older connectors, and the plug works in either orientation. We can thank the EU for making USB-C the standard.

Similarly, much of the world has followed the EU’s General Data Protection Regulation (GDPR), which gives us all greater control and rights over the type of information companies can collect about us.

Strümke believes AI regulations might follow the same path.

Machines that think

Strümke is originally a physicist with a PhD in particle physics, but in recent years she has focused on artificial intelligence.

Her specialty is machine learning, specifically how artificial intelligence is trained, and explainable AI, a field that examines what an AI model has actually learned and where that knowledge comes from.

She has decided to prioritise public information, while still spending a lot of time on research.

As of earlier this year, her book Machines That Think had been on the Norwegian Booksellers Association’s bestseller list for a whole year.

“I saw it as an opportunity to get a proper overview of the field of artificial intelligence,” she says.

No Terminator, but…

Inga Strümke is not worried that AI-driven killer robots will soon take over the world. However, the massive, rapid advances in AI still present many problems.

She is afraid that we might lose our spark.

“We have always been used to slumping down in front of the TV and being entertained, but that isn’t the same as what is happening now. Many people are sedentary all day long,” she says. 

Various entertainment services and social media quickly learn what we like and simply feed us more of it. Suddenly, you realise you have spent much more time on TikTok or YouTube than you intended.
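The "feed us more of what we like" loop can be pictured as a toy sketch. This is an invented, heavily simplified illustration, not how any real service actually works: real platforms use far more sophisticated models, but the basic feedback loop of ranking new content by what you have already clicked is the same idea.

```python
from collections import Counter

def recommend(click_history, catalog, n=3):
    """Toy 'more of what you liked' recommender: rank catalog items
    by how often their category appears in the user's click history."""
    category_counts = Counter(item["category"] for item in click_history)
    # Rank unseen items so that frequently clicked categories come first.
    ranked = sorted(
        (item for item in catalog if item not in click_history),
        key=lambda item: category_counts[item["category"]],
        reverse=True,
    )
    return ranked[:n]

# Invented example data: two cat videos clicked, one news clip.
clicks = [{"title": "Cat video 1", "category": "cats"},
          {"title": "Cat video 2", "category": "cats"},
          {"title": "News clip", "category": "news"}]
catalog = clicks + [{"title": "Cat video 3", "category": "cats"},
                    {"title": "Cooking show", "category": "food"},
                    {"title": "News clip 2", "category": "news"}]

print(recommend(clicks, catalog))
```

Because the user clicked mostly on cat videos, the sketch ranks the unseen cat video first, which in turn invites more cat clicks: the self-reinforcing loop Strümke describes.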

“This is worse than coercion. These services give such good recommendations and they know you so well that you simply can’t resist,” says Strümke. 

These services are able to do this because they collect information about you. Almost all of us have clicked on the buttons that give them permission to do so, making it legal. 

With the new EU regulations, this data collection may become more difficult, at least in terms of using the information to keep us glued to the screen for hours.

Strümke herself hardly uses social media anymore. Well, except Instagram. 

But AI does not just threaten our leisure time. It also threatens democracy.

Who gets your vote?

Political elections are also influenced by artificial intelligence. The most entertaining politician can get the most attention.

This is nothing new, but disinformation can be spread through deep fakes, image manipulation, and targeted fake news.

This image was created with the help of artificial intelligence. The starting point is the small picture of Inga Strümke below. Note that the right hand has at least six fingers, currently a typical mistake in AI-generated images.

Messages can be tailored specifically to individuals or groups using artificial intelligence and social media. That puts us in a more vulnerable position.

Lies and advantages for the attractive candidates have always been part of politics. During the 1960 U.S. presidential election, Richard Nixon lost to John F. Kennedy. It certainly did not help Nixon’s case that he appeared on TV stammering, awkward, and without makeup, against his charming rival.

Portrait photo of a woman in a light hoodie.
The image used as a starting point to create the AI-generated illustration above.

“I am still undecided as to whether what we are seeing now is something completely new. But it’s different than before. There is no longer the need for a large organisation involving many people to influence an election campaign. All you need is enough computers,” Strümke says. 

A typical Russian propaganda strategy, for example, is not necessarily to lie outright all the time. Instead, people are bombarded with conflicting information until they become fed up. 

“People are affected by digital fatigue. They get tired of it. It can also have a pacifying effect,” she says.

The strategy also opens the door for populists: people with simple messages that cut through the chaos, delivering exactly what you want to hear, often paired with rhetoric blaming some minority or another convenient enemy.

So, it’s going to be a nightmare, right?

The use of artificial intelligence presents many dangers and pitfalls. The world is changing, and not everything seems to be moving in a positive direction.

“People often ask me if I’m an optimist or a pessimist. What I can say is that I’m never in between. Things could really end up going to hell, but it could also turn out to be really good,” says Strümke.

On the one hand, there are all the problems we face in the world. We humans are destroying the planet, eradicating species, destroying habitats, buying loads of things we do not need that are shipped from far away, all while a changing climate looms over us. In recent years, we've seen less democracy and more unrest.

Individually, we humans are not stupid. But our systems and the organisation of our communities are not always that smart. Sometimes, artificial intelligence is part of the problem.

But on the other hand, we have opportunities. And the solutions can just as easily be aided by artificial intelligence if we use it wisely.

“I can’t think of a single field where AI wouldn’t be useful,” says Strümke.

———

Read the Norwegian version of this article on forskning.no
