The new, artificial composers
In the future, a new kind of composer may contribute to new kinds of music.
In recent years, artificial intelligence (AI) has influenced music to an increasing extent, for instance in music production: while a human writes the main melody, AI may generate the background arrangement.
“However, music produced by AI today is often not very surprising, as surprise is not AI’s priority. It is more about giving you what you order,” Cağrı Erdem says.
As a researcher at RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, he wanted the machine to become a partner.
“I wanted us to make music together. So the machine needs to have some kind of agency, a capability to act on its own,” he explains.
The machine as a partner
He recently completed a PhD in which he investigated what he called ‘shared control’, developing several interactive systems along the way.
One of these systems he named CAVI: an instrument controlled by both a musician and a machine, where both actors are able to make choices.
“In this clip, six 'self-playing guitars' listen and react based on what they hear. It becomes a kind of guitar choir, where the guitars give their own contribution to the piece,” Erdem says.
In November, Erdem will be organising a workshop on musical AI.
The movements of the body
He also investigated how our bodies can collaborate with AI.
“If you look at the AI systems in music today, they do not collaborate with the human body. In general, they base their actions primarily on sound. However, when humans play together, they communicate through both sound and movement,” Erdem says.
He invited 36 guitarists to his lab and collected a dataset capturing how their movements were linked to the music being played.
“I found that there is a close connection between sound and movement, particularly the movements of the right hand. The muscular force we exert when hitting the guitar strings reflects the sound almost perfectly,” he says.
Erdem used machine learning algorithms to model the relationship between movement and sound. Once trained, the machine could be given information about movement alone and would produce sound based on that input.
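As a rough illustration of this kind of movement-to-sound mapping, here is a minimal sketch in Python. It assumes windowed motion features (such as muscle activity) paired with sound features (such as loudness), and uses a generic scikit-learn regressor; the feature names, data shapes, and model choice are illustrative assumptions, not Erdem’s actual pipeline.

```python
# Minimal sketch of a movement-to-sound mapping (illustrative only,
# not Erdem's actual model): a regressor learns sound features from
# motion features, then predicts sound from movement alone.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: each row is one time window.
# X = motion features (e.g., right-hand muscle activity, acceleration)
# y = sound features (e.g., loudness, spectral brightness)
X_train = rng.random((500, 8))   # 500 windows, 8 motion features
y_train = rng.random((500, 2))   # 2 matching sound features per window

# Fit a small neural network to the paired movement/sound recordings.
model = MLPRegressor(hidden_layer_sizes=(32, 32),
                     max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Given movement alone, the model now predicts the sound features.
new_movement = rng.random((1, 8))
predicted_sound = model.predict(new_movement)
print(predicted_sound)
```

In a real system, the predicted features would drive a synthesizer in real time, closing the loop between gesture and sound.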
In the longer term, Erdem believes that this kind of technology can make it easier for humans and machines to collaborate.
Instruments for the future
Erdem works in a niche field, but he believes his research to be important.
“In music history, there are many examples of new instruments that have influenced the music being made,” he says.
When the piano was invented, it was originally named ‘pianoforte’ because it allowed you to play both softly (Italian: piano) and loudly (Italian: forte).
“You see the effect of it on pieces written afterwards. Tools affect the music we make. New instruments may build a foundation for the music of the future,” Erdem says.
A more creative AI
The tech field is led by engineers, not artists, and this is highly visible in today’s AI systems, he adds.
“If you write the word ‘cat’ in a search engine, you get pictures based on other cat pictures. This makes sense, but it also means that the algorithm often does not show you photos of rare cats,” Erdem says.
In order for AI to produce broader musical expressions, more artists must get their hands dirty with AI technologies, he states.
“People working at the crossroads of art, technology and science, like myself, need to investigate how algorithms may contribute to broadening the art and music of the future,” he says.
AI in all phases of music production
As a tool, AI already contributes immensely to people’s everyday musical experiences. For instance, when your streaming platform suggests artists for you, it is using AI. When you listen to film music with big orchestral arrangements, they may have been made by AI, perhaps based on a simple melody.
Erdem has no doubt that in 50 years, AI will be a strong presence in music production.
“I think it will be indispensable in all phases of music production, such as sound synthesis, songwriting/composition, arranging, recording, mixing, mastering, distribution/streaming, promotion, as well as live performances,” he says.
He also believes that we will find AI musicians with huge fanbases. The robot Shimon is already out there. It doesn’t have a fanbase yet, but you can book it for your event.
“But what will happen with copyrights? We don’t know yet. For example, after CAVI’s premiere, we could not figure out how to deal with that in the Norwegian system,” Erdem says.
Reference:
Cağrı Erdem: Controlling or Being Controlled? Exploring Embodiment, Agency and Artificial Intelligence in Interactive Music Performance. Doctoral thesis, University of Oslo, 2022.
This article/press release is paid for and presented by the University of Oslo
This content is created by the University of Oslo's communication staff, who use this platform to communicate science and share results from research with the public. The University of Oslo is one of more than 80 owners of ScienceNorway.no.