Perfect Sound Forever

AI AND THE FUTURE OF HUMAN MADE MUSIC


Image generated by Runway AI

Introduction and Interviews by J. Vognsen


In an article titled "The Shallowness of Google Translate," the eminent mathematician, physicist and cognitive scientist Douglas Hofstadter reflected on the idea that human translators may be an endangered species. Would humans be left only as "mere quality controllers and glitch fixers rather than producers of fresh new text," he wondered, as machines increasingly take their place? Hofstadter did not at all like the prospects:
"Such a development would cause a soul-shattering upheaval in my mental life. (...) Indeed, the idea frightens and revolts me. To my mind, translation is an incredibly subtle art that draws constantly on one's many years of life experience, and on one's creative imagination. If, some 'fine' day, human translators were to become relics of the past, my respect for the human mind would be profoundly shaken, and the shock would leave me reeling with terrible confusion and immense, permanent sadness."
While Hofstadter - way back then, ages ago in early 2018 - was ultimately reassured that human translators still had a place, his underlying outlook was deeply ominous, since that reassurance hinged only on the state of the technology as he saw it at the time. As Hofstadter made clear in the article, professionally, his name is closely associated with the idea that there are no fundamental differences between human brains and machines:
"From my point of view, there is no fundamental reason that machines could not, in principle, someday think; be creative, funny, nostalgic, excited, frightened, ecstatic, resigned, hopeful, and, as a corollary, able to translate admirably between languages. There's no fundamental reason that machines might not someday succeed smashingly in translating jokes, puns, screenplays, novels, poems, and, of course, essays like this one. But all that will come about only when machines are as filled with ideas, emotions, and experiences as human beings are. And that's not around the corner. Indeed, I believe it is still extremely far away. At least that is what this lifelong admirer of the human mind's profundity fervently hopes. When, one day, a translation engine writes an artistic novel in verse in English, using precise rhyming iambic tetrameter rich in wit, pathos, and sonic verve, then I'll know it's time for me to tip my hat and bow out."
Of course, Hofstadter's particular views on the nature of machines, human minds and creativity are up for debate. He may or may not be right; I certainly don't know. But I don't think one needs to subscribe to any particular views outside of common experience to feel profoundly flabbergasted as AI continues to encroach on previously human-exclusive territory and uneasily ponder, Where does this end?

As we've seen, Hofstadter describes translation as an art. Well, what about all the other arts then? What about, say, music? Will there come a day when it's time for humans to tip their hats and bow out of making music? That would leave me reeling with terrible confusion and immense, permanent sadness as well, I'm quite sure.

At the very least, there are many things to consider here. To gather perspectives, I reached out to a number of composers and performers and asked them to reflect on the following:

The ability of AI to make text, images and sound is improving at a rapid pace. Does this affect how you think about your own music? Does it change how you view the value and purpose of making music? For example: On a practical level, do you worry about being unable to reach an audience as AI generated music continues to proliferate? Or, on a more existential level, can you imagine a point where AI gets so good that it will no longer be meaningful for you to continue making music?
For helping out, thanks to:
Fabio Ricci
Ulkar Aghayeva
Nick Didkovsky
Massimo Farjon Pupillo




ULKAR AGHAYEVA, biologist and composer:

In Isaac Asimov's novella "Profession," education is a solved problem where children undergo "taping" at certain ages and have all the relevant information recorded in their brains without the need to make an active learning effort. The protagonist, George Platen, is however deemed unfit for taping at the age of 18, and placed in "The House for the Feeble-Minded" where he, along with others of similar mental ability, is given access to all human knowledge but has to acquire it himself. He escapes the House a year later to confront the doctor who diagnosed him as feeble-minded, and finds himself at the Olympics (happening in San Francisco!) where Educated People compete to be hired by advanced Outworlds. It becomes clear to George that their problem-solving abilities are limited by the knowledge made available to them through taping. But he thinks he has a better solution to such limitations - namely reading books and talking to people who have expertise in a given field (an extraordinary claim for his time). Later, through a series of fateful encounters, George learns that the House he was earlier confined to is in fact the Institute of Higher Studies where people like himself, who have the urge to create, despite being told otherwise, work for the advancement of science and human civilization.

This novella can be seen as a parable of the AI-human relationship in music composition - and any creative endeavor, really. One might say that deep learning-based music-generating AIs (which so far, including Google's MusicLM, have been arguably less impressive than image- and text-generating ones) are "taped" by training on existing music libraries and don't have a chance of ever being as creative as humans have been. Another might counter that even in human-composed music everything is a remix, and that it's only a matter of enough training on high-quality data for AI to reach human levels. In any case, I don't see the AI-human relationship in music as fundamentally antagonistic. I believe the existence of AI composers will be no more threatening to the future of human composition than is the existence of fellow human composers - both peers and masters of the past.

What is meaning-making in the space of effectively infinite combinatorial possibilities of rhythm and melody and harmony? Granted, what we call music "has temporal patterns that are tuned to the [human] brain's ability to detect them because it is another brain that generates these patterns" [1] (incidentally, this is also why aleatoricism, stochastic music and the like find few appreciators). Composition is not about arbitrarily combining rhythms and pitches - there's often a narrative arc, and always an intention behind a piece of music. Musical ideas are not just cycled through and then forgotten, but are developed, modified, and returned to in a way that carries an emotional charge. An AI must have a model of the human emotional landscape - which abstractly relates to musical gestures and textures - and an understanding of musical narratives at different scales in order to write music that would be meaningful to humans. Is it possible to gain this understanding through training on the corpora of existing human-composed music? That's what human composers do, at any rate, in addition to being attuned to their own emotional microcosm.

I don't think any developments in AI music will make my own writing less meaningful or less fulfilling for me personally. I write music as a way of reconnecting to my roots, of expressing a longing for my native Azeri musical language and my admiration of Western classical forms spanning centuries - arguably a niche crossover in the overall musical soundscape. I don't plan to use AI in my writing in the near future - again, because of its personal significance, I want musical ideas to be born in my heart unaided.

But there is no doubt a huge potential in using AI for experimenting with combinations of existing musical ideas and textures, irrespective of one's level of musical education. Arguably, some musical genres are more amenable to such experimentation than others. I expect high-quality AI-generated techno music to emerge earlier than, say, an impressive example of an AI-generated Renaissance-style madrigal. Neither will stop musicians from writing and producing music in those genres or from inventing new genres and soundworlds out of whole cloth - which may for now remain a human prerogative.

[1] György Buzsáki, Rhythms of the Brain, p. 123.



Photo by Steffen Jorgensen

ANDERS BACH, drummer and producer:

When I'm making music, I'm in constant collaboration with machines. They help me facilitate the best and the worst of my ideas. For instance: if I'm working in a DAW [digital audio workstation], my computer and my mood dictate the degree to which I, in a specific moment, zoom in closely enough to cut a region of audio at precisely the right place, or whether I concede to something that only roughly approximates the original intention. Or if I'm using a synthesizer, I'll search through the synergy of the resonance and frequency cutoff of a certain filter until I reach something musically satisfying, often more satisfying than anything I could initially have thought of. I would not be the music-maker I am today without the unintentional musical intelligence of the machines I populate my practice with, colliding with my imperfectionism. This mundane use of these technologies to create feels extremely meaningful to me.

When I think of AI's most direct influences in my everyday life, I think of social media algorithms and enemy design in video games. In these instances I am either rewarded with an autocompleted search bar or an ad for a USB coffee mug warmer, or I am thoroughly entertained killing things in "Elden Ring." In this regard, I am a consumer in the Western liberal reality, through and through, and without much thought I let this type of AI permeate my life like most other people in Denmark. As an artist I have worked with interactive evolutionary computation, a branch of research into automation that seeks to mimic evolutionary systems. I've used this in projects that revolved around improvisation, to try to make the point that basic musical improvisation can be computed in real time within the math of genetics. I really was not successful with that, to be honest; everything ended up sounding really bad, and I couldn't use any of the material I had generated in any meaningful musical context. So I grasped at straws and turned to neural karaoke, a discipline I found fascinating. In 2016, this was a recent trend in AI music research in which an algorithm is trained on a specific conventional genre of music and is then fed a photograph that it associates with certain musical variables of rhythm, harmony and melody. Any picture or photo can be used. The algorithm ultimately generates a short song, usually around one minute, with drums, bass, keys and vocaloid-type vocals singing English lyrics. Most of the results are pretty funny, but the lack of musical intelligence in this neural karaoke is the real punchline when compared to the shift in computation that we've experienced since.
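As a rough, hypothetical illustration of the image-to-music step in the neural karaoke pipeline described above: a trained image model returns a feature vector, and those features are mapped onto musical variables such as tempo, key and mode. The toy sketch below fakes the image model and invents the feature names (brightness, busyness, warmth) purely for illustration; it is not the actual 2016 research system.

import random

def image_features(path):
    # Stand-in for a trained image model: a real system would return a learned
    # feature vector; here we fake one deterministically from the file name.
    random.seed(path)
    return [random.random() for _ in range(3)]

def features_to_music(feats):
    # Map the invented image features onto a few musical variables.
    brightness, busyness, warmth = feats
    return {
        "tempo_bpm": int(70 + busyness * 80),                 # busier image, faster song
        "key": ["C", "G", "D", "A"][int(brightness * 3.99)],  # brightness picks a key
        "mode": "major" if warmth > 0.5 else "minor",         # warmth picks major/minor
    }

print(features_to_music(image_features("holiday_photo.jpg")))

A full system would then hand these variables to a song generator for drums, bass, keys and a vocal line; the point here is only the shape of the mapping.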

Curtis Roads wrote a paper back in 1985 in which he mentions the possibility of AI in the future helping to make tape playback in electronic music easier. To be fair, that paper is highly prophetic, but it nonetheless underscores an argument visible only in hindsight: that we had and still have no way of imagining the uses of AI that will permeate music-making in subsequent decades. The processes that will be automated will exceed our imagination and open up vast new areas of the creative process that maybe aren't about the optimization of existing musical structures, but about other forms of musicality, workflow and sonic expression. For some reason, that doesn't make me fear my own obsolescence as an artist, but maybe I should worry. AI permeates musical research, which makes it prone to becoming seamlessly embedded in music media, much in the same way as AI is part of my life outside of work. I honestly don't know how to feel about that. Maybe I feel indifferent? Because things change so rapidly in ways I don't understand? Or hopeful? Because I might actually be alive to experience a new generation of music-makers who can truly embrace this evolution? Or hopeless? Because I realize the biggest advances in this field are made on the premise of the existing economic reality of the West?

I spoke to my friend Troels the other day, who had a good point about music plugins (small pieces of software in DAWs that process sound): AI has been used in music software and in plugins for years without using the "AI" descriptor. But once "AI" is used to describe something, people kind of freak out, because the general conception of the technology is embedded in pop-culture's dystopian, human-life-threatening science fiction. My problem is not with the technology. My problem is with the current political structures that dictate that this technology should primarily be used for the monetary gain of Silicon Valley assholes. This is way more of a threat to my creative outlet and work life than any intelligent automation of the mundane use of machines that I humbly try to express my feelings through.


JON LEIDECKER, electronic musician:

These are very leading questions with a fairly grim story arc to them -- there's no safe way to answer them without accepting a certain frame for the technology you're discussing. That frame comes from corporations and individuals who need you to think this is entirely new, because that helps them own it, and you can't trust them. We're definitely at an inflection point, but there's nothing new about automation, and to figure out what to do, it's vital to pay attention to the precedents. I take comfort in the entire history of electronic music -- from the Barrons and David Tudor forward, my favorite works are the ones that directly tackle the creative and moral issues behind automation: self-playing instruments, feedback, generative music, and how to proceed when you're making music with tools capable of learning enough to make their own rules. Understanding what non-human intelligences are, in a way that can prompt new kinds of collaboration. Even the short list of composers who have shown the way runs into the dozens -- this precedent is reassuring, and once you see the curve of that work, you can see all the work left to be done.

Music helps us form models of the world we live in, and the best music models the world we really want. The most inspiring electronic works for me are less interested in replacing humans than in looking for new roles for human involvement, new places for humans in the loop. What I like of what I'm hearing in the newly emerging neural synthesis renders sits on a perfect continuum with my two favorite threads in electronic music -- sample collage and feedback -- in a way that feels validating, a complete extension of how recordings enabled what we now call Popular music, with technology enabling ever greater control over curation and playback. Because it always needs help, it always needs editing (all the so-called AI art making it to our feeds has been culled from dozens to thousands of outputs -- there is no such thing as a passive audience).

Chris Cutler doesn't remember telling me once that no one born before 1970 enjoys music made with drum machines, because it doesn't model the world of their childhood. Of course, techno modeled my world perfectly, in an incredibly gratifying way -- you don't want your music to be lying to you, you want it to be telling you something accurate about the world you know you're in. So now human artists need to begin listening for all the sounds that will model the next one -- machines are certainly going to be in that world, and it will be a full-time challenge coming up with ways to keep ourselves meaningfully in the loop.


CHARLIE LOOKER, composer, vocalist, guitarist and keyboardist:

I haven't been staying abreast of recent AI developments, and I don't spend much time contemplating the specific ways that they might affect my musical career or practice. Guests on my podcast have been bringing up AI lately, and they always have more to say about it than I do. I'm not a science or tech guy. I'm more philosophical, and not pragmatic enough to follow through in imagining the specifics of what this near future will look like, and what I can, or should, "do" about it.

I put "do" in quotation marks because AI is just the next step in the compromising of human agency, or of our idea of that agency. Our sense of ourselves as rational agents took a real blow from Freud, when he revealed that people are, to a significant extent, controlled by unconscious, irrational forces that we can't see. AI is the next level of that, because AI itself is the embodiment of rationality or intelligence, and we're seeing that intelligence/rationality/etc. isn't actually the seat of human agency; it's the biggest threat to that agency. It's an alien entity. Maybe intelligence has only recently crystallized as something alien and threatening, or maybe it was all along.

But even amidst these existential anxieties, when it comes to me actually sitting down to make music, writing, rehearsing, and performing, that stuff isn't in the room. Well, it's in a lot of the lyrics, but even in that case, the fact that I'm even expressing them is a statement of human agency, dignity, and beauty. I experience music-making as a sacred and very human practice, even if the Human is just a silly myth that's on the way out. What it feels like for me to actually create music hasn't changed recently along with AI, and I don't expect that it will.

While I don't read much about AI, I do spend a lot of time online, interfacing with social media, both for fun and for career/promotional purposes. To do this is to plug your brain into the whole web of AI-mediated algorithmic incentive structures and thought patterns, and to attempt to still have some agency within them. So when it comes to promoting my work, I'm already pretty explicitly part of a cybernetic process. While I've enjoyed certain kinds of success with it, I'm very aware of how it's changed my brain's functioning. It's been fucking up my attention span, especially over the past couple of years. Working on the music itself, focusing on sound, privately, for prolonged periods of time, has been helping me push back against this attention-shredding. It returns me to more ordered thinking, aligned more with deep intuition than with AI dopamine feedback loops. I'm sure AI has already wormed its way into this core of my creative practice more deeply than I realize, but it doesn't feel that way yet.

As for the issue of whether AI-generated music will replace human musicians, that's just not something I can imagine. I'm not saying it won't happen (it probably will). I'm just saying I can't imagine it, the same way you can't imagine what it's like to be dead.


FRANCISCO LÓPEZ, experimental music composer and audio-artist:

Since the early 2000's, as a composer and audio-artist, I have had a considerable ongoing interest in 'creative autonomous systems', which I consider to be a wider techno-cultural category/concept than current (early 2020s) 'AI.' With the collaboration of several programmers, I developed different experimental prototypes of this kind of creative system, initially under a concept I called 'Sonic Alter Ego', which, in a somewhat utopian manner, explored the idea of extending my sonic creative life beyond the grave. Unlike the prompt-based, socially-mimicking AI systems of today, therefore, this very modest one aimed at investigating whether an artificial entity could be developed that would be able to create 'sort of like me' (not like others) -aesthetically, stylistically, formally, even emotionally- and thus could continue to do so on its own in the future (in a musical/sonic context, this would likely expand and extend classic concepts such as composition and soundwork to wild new levels). Crucially, however, this wasn't simply a mimicking system but rather a 'partnership' one. So my expectation was to learn from it and be surprised by its creations, new twists and novel 'ideas' -and to be able to cooperate with it. Implicit in this are of course notions and technical implementations of both uncontrolled and evolving properties of such an artificial creator. In the best hypothetical scenario for me this creative entity would be ever-changing in its capacities and unknowable in its inner workings. In other words, not a 'tool' but an autonomous cooperator, even at the risk of setting free a 'Pandora's black box.'

I have always had a similar perspective with apparently much simpler machines, particularly with the perceptive non-cognitive ones, like photo cameras and sound recorders, whose main interest for me has never been their imitative representational capacities but rather their exceptional position as potential complementary partners with their unique ability to perceive without thinking, something we can rarely do. So instead of deficient tools providing a second-rate representation of reality -as they are typically understood- they become amazing phenomenological and ontological probes.

Similarly, what I find particularly uninteresting about current mainstream creative AI is that, no matter how apparently sophisticated in its operation, it is mostly understood and used as an imitator/replicator of the appearance of conventional aesthetics. And definitely used and developed with a yardstick of an anthropomorphic and anthropocentric nature. While this might be logical in practical terms for some (e.g., now-widespread commercial uses of prompt-based, AI-generated art), it is a cultural recipe for self-reinforcing, centripetal artistic cliches and standards.

That's why I think autonomy and unknowability are so relevant for creation: they might reveal what we cannot imagine. What I see as most interesting in so-called AI -and other forms of autonomously creative techno-conglomerates- is precisely their potential for innovative 'alien-ness' -what I call the 'anthropoEXcentric', as it obviously comes from us but aims at revealing and moving beyond us. That points to aspects and forms of human-machine relationship of complementariness for discovery instead of replacement for imitation. But it also, fundamentally, requires us to be aesthetically truly open: the more 'alien' (thus farther away from imitation), the more attentive we should be. That might one day lead to an 'AA', a surprising and exciting Artificial Aesthetic.

Such a strong-claim status for creative partnership -instead of simple 'toolness'- in an artificial entity might naturally elicit proof or demonstration, so during the 2010s I developed (again with the collaboration of different programmers) two implementations of a peculiar project of sonic creation with collective contributions from hundreds of artists and an autonomous composer-performer in charge of the real-time composition and live performance. Using audio contributions understood as both pieces and building blocks/materials for further transformation and creation (processing, editing, mixing, etc.), this entity would, in real time, 'listen' to the pool of options, select, transform and play back with no human controller during live performance. This project materialized first in Madrid in 2014 with the contributions of 100 artists ('audio-MAD': https://audio-mad.bandcamp.com/releases) and then in The Hague in 2016, with contributions from 250 artists ('audio-DH': http://audiodh.nl/). These two implementations of the project developed different autonomous entities, playfully named with intentionally ironic anthropomorphic acronyms: PEPA (a common Spanish female nickname, after 'Programa Experimental de Procesamiento de Audio') and HARING (a common product in The Hague, after 'Humanless Audio Recombinator for Infinite Novelty Generation'). The results did not imitate the aesthetic or style of any of the participating artists but flowed creatively according to a multiplicity of open rules and criteria for 'composing' and 'performing' that could naturally be changed, tweaked or left to evolve. This was presented with a solitary laptop computer on a plinth on stage, with no human controller, 'facing' the audience. A crucial outcome of all this is that, in my opinion (as well as that of many experienced people in the audience), this system passed the Turing test as composer and live performer of experimental music.
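In spirit, the control flow of such an autonomous composer-performer can be imagined as a loop that repeatedly selects from the pool of contributed audio, applies a transformation, and plays the result back with no human operator. The toy sketch below is an editor's guess at that shape only; the file names, transforms and timing are invented and have nothing to do with the actual PEPA or HARING systems.

import random
import time

POOL = ["contrib_001.wav", "contrib_002.wav", "contrib_003.wav"]  # stand-in contributions

def load(path):
    # Stand-in for reading audio from disk; a real system would return sample data.
    return f"<audio from {path}>"

TRANSFORMS = {
    "reverse": lambda clip: f"reverse({clip})",
    "granulate": lambda clip: f"granulate({clip})",
    "layer": lambda clip: f"layer({clip})",
}

def autonomous_set(steps=5):
    # Run a short unattended 'performance': select, transform, play back, repeat.
    for step in range(steps):
        source = random.choice(POOL)                        # 'listen' to the pool and select
        name, transform = random.choice(list(TRANSFORMS.items()))
        rendered = transform(load(source))                  # transform the chosen material
        print(f"step {step}: playing {rendered} via {name}")
        time.sleep(0.1)                                     # stand-in for real-time playback

autonomous_set()

The interesting part of the real systems lies in what this sketch leaves out: the open, evolving rules by which selection and transformation are made.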

I believe that the autonomy of the artificial entities -if it's truly creative- should be embraced. In non-toolness partnership, it brings in what we don't have and changes us in unimaginable ways. Playing with Promethean fire is indeed dangerous but we wouldn't be human without it.


MASSIMO FARJON PUPILLO, bass player, multi-instrumentalist, composer and improviser:

Let me introduce this: it will go in circles.

If John Trudell said this, or Black Elk, it would be appropriate, but I'm not supposed to talk like this. This is where we are as a culture. I'll say it then: we (as a culture) have severed our connection both from Mother Earth and from Father Sky (Dyaus-Pita).

Who of us knows which leaves are which, when to plant a seed, how to survive for 2 days alone in a forest? And who has slept under the stars at night soaking in the wonder of the firmament until every cell of the body says "wow"?

We threw these connections--all the other senses, every mythological vision--in the bin, and yet we're unable to replace them. We considered everything from the past a dream, or worse, a dogma. But this new void was filled by another dogma. What is a dogma, if not that which one is not allowed to question? What would it be these days if not Science? But worse, what is mostly described as Science is very often mere technology. Teknè.

Do you remember in ancient Greek mythology the sin called hybris? A feeling of being superior. Super-Pride. I see Western culture as a culture in full hybris. And hybris is always paid for sooner or later. We live in this middle bardo-state, which is mostly a mental construct, and a highly manipulated one. We are born in a hospital and mostly die in a hospital. Yet we perceive ourselves as the pinnacle of a self-prescribed evolution. Language builds our perception. Sanskrit had 52 letters; see them as blocks to describe the world. Madhyamika philosophers in the 4th century AD in India were absurdly more advanced than us. Yet.

Ok. So. A.I.

In many aspects, at the moment this is THE question. I have to start from different points on the periphery of this question, though, and try to get to the centre of the issue. We have thrown everything out the window. If one doesn't realize this, amusingly enough, it is very probable that one is actually doing it. Tradition, ancient paths and philosophies, true spiritual ways: everything was sacrificed by modern man on the altar of rationality, science and technology. Not that these are bad, I'm not saying that, but did we need to sacrifice one hemisphere to let the other one rise to power? Is it possible that humanity as a species is growing in one direction only - growing only technologically, and not in a spiritual, ethical, philosophical way?

The native medicine wheel states that a man needs to grow accordingly, in every direction. And so does humanity.

Ethical questions like: if it CAN be done technically, does it mean it HAS TO be done in reality? Is there a limit where a scientist will say, "Listen, this is enough, we risk blowing up the world here, we risk spreading a deadly pathogen all around"? It looks like this healthy NO is never on their minds. Maybe because "we are at war" with someone, or maybe because "if I don't do it someone else will, so I might as well" (aka the banality of evil).

Does it seem like I'm off the question? Give me one more minute.

When we throw everything away and feel all proud of ourselves for being modern/postmodern man, sure that the past is over and that progress is only good, I think some very basic questions have not been asked. When technology and spiritual/moral evolution don't go hand in hand, a disaster is being prepared. A philosopher said as much.

And a researcher, whom I'm quoting from memory here, said that after a certain number of layers of algorithms, an AI program becomes elusive even for the programmer. Can we put this in the equation? And he was referring to something as trivial as self-driving cars, not a quantum supercomputer.
We have been prepared to see this shift happen for years.

Arts and music in general have become more and more machine-like, program-driven, algorithm-minded. True originality has been so boycotted that, adding insult to injury, you have to read stupid articles by arrogant journalists who claim "everything has already been done," because they themselves don't know where to look, or don't care to look for true creativity anymore.

You've already heard the synth playing its first preset from many acclaimed electronic music stars. As man goes machine-like, machines seem to move towards man. But can they? So far, at least, that's what we've been told (and I'm not in 4th grade, so I can't seriously think for a second that we are told where current research and technology has actually arrived). Anyway, we're told that so far AI is learning but is not self-conscious. So far, it seems like it can and will replicate genres and mimic styles. Humans have already become used to sameness and lack of spirit in what they listen to every day, all around. Lack of deep research, lack of transcendental feeling. There's not much of a difference here. It's already like this.

To make it big, which is something many seem very interested in (after all this is the matrix of our times), one will sacrifice any originality until differences are like those between yogurts in the supermarket... just a tiny bit different from one another. Unable to conceive otherness, strangeness, weirdness. Just different labels for the same stuff. If this is the case, we won't see the difference between humans and AI, I'm afraid.

The question arises: where do we really draw true inspiration from? Might it be a transcendental field, an external intelligence that our subtle senses have access to? Might it be that this spirit is something that machines that can think a billion times faster than us... cannot have?

A question that raises questions.
And Buddha himself recommended that we question everything.
The Zen monk Hakuin said you need three things: great faith, great doubt, great resolve.
I'll leave you, dear and patient reader, with this.
Thank you.


FABIO RICCI, musician and composer:

At an existential level, I am not worried - there are already countless human musicians who can perform, compose, code and patch much better than I can, and that doesn't stop me from making my music. Why should it be different with AIs? I don't see music as a competition; it's a form of creative expression, of freedom.

When I compose, I want to be hypnotized by what's going on, be it a melody or a soundscape. I seek bewilderment, captivation in the process. The specific technique/tool doesn't matter. AIs can be very unpredictable, and that to me is a quality. They're part of the great playground of electronic music: a new carousel to be explored.

Personally, I like to focus on the uncanny valley sensation that AIs can provide. The eerie feeling I experienced 5-6 years ago, when I heard Mr Shadow, one of the first publicly released AI-generated tracks, was very intense. It was unlike anything I had ever heard before and unlike anything I thought was possible through AIs. It felt inhuman.

The better AIs get - and they've improved enormously over the last 1-2 years - the less I'm interested in them: polished stuff doesn't inspire me artistically. Some people get very excited by technical wizardry, musicians playing instruments to absolute perfection, or AIs becoming hyper-realistic. I don't. I like errors, the unexpected.

When technique is not serving composition, it's more like calisthenics than music to me. Of course there are cases when you need a specific technique to unlock a musical passage, or when a whole composition is the technique itself (e.g. "Flight of the Bumblebee") - and that's when it makes sense. These limited instances show an analogy with AI - if used for targeted purposes/ideas, it can do a lot to enhance your compositions. But if you're just prompting it to do the work for you, you're not doing music, you're fueling the capitalistic machine.

I literally love creating music, so why would I ask someone else to do it for me? Takes out all the fun! Of course I get stuck at times... it can be frustrating. That's where a little help can be extremely useful. But there are tons of tools that already did this, way before neural networks took the spotlight.

Reaching audiences might be a future issue, but I feel privileged: because I don't earn a living through music, I have the luxury of not needing to please anyone when I produce music; their response will not affect my livelihood. Undeniably, I am very happy (and often surprised!) if people enjoy my works and purchase them, but I don't create them for that purpose. I create music principally because it brings me joy.

Perspectives change a lot if your livelihood depends on your music. The rise of AIs poses countless challenges and problems when linked to market and societal dynamics. A number of issues are already being debated at length (recording bulimia, ownership/copyright, musicians losing jobs, etc.); others I haven't seen mentioned so much.

The first one is trust. Even though there is always a level of mystery surrounding electronic compositions (especially in experimental genres), listeners never want to feel cheated. They need to feel that what they are listening to is the result of a human struggle toward an idea of perfection. As of today, you can ask an AI to sing a song mimicking the voice of any specific singer (including dead ones!) and it will do a pretty amazing job of it. We fulfilled a Black Mirror hyperstition. It's the deep-fake problem: AIs undermine this trust-based relationship to the point that it may seem meaningless to listen to any recorded music at all. If listeners mistrust your music, they will not immerse themselves in it; it will not be a meaningful experience for them.

The second issue is retromania. Trends show that recorded music has lost importance to other forms of entertainment, especially among the youth, but what still maintains marketing value are re-issues: they speak to the older generations. Clearly, if we cannot trust the present, our only option is to look back at when things were "really truthful." Older generations provide a wealthy reservoir of potential buyers. If big corporations manage to seal the necessary copyright agreements, they will also be able to pull off many AI tricks based on the Black Mirror idea.

Producing countless new songs instantly - which, philosophically, are almost the same song - will certainly inflate the already saturated market. Underground and independent music makers like myself might fall into a deeper and even more obscure niche. However, differentiation and honesty - establishing that trust relationship with the listener - will be key: your audience will look for your unique traits and stay because they connect with you.

One could argue - in a very Borges-type scenario - that the potential of infinitely generating content via AIs will exhaust all possibilities in the long run, leaving musical creation a sterile exercise. This very simplistic/positivistic thought has repeatedly been proven wrong over the course of history. Recent examples include Ed Sheeran's success at trial, or the project that generated via MIDI all possible melodies in a given key and stored them on a hard drive. There is so much more going on in music than stacking up sequences of notes.

AIs should be thought of as a product of mankind, not as a threat. They might force us to redefine the concept of creativity, ownership, production methods, etc., but I don't think they will "end music exploration." Not even close.

Rather than perfecting AIs so as to substitute for humans in a sort of dystopian musical production "assembly line," it would be far more interesting to unleash them into the wild. Though the obvious links to the training data sets remain, we've seen recently that the most advanced AI language models are capable of developing very meaningful content in areas that are completely different from those they have been trained on.

It will be fascinating to see this happen also in music AIs.


ROBERT ROWE, composer and professor of music technology:

Recent advances in machine learning have produced models that can, with impressive accuracy, generate text and still images given a prompt. Part of this leap forward has been due to the development of transformers, a type of model that is so far the best for learning long-term dependencies in sequential information. Music is, of course, the art that most deeply structures change over time, in fact hierarchical change over time. And that kind of structure is particularly difficult for these models to learn, which is why musical outputs so far have been nowhere near as convincing as text or still images. Consequently, and for any of these uses really, AI systems are most promising as tools for human creators, not as substitutes for them. Whatever the systems ultimately do for us, humans will create. Humans still play chess though none of them can beat a machine. Humans still run, though none of them can outrun a car. But beyond the simple compulsion many of us experience, until some fundamental breakthrough that we cannot yet discern erupts, the best music will be made by human musicians, and some musicians will make delightful, novel, and unexpected work using AI tools. Making music will always be meaningful to me personally, as it will for anyone else who feels called to do it. Now we have new tools to explore and twist and learn and grow from. I believe we're all going to have a lot of fun doing it.


ANNA XAMBÓ SEDÓ, musician, composer and senior lecturer in music and audio technology:

In the field of computer music, new technology is generally embraced with passion. This is due to the promise that novel technologies will help musicians take journeys to unknown sonic territories.

AI is not new in computer music. To include AI in the artistic process means not only creating the software but also training it, performing with it to see/hear whether it fits musically, and keeping this loop going for several iterations. You can see this process as a conversation. A computer music creator is especially prepared to take on this challenge. By tradition, computer music creators embrace both technical and artistic skills equally well, and take an entrepreneurial attitude, as shown by key figures in computer music such as David Cope (EMI), Rebecca Fiebrink (Wekinator), Sergi Jordà (Reactable), George Lewis (Voyager), François Pachet (Continuator), Laurie Spiegel (Music Mouse), among others.

However, the current exponential progress of AI using large language models (LLMs) applied to natural language processing (NLP) research, such as ChatGPT (OpenAI), is radically changing the way we produce creative content based on text generation. MusicLM (Google Research/IRCAM), for example, is a deep neural network (DNN) model that generates music from text descriptions. MuseNet (OpenAI) is a DNN that generates music compositions combining the styles of Mozart and the Beatles, among others. Perceiver AR (Google Research, Google Brain, DeepMind) is a DNN that generates classical piano music.

After some years in computer music, I started to devote more time to writing programs that I would then use in my performances. Since my practice includes the use of AI algorithms, I also spend a considerable amount of time both writing programs to train on data and training with that data in order to perform. This changes the way I think about my music because I wear multiple hats throughout the creative process: the programmer hat, the data scientist hat, the sound designer hat, and the performer's hat. This produces an even more abstract way of conceptualising the creative process. On the other hand, it is still about designing systems and creating software to be able to improvise algorithmic music, so the essence has not changed substantially.

We are at the stage of exploring AI as a music companion, with more or less autonomous behaviour. Keeping the musician/human-in-the-loop allows for moderation or quality control of the resulting music in a future sonic landscape with a potential AI-led overproduction of music. However, it can be interesting to see what the musical results would be without human intervention, as part of the creative process. In a data-driven era, one of the most salient challenges is understanding the potential data bias. For example, the Western music paradigm might be overrepresented in many of the existing models, which favours Western music harmony and tonality over other ways of organising sounds.

While we should be aware of the potential dystopian perils, the future is still promising, even more so for algorithmic music, in which giving agency to the machine is fundamental and connects with the search for new sound and music experiences. This is especially relevant for sonic arts and sound-based music, which are still in the infancy of the AI era and can contribute meaningfully to the AI discourse by researching the nature of the partnership between humans and machines.


YI SEUNGGYU, electronic musician and modern music composer:

Artificial intelligence has been closer to us than we thought. In the case of ChatGPT, which is currently emerging, it just feels new because it is an interactive system. The same goes for search engines: we enter keywords and get suggestions from Google in order of relevance. The artificial intelligence we're discussing is a little more interactive than that. At this point, artificial intelligence mostly means machine learning. It learns from vast amounts of data, as humans learn, and predicts results when new inputs come in, based on that data. The important thing here is that the standard of learning is determined by humans. Machine learning is handcrafted. It's automated, but it's a human creation that uses machine learning as a tool. No matter how good the result is, it cannot be seen as a creation unless the composer presented the idea himself. So even if the process of creation is automated, the subject is human. Therefore, I think artificial intelligence itself cannot be the subject of creation. Even if artificial intelligence one day creates with intent of its own, art itself is not absolute but an agreement among humans, so what we agree to define as art will continue to change over time. Did all the artists tremble with fear after the camera appeared? Artificial intelligence is a new instrument and brush. We need to study it as a tool. A tool that, like the tool that helped translate this text from Korean, can be useful but also has its limitations.



Photo by Sophie Thun

PAMELIA STICKNEY, thereminist and composer:

I think of AI as just another tool... it can't 'produce' anything until something is fed into it, from what I understand - so maybe it depends on WHO makes the program, setting its parameters... The automated response which is programmed can have an influence - I think of how using effect pedals can influence how a musician executes what they are playing, for instance. Maybe in the same way that chess players who trained with AI ended up learning new strategies that they normally would not have learned by playing other humans - maybe it could be the same way for musicians/artists who are working with AI in their creative process. Or like... driving a Tesla car, with the knowledge of how to drive without all of the automated help... and also with awareness of the glitches that come with the AI involved (like, sometimes light/shadows can trick the AI system on a Tesla into signaling an alarm that there is a bicyclist near your car when there isn't).

I don't think AI changes how I view the value and purpose of making music. Listening to music - produced by a human or automatically somehow generated... well... I could only point to how we now already have these programmed chatbot things that can become 'love relationship companions' - in the end, it's what you make of it... Many people might feel connected to the artist they are listening to, but that is just one-way, since the artist they are moved by likely doesn't even know them... So I think: how is that any different from being moved/touched emotionally by AI (or even another person that knows you)? Maybe there is no purpose in making art... it is just something that one does because they must and just do it... Sometimes human artists do create with the intention of reaching people - some do it with the hope of being heard... for the creator or listener... maybe it is just a tool for expression... for inspiration... some people make art as a form of political dissidence... there are so many different motivations...

Maybe that is the only difference between AI generated art and human generated... For a human, there is a motivation behind it...

I don't worry about AI getting in the way of reaching an audience... PR is another technique used to reach an audience. Whether an artist reaches/influences an audience or not doesn't have to be the basis or motivation for making art. There is a lot of amazing stuff that gets produced and because of marketing, ticket prices, demographics, a lot of it will never reach many people who might find it interesting... I'm curious about what motivates someone to keep giving their attention to something... and that can be for many reasons too.

It's not AI that would be the obstacle in the making of music being meaningful for me - only I myself make that choice to place the obstacle (blame/excuse). I can choose to blame it on AI... or blame it on not being able to pay the rent making it... or blame it on having a kid and not enough time... It is always possible to make music and for it to be meaningful. As much as being a parent or a friend or teacher or student or having a job.


JOHANNA ELINA SULKUNEN, experimental vocalist, composer and improviser:

The role of technological innovations in our society is a topic of intense debate. The fear of machines taking over, known as the singularity, has sparked discussions about whether technology is the solution or the cause of our challenges. This is a highly relevant topic that also impacts my work as a composer and performer.

My interest in technology began with my album work KOAN, released in 2018, when I started incorporating electronics into my work. Prior to that, I felt saturated with my music writing, which heavily relied on individualist expressionism and songwriting from a free improvisational jazz angle. I took a break and approached music from a new perspective, focusing on the sound and resonance of my instrument. I sought inspiration from sources outside my emotional sphere, such as technology, nature, and the resonance of my own body.

More than that, this was also the start of my ongoing collaboration with visual artist Tapani Toivanen, who works along an axis of art and technology.

Currently, the looming threat of climate change, the rapid rise of artificial intelligence, and recent global events like war and pandemics have profoundly impacted all of us, leaving us in a state of alarm as we face the potential realization of dystopian visions. Utilizing technology and artistic expression, we can potentially seek alternatives that may still be beyond our reach.

I think the fear of AI in relation to music often relates to its collective nature, and the discussion is relevant since technology will inevitably affect the way we consume music. The collective performer-audience dynamic might be replaced or changed.

However, I also believe that the value of music extends beyond collective experiences and holds immense significance on an individual level.

Reaching audiences has always been challenging for me as a creator of niche music, regardless of technological advancements. My concerns about reaching audiences are not directly related to AI overtaking music creation but rather the broader cultural context in which we live. Power dynamics within our digitalized culture influence our consumption, aesthetic preferences, needs, and ability to hear and listen.

If only a certain kind of music reaches larger audiences, there is a risk of a flattened worldview. The world is full of nuances, colors, and sounds, and how we respond to them is our responsibility.

Ultimately, music is a visceral experience that resonates with our bodies and connects us to our fundamental humanity. The fear related to AI, I believe, lies in the potential loss of our humanity. Personally, the meaningfulness of making music is deeply existential to me, and I can't envision a point where it would disappear unless technical development continues until our brains, feelings, and thoughts are taken over by artificial intelligence.

In any case, I think that the authenticity of our emotions and experiences cannot be replicated by AI alone. It is the human struggle and the ability to share and recognize each other's emotions and stories that allow us to feel understood and seen. Music also transcends the boundaries of language and intellectual comprehension, providing solace and connection to our existence.

Ultimately, it is our own response to music that matters the most, not how it is produced or made.

I believe that AI-assisted art and music have the potential to spark a revolution, and it's already happening. The outcome is uncertain, but in the utopian scenario, this revolution would enhance our capacity for empathy, understanding, and a genuine desire to understand each other. Embracing and accepting our differences and disabilities can create a vision of a brighter, yet unimaginable, world.



Photo by Alex Bonney

PIERRE ALEXANDRE TREMBLAY, composer, improviser and academic:

This is a fascinating set of questions, but I do not think the threat to our music practices lies where it is implied. If musicking is making sense of the world as/with/through sound practice within a community, then the perfecting and wider distribution of this technology is not a musical problem: it is a societal one. Let me unpack my statement.

I am neither a Luddite nor a techno-enthusiast. Artists, musicians included, have always strived with, and against, new additions to their technological toolset. Whether it was the electrification of the guitar or the turntable, new tools were used and creatively abused by various communities for their musicking benefit. In turn, that technology evolved to support and expand these new practices.

Moreover, that technology is neither that new nor that interesting in itself. Not that new, because algorithmic music making is as old as the written word; it got faster with computers from the '60s and is now getting a jump boost again. I look forward to seeing how artists will bend these new affordances.

Like any technological promise taken in a vacuum, it is not that interesting either. Artificial intelligence, in the wise words of Kate Crawford, is neither artificial nor intelligent. I would even say, flippantly, that it is naturally dumb. It is a technology that finds patterns and converges to the mean, often biased by the humans setting it off... whereas I see masterpieces as situated anomalies, at the boundaries of sense-making. In other words, creativity is playing as a collective with the boundaries of intersubjectivities, with the quirks, not aiming at the common middle ground.

Now, if my practice were doing sound-alikes, I would be worried. A whole segment of the 'industry' will have to change, I reckon. But what I do within my various music communities of practice, and what we like and value, and are inspired by, doesn't register enough in these grand trends to be reproduced for profit. More importantly, it implies people in a room sharing a moment.

What is likely to change is that these tools enable us to get the boring part of the job done faster, both in terms of techniques and processes. It is a new way to see and manipulate large parts of our workflows in different ways. It is already happening, and this is a fantastic new affordance, enabling new ways of being creative with sounds, not unlike the unlocking of fast repeated notes enabled by the double escapement in the piano mechanism... but not everyone will be able to harness it creatively like Chopin. Or see its real potential the way Grand Wizzard Theodore saw the decks, seeing their divergent expressive potential instead of restricting himself to their intended usage like everyone else.

I would even dare to say that artificial intelligence in music will probably not have as revolutionary an impact as the three major revolutions of the last millennium: music notation, sound fixation, and dematerialised distribution. With the latter, we are already submerged by too much music... so the challenges to authoring, canons and tastes are not going to get worse because of a sudden mass production of generic music without communities.

And this is where I see the real threat: if we continue to let this technology destroy the fabric of our society, seeding artificial division, giving despotic control to a select few without accountability and under a fake pretence of objectivity, eroding privacy and individual agency, where will art go? Asking whether we will get to a point where it is not meaningful anymore to do music has nothing to do with the music technology using AI. What will musicking mean if we keep on ignoring the ethical impact of these privately owned tools on the very fabric of what being a human in society, of human-ing, is?


MIA WINDSOR, composer and improviser:

For me, when answering this question, it's first important to consider what the musical output of AI currently is and what it could be. I think that a lot of the AI-generated music currently available isn't very interesting because it exists to re-create what a human can do, without space for movement beyond this, resulting in the music stagnating. Often what happens in these cases is that the music the AI is trained on is represented symbolically (such as in MIDI form), which is often too constrained and cannot capture the nuances of music found in its timbre. Though it's certainly possible to generate infinite amounts of music this way, and though I'm sure this will inevitably be done and pushed by companies, perhaps in the form of stock/library music, I don't think that it will be anything worth actively listening to.

I don't believe that AI will change the purpose and value of making music as there will always be a need for human-made music because of the lived experience, culture, tension, and interaction tied up with it both for creators and listeners. I do, however, think that AI can be helpful for musicians once we've filtered through the buzzwords and soul-sucking corporate approaches. The most straightforward example of this is using AI tools for efficiency when making music. Perhaps an AI tool could exist that generates a MIDI pattern to fit a certain mood or style from a starting point that can then be altered, or one that helps you find the right synthesised sound with text prompts (I learned while writing this that this now exists in the form of Vroom).

The role of AI in music for me revolves more around trying to push it to aesthetic limits that have previously been unattainable in human-made music, while still having a human artist as the main decision maker. AI can recognise and create patterns that are imperceptible to humans, and this can be especially useful when considering timbre, because it allows timbre to be altered in a level of detail that is hard for humans to perceive. Leijnen (2014) has suggested using 'bugs, errors and random numbers', or in a musical context, audible glitches, as a means of generating and eliminating creative constraints, and suggests that this has the potential to create new ideas that do not fit into a previous style or convention.

The output of raw audio neural networks (RANNs) is an example of this. RANNs work by dividing audio at the sample level (e.g. 44100 samples per second) and then using a neural network to try to piece together the audio it has been fed. Because the neural network fails to piece these tiny samples together entirely correctly, the result is an inhuman aesthetic that is not present in the original training material, driven instead by many subtle, intricate glitches. These outputs are unique-sounding and are aesthetically beyond something a human could create alone. RANNs are not capable of generating as much music as the symbolic models mentioned earlier, as their training material is more complex (44100 samples per second racks up to a lot of data in just a few minutes). They also require their training material to be relatively timbrally consistent. The plus side of this limitation is that humans must have creative control over these systems for them to be useful. The artist's job is to select or create focused training material for the neural network and spend time curating the output.

Jennifer Walshe and Dadabots' A Late Anthology of Early Music Vol. 1 - Ancient to Renaissance (2020) is a good example of this. For this album, Dadabots trained a RANN on hours of a cappella voice recordings of Walshe over 40 generations. The final album (curated by Walshe) dips into the different generations in the training process, with the track titles comically following the early history of Western music. The album creates a boundless world of alien-sounding vocal technique with squelching formants, ghostly moving drones, glitches, and bleeps and bloops, with each track distinct from the last. Another example is Vicky Clarke's Aura Machine (2021), which involved training a RANN on field recordings categorised into the classes of 'Echoes of Industry' (Manchester mill spaces), 'Age of Electricity' (DIY technology, noise & machinery) and 'Materiality' (glass fragments and metal sound sculptures). The output has some additional contributions from the composer, mostly pitched synth material, which helps draw attention to the sounds created by the RANN. I like this as a cleaner version of what can be done with RANNs. Both these approaches are much more musical to me than symbolic approaches because of how much detail timbral emulation allows for. Being able to use AI to emulate and ultimately change timbre is a really exciting creative tool that helps rather than threatens the human artist. I am excited when new meaningful music-making methods become available, and tools like this give me more creative drive.
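As a very rough, hypothetical sketch of the sample-level idea behind RANNs: a model is shown a short window of past audio samples and trained to predict the one that comes next, and the characteristic glitches come from it never reconstructing the waveform perfectly, only plausibly. The code below is an editor's illustration in PyTorch with invented names and hyperparameters; it is not the architecture used by Dadabots or in the works mentioned above.

import numpy as np
import torch
import torch.nn as nn

SR = 44100       # samples per second, as mentioned above
CONTEXT = 1024   # how many past samples the model sees at once
QUANT = 256      # 8-bit linear quantisation of the waveform (real systems often use mu-law)

class SamplePredictor(nn.Module):
    """Predict the next audio sample from the previous CONTEXT samples."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(QUANT, 64)
        self.rnn = nn.GRU(64, 256, batch_first=True)
        self.head = nn.Linear(256, QUANT)

    def forward(self, x):               # x: (batch, CONTEXT) integer sample values
        h, _ = self.rnn(self.embed(x))  # (batch, CONTEXT, 256)
        return self.head(h[:, -1])      # logits over the next sample value

def quantise(wave):
    # Map a float waveform in [-1, 1] to QUANT integer levels.
    return np.clip(((wave + 1.0) / 2.0 * (QUANT - 1)).astype(np.int64), 0, QUANT - 1)

# Stand-in for real training audio: two seconds of noise. In practice this would be
# the focused, timbrally consistent material the artist has chosen.
wave = quantise(np.random.uniform(-1.0, 1.0, SR * 2))

model = SamplePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(3):  # a token number of training steps
    i = np.random.randint(0, len(wave) - CONTEXT - 1)
    x = torch.tensor(wave[i:i + CONTEXT]).unsqueeze(0)  # one window of past samples
    y = torch.tensor(wave[i + CONTEXT]).unsqueeze(0)    # the sample that follows it
    loss = loss_fn(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

Generation would then run the trained model in a loop, feeding each predicted sample back in as context; the audible character of the result comes from where those predictions go wrong.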

Reflecting on the non-human qualities of AI more generally, I think having clear oppositions in art is productive for shifting perspectives. I've seen a lot of AI-generated visual art that ends up looking 'too perfect' or uncanny and I often find it quite beautiful in its emptiness. Though important as a standalone, I also think that art like this draws attention to what's missing, or rather, what is so important in human or human-machine (rather than just machine) created art. I believe that if we remain critical and creative, we can gain from the outputs of AI in meaningful ways, without taking away from human artists.


References

Leijnen, S. 2014. Creativity & Constraint in Artificial Systems. Ph.D. thesis, Radboud University Nijmegen.
Available from: https://repository.ubn.ru.nl/bitstream/handle/2066/132144/132144.pdf

