In the late 1920s, sound film began to dominate the film industry. The convention of live musicians accompanying silent films disappeared, and hundreds of musicians lost their jobs just as the Depression hit. Live film scoring as an art form became obsolete.
The fear of losing an artistic career to technological development is historically well founded and presently relevant: artificial intelligence improves at an incredible pace, and it’s hard to keep track of what’s possible. A quick YouTube search reveals an abundance of AI-generated audio at wildly varying levels of quality. The recent Swiss app Leopold Music AI has seized on the niche of AI music teachers, analyzing users’ rhythm, intonation, and expression. But how is AI manifesting in the concert hall?
“[If] the creation of the sound and the performance is happening right in front of you, and if there is literally just not a person on the stage, you know it’s not happening,” double-degree third-year Noah Lin said. “If the antithesis of AI is being in a room with a human while they make art, concert performance is definitely going to maintain its identity.”
Lin is a TIMARA major. His work was recently featured on double-degree fourth-year Katharina Mueller’s senior recital, accompanying a flute quartet she composed, inspired by Refik Anadol’s AI art installation “Unsupervised.” Anadol’s piece consists of a giant screen that responds to the public digital archives of the Museum of Modern Art, where it is displayed. Lin created a visual accompaniment to Mueller’s piece that mimics Anadol’s work without using AI.
However, Lin has also found less ecologically harmful ways to use AI, running it locally on his own computer. In his art, he uses AI for comedic purposes.
“There are funny ways to use AI,” he said. “[As much] as it is myopic to eschew all of the moral issues to embrace AI for all these things, I think it is equally silly to throw all this stuff out when there are workarounds to avoid the ethical implications of using it.”
College third-year Olivia Pickens, who studies Psychology and Computer Science and practices violin in her spare time, reflected on the conversation around AI.
“A lot of my friends who are in arts and humanities fields really hate it,” Pickens said. “It’s such a terrible thing. It’s taking away quality from the fields, and people are using it for not the right purposes. I see that viewpoint a lot … some other people are very much along the lines of where I stand: It does have its uses and it can be a helpful tool, but it’s not quite there yet. It needs more development and more ethics surrounding it before we can incorporate it as a tool in our society.”
Many musical institutions have also tried integrating AI into performance, with varied degrees of tastefulness. Often these attempts read as gimmicky, part of an unpopular tradition of using modern technologies and sensibilities to make classical music “relevant to the youth.”
For example, on Sept. 27, the Jacksonville Symphony Orchestra performed a concert centered on Beethoven, including speculative completions of his unfinished 10th Symphony based on his sketches. One movement was written by Barry Cooper, a composer and Beethoven scholar, while another was generated by AI with assistance from composer Walter Werzowa. Audiences voted for their preferred version.
Similarly, the Royal Ballet and Opera, under new Associate Director Netia Jones, is planning an annual technology festival called “Shift,” a commitment to continual participation in the fast-paced world of music technology. It will premiere June 4–7, 2026, with its first year spotlighting AI.
“How can artists and producers interact with AI in the most exciting way?” their website reads. “And what new experiences can audiences expect from this transformation? This festival will explore both the technological possibilities and our role as the human in the loop.”
Virtuosic Turkish pianist AyseDeniz steps into the heart of this loop. Disappointed by her peers’ disinterest in classical music, she aims to connect with a digital-age audience through her touring concert, Classical Regenerated AI Initiative and Piano Show, featuring her heartfelt interpretations of pieces composed by AI in the style of famous composers.
“I always wondered what Chopin would have composed if he was still alive today,” she wrote in the description of the YouTube video of one of these performances. “When I began experimenting with AI, I realized his works were used in the training of the music I had generated. To give him the credit he deserves and to play a brand new piece inspired by him, I created the Chopin Regenerated AI Prelude as part of my Classical Regenerated AI Initiative. The initiative is a tribute to legendary composers who have shaped Western Classical Music, and to embrace technology in a way that amplifies human talent, instead of replacing them.”
This may perpetuate concerns about the silencing of living composers, especially those of marginalized identities; there’s no denying that classical music already resists new voices, chained to a traditional culture that gravitates toward the canon. Yet the persistence with which this numbingly ubiquitous music is programmed also demonstrates the value audiences place on live performance.
“I think that there’s something wonderfully tangible about live music that, at this point in time, is not able to be captured through augmented reality or through generative AI systems,” Assistant Professor of Computer Music and Digital Arts Eli Stine said. “If I had to throw out something that’s productive musically, I think it’s just generating new sounds that you couldn’t otherwise create.
“And they can be sounds that are orphaned or unwanted by these systems. You can call them weird or unique or incorrect, but those sounds can be really cool for experimental electronic music. That’s a part I see as exciting, not necessarily replacing a string quartet.”
Stine gave a talk on Wednesday in tandem with Associate Professor of Technology in Music and Related Arts Steven Kemper on how AI audio generation works. As a software engineer and an artist, Stine understands the lineage of music technology that culminates in AI, as well as the limits of electronics in representing humanity. His work focuses largely on biological systems and the tension between nature and technology. He is currently creating a piece in collaboration with a deep-sea researcher in New Zealand, using loud sonar to produce echograms that can capture fish DNA deep underwater, as well as a piece with a museum in Ottawa, Canada, that is using machine learning to better understand the linguistic and communicative elements of birdsong.
“When you work with technology, particularly computers, that do things very perfectly, it reveals the beauty of imperfection,” he said.
