Music tool: Siya Mthembu says society’s anxiety around AI reflects a long history of fear surrounding new technologies.
Artificial Intelligence (AI) is rapidly reshaping industries across the globe, and the music and performance sector is no exception. What was once considered experimental technology is now capable of generating complete songs, replicating vocal styles, composing lyrics and producing instrumentals that closely resemble human-created music.
As AI music tools become more sophisticated and accessible, an important question emerges: can listeners still distinguish between music created by humans and music generated by machines?
This development presents both exciting opportunities and significant ethical concerns for the future of music.
On one hand, AI has become a valuable tool for many producers, composers and sound engineers. Artists are increasingly using AI to assist with production processes such as mastering, beat creation, vocal enhancement and songwriting support.
In this context, AI functions as a creative assistant rather than a replacement for human artistry. Much like digital production software and Auto-Tune transformed music production in previous decades, AI has the potential to improve efficiency and expand creative possibilities.
However, the growing sophistication of AI-generated music introduces a more complex challenge: What is authenticity?
Historically, music has been deeply connected to human experience. Songs have served as expressions of emotion, identity, culture, memory, struggle and celebration.
Audiences often form emotional connections with artists because they believe the music reflects lived experience and genuine human feeling. Imperfections in performance, vocal strain, emotional vulnerability and personal storytelling have traditionally contributed to the authenticity of artistic expression.
AI-generated music complicates this relationship.
Today, listeners may unknowingly consume AI-generated songs that imitate human voices, emotions and musical styles with remarkable precision. As a result, the listener becomes an important stakeholder in this debate. The issue is no longer only about protecting artists or preserving creative industries; it is also about protecting transparency and trust within the listening experience itself.
This raises an important ethical question: should listeners have the right to know whether the music they are consuming was created primarily by a human artist or generated by AI?
One possible solution may lie in the introduction of “human artist verification” systems on digital streaming platforms (DSPs) and music websites. Social media platforms already use verification badges to confirm the authenticity of public figures, brands and creators. Similarly, streaming platforms could implement verification systems that identify artists whose music is primarily human-created, while also disclosing the extent to which AI tools were involved in the creative process.
Such an approach would not necessarily discourage innovation or restrict technological advancement. Rather, it would promote transparency and allow listeners to make informed choices about the content they engage with.
The rise of AI music also introduces an emerging tension point for the broadcasting industry, particularly radio. Radio has historically played an important role in shaping culture, discovering artists and connecting audiences to authentic human stories through music.
However, as AI-generated songs become increasingly difficult to distinguish from human-created music, broadcasters may soon face a challenging editorial and ethical question: Should radio stations playlist AI-generated music?
On one hand, radio stations exist to serve audience preferences. If AI-generated music performs well in listener testing, attracts audiences and delivers commercially successful content, broadcasters may feel pressure to include it in playlists. Some industry stakeholders may even argue that listeners care more about music that sounds good than about how it was created.
On the other hand, widespread inclusion of AI-generated music could fundamentally reshape the music ecosystem. Radio airplay has long served as a critical platform for artist development, cultural representation and music industry sustainability.
If AI-generated artists begin competing for airtime alongside human musicians, concerns may arise regarding fair exposure, royalty distribution, employment opportunities and the long-term value placed on human creativity.
The implications extend beyond music selection alone. Broadcasters may eventually need to consider whether AI-generated songs should be disclosed on-air, whether separate policies should exist for AI-assisted versus fully AI-generated music and how stations can maintain audience trust in an environment where authenticity becomes increasingly uncertain.
For public broadcasters and culturally driven stations, particularly African Language Services stations, the issue may become even more significant. Radio has often functioned as a platform for storytelling, identity formation, language preservation and social connection.
Creativity: Benjamin Jephta supports AI as a creative tool, but warns against fully automated music-making.
The introduction of AI-generated music therefore raises broader questions about culture itself: can algorithms authentically represent human experience, community realities and social memory?
At the same time, the debate surrounding AI music remains philosophically complex. If a song generated by AI evokes genuine emotion, inspires listeners or creates meaningful cultural impact, some may argue that its origin becomes less important than its effect. Others maintain that art derives its value precisely from human experience, vulnerability and intention: qualities that machines cannot truly possess.
This tension reflects a broader societal challenge surrounding AI and creativity. As technology increasingly imitates human capabilities, institutions, industries and audiences will need to reconsider how authenticity, originality and artistic ownership are defined in the digital age.
The future of music may therefore depend not on whether AI can create music but on how society chooses to coexist with AI-generated creativity. This includes establishing ethical standards around voice cloning, songwriting attribution, disclosure policies, copyright protection, artist identity verification and broadcasting regulations.
Ultimately, the rise of AI music forces the industry and its audiences to confront a fundamental question: when technology can convincingly imitate human creativity, what does authenticity truly mean?
The answer to that question may shape the future of music, broadcasting and cultural expression for generations to come.