The AI music scene is exploding, with tens of thousands of new tracks emerging daily, and how we consume and perceive AI-generated music may soon feel entirely normal.
According to data from Deezer, a French music streaming service, approximately 50,000 fully AI-generated songs are added to its platform each day. While most of these tracks may not achieve mainstream success, a select few have already garnered millions of listens. This surge raises critical questions about the future of music: what will it sound like, and how will listeners engage with it?
Deni Béchard, a senior science writer at Scientific American, has been exploring these questions through a personal experiment. For nearly a month, he listened exclusively to music created with the AI music app Suno. His goal was to critically assess how he engaged with AI-generated music and how it compared to traditional music.
During a conversation with Noel King on the Today, Explained podcast, Béchard discussed his insights and experiences with AI music. He explained how Suno works, noting that by entering a prompt, the app produces two songs based on that input. Béchard described his creative process, which involves experimenting with variations in instrumentation and vocals. He mentioned one amusing outcome, a song titled “Organ Trafficking,” which he characterized as a playful, ironic rap track.
Reflecting on his listening habits, Béchard pointed out that much of the mainstream music he enjoys is heavily processed and designed for broad appeal, often lacking a personal touch. Surprisingly, he found that the music he generated with AI didn’t feel notably different from the mainstream tracks he typically consumes.
When asked whether he could differentiate between AI-generated songs and those made by human artists, he admitted he likely couldn't tell the difference, an admission that underscores how sophisticated and convincingly human-like AI music has become.
In discussing popular AI tracks now trending on platforms like Spotify, Béchard noted that many have a soulful, gritty authenticity. He cited examples such as Xania Monet and Solomon Ray, emphasizing that these songs resonate with listeners as if they stemmed from genuine emotional experiences. That authenticity, he believes, could narrow the perceived divide between AI-generated and human-made music.
Béchard expressed a newfound curiosity about AI music creation. Previously hesitant about the disconnect between machine and human emotion, he now finds himself contemplating unique combinations of musical styles, such as pairing a banjo with hip-hop. He admitted he might continue producing AI music even after his experiment concludes.
As he considers the implications of his experiment, Béchard predicts a future where younger generations will view the current debates about AI and music as outdated. He believes that society will adapt to AI music more rapidly than anticipated, but acknowledges the importance of ongoing discussions about artist protections and fair compensation. Ultimately, he sees AI music fitting seamlessly into our lives in the years to come, despite the complexities it introduces.