AI Music Scams: When Fake Bands Hire Real Musicians

Quick Summary
AI music is flooding streaming platforms at 60,000 songs a day. Here's what the Neon Oni scandal reveals about where the industry is heading.
The AI Music Scam Nobody Saw Coming
Nearly 34% of tracks uploaded to Spotify daily are AI-generated. That number alone should stop any music industry professional in their tracks. But the more revealing story isn't the volume — it's what happens when an AI music operation gets caught, pivots in real time, and somehow ends up building something legitimate.
That's exactly what happened with Neon Oni, a so-called Japanese metal band that accumulated 80,000 monthly Spotify listeners, racked up 1.2 million streams on a single track, and sold merchandise to fans who genuinely connected with the music — before it emerged that the entire band was fabricated by a single AI prompter in Europe using tools like Suno. No Tokyo musicians. No band members. Seven fictional personas built to sell a product.
What followed is the first documented case of an AI band transitioning into a real one — and it raises uncomfortable questions about authenticity, exploitation, and where the line sits between innovation and deception in the modern music business.
How Neon Oni Built — Then Nearly Destroyed — Its Audience
The operation was methodical. Music came first: a kawaii-metal aesthetic designed to carve out a niche corner of the indie scene. Then came the infrastructure — Spotify profile, streaming numbers, and eventually music videos and Instagram content to deepen fan engagement.
It worked. Fans included Neon Oni in their Spotify Wrapped top five. Comments celebrated the band as an emerging force. Merchandise sold. The illusion held — until it didn't.
As more visual content appeared, sharp-eyed fans spotted the telltale artifacts of AI-generated video: flickering textures, inconsistent facial features, unnatural movement. Reddit threads and Facebook groups erupted. When the band's management started deleting comments raising questions, the community took that as confirmation. Trust evaporated fast.
Here's the business lesson buried in that collapse: the deletion of those comments was the actual scam. Whether releasing AI-generated music is inherently deceptive is debatable. Actively suppressing legitimate audience concerns to protect revenue is not.
From Fake to Real: A Pivot Nobody Predicted
Rather than disappear, the person behind Neon Oni made an unexpected move. They issued a public statement acknowledging the AI origins of the project, then recruited seven actual musicians from Tokyo bands to perform the AI-generated catalogue live.
The framing they offered: the community demanded something real, so they built it.
This is now a functioning business model. The AI prompter generates the music. Human musicians perform it live. Merchandise continues to sell. Whether you find that inspiring or troubling likely depends on where your money sits in the music industry ecosystem.
For context, consider two analogies the situation invites:
- The cover band comparison: A band performing AI-generated songs live isn't structurally different from a cover band performing someone else's catalogue. Cover bands have existed for decades and nobody calls them fraudulent.
- The ghostwriter comparison: The AI prompter for digital singer Zaniah Monet — who signed a multi-million dollar record deal — is now reportedly planning to hire a real singer to perform the AI-generated music live. That mirrors an artist performing songs written entirely by a ghostwriter, which has been standard industry practice since the 1960s.
Both comparisons hold up logically. Neither fully addresses the issue of fan deception, which is where the real ethical weight sits.
The Darker Side: When AI Music Actively Harms Real Artists
Neon Oni is the provocative headline. The more serious story is what's happening to working musicians who never agreed to participate in any of this.
Singer-songwriter Murphy Campbell's case is a precise example of how badly the current system can fail real artists. An entity called Timeless Sounds IR fed YouTube videos of her performances into an AI engine, generated imitation versions of her songs, uploaded them to streaming platforms, and began collecting revenue. Then — and this is where it escalates — that same entity used its distributor, Vidia, to file copyright claims against Campbell's own original YouTube videos. She is now generating zero YouTube revenue from her own performances while Vidia collects on AI imitations of her work.
This isn't a grey area. It's straightforward intellectual property theft enabled by infrastructure gaps that platforms have been slow to close. Three specific failure points enabled it:
- Distributor accountability: Vidia accepted and distributed AI-generated imitations without adequate verification.
- Platform enforcement: Streaming services and YouTube processed copyright claims without flagging the circular absurdity of the situation.
- Detection lag: By the time the imitations were identified, the revenue had already moved.
Spotify has removed over 75 million AI tracks flagged as low-effort. That's a meaningful start. But it doesn't address cases where the AI content is sophisticated enough to pass basic filters, or where the harm comes through copyright manipulation rather than mere flooding.
Detection Exists — Platforms Just Lack the Incentive to Use It
Here's a number worth sitting with: streaming platform DA has deployed an AI detection tool with 98% accuracy. That capability exists right now. The question isn't technological — it's economic.
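It's worth working out what "98% accuracy" actually means at the volumes this article cites. A quick back-of-the-envelope calculation, using the article's figures of 60,000 uploads per day and a 34% AI share, and assuming (hypothetically) that 98% applies as both the catch rate for AI tracks and the pass rate for human ones:

```python
# Back-of-the-envelope: what "98% accurate" detection means at scale.
# Assumptions (illustrative only): 60,000 uploads/day, 34% AI-generated,
# and 98% treated as both sensitivity and specificity.

daily_uploads = 60_000
ai_share = 0.34
accuracy = 0.98  # assumed to apply equally to AI and human tracks

ai_tracks = daily_uploads * ai_share       # 20,400 AI uploads/day
human_tracks = daily_uploads - ai_tracks   # 39,600 human uploads/day

true_positives = ai_tracks * accuracy            # AI tracks correctly flagged
false_positives = human_tracks * (1 - accuracy)  # human tracks wrongly flagged
false_negatives = ai_tracks * (1 - accuracy)     # AI tracks that slip through

precision = true_positives / (true_positives + false_positives)

print(f"AI tracks flagged correctly: {true_positives:,.0f}/day")
print(f"Human tracks wrongly flagged: {false_positives:,.0f}/day")
print(f"AI tracks missed: {false_negatives:,.0f}/day")
print(f"Share of flags that are correct: {precision:.1%}")
```

Under those assumptions, a platform flagging aggressively would still wrongly flag roughly 800 human tracks a day — which is part of why "deploy the detector everywhere" is a harder operational decision than the raw accuracy number suggests, even before the revenue incentive enters the picture.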
More content on streaming platforms, AI or human, generates more streaming revenue for the platform. Removing AI tracks aggressively cuts into that inventory. Until regulators, artists, or consumers create sufficient pressure, the financial incentive points in the wrong direction.
What would change the calculus:
- Mandatory AI disclosure: Legislation requiring all AI-generated content to be labelled at upload, with liability attached to false declarations.
- Distributor liability: Holding distributors like Vidia financially responsible for copyright harm caused by content they knowingly or negligently distribute.
- Fan-driven platform pressure: Organised listener campaigns demanding AI labelling as a condition of continued subscription spend — the only language streaming platforms speak fluently.
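To make the first proposal concrete, mandatory disclosure would most plausibly live in the upload manifest distributors already require. The sketch below is entirely hypothetical — the field names and validation rules are invented for illustration, not drawn from any real distributor's API:

```python
# Hypothetical sketch of upload-time AI disclosure: a required boolean
# field in a distributor's upload manifest, validated before the track
# is accepted. All field names here are invented for illustration.

REQUIRED_FIELDS = {"title", "artist", "ai_generated"}

def validate_manifest(manifest: dict) -> list[str]:
    """Return a list of problems; an empty list means the manifest passes."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - manifest.keys())]
    if "ai_generated" in manifest and not isinstance(manifest["ai_generated"], bool):
        problems.append("ai_generated must be an explicit true/false declaration")
    return problems

# A manifest with no explicit declaration is rejected, not defaulted —
# that is the point where liability for a false declaration would attach.
print(validate_manifest({"title": "Some Track", "artist": "Some Band"}))
```

The design choice worth noting is that the field is mandatory and boolean: a missing or vague answer blocks the upload rather than defaulting to "human-made", which is where false-declaration liability would need to bite.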
The tools exist. The will is the variable.
What the Neon Oni Model Signals for the Broader Industry
Forget the scandal framing for a moment. The Neon Oni trajectory — AI-generated music achieving genuine audience connection, then converting into a live human performance act — is a business model that will be replicated intentionally and at scale.
That creates a strategic question for every working musician, label, and manager: if AI can generate a marketable sound and human performers can then execute it live, what exactly is the competitive moat for traditionally developed artists?
The answer, at least for now, is authentic origin narrative. Audiences who discover they've been deceived don't forget it. The fans who felt betrayed by Neon Oni's AI origins weren't upset that the music was generated by an algorithm — many of them said they'd continue listening. They were upset that the band's management actively lied and deleted their questions.
Transparency is emerging as the one asset AI-generated projects genuinely cannot replicate: a verifiable human story behind the sound. Artists who understand that and communicate it clearly hold ground that no prompter can automate away.
Conclusion: Adapt Fast or Get Outpaced
The AI music landscape is moving faster than most industry participants are prepared for. Sixty thousand AI-generated songs per day on a single platform isn't a future projection — it's the current baseline. The Neon Oni case, Murphy Campbell's experience, and the Zaniah Monet deal are not isolated incidents. They are early data points in a structural shift.
For working musicians, the practical priorities are clear:
- Monitor your likeness: Set up regular searches for AI-generated content that mimics your voice or style.
- Document your originals: Timestamped, verifiable records of your original recordings are your primary legal asset.
- Engage your audience directly: Platform algorithms can be gamed. A direct relationship with your listeners — email lists, direct channels — cannot.
- Understand distributor terms: Who you distribute through determines your exposure when disputes arise. Read the terms around AI content before uploading.
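The "document your originals" step is the easiest to automate. A minimal sketch, assuming nothing more than the Python standard library — the filename and record layout are illustrative, not any legal standard:

```python
# Minimal sketch of documenting an original recording: record a
# cryptographic fingerprint plus a timestamp for each master file.
# Filename and record layout are illustrative, not a legal standard.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def provenance_record(path: Path) -> dict:
    """Return a timestamped SHA-256 fingerprint for an audio file."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return {
        "file": path.name,
        "sha256": digest,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Demo with a stand-in file (replace with an actual master recording).
demo = Path("master_take_01.wav")
demo.write_bytes(b"placeholder audio bytes")
record = provenance_record(demo)
print(json.dumps(record, indent=2))
```

A self-recorded timestamp proves little on its own; for real evidentiary weight the hash would need to be anchored somewhere you don't control, such as a trusted timestamping service or even a dated email to a third party. The hash, however, does prove that the exact file existed unmodified whenever that anchor was created.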
The Neon Oni story ends, for now, with real musicians performing AI music on real stages. Whether that's progress or something else depends entirely on whom you ask — and on whether they were told the truth upfront.
Frequently Asked Questions
What is the Neon Oni AI band scandal? Neon Oni was presented as a seven-member Japanese metal band on streaming platforms, accumulating over 80,000 monthly Spotify listeners and 1.2 million streams on a single track. In reality, the entire band was created by a single AI prompter in Europe using tools like Suno. After fans identified AI-generated artifacts in the band's videos and management began deleting critical comments, the creator publicly acknowledged the AI origins and recruited real Tokyo musicians to perform the catalogue live.
How much of the music on Spotify is AI-generated? Approximately 34% of tracks uploaded to Spotify daily are estimated to be AI-generated. Spotify has removed over 75 million AI tracks it classified as low-effort content, but the volume of daily uploads means enforcement remains a significant ongoing challenge.
Can streaming platforms detect AI music accurately? Yes. Streaming platform DA has deployed an AI detection tool that operates at 98% accuracy. The barrier to broader implementation is not technical capability but economic incentive — more content on platforms generates more streaming revenue, regardless of whether it is human or AI-generated.
How can AI music harm real artists financially? Beyond flooding platforms with competing content, AI can be used to clone an artist's voice and style without consent, upload imitations to streaming services, and then file copyright claims against the original artist's content. In the case of musician Murphy Campbell, this process resulted in her losing all YouTube revenue from her own original videos while a third-party distributor collected earnings from AI imitations of her work.
Is it legal to perform AI-generated music live with human musicians? Currently, no legislation specifically prohibits human musicians from performing AI-generated compositions live. The legal and ethical questions centre primarily on disclosure — whether audiences and venues are informed that the music was AI-generated — and on whether the AI training data used to create the music involved unlicensed use of copyrighted works. This area of law is evolving rapidly across multiple jurisdictions.
About Zeebrain Editorial
Our editorial team is dedicated to providing clear, well-researched, and high-utility content for the modern digital landscape. We focus on accuracy, practicality, and insights that matter.