About TRAICE
Why disclosure matters
and why current and future methods could fail.
AI disclosure in music is inevitable. The question is whether it will be flattened into a binary label or expressed in language that reflects reality.
The reality of creation
Modern music creation is hybrid, iterative, and non-linear. Hover (desktop) or tap (mobile) each step to learn more.
A common modern workflow mixing human authorship with multiple AI tools.
Human idea
AI Tool A
melody / lyrics / vocals
Human edits
rewrites, replays
AI Tool B
sound design / stems
Human arrangement
performance
Mix / master
human + AI tools
Final audio
hybrid, non-linear
Why other methods fail
Detection tools work best when creation is singular, linear, and untouched. Music is heading in the opposite direction. Click each method to compare.
Uses classifiers to guess whether audio resembles AI-generated material, so it must be constantly retrained to keep pace with new model outputs.
Embeds imperceptible signals in audio at the point of generation, but those signals can be degraded or stripped by editing, remixing, and re-recording.
Attaches signed metadata at time of generation: model, time, company. That metadata is easily lost when audio is edited, exported, or re-encoded.
Forces a yes/no classification on complex creative workflows.
Creator explains where, how, and how much AI was used: role-based, non-binary, contextual.
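What might a role-based, non-binary disclosure actually look like in practice? One way to picture it is as a structured record, one entry per AI-assisted step rather than a single yes/no flag. This is only an illustrative sketch; the field names and categories here are hypothetical, not TRAICE's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIContribution:
    """One AI-assisted step in a track's workflow (illustrative fields)."""
    stage: str          # e.g. "melody", "lyrics", "sound design"
    tool: str           # which tool was used
    role: str           # e.g. "generated", "assisted", "refined"
    human_edited: bool  # whether a human reworked the output afterwards

@dataclass
class Disclosure:
    """A track's full disclosure: many contributions, no binary label."""
    track: str
    contributions: list[AIContribution] = field(default_factory=list)

    def summary(self) -> str:
        # A contextual summary instead of a flat "AI-generated" flag.
        if not self.contributions:
            return f"{self.track}: no AI involvement disclosed"
        stages = ", ".join(c.stage for c in self.contributions)
        return f"{self.track}: AI used in {stages}"

# Mirrors the hybrid workflow shown above: two tools, human edits throughout.
d = Disclosure("Demo Track", [
    AIContribution("melody", "Tool A", "generated", human_edited=True),
    AIContribution("sound design", "Tool B", "assisted", human_edited=True),
])
print(d.summary())  # Demo Track: AI used in melody, sound design
```

The point of the structure is that each contribution carries its own context, so a track with AI-drafted lyrics and a fully human mix reads differently from one generated end to end.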
Key insight
As AI becomes more integrated and iterative, hard-cut detection becomes impossible. Voluntary, structured disclosure is not a compromise. It’s the only approach that scales with reality.
The full story
This platform was not built because I think AI music is “fake.” Nor was it built because I think artists owe their fans or the public an explanation of how they create their art. Above all, I didn’t build it to police taste, creativity, or the tools anyone uses.
I built TRAICE because the music industry is moving towards AI disclosure, but the methods it is pursuing are fundamentally misaligned with how music made with these AI tools is actually created.
Ready to explore?
See how creators are using AI today.