The Three Types of Metadata in the Music Business

Metadata issues wreak havoc not just on labels, artists, and songwriters, but on consumers too.

As I’ve written before, metadata is the lifeblood of the music business, and it will only grow in relevance as the industry evolves. Metadata matters far more today than it did five years ago, and the trend will continue. As new means of consumption emerge and become normalized, it’s important to anticipate the changes coming down the line and factor them in too.

If the metadata is incomplete at the ingestion stage, everything that follows is compromised. With metadata, there is a two-tier system at play: the metadata that rightsholders provide at the ingestion stage, and the metadata that platforms layer on top to ease discovery and improve the user experience.

Metadata issues hurt listeners’ experience on streaming services and rob songwriters and producers of well-deserved credit, and that’s just the tip of the iceberg. For better or worse, their impact on a songwriter’s career is significant. Consenting to the release of a song without the requisite metadata is like publishing a report or book without putting your name on it: you miss out not just on royalties but on sales, play counts, credit, discovery, referrals and more. Metadata might seem inconsequential on the surface, but consider this: every time a listener searches for a song on a streaming service, every time a publisher or distributor attributes royalties, every time Apple Music’s algorithm queues up a song, metadata is at play. Think of it as something that makes the music industry go round.

When I last wrote on this topic over a year ago, I didn’t cover the types of metadata there are.

So here we go. In the music industry, there are three types of metadata:

  • Descriptive metadata

  • Ownership metadata

  • Recommendation metadata

Descriptive metadata

This metadata holds details about the content of a song in the form of text tags. It includes information such as:

  • Song title 

  • Release date 

  • Track number 

  • Performer/Artist Name

  • Cover art

  • Genre

  • Title version (Radio edit, Live, Explicit, Bonus)

Descriptive metadata is at play whenever a listener searches for, organizes, or presents music in some way. It is the most familiar kind: it’s used to identify a song on platforms, attribute plays and spins to it, build an artist page on DSPs, or organize a music library. If you’ve ever encountered errors on DSPs like a misspelled song or artist name, the wrong artwork, or mixed-up release dates, more often than not the error stems from faulty descriptive metadata.
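To make this concrete, here’s a minimal sketch of descriptive metadata as plain text tags, with a quick check for missing fields. The field names mirror the list above; the track record itself is invented for illustration.

```python
# Descriptive fields a release typically needs (mirroring the list above).
REQUIRED_FIELDS = ["title", "artist", "release_date", "track_number",
                   "cover_art", "genre", "title_version"]

# A hypothetical track record, as a rightsholder might submit at ingestion.
track = {
    "title": "Midnight Drive",
    "artist": "The Examples",
    "release_date": "2021-06-04",
    "track_number": 3,
    "cover_art": "midnight_drive.jpg",
    "genre": "Indie Pop",
    "title_version": "Radio Edit",
}

def missing_fields(record):
    """Return the descriptive fields that are absent or empty in a record."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

print(missing_fields(track))                  # complete record -> []
print(missing_fields({"title": "Untitled"}))  # flags everything but the title
```

A simple check like this at the ingestion stage is exactly what catches the misspelled-name and missing-artwork errors described above before they propagate downstream.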

Ownership metadata

This metadata concerns revenue allocation, ensuring that every party involved in creating a song is remunerated accordingly. As we’ve discussed in the past, when a song is consumed via streaming, broadcast, sync and so on, everyone involved in its making, from the artist(s) to the songwriter and producer, is entitled to royalties. Ownership metadata encodes the contractual agreement behind each release so that royalties can be calculated and paid correctly. Royalty allocation is already complicated; incorrect or inconsistent metadata only makes it worse. A human error here or a database glitch there has cost, and could still cost, contributors hundreds to thousands of dollars. A recent report puts the US ‘black box’ of unattributable royalties at about $700m, largely down to faulty metadata.
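As a toy illustration of how ownership metadata drives royalty allocation, here is a hypothetical split sheet and a function that divides a royalty payment according to it. The contributor names and percentages are invented; real agreements are far more involved.

```python
from fractions import Fraction

# Hypothetical split sheet: contributor -> share of the song (percent).
splits = {
    "Artist A (performer)": Fraction(50),
    "Writer B (songwriter)": Fraction(30),
    "Producer C": Fraction(20),
}

def allocate(royalty_pool, splits):
    """Divide a royalty payment according to the ownership splits."""
    total = sum(splits.values())
    if total != 100:
        # Inconsistent splits are exactly the kind of error that
        # sends money into the 'black box'.
        raise ValueError(f"splits sum to {total}%, not 100%")
    return {name: float(royalty_pool * share / 100)
            for name, share in splits.items()}

payouts = allocate(1000.00, splits)
print(payouts)  # each contributor's cut of a $1,000 royalty pool
```

Note the sanity check: if the recorded splits don’t sum to 100%, the payment can’t be attributed cleanly, which is how royalties end up unpaid or misdirected.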

Recommendation metadata

This metadata differs from the other two, which are clear-cut and definitive: there can be only one song title, only one right way to spell an artist’s name, one specified revenue allocation per creator. Recommendation metadata, by contrast, consists of subjective tags that aim to reflect elements of a song and describe how it sounds: mood labels, genre tags, song and artist similarity, and so on. It matters all the more now, when songs so often fuse styles that their classification becomes even more subjective; we see such debates on Twitter a lot these days.

Recommendation metadata empowers the recommendation engine to broker connections between tracks. As we know, platforms like Spotify and SoundCloud have earned acclaim for their music discovery and recommendation features. Think, too, about smart speakers changing the way we access and discover music: voice-mediated listening is seeing users move from structured queries to amorphous requests like “Alexa, play me something I like.” This brings a new challenge for recommendation engines across streaming platforms and search engines as they work out which song best fits a given listener at a given moment, and artists ought to double down on making that easier. This isn’t the future; it may not be prevalent everywhere yet, but it’s happening.
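One simple way a recommendation engine can broker connections from subjective tags is tag-overlap (Jaccard) similarity. Real recommendation systems are far more sophisticated, but this hypothetical sketch (the tracks and tags are made up) shows the basic idea:

```python
# Hypothetical subjective tags (mood, style) for three tracks.
tags = {
    "track_1": {"melancholy", "lo-fi", "late-night", "indie"},
    "track_2": {"melancholy", "acoustic", "late-night"},
    "track_3": {"upbeat", "dance", "edm"},
}

def similarity(a, b):
    """Jaccard similarity: shared tags over total distinct tags."""
    return len(a & b) / len(a | b)

def most_similar(seed, catalog):
    """Pick the track whose tags overlap most with the seed track's."""
    others = {t: s for t, s in catalog.items() if t != seed}
    return max(others, key=lambda t: similarity(catalog[seed], others[t]))

print(most_similar("track_1", tags))  # -> track_2 (shares melancholy, late-night)
```

Because the tags themselves are subjective, two catalogs can tag the same song differently and recommend it to entirely different listeners, which is why this layer of metadata is so contested.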

To avoid these headaches:

  • Keep track of your metadata throughout

  • Ensure split sheets are defined before any work ever leaves the studio

  • Double- and triple-check your metadata submissions

  • Follow all guidelines to a T

Quiz coming up on Friday, ready?