The Connected Media Experience (CME) is a consortium formed to promote technical standards for enhanced digital media packages such as music, movie, television and eBook releases. Its origins go back to 2007, when it was conceived as a platform for providing a rich experience for enjoying media across a variety of devices. Early on, rich semantic information to inform and enhance an Experience was recognized as an important differentiator. CME is intended to include social aspects allowing users to connect and interact with each other, so a rich means of identifying and describing elements of a release, and the release itself, is an important design consideration.
The music industry has also developed rich metadata release descriptions in the form of DDEX (Digital Data Exchange). However, DDEX's goals are focused on business-to-business music distribution, rather than the needs of the consumer.
Ultimately, the group decided against using Semantic Web technologies to describe rich releases, in favor of a publisher-friendly model based on HTML5 and Widgets that achieves presentational objectives without the rich semantic representational component. (The format does include limited semantic data in a proprietary format.) This note discusses some of the reasons behind this decision, and lessons that the Semantic Web community might learn from the experience.
In 2007, the author was contracted by Gracenote and Warner Music Group to help develop use cases, demonstrations and a high-level architecture for a Connected Media Experience (CMX, as it was known at the time). Early demos made use of Adobe Flash and Flex technologies to create compelling experiences on mobile and desktop platforms, with a relational metadata representation using a proprietary (ad-hoc) XML schema. However, this was found to cause many interoperability problems and was ultimately abandoned in favor of open technologies.
As CME progressed, other major music labels including Universal Music Group and Sony Music (then Sony BMG) joined to create the Connected Media Experience Standards Setting Organization, to further development of the specification and to solicit contributions from other interested industry members in promoting such a standard. (The author served as Chairman of the Technical Working Group until March of this year.)
The Music Ontology was introduced as a rich metadata format using RDF and OWL to describe music releases and content. It is based on Friend-of-a-Friend (FOAF) and Functional Requirements for Bibliographic Records (FRBR) to describe albums, tracks, contributors, musical works, performances and releases. The need to describe releases beyond music ultimately drove CME to create its own ontology, taking elements from FOAF, FRBR, the Music Ontology, and DDEX.
Throughout development, CME members had difficulty accepting the advantages of the semantic technologies being used, which led to low participation and a lack of involvement in generating the specifications. Fundamentally, the difficulty of working with the technologies led the group to abandon a rich semantic representation of a release and settle on more established web technologies and proprietary metadata formats.
The basic idea of the CME Vocabulary was to allow a simple hierarchical representation of Work/Production/Signal/Manifestation with a relationship to a Release/Collection.
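This hierarchy can be sketched in Turtle as a chain of links (a minimal sketch; the fragment URIs are invented, and the linking properties are those used in the examples in this note):

```turtle
<#work> a mo:MusicalWork;                       # the abstract song
    mo:performed_in <#performance> .
<#performance> a mo:Performance;                # a particular production/performance
    cme:expression <#signal> .
<#signal> a cme:Audio, mo:Signal;               # the recorded signal
    cme:encoding <#encoding> .
<#encoding> a cme:Encoding, mo:MusicalManifestation .  # a concrete formatted file
```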
Given a particular encoding, say of Hoagy Carmichael’s “Stardust”, a simple Manifestation might be described as follows:
[ a cme:Encoding, mo:MusicalManifestation;
    dc:title "Stardust"@en-us;
    dc:format "audio/mpeg"^^dc:MediaType;
    cme:duration "PT3M53S"^^xsd:duration;
    dc:issued "1978-04-01"^^xsd:date ] .
As this represents a specific formatted manifestation of a recorded signal, we can add more information:
<#stardust> a cme:Audio, mo:Signal;
    dc:title "Stardust"@en-us;
    cme:displayArtist <http://dbpedia.org/data/Willie_Nelson>;
    cme:lyrics <http://www.metrolyrics.com/stardust-lyrics-willie-nelson.html>;
    mo:isrc "XX-XXX-XX-00000"^^cme:ISRCType;
    mo:label <http://dbpedia.org/data/Columbia_Records>;
    cme:encoding [ a cme:Encoding, mo:MusicalExpression;
        dc:title "Stardust"@en-us;
        dc:format "audio/mpeg"^^dc:MediaType;
        cme:duration "PT3M53S"^^xsd:duration;
        dc:issued "1978-04-01"^^xsd:date ] .
However, there’s more we can say about this recording, for instance, that it was recorded at a particular time with various performers:
[ a mo:Performance;
    dc:title "Studio recording of Stardust"@en-us;
    dc:created "1977-12-12"^^xsd:date;
    mo:producer <http://dbpedia.org/data/Booker_T._Jones>;
    mo:singer <http://dbpedia.org/data/Willie_Nelson>;
    mo:performer <http://dbpedia.org/data/Chris_Ethridge>,
        <http://dbpedia.org/data/Paul_English>,
        <http://dbpedia.org/data/Booker_T._Jones>;
    cme:expression <#stardust> ] .

<#stardust> a cme:Audio, mo:Signal .
We also know that Hoagy Carmichael composed the song "Stardust":
<http://dbpedia.org/data/Stardust_(song)> a mo:MusicalWork;
    dc:title "Stardust"@en-us;
    mo:composer <http://dbpedia.org/data/Hoagy_Carmichael>;
    dc:created "1927-10-31"^^xsd:date;
    mo:performed_in [ a mo:Performance;
        dc:title "Studio recording of Stardust"@en-us;
        dc:created "1977-12-12"^^xsd:date;
        mo:producer <http://dbpedia.org/data/Booker_T._Jones>;
        mo:singer <http://dbpedia.org/data/Willie_Nelson>;
        mo:performer <http://dbpedia.org/data/Chris_Ethridge>,
            <http://dbpedia.org/data/Paul_English>,
            <http://dbpedia.org/data/Booker_T._Jones>;
        cme:expression <#stardust> ] .

<#stardust> a cme:Audio, mo:Signal ....
We could continue this to describe multiple performances, expressions or encodings.
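For example, a second encoding might sit alongside the MP3 on the same signal (a sketch; the FLAC format value is an invented illustration, and the other properties follow the examples above):

```turtle
<#stardust> cme:encoding
    [ a cme:Encoding;
      dc:format "audio/mpeg"^^dc:MediaType;     # the original MP3
      cme:duration "PT3M53S"^^xsd:duration ],
    [ a cme:Encoding;
      dc:format "audio/flac"^^dc:MediaType;     # hypothetical lossless alternative
      cme:duration "PT3M53S"^^xsd:duration ] .
```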
A particular encoding might appear on many different albums or playlists, so we can't encode information such as track number with the cme:Audio. Instead, it is encoded in a Collection contained within a Release:
<> a cme:PrimaryRelease;
    owl:seeAlso <http://dbpedia.org/data/Stardust_(album)>;
    dc:title "Stardust"@en-us;
    cme:displayArtist <http://dbpedia.org/data/Willie_Nelson>;
    cme:parentalWarning "unspecified"^^cme:ParentalWarningType;
    mo:grid "A1-a1788-aaaaaaaaaa-b"^^cme:GRid;
    cme:presentation <js/authored.js>;
    cme:audioCollection [ a cme:AudioCollection;
        dc:title "Songs"@en-us;
        cme:item [ a cme:Item;
            cme:itemNumber "1";
            cme:expression <#stardust> ] ] .

<http://dbpedia.org/data/Stardust_(song)> a mo:MusicalWork ....
<#stardust> a cme:Audio, mo:Signal ....
There is much more that can be said about an album, including links to reviews, alternate performances, videos, photos and so forth. RDF provides an expressive mechanism for describing such rich metadata.
RDF is based strongly on the notion of Uniform Resource Identifiers (URIs) to identify particular resources or concepts. Using the so-called follow-your-nose principle, an agent might use identifiers contained within a release to discover more information about a particular subject; for instance, reviews of the album stored on DBpedia or elsewhere.
With a release described as an RDF graph, and given the principle that "anyone can say anything about anything," additional information can be authored about a given release. This might be useful for adding premium content such as extra audio tracks, music videos or concert photos. Moreover, consumers may choose to use a CME release as a creative starting point, creating alternate user-interface skins, adding personal pictures, or anything else they might be interested in; this is one facet of the "Connected" in The Connected Media Experience.
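As a sketch of this open-world authoring (all URIs below are invented for illustration), a fan or the label can publish a separate graph that simply adds statements about the same release URI:

```turtle
# Published independently of the original release document:
<http://example.com/releases/stardust>          # hypothetical release URI
    cme:presentation <skins/fan-skin.js>;       # an alternate user-interface skin
    foaf:depiction <photos/concert-1978.jpg> .  # a fan's concert photo
```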
Giving CME release elements URIs allows them to be used for other social activities, such as Activity Streams, Facebook “Like” operations, or other mechanisms.
Demise of CME Semantic Releases
In many ways, the music industry is not ready for the open aspects of an RDF format: the idea of using existing universal identifiers (such as DBpedia URIs) that they do not directly control can be a barrier, and the labels are not yet prepared to maintain their own publicly available repositories of unique identifiers representing their artists, musical works and releases.
Artists are naturally concerned that their product is presented in a manner consistent with their original design intentions; understandably, they want to ensure that their intellectual product is portrayed as intended. However, this desire can conflict with the read/write web, where fans often make use of authored material in mashups and other derived works. Coming to a reasonable understanding of fair use, and how it can be moderated, remains an important challenge.
The industry has made great strides in improving its use of ISRC identifiers, which in the past were not always reliable. ISRC, along with GRid, ISWC and ISNI identifiers, can be useful in differentiating resources, but such identifiers typically cannot be dereferenced. Nor are they URNs, so they are not appropriate for forming an owl:sameAs relationship with, for example, DBpedia URIs.
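The distinction can be seen in Turtle: an ISRC can only appear as a typed literal hanging off a resource, whereas owl:sameAs needs URIs on both sides (a sketch; the example.org URI is invented):

```turtle
# The ISRC differentiates the recording, but is an opaque literal:
<#stardust> mo:isrc "XX-XXX-XX-00000"^^cme:ISRCType .

# owl:sameAs requires URIs on both sides, which an ISRC cannot provide:
<#stardust> owl:sameAs <http://example.org/signals/stardust> .  # hypothetical URI
```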
It’s important to note that nothing in CME excludes RDF and a rich set of metadata, and we may yet see CME releases that use the original design principles to achieve similar objectives. What won’t be there is a base-level of metadata in every CME release that platforms can depend upon for extending the basic experience.
Lessons for the Music Industry
The concept of rich music (media) releases in an era of pervasive access to free content is an ongoing issue for the music industry. CME is an attempt to give consumers a reason to own their media, rather than obtain it through other means. Providing rich curated data about subjects of interest to consumers is one way of securing a future for content owners who legitimately need to profit from their artistry.
Giving up control of information, including the presentation of artistic works, is a barrier for music publishers. Existing contractual obligations do not necessarily align with the expectations of consumers.
Certainly, the web is full of bad data, and relying on an external service that gives content owners no reliable way of ensuring data quality is problematic. Even getting a handle on their own internal use of identifiers (for instance, having a single identifier for a performer across different releases, much less across performances that cross label boundaries) is a big challenge for legacy systems that were never intended for curating publicly available information.
The major labels do work with metadata services to provide accurate information, and many retailers use these third-party information services, along with proprietary identifiers, to provide consumers with limited metadata about music releases. For third-party information services there is a cost to maintaining quality metadata, which often means that reliable information remains behind paywalls.
As mentioned above, while a large amount of metadata is available through various open data sources, it is often of poor quality. Striking a balance that allows content owners to curate such data is an important step toward reliable rich metadata. To some, the lesson here is not one of control, but one of clean-up and publish: "if you don't give them what they want, they'll get it from someplace else."
The very lack of quality metadata about musical releases from the major labels is responsible for the rise of several services that provide such information, for example Gracenote, AMG, MusicBrainz and FreeDB. Providing curated information about music releases in standardized RDF formats is a potential business opportunity for such companies.
Lessons for the Semantic Web Community
RDF was designed by academics to be logically consistent and rich, and to a large degree it continues to be dominated by academic interests. This has produced a rich, consistent representational format with very well thought out elements (e.g., entailment, inference, semantic equivalence). However, the pace of change can be slow, and outreach to the open web community is not necessarily a priority.
There is a fair appreciation within the major music labels of the value and promise of RDF as a means of providing rich metadata. The fact is, though, that proprietary metadata formats are much simpler to implement and manage. According to a key opinion maker: "I bet the average developer can get a simple XML-based music metadata system up and running in less time than it would take to read the Music Ontology document. We can get most (all?) of the benefits of RDF through simpler means." To be fair, closed-world systems are easier to implement and manage; a goal of RDF is to allow datasets to be shared and mixed, and doing so requires shared vocabularies and representations.
RDFa is one example of an RDF technology that came from the open web community and has had astounding uptake. It is estimated that roughly 4% of all web content now includes some amount of RDFa markup. But other areas in need of standardization (e.g., a JSON representation of RDF) remain mired in controversy and/or apathy.
Given HTML's strong support for lists (e.g., dl, …), it is amazing that RDFa has no basic markup support for RDF lists. Even if it did, RDF lists are linked lists rather than flat collections, which makes it almost impossible to query an RDF graph for the constituent elements of a container such as a playlist or album without using higher-level semantic constructs (see the Ordered List Ontology).
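To make the problem concrete, a three-item playlist written with Turtle's collection shorthand (the cme:tracks property and the track URIs are invented for illustration) expands into a chain of rdf:first/rdf:rest nodes, so retrieving "all members, in order" requires walking the chain node by node:

```turtle
<#playlist> cme:tracks ( <#stardust> <#georgia> <#blue_skies> ) .

# ...which a parser expands into a linked list of blank nodes:
<#playlist> cme:tracks _:n1 .
_:n1 rdf:first <#stardust>;    rdf:rest _:n2 .
_:n2 rdf:first <#georgia>;     rdf:rest _:n3 .
_:n3 rdf:first <#blue_skies>;  rdf:rest rdf:nil .
```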