Thursday, March 26, 2009

News Content APIs: Uniform Access or Content Silos

The first generation of online news content applications was designed for consumption by humans. With the massive amounts of online content now available, machine-processable structured data will be the key to findability and relevance. Major news organizations such as the New York Times (NYT), National Public Radio (NPR), and The Guardian have recently opened up their content repositories through APIs.

These APIs have generated a lot of excitement in the content developer community and are certainly a significant step forward in the evolution of how news content is processed and consumed on the web. The APIs allow developers to create interesting new mashup applications. An example of such a mashup is a map of the United States showing how stimulus money is being spent by municipalities across the country, with hotspots linking to local newspaper articles about corruption investigations related to the spending. The stimulus spending data would be provided by the Stimulus Feed on the recovery.gov site, as specified by the "Initial Implementation Guidance for the American Recovery and Reinvestment Act" document. This is certainly the kind of mashup that US taxpayers would like.

For news organizations, these APIs represent an opportunity to grow their ad network by pushing their content to more sites on the web. That's the idea behind the recent release of The Guardian Open Platform API.

APIs and Content Silos

The emerging news content APIs typically offer a REST or SOAP web services interface and return content as XML, JSON, or Atom feeds. However, despite the excitement they generate, these APIs can quickly turn into content silos for the following reasons:

  • The structure of the content is often based on a proprietary schema. This introduces several potential interoperability issues for API users in terms of content structure, content types, and semantics.
  • It is not trivial to link content across APIs.
  • Each API provides its own query syntax. There is a need for universal data browsers and a query language to read, navigate, crawl, and query structured content from different sources.

XML, XSD, XSLT, and XQuery

Migrating content from HTML to XML (so-called document-oriented XML) has many benefits. XSLT enables media-independent publishing (single sourcing) to multiple devices such as Amazon's Kindle e-reader and smartphones. With XQuery, sophisticated contextualized database-like queries can be performed, turning the content itself into a database. In addition, XQuery allows the dynamic assembly of content, where new compound documents are constructed on the fly from multiple documents and external data sources. This allows publishers to repurpose content into new information products as needed to satisfy new customer demands and market opportunities.
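
As a rough illustration of the single-sourcing idea, here is a minimal sketch using Python's lxml library to publish one XML source to several delivery targets. The stylesheet and file names are placeholders, not part of any real system.

```python
# Minimal single-sourcing sketch using lxml (one possible XSLT processor).
# The stylesheet and file names below are hypothetical placeholders.
from lxml import etree

# One source document, marked up as document-oriented XML.
article = etree.parse("article.xml")

# One stylesheet per delivery target (web, e-reader, mobile).
targets = {
    "web": etree.XSLT(etree.parse("article-to-xhtml.xsl")),
    "kindle": etree.XSLT(etree.parse("article-to-kindle.xsl")),
    "mobile": etree.XSLT(etree.parse("article-to-mobile.xsl")),
}

# Publish the same content to every device from a single source.
for name, transform in targets.items():
    result = transform(article)
    with open("article-%s.html" % name, "wb") as out:
        out.write(etree.tostring(result, pretty_print=True))
```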

However, XSD, XSLT, and XQuery operate at the syntax level. The next level up in the content technology stack is semantics and reasoning, and that's where RDF, OWL, and SPARQL come into play. To illustrate the issue, consider three news organizations, each with its own XML Schema for describing news articles. To describe the author of an article, the first news organization uses the <creator> element, the second the <byline> element, and the third the <author> element. All three element names have exactly the same meaning. Using an OWL ontology, we can establish that these three terms are equivalent.
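
For instance, a few lines of rdflib (a Python RDF library) are enough to record this equivalence; the three namespace URIs below are invented purely for illustration.

```python
# Sketch: declaring that three publisher-specific "author" properties mean the
# same thing. The three namespace URIs are made up for illustration.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

ORG1 = Namespace("http://example.org/org1/terms/")
ORG2 = Namespace("http://example.org/org2/terms/")
ORG3 = Namespace("http://example.org/org3/terms/")

g = Graph()
# org1:creator, org2:byline, and org3:author are equivalent properties.
g.add((ORG1.creator, OWL.equivalentProperty, ORG2.byline))
g.add((ORG2.byline, OWL.equivalentProperty, ORG3.author))

print(g.serialize(format="turtle"))
```

Because owl:equivalentProperty is transitive under OWL semantics, a reasoner can also infer that org1:creator and org3:author are equivalent, even though that statement was never asserted directly.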

Semantic Web and Linked Data to the Rescue

Semantic web technologies such as RDF, OWL, and SPARQL can help us close this semantic gap and also open up new opportunities for publishers. Furthermore, with the decline in ad revenues, news organizations are now considering charging users for accessing content online. Semantic web technologies can enrich content by providing new ways to discover and explore it based on user context and interests. An interesting example is a mashup application built by Zemanta called Guardian topic researchr, which extracts entities (people, places, organizations, etc.) from The Guardian Open Platform API query results and allows readers to explore those entities further. In addition, the recently unveiled Newssift site by the Financial Times is an indication that the industry is starting to pay attention to the benefits of "semantic search" as opposed to keyword search.

The rest of this post outlines some practical steps for migrating news content to the Semantic Web. For existing news content APIs, an interim solution is to create Semantic Web wrappers around these APIs (more on that later). The long term objective however should be to fully embrace the Semantic Web and adopt Linked Data principles in publishing news content.

Adopt the International Press Telecommunications Council (IPTC) News Architecture (NAR)

The main reason for adopting the NAR is interoperability at the levels of content structure, content types, and semantics. Imagine a mashup developer trying to integrate news content from three different news organizations. In addition to using three different element names (<creator>, <byline>, and <author>) to describe the same concept, these three organizations use completely different XML Schemas to describe the structure and types of their respective news content. That can lead to a data mapping nightmare for the mashup developer, and the problem will only get worse as the number of news sources increases.

The NAR content model defines four high-level elements: newsItem, packageItem, conceptItem, and knowledgeItem. You don't have to manage your content internally using the XML structure defined by the NAR. However, you should be able to map and export your content to the NAR as a delivery format. If you have fields in your content repository that do not map to the NAR structure, extend the standard NAR XML Schema using the appropriate XML Schema extension mechanism so that your extension elements are clearly identified in your own XML namespace. Provide a mechanism, such as dereferenceable URIs, that allows users to obtain the meaning of these extension elements.
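
As a rough sketch of what such an export might look like, the following uses Python's lxml to build a heavily simplified newsItem that carries a non-NAR field in a separate extension namespace. The extension namespace URI and field name are hypothetical, and a real NewsML-G2 item requires metadata not shown here.

```python
# Sketch: serializing an internal record as a (heavily simplified) NAR newsItem
# and carrying a field with no NAR equivalent in a clearly identified extension
# namespace. The extension namespace URI and field name are hypothetical.
from lxml import etree

NAR = "http://iptc.org/std/nar/2006-10-01/"
EXT = "http://example.org/news-extensions/"   # your own namespace
NSMAP = {None: NAR, "ext": EXT}

news_item = etree.Element("{%s}newsItem" % NAR, nsmap=NSMAP)
content_meta = etree.SubElement(news_item, "{%s}contentMeta" % NAR)
headline = etree.SubElement(content_meta, "{%s}headline" % NAR)
headline.text = "Stimulus spending under scrutiny"

# A repository field with no NAR mapping goes into the extension namespace,
# so consumers can recognize it (and dereference its URI) as an extension.
internal_desk = etree.SubElement(content_meta, "{%s}internalDesk" % EXT)
internal_desk.text = "metro"

print(etree.tostring(news_item, pretty_print=True).decode())
```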

The same logic applies to the news taxonomy that you use. Adopting the IPTC News Codes, which specify 1,300 terms for categorizing news content, will greatly facilitate interoperability as well.

Adopt or Create a News Ontology

Several news ontologies in RDFS or OWL format are now available. The IPTC is in the process of creating an IPTC news ontology in OWL format. To facilitate semantic interoperability, news organizations should use this ontology when it becomes available. When mapping XML Schemas to OWL, ontology best practices should be followed. For example, if mapped automatically, container elements in the XML Schema can generate blank nodes in the RDF graph. However, blank nodes cannot be used as targets of external RDF links and are not recommended for Linked Data applications. Also, RDF reification, RDF containers, and RDF collections are not SPARQL-friendly and should be avoided.

While creating the news ontology, you should reuse or link to other existing ontologies such as FOAF and Dublin Core, using constructs like owl:equivalentProperty, owl:equivalentClass, rdfs:subClassOf, and rdfs:subPropertyOf.

Similarly, existing taxonomies should be mapped to an RDF-compatible format using the SKOS specification. This makes it possible to use an owl:Restriction to constrain the value of a property in the OWL ontology to be a skos:Concept or skos:ConceptScheme.
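
The snippet below sketches both ideas with rdflib: a taxonomy term modeled as a skos:Concept, and an owl:Restriction constraining a hypothetical ex:subject property to SKOS concepts. The ex: namespace, class, and property names are made up for illustration.

```python
# Sketch: a taxonomy term as a skos:Concept, plus an OWL restriction saying
# that every value of the (hypothetical) ex:subject property on ex:NewsItem
# must be a skos:Concept.
from rdflib import Graph, Namespace, Literal, BNode
from rdflib.namespace import RDF, RDFS, OWL, SKOS

EX = Namespace("http://example.org/news/")
g = Graph()

# A taxonomy term expressed in SKOS.
g.add((EX.politics, RDF.type, SKOS.Concept))
g.add((EX.politics, SKOS.prefLabel, Literal("politics", lang="en")))

# The blank node here is standard OWL practice inside the ontology itself,
# distinct from the instance-level links discussed above.
restriction = BNode()
g.add((restriction, RDF.type, OWL.Restriction))
g.add((restriction, OWL.onProperty, EX.subject))
g.add((restriction, OWL.allValuesFrom, SKOS.Concept))
g.add((EX.NewsItem, RDFS.subClassOf, restriction))

print(g.serialize(format="turtle"))
```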

Generate RDF Data

Assign a dereferenceable HTTP URI to each news item and use content negotiation to provide both an XHTML and an RDF/XML representation of the resource. When the resource is requested, an HTTP 303 See Other redirect is used to serve XHTML or RDF/XML depending on whether the client's Accept header asks for text/html or application/rdf+xml. The W3C Best Practice Recipes for Publishing RDF Vocabularies explain how dereferenceable URIs and content negotiation work in the Semantic Web.
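
Here is a minimal sketch of that redirect logic using Flask (just one possible framework); the URI layout and item identifiers are assumptions, not a prescribed scheme.

```python
# Sketch of 303-based content negotiation for a news item URI, using Flask.
# The /id/, /page/, and /data/ URI layout is a hypothetical convention.
from flask import Flask, redirect, request

app = Flask(__name__)

@app.route("/id/news/<item_id>")
def news_item(item_id):
    # The generic resource URI never returns content directly; it redirects
    # (303 See Other) to a representation chosen from the Accept header.
    best = request.accept_mimetypes.best_match(
        ["application/rdf+xml", "text/html"], default="text/html"
    )
    if best == "application/rdf+xml":
        return redirect("/data/news/%s.rdf" % item_id, code=303)
    return redirect("/page/news/%s.html" % item_id, code=303)
```

The /page/ and /data/ URIs would then serve the actual XHTML and RDF/XML representations.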

The RDF data can be generated using a variety of techniques. For example, you can use an XSLT-based RDFizer to extract RDF/XML from news items already marked up in XML. There are also RDFizers for relational databases. Entity extraction tools like Open Calais can also be useful, particularly for extracting RDF metadata from legacy news items available only in HTML format.
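
As a trivial illustration of the RDFizing step, the following hand-rolled converter lifts two fields from a made-up XML news item into Dublin Core triples with rdflib; real mappings would of course be driven by your own schema.

```python
# Sketch of a tiny hand-rolled RDFizer: lift a few fields out of an XML news
# item (hypothetical element names) into RDF using Dublin Core terms.
from lxml import etree
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

doc = etree.fromstring(b"""
<article id="12345">
  <headline>Stimulus spending under scrutiny</headline>
  <byline>Jane Doe</byline>
</article>
""")

item = URIRef("http://example.org/id/news/12345")
g = Graph()
g.add((item, DCTERMS.title, Literal(doc.findtext("headline"))))
g.add((item, DCTERMS.creator, Literal(doc.findtext("byline"))))

# RDF/XML output for the content-negotiated response.
print(g.serialize(format="xml"))
```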

Link the RDF data to external data sources such as DBpedia and GeoNames using RDF links from existing vocabularies such as FOAF. For example, an article about US Treasury Secretary Timothy Geithner can use foaf:based_near to link the news item to the resource describing Washington, DC on DBpedia. If there is an HTTP URI that describes the same resource in another data source, use owl:sameAs to link the two resources. For example, if a news item is about Timothy Geithner, you can use owl:sameAs to link to Timothy Geithner's data page on DBpedia. An RDF browser like Tabulator can then traverse those links and help the reader explore more information about topics of interest.
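
With rdflib, adding such links is just a matter of asserting a couple of triples; the news item and person URIs below are hypothetical, while the DBpedia URIs are real identifiers.

```python
# Sketch: linking a news item to external data sources. The example.org URIs
# are hypothetical; the DBpedia URIs are real identifiers.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import OWL

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

g = Graph()
item = URIRef("http://example.org/id/news/12345")
geithner = URIRef("http://example.org/id/person/timothy-geithner")

# Locate the story near Washington, DC and identify its subject with the
# matching DBpedia resource.
g.add((item, FOAF.based_near, URIRef("http://dbpedia.org/resource/Washington,_D.C.")))
g.add((geithner, OWL.sameAs, URIRef("http://dbpedia.org/resource/Timothy_Geithner")))

print(g.serialize(format="turtle"))
```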


Expose a SPARQL Endpoint

Use a Semantic Sitemap (an extension to the Sitemap Protocol) to tell Semantic Web clients and crawlers where to find your SPARQL endpoint or an RDF dump. OpenLink Virtuoso is an example of an RDF store that also provides a SPARQL endpoint.
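
Once the endpoint is up, clients can query it with standard tooling. The sketch below uses the SPARQLWrapper Python library against a hypothetical endpoint URL and a Dublin Core-based vocabulary that is assumed, not prescribed.

```python
# Sketch: query a (hypothetical) endpoint for the ten most recent items about
# a given subject concept, using SPARQLWrapper.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/sparql")  # hypothetical endpoint
sparql.setQuery("""
    PREFIX dcterms: <http://purl.org/dc/terms/>
    SELECT ?item ?title WHERE {
        ?item dcterms:subject <http://example.org/news/politics> ;
              dcterms:title ?title ;
              dcterms:issued ?date .
    }
    ORDER BY DESC(?date)
    LIMIT 10
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["title"]["value"], row["item"]["value"])
```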

Provide a user interface for performing semantic searches. Expose the RDF metadata as facets for browsing the news items.

Provide a Semantic Web Wrapper for Existing APIs

A wrapper provides a dereferenceable URI for every news item available through an existing news content API. When an RDF browser requests the news item, the Semantic Web wrapper translates the request into an API call, transforms the response from XML into RDF, and sends it back to the Semantic Web client. The RDF Book Mashup is an example of how a Semantic Web wrapper can be used to integrate publicly available APIs from Amazon, Google, and Yahoo into the Semantic Web.
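
A minimal wrapper can be sketched in a few lines of Python with Flask, requests, and rdflib; the upstream API URL, parameters, and element names below are all assumptions standing in for a real content API.

```python
# Sketch of a Semantic Web wrapper around an existing content API: dereference
# a wrapper URI, call the upstream (hypothetical) XML API, and return RDF/XML.
import requests
from flask import Flask, Response
from lxml import etree
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS

app = Flask(__name__)

@app.route("/data/news/<item_id>.rdf")
def wrapped_item(item_id):
    # 1. Translate the dereference into an API call (hypothetical upstream API).
    xml = requests.get("https://api.example.com/items", params={"id": item_id}).content
    doc = etree.fromstring(xml)

    # 2. Transform the XML response into RDF (assumed element names).
    item = URIRef("http://example.org/id/news/%s" % item_id)
    g = Graph()
    g.add((item, DCTERMS.title, Literal(doc.findtext("headline"))))
    g.add((item, DCTERMS.creator, Literal(doc.findtext("byline"))))

    # 3. Send it back to the Semantic Web client.
    return Response(g.serialize(format="xml"), mimetype="application/rdf+xml")
```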

Conclusion

The Semantic Web is still an obscure topic in the mainstream developer community. I hope I've outlined a few practical steps you can take today to start taking advantage of the emerging Web of Linked Data.