Linked data

skos-history

"What's new?" and "What has changed?" are common user questions when a new version of a vocabulary is published - be it a thesaurus, a classification, or a simple keyword list. Making use of the regular structure of SKOS files, changes can be derived from the differences of the versions (deltas), and can be grouped to get an overview of additions, deletions/deprecations, hierachy or label changes. The resulting reports should be apprehensable by humans and processable by machines. skos-history aims at developing a set of processing practices and a supporting ontology to this end.

Publishing SPARQL queries live

SPARQL queries are a great way to explore Linked Data sets - be it our STW with its links to other vocabularies, the papers of our repository EconStor, or persons and institutions in economics as authority data. ZBW has therefore long offered public endpoints. Yet it is often not easy to figure out the right queries. The classes and properties used in the data sets are unknown, and the overall structure requires some exploration. Therefore, we have started collecting queries in our new SPARQL Lab - queries which are in use at ZBW, and which could serve others as examples for working with our datasets.

A major challenge was to publish queries in a way that allows not only their execution, but also their modification by users. The first approach to this was pre-filled HTML forms (e.g. http://zbw.eu/beta/sparql/stw.html). Yet that couples the query code with that of the HTML page, and with a hard-coded endpoint address. It does not scale to multiple queries on a diversity of endpoints, and it is difficult to test and to keep in sync with changes in the data sets. Besides, offering a simple text area without any editing support makes it quite hard for users to adapt a query to their needs.
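The decoupling problem can be illustrated with a minimal Python sketch: the query lives in its own template, user-adjustable parameters are filled in separately, and the endpoint address is an argument rather than hard-coded into an HTML page. The query text, the $limit placeholder, and the endpoint URL are illustrative assumptions, not the actual SPARQL Lab setup.

```python
# Sketch: keeping a SPARQL query decoupled from both the HTML page
# and a hard-coded endpoint. Query, placeholder and URL are made up.
from string import Template
from urllib.parse import urlencode

QUERY_TEMPLATE = Template("""\
SELECT ?concept ?prefLabel
WHERE { ?concept skos:prefLabel ?prefLabel . }
LIMIT $limit
""")

def build_query(limit=10):
    """Fill user-adjustable parameters into the stored query text."""
    return QUERY_TEMPLATE.substitute(limit=limit)

def execution_url(endpoint, query):
    """Build a GET URL for an arbitrary SPARQL endpoint."""
    return endpoint + "?" + urlencode({"query": query})

url = execution_url("http://zbw.eu/beta/sparql/stw/query", build_query(limit=5))
```

The same query file can then be pointed at different endpoints and embedded in different pages without duplication, which is essentially what a client-side editor like YASGUI enables in the browser.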

And then came YASGUI, an "IDE" for SPARQL queries. Accompanied by the YASQE and YASR libraries, it offers a completely client-side, customizable, JavaScript-based editing and execution environment. Particular highlights from the libraries' descriptions include:

Other editions of this work: An experiment with OCLC's LOD work identifiers

Large library collections, and even more so portals or discovery systems aggregating data from diverse sources, face the problem of duplicate content. Wouldn't it be nice if all editions of a work could be collapsed into a single entry in a result set?

The WorldCat catalogue, provided by OCLC, holds more than 320 million bibliographic records. Since early 2014, OCLC has shared its 197 million work descriptions as Linked Open Data: "A Work is a high-level description of a resource, containing information such as author, name, descriptions, subjects etc., common to all editions of the work. ... In the case of a WorldCat Work description, it also contains [Linked Data] links to individual, oclc numbered, editions already shared in WorldCat." The works and editions carry schema.org semantic markup, in particular using schema:exampleOfWork/schema:workExample for the relation from edition to work and vice versa. These properties were added to the schema.org spec recently, as suggested by the W3C Schema Bib Extend Community Group.

ZBW contributes to WorldCat, and has 1.2 million OCLC numbers attached to its bibliographic records. So it seemed interesting to see how many of these editions link to works, and furthermore to other editions of the very same work.
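The underlying grouping step is simple to sketch: given the edition-to-work links (schema:exampleOfWork), inverting the relation yields, for each edition, its sibling editions of the same work. The OCLC numbers and work URIs below are made up for illustration.

```python
# Sketch: finding "other editions of this work" from edition -> work
# links, as provided by schema:exampleOfWork. Identifiers are invented.
from collections import defaultdict

edition_to_work = {
    "oclc:111": "worldcat:work/A",
    "oclc:222": "worldcat:work/A",
    "oclc:333": "worldcat:work/B",
}

def other_editions(edition_to_work):
    """Invert the exampleOfWork relation, then list sibling editions."""
    by_work = defaultdict(set)
    for edition, work in edition_to_work.items():
        by_work[work].add(edition)
    return {edition: sorted(by_work[work] - {edition})
            for edition, work in edition_to_work.items()}

siblings = other_editions(edition_to_work)
```

An edition whose work has no other linked editions (here "oclc:333") simply gets an empty list - which is exactly the case one would want to count when evaluating how much deduplication the work links actually buy.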

Link out to DBpedia with a new Web Taxonomy module

ZBW Labs now uses DBpedia resources as tags/categories for articles and projects. The new Web Taxonomy plugin for DBpedia Drupal module (developed at ZBW) integrates DBpedia labels, stemming from Wikipedia page titles, via a comfortable autocomplete plugin into the authoring process. On the term page (example), further information about a keyword can be obtained by a link to the DBpedia resource. This at the same time connects ZBW Labs to the Linked Open Data Cloud.

The plugin is the first one released for Drupal Web Taxonomy, which makes LOD resources and web services easily available for site builders. Plugins for further taxonomies are to be released within our Economics Taxonomies for Drupal project.

Extending econ-ws Web Services with JSON-LD and Other RDF Output Formats

From the beginning, our econ-ws (terminology) web services for economics have produced tabular output, very much like the results of a SQL query. No surprise - they are based on SPARQL, and use the well-defined, table-shaped SPARQL 1.1 query results formats in JSON and XML, which can easily be transformed to HTML. But there are services whose results do not really fit this pattern, because they are inherently tree-shaped. This is true especially for the /combined1 and the /mappings service. For the former, see our prior blog post; as an example of the latter, the mappings of the descriptor International trade policy are shown (in HTML) as:

concept | prefLabel | relation | targetPrefLabel | targetConcept | target
<http://zbw.eu/stw/descriptor/10616-4> | "International trade policy"@en | <http://www.w3.org/2004/02/skos/core#exactMatch> | "International trade policies"@en | <http://aims.fao.org/aos/agrovoc/c_31908> | <http://zbw.eu/stw/mapping/agrovoc/target>
<http://zbw.eu/stw/descriptor/10616-4> | "International trade policy"@en | <http://www.w3.org/2004/02/skos/core#closeMatch> | "Commercial policy"@en | <http://dbpedia.org/resource/Commercial_policy> | <http://zbw.eu/stw/mapping/dbpedia/target>

That's far from perfect - the "concept" and "prefLabel" entries of the source concept(s) of the mappings are repeated identically across multiple rows.
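Regrouping such flat rows into a tree is straightforward; the sketch below nests the mappings under their source concept, so the repeated "concept"/"prefLabel" values appear only once. The property names loosely follow the table above, but the nesting shape is an assumption for illustration, not the actual econ-ws JSON-LD output.

```python
# Sketch: turning flat, SQL-like mapping rows into a tree keyed by the
# source concept. Keys mirror the table columns; shape is illustrative.

rows = [
    {"concept": "stw:10616-4", "prefLabel": "International trade policy",
     "relation": "skos:exactMatch",
     "targetPrefLabel": "International trade policies",
     "targetConcept": "agrovoc:c_31908"},
    {"concept": "stw:10616-4", "prefLabel": "International trade policy",
     "relation": "skos:closeMatch",
     "targetPrefLabel": "Commercial policy",
     "targetConcept": "dbpedia:Commercial_policy"},
]

def nest(rows):
    """Group rows by source concept; attach mappings as a nested list."""
    tree = {}
    for row in rows:
        node = tree.setdefault(row["concept"],
                               {"prefLabel": row["prefLabel"], "mappings": []})
        node["mappings"].append({
            "relation": row["relation"],
            "targetPrefLabel": row["targetPrefLabel"],
            "targetConcept": row["targetConcept"],
        })
    return tree
```

Serialized as JSON-LD with a suitable @context, such a structure carries the same information as the table, minus the redundancy.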

ZBW Labs as Linked Open Data

As a test bed for new, Linked Open Data based publication technologies, we have redeveloped ZBW Labs as a Semantic Web application. The HTML pages are enriched with RDFa, using the RDF vocabularies Dublin Core, DOAP (Description of a Project) and others. The Schema.org vocabulary, likewise embedded as RDFa, is intended to increase visibility in search engines.

With the new version of ZBW Labs we are also creating an environment for experimenting with new possibilities of electronic and linked-data publishing. At the same time, it allows editorial contributions from everyone involved in the Labs projects, as well as comments and other forms of participation by interested web users.

In the underlying Drupal 7 system, RDFa is already part of the CMS core and can be configured at field level. The Drupal modules RDFx, SPARQL Views and Schema are used as extensions. Numerous further ready-made components (in particular the Views and the new Entity Reference module) make it easy to present and link the individual data items on the web pages. The current version of the Drupal Zen theme allows the use of HTML5 and RDFa 1.1, and enables a responsive design with an adapted user interface on smartphones and tablets.

EconStor LOD

The Linked Open Data movement has already produced a number of publications of bibliographic metadata for the Semantic Web. With the publication of the metadata of our repository http://econstor.eu, we have now prepared more than 40,000 titles of economics working papers as RDF triples ourselves. The dataset contains links to established economics thesauri which are themselves published as LOD, such as the STW or the JEL classification.

Pressemappe 20. Jahrhundert

Here you find the historical press archives of ZBW as Linked Open Data. Almost 7,000 person and company dossiers with 250,000 press articles and annual reports thus become uniquely addressable and citable down to the page level, can be displayed conveniently with the DFG-Viewer, and are linked in many ways to data in the Linked Data Cloud.
