I’m very excited to pass on the announcement of an upcoming event put together by Corey Harper, Jason Kucsma and Ben Vershbow.

From Corey:

Please note that the smaller afternoon session has already filled up and there is now a wait list, but we still have slots open for the morning plenary session.



LOD-LAM-NYC: A Day of Linked Data Discussion & Activities for the NY Metropolitan Area

Thurs, Feb 23, 9:00am-6:00pm

There is no fee to attend, but registration is required.

Following the success of the LOD-LAM Summit (http://lod-lam.net/summit/) in June 2011, discussions of Cultural Heritage Linked Data have continued at a variety of Regional LOD-LAM (Linked Open Data for Libraries, Archives, and Museums) events. These events, characterized by their “Unconference” style and focus on cutting-edge Semantic Web technologies, have continued to further the goals defined in the World Wide Web Consortium’s Library Linked Data Incubator Report and the various outputs of the Stanford Linked Data Workshop.

Continuing this conversation, we would like to announce LOD-LAM-NYC, two related events that add up to a day of Linked Data discussions for the Cultural Heritage Sector in the NY Metropolitan Area on February 23, 2012. The event will consist of two separate sessions: a morning plenary and a smaller afternoon “hands-on” workshop. While both sessions are being offered free of charge, separate registration is required for each (see below for links).

This event, co-organized by METRO, The New York Public Library’s NYPL Labs, and New York University, sponsored by METRO, and hosted by NYPL, will accommodate 175 attendees for the morning sessions. The afternoon workshop will be smaller, with space for up to 40 participants.

Learn more & register at http://www.metro.org/en/art/488/.

LODLAM Meetup, NYPL Labs, July 2011

vocabulary alignment, meaning and understanding in the world museum

We live in a world of silos. Silos of data. Silos of culture. Linked Open Data aims to tear down these silos and create unity among the collections, their data and their meaning. The World Museum awaits us.

It comes as no surprise that I begin this post with such Romantic allusions. Our discussions of vocabularies – as technical behemoths and as cultural artefacts – were lively and florid at a recent gathering of researchers and library and museum professionals at LODLAM-NZ. Metaphors of time and tide, depicted beautifully in this companion post by Ingrid Mason, highlight issues of vocabularies’ expressive power and of how their meaning shifts over time and across cultures. I present a very broad technical perspective on the matter, beginning with a metaphor for what I believe represents the current state of digital cultural heritage: a world of silos.

Among these silos lie vocabularies that describe their collections and lend meaning to their objects. Originally employed to assist cataloguers and disambiguate terms, vocabularies have grown to encompass rich semantic information, often tailored to the needs of the institution, its collection, or its creator communities. Vocabularies themselves are cultural artefacts representing a snapshot of sense making. Like the objects that they describe, vocabularies can depict a range of substance, from Cold War paranoia to escapist and consumerist Disneyfication. Inherent within them are the world views, biases, and focal points of their creators. An object’s source vocabulary should always be recorded as a significant part of its provenance. Welcome to the recursive hell of meta-meta-data.

Within the context of the museum, vocabularies form the backbone from which collection descriptions are tagged, catalogued or categorised. But there are many vocabularies, and the World Museum needs a universal language. LODLAM-NZ shared the enthusiasm for a universal language but also understood the immense technical challenges that accompany vocabulary alignment and, in many cases, natural language processing in general. However, if done successfully, alignment does a few great things: it normalises the labels that we assign to objects, so that a unity of inferencing, reasoning and understanding can occur across vast swathes of collections; it can provide semantic context to those labels, enabling even deeper, more compelling relations among objects; and it can be used to disambiguate otherwise flat or non-semantic meta-data, such as small free-text fields and social tags.

Vocabulary alignment is the process of putting two vocabularies side-by-side, finding the best matches, and joining the dots.

In many cases, alignment is straightforward – a simple string match on the aligned terms can be sufficient to create a match. However, as the above example shows, aligning can require a lot more intuition – ceremonial exchange from the Australian Museum’s thesaurus could map to both the ceremonies and the exchange and gift concepts from the Getty’s Art and Architecture Thesaurus. This necessary one-to-many relation, along with other quirks and anomalies such as missing terms, semantic differences in how terms are used and interpreted, and the general English-language bias of many natural language processing tools, makes such a task fraught with difficulty, especially when alignment occurs across vocabularies that address specific cultural groups.
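The two matching strategies above – exact string matching and looser, one-to-many candidate matching – can be sketched in a few lines of Python. This is a toy illustration, not a production aligner: the `stem` heuristic and the term lists are invented for this example, loosely modelled on the ceremonial exchange case.

```python
# A minimal sketch of aligning two hypothetical vocabularies.
# Exact (normalised) string matches are tried first; anything left over is
# matched by crude token overlap, which naturally allows one-to-many mappings.

def normalise(term):
    """Lowercase and collapse whitespace for comparison."""
    return " ".join(term.lower().split())

def stem(token):
    """Crude prefix 'stem' so ceremonial/ceremonies can meet; illustration only."""
    return token[:6]

def align(source_terms, target_terms, min_overlap=1):
    """Return {source term: [candidate target terms]} (possibly empty or many)."""
    alignment = {}
    exact = {normalise(t): t for t in target_terms}
    for s in source_terms:
        key = normalise(s)
        if key in exact:                         # exact string match
            alignment[s] = [exact[key]]
            continue
        s_tokens = {stem(tok) for tok in key.split()}
        alignment[s] = [t for t in target_terms
                        if len(s_tokens & {stem(tok) for tok in normalise(t).split()})
                        >= min_overlap]
    return alignment

# Hypothetical terms loosely modelled on the example above:
source = ["ceremonial exchange", "canoe"]
target = ["ceremonies", "exchange and gift", "watercraft"]
print(align(source, target))
# → {'ceremonial exchange': ['ceremonies', 'exchange and gift'], 'canoe': []}
```

Note how the one-to-many case falls out for free, and how "canoe" simply finds no candidate – the missing-term anomaly mentioned above.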

The challenges of alignment are compounded when the source terms come from non-semantic sources, such as unstructured free text (labels, descriptions and comments) and user tags. Let’s say, for example, that someone has tagged an object with the term gold. Do they mean “this object is made of gold” or “this object has a golden colour”? The Getty’s Art and Architecture Thesaurus has the term gold in both senses of the word. We could use a tool called SenseRelate::AllWords that gives us the correct WordNet concept (based on the context of an object’s description label), but then we need to align the WordNet gold to the AAT’s gold. Performing these two computations in a pipeline significantly increases the risk that the tag is misinterpreted – or, even worse, that its original meaning and intention is skewed or lost altogether. Vocabulary alignment, if not done correctly, has the potential to dilute, skew, or destroy the meaning of its terms.
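The first stage of that pipeline – picking the right sense of gold from context – can be sketched with a simplified Lesk-style overlap measure: score each candidate sense by how many words its gloss shares with the object’s description. The sense labels and glosses below are invented for illustration; they are not real WordNet or AAT entries, and real disambiguators (like SenseRelate) are considerably more sophisticated.

```python
import re

# Invented sense glosses for illustration; not real WordNet or AAT entries.
SENSES = {
    "gold": {
        "gold (metal)":  "a yellow precious metal used in coins and jewellery",
        "gold (colour)": "a deep yellow hue like that of ripe wheat",
    },
}

def tokens(text):
    """Lowercased alphabetic tokens, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def disambiguate(tag, description):
    """Pick the sense whose gloss shares the most words with the description."""
    context = tokens(description)
    best_sense, best_score = None, -1
    for sense, gloss in SENSES[tag].items():
        score = len(context & tokens(gloss))
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

# A label mentioning material rather than hue picks the metal sense:
print(disambiguate("gold", "Bracelet of hammered precious metal, 4th century"))
# → gold (metal)
```

Even this toy version shows the pipeline risk: a description that happens to mention “yellow” can tip the score towards the wrong sense, and any error made here is then carried silently into the WordNet-to-AAT alignment step.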

Over the past few years, elaborate algorithms have been developed to try to address these alignment challenges. However, they often fail on the unpredictable and highly heterogeneous nature of cultural datasets, or their performance differs across – and even within – vocabularies. And when things do go wrong, problems are often hard to diagnose and even more difficult to solve.

But researchers have brought humans back into the equation. The idea is that, within the alignment process, machines do the heavy lifting on very simple and straightforward natural language processing tasks, while humans fine-tune the steps of the process until they are satisfied with the results. This paper, by van Ossenbruggen et al., describes what they call interactive alignment. Their Amalgame tool allows humans to make judgements about the nature of the vocabularies being aligned, fine-tune parameters, and analyse, select or discard matching results. This mixed-initiative approach empowers both computers and humans to solve tough problems. Likewise, the vocabularies (or ontologies, in the computer science realm), while encoded in bits and bytes, are only realised in the minds of their creators, their users and, conversely, the people that interact with the objects.
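The mixed-initiative pattern itself is simple enough to sketch: the machine proposes cheap candidate matches, and a human judges each one. Everything below is hypothetical – it is the general shape of the idea, not the Amalgame API (Amalgame is a Prolog-based web tool), and the `judge` callback stands in for a real reviewing interface.

```python
# A schematic sketch of mixed-initiative alignment: the machine proposes
# candidate matches and a human (here, a callback) accepts or rejects them.
# All names and data are hypothetical; this is not the Amalgame API.

def propose_matches(source_terms, target_terms):
    """Machine step: naive case-insensitive candidate generation."""
    return [(s, t) for s in source_terms for t in target_terms
            if s.lower() == t.lower()]

def interactive_align(source_terms, target_terms, judge):
    """Human step: keep only proposals for which judge(pair) returns True."""
    return [pair for pair in propose_matches(source_terms, target_terms)
            if judge(pair)]

# In a real tool `judge` would be a review UI; here it is a stand-in rule
# playing the part of a curator who rejects the ambiguous term.
kept = interactive_align(
    ["Canoe", "Gold"], ["canoe", "gold", "golden"],
    judge=lambda pair: pair[0] != "Gold",
)
print(kept)
# → [('Canoe', 'canoe')]
```

The division of labour is the point: the cheap machine pass never has the final word, and every accepted match has passed a human judgement.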

The concepts of meaning, understanding and encoding – and the crucial differences between the three – seeded a reflective discussion at LODLAM-NZ. Even in light of the technical issues, how can we ensure accurate alignment that preserves the sense making of the objects from both their custodians and their creator communities? What vocabularies do we use, what vocabularies should we align to, and why? What are the dangers of doing this? We could not find the answers to these questions – to steal an anecdote from Michael Lascarides, the best we could do was create better questions and, more importantly, a broader understanding of alignment along both technical and social dimensions.

Recently Published Reports

I’ve just seen two recently published reports that will certainly be of interest, and I thought I’d share them here:

The Stanford Linked Data Workshop Technology Plan. “If instantiated at several institutions, [this] will demonstrate to end users the value of the Linked Data approach to recording machine operable facts about the products of teaching, learning, and research. The most noteworthy advantage of the Linked Open Data approach is that it allows the recorded facts, in turn, to become the basis for new discovery environments.” Personally, I love their push for CC0, and I also really liked the recommendation, with accompanying workflows, to publish early and often rather than waiting until things are perfect. [Thanks to Jerry Persons for feeding input from the LODLAM Summit into the Stanford working group and this technology plan. Thanks to Rachel Frick for flagging this with #LODLAM on Twitter]

Proceedings of the 1st International Workshop on Semantic Digital Archives, Berlin, Sept. 29, 2011. Fantastic collection of papers from this workshop, and a really good preface that summarizes the meeting, each of the papers, and the growing presence of Linked Data in libraries, archives and museums. [Thanks to Johan Oomen for flagging this with #LODLAM on Twitter]


The following was cross posted on the Open Knowledge Foundation blog on 12/20/2011.

I recently traveled to Wellington, New Zealand to take part in the National Digital Forum of New Zealand (#ndf2011), which was held at the national museum of New Zealand, Te Papa. Following the conference, the amazing team at Digital NZ hosted and organized a Linked Open Data in Libraries, Archives & Museums unconference (#lodlam). The two events were well attended by Kiwis as well as a large number of international attendees from Australia, and a few from as far as the US, UK and Germany.

When it comes to innovative digital initiatives in cultural heritage, the rest of the world has been looking to New Zealand and Australia for some time. Federated metadata exchange and search have been happening across institutions in projects like Digital NZ and Trove. I was able to learn more about the Digital NZ APIs as well as those from Museum Victoria, the Powerhouse Museum, and State Records New South Wales. In fact, the remarkable proliferation of APIs in Australasia has allowed us to consider the possibilities of Linked Open Data to harvest and build upon data held in databases in multiple institutions.

Given the extent to which tools for opening access to data have been developed here, I was surprised by the level of frustration that exists around copyright issues. There’s a clear sense that government is moving too slowly in making materials available to the public with open licensing. We talked a lot about the idea of separately licensing metadata and assets (i.e. information about a photo vs the digital copy of the photo), as has been happening across Europe and increasingly the United States. There are strong advocates within the GLAM sector (galleries, libraries, archives & museums) here, and demonstrating use cases utilizing openly licensed metadata will go far in helping to move those conversations forward with policy makers.

To that end, a session was convened to explore the possibilities of an international LODLAM project focused on World War I, the centennial commemoration of which is fast approaching. The Civil War Data 150 project we’ve been slowly moving forward in the US may provide a rough framework to build from. At least a half dozen libraries, archives and museums have already expressed interest in participating in a WWI project. First steps may be identifying openly licensed datasets to be contributed, key vocabularies and ontologies to apply, and ideas for visualizations that would leverage the use of Linked Open Data. For anything to happen here, someone will need to take the lead in organizing (not me, we’re still trying to build some tools around the Civil War Data 150 concept!). Good notes were posted on the LODLAM blog about the conversation and how to convene future conversations. Anyone who gets involved with this, please spread the word and keep the LODLAM community apprised of your progress and ways to contribute.

We also had a workshop on using Google Refine by Carlos Arroyo from the Powerhouse Museum, with props to the FreeYourMetadata crew. Some lively sessions dug into just what Linked Data is and how it works, along with some of its pitfalls and potential. Another session explored the importance and potential of local vocabularies, and how they can contribute to Linked Data implementations. One great example was the vocabularies surrounding Maori artifacts (Taonga) at Te Papa, and how publishing those datasets can aid other museums around the world to better describe and provide digital access to Maori collections.

As I’ve attended various LODLAM meetups since June, I’ve noticed clear momentum from one to another as these conversations progress rapidly, with those further along helping those of us just learning. After LODLAM-DC I realized the importance of including library, archive, and museum vendors in all of these gatherings. At LODLAM-NZ I could see the potential of bringing together developers in the GLAM sector and those utilizing Linked Data in commercial settings. In places like San Francisco, where commercial interests are already leading the charge on Linked Data (which is not a bad thing) and there’s an active Semantic Web developer community, the GLAM sector may be playing catch-up. But the sheer number of datasets potentially available as open data coming from the GLAM sector, together with the expertise of managing massive amounts of structured data, creates a space ripe for collaboration and experimentation, and these lines will continue to blur.