INF2186: This class has got me thinking…

Although I’ve been working with geospatial metadata for many years – both in creating and maintaining metadata for geospatial datasets and in the MARC fields containing spatial information that I insert into original catalogue records for maps – this course got me thinking about all of the other ways that geographic information can be held in metadata.

Over the course of INF2186, we saw geographic information stored using the ISO 3166 vocabulary encoding scheme, which allows objects to be mapped at the country level by joining an appropriate spatial dataset of countries on a field containing the same country codes. If the correct syntax encoding scheme is employed for a street address stored as a metadata property (perhaps belonging to the Dublin Core class Location), its contents can be mapped using any one of a number of geocoding APIs. As I illustrated in my earlier post about using the Bounding Box Tool, employing a syntax encoding scheme to store bounding coordinates in CSV format allows for interoperability with dcterms:spatial, providing the potential to map out spatial resources in a format similar to what Scholars GeoPortal does here. These means of connecting fields in a database or repository to locations on a map are not new to me, but thinking about them through the concept of encoding schemes brings a different context to the work I perform on a regular basis. Such encoding schemes were omnipresent in the application profiles we examined in this course, and I now see the examples I just raised as yet another demonstration of interoperability.
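To make that concrete, here is a minimal sketch of both ideas in Python. Everything in it is invented for illustration: the record and country tables are stand-ins for a real repository and a real spatial dataset, the “west,south,east,north” column order is an assumption, and the DCMI Box string is just one way a dcterms:spatial value might be rendered.

```python
# Hypothetical metadata records keyed by an ISO 3166-1 alpha-2 country code.
records = [
    {"title": "Soil survey of southern Ontario", "country": "CA"},
    {"title": "Topographic map of Reykjavik", "country": "IS"},
]

# A stand-in for the attribute table of a spatial dataset of countries;
# in practice the coordinates would come from the dataset itself.
countries = {
    "CA": {"name": "Canada", "centroid": (56.1, -106.3)},
    "IS": {"name": "Iceland", "centroid": (64.9, -18.6)},
}

# The "join": matching on the shared country-code field maps each record
# to a location that a GIS could plot.
for rec in records:
    match = countries.get(rec["country"])
    if match:
        print(rec["title"], "->", match["name"], match["centroid"])

def bbox_csv_to_dcmi_box(csv_value):
    """Convert a 'west,south,east,north' CSV string (an assumed ordering)
    into a DCMI Box encoding usable as a dcterms:spatial value."""
    west, south, east, north = (float(v) for v in csv_value.split(","))
    return (f"northlimit={north}; southlimit={south}; "
            f"eastlimit={east}; westlimit={west}")

# e.g. a rough bounding box for the Toronto area
print(bbox_csv_to_dcmi_box("-79.64,43.58,-79.12,43.86"))
```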

In the closing weeks of this course, I learned a lot of new things about geographic data storage, like the geo: namespace for storing locations as qnames, and the expression of spatial relationships through RDF triples (Allemang and Hendler 2011, 36-37). I’m working on another paper right now that, among other things, asks whether maps of unceded indigenous territories can be housed in map libraries using Library of Congress cartographic classification, which assumes that the majority of locations fit nicely within a hierarchy of recognized administrative and political boundaries. I wonder whether semantic data modelling like RDF could express processes like colonial dispossession within larger systems of knowledge organization, and whether such a model could be used in map libraries.
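For readers who haven’t seen the geo: namespace in action, here is a minimal sketch using Python’s rdflib and the W3C Basic Geo (WGS84) vocabulary; the map URI and coordinates are made up for illustration.

```python
from rdflib import Graph, Literal, Namespace, URIRef

# The W3C Basic Geo (WGS84 lat/long) vocabulary.
GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")

g = Graph()
g.bind("geo", GEO)

# A hypothetical map sheet, located via geo: properties written as qnames.
map_sheet = URIRef("http://example.org/maps/toronto-1947")
g.add((map_sheet, GEO.lat, Literal(43.65)))
g.add((map_sheet, GEO.long, Literal(-79.38)))

# Serialize the triples as Turtle, where the geo: qnames are visible.
print(g.serialize(format="turtle"))
```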

On a very different note, I was interested to read Christian Becker and Christian Bizer’s article on the infrastructure behind DBpedia Mobile (2009) and the possibilities of consolidating many different kinds of spatial data resources into one interface. Of course, it never hurts to be mindful of the political ramifications of such structures for spatial data storage, particularly when locative media are overlaid with myriad data by mere virtue of their proximity. In week nine, we read Ann Cavoukian’s primer on metadata and surveillance (2013), which lays out the concerns we should all take up regarding geolocative devices and media, metadata, and privacy. Thinking through the seemingly infinitely flexible and extensible systems used to house such metadata boggles my mind from a database perspective, especially when combined with the general mistrust of “big data” analysis that I share.

The skill learned in this class that I’d most like to develop is abstract modelling and diagramming, which I guess I’ll get to do in other courses, as this is only the end of my first of 5.5 years in this program. While the many entity-relationship diagrams we encountered in this course demonstrate the value of communicating the core essence of a project to the stakeholders in its development and use, one line in a table in Willis, Greenberg, and White’s review of metadata schemas for scientific data management got me thinking about the temporal importance of such models. In a thought-provoking claim, they note that “[a] well-defined metadata scheme will likely outlive its initial rendering. Abstraction allows needs [to] be captured [in] a way that supports multiple renderings over time” (2012, 1515). Given the forward-thinking-for-backwards-compatibility chatter that goes on all around me in libraries (hello Windows!), employing such diagrammatic tools as signposts to keep us focused on the core functionality of systems is really smart.
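That claim is easy to demonstrate in miniature. Here is a sketch in Python: the record below is invented, but it shows how a description captured abstractly, as plain names and values, can outlive any single rendering, here serialized to both JSON and XML.

```python
import json
import xml.etree.ElementTree as ET

# The abstract description: element names and values, independent of syntax.
record = {
    "title": "Plan of the City of Toronto",
    "date": "1857",
    "spatial": "northlimit=43.86; southlimit=43.58; "
               "eastlimit=-79.12; westlimit=-79.64",
}

# Rendering 1: JSON.
print(json.dumps(record, indent=2))

# Rendering 2: XML. The same abstract model, a different (and replaceable)
# serialization.
root = ET.Element("record")
for name, value in record.items():
    ET.SubElement(root, name).text = value
print(ET.tostring(root, encoding="unicode"))
```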

On that note, I’m also interested in diving deeper into linked data in the bibliographic world – as a cataloguer, I’ve heard so much about the transition from MARC to FRBR, RDA, and BIBFRAME, but I’m not sure how or when such infrastructure is going to meet the work that I do. In any case, this course has been great in helping me see metadata structures where I hadn’t thought to look for them in my life, reconceptualize the work that I do on a daily basis, and better understand the future of library and information systems. Thanks for reading!

Works cited

Allemang, D., and J. Hendler. 2011. Semantic Web for the working ontologist: effective modeling in RDF and OWL. Waltham, MA: Morgan Kaufmann/Elsevier.

Becker, C., and C. Bizer. 2009. Exploring the geospatial semantic web with DBpedia Mobile. Web Semantics 7: 278-286.

Cavoukian, A. 2013. A primer on metadata: separating fact from fiction. Toronto: Privacy by Design.

Willis, C., J. Greenberg, and H. White. 2012. Analysis and synthesis of metadata goals for scientific data. Journal of the American Society for Information Science and Technology 63: 1505-1520.