One last note! While we’re on the topic of cats and metadata, I wholeheartedly suggest following @HistoricalCats on Twitter. It’s a bot that tweets cat-related metadata records harvested by the Digital Public Library of America from various digital collections. Cute historic photos abound, though occasionally the “cat” keyword turns up some less-than-happy cats-in-science materials. Check out the DPLA’s application profile, a derivative of the Europeana Data Model.
This last week was rather stressful, with the strike and final assignments and lousy Smarch weather and all. I booked the week off work a while back with the intention of staying home and writing my final paper on cartographic cataloguing and critical human geography, but I ended up dividing my time between union-related activities and self-care for my burnout brain. Part of that self-care involved discovering the best and most relaxing mobile game I’ve come across in a while (possibly ever): ねこあつめ (Neko Atsume), now just referred to as “the cat game” by the friends and colleagues I have enlisted to download it. The point of the game is to entice cats to come to your garden by arranging different toys in it.
As I described in my post on graphic design and metadata, reviewing and understanding the properties of objects is critical for any kind of print or interactive design project, including video games. Since the preferences and characteristics of our cat friends are important to attracting them back to the garden, these properties are stored in a “cat notebook”, in which you can review autocompiled notes about the cats you have met.
Forgive my rough understanding of this “meowtadata” screen, as I have lost most of my Japanese language skills. Here we meet an adorable tortoiseshell cat named Sabigara-san, whom I photographed hanging out in a cardboard box on one visit. She has a “wild” personality,
has brought me 180 fish (the game’s currency – I’m not sure what this translates to in real terms) and has visited the garden eight times. Her three favourite things in the garden are the cardboard box, a paper bag, and the two-level cat tower. I have taken several photos of her, which are accessible from this screen. After several days of playing this game, cats have started to give me gifts – though this little fuzzybutt has yet to do so, any present received would be linked from this screen as well. One bit of meowtadata that does not appear here but is displayed on the main cat notebook screen (where all cats are shown) is the last time each one visited the garden, stored in YYYY/DD/MM format. This piece of information lets me know that the mysterious Baseball Cat, dressed in uniform and ready to play, has only visited my garden when I am asleep. 🙁
Although much of the data exposed through cats’ visits to the garden was pre-engineered by the game’s programmers, we still see some static (cat characteristics) and dynamic (toy preference) properties captured as they come to play, as well as a simple model for multimedia storage and linked data (through the photo albums and gifts – gift:Hairbrush isGivenBy cat:KuroNeko-chan!). Metadata (or meowtadata) is everywhere!
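For the linked-data curious, the gift example above can be sketched as plain RDF-style triples – rendered here as Python tuples rather than a real triple store, with the neko: predicates invented for illustration:

```python
# A minimal sketch of the "meowtadata" linked-data idea: RDF-style triples
# stored as (subject, predicate, object) tuples. The cat and gift names come
# from the post; the neko: predicates are made up for this example.
triples = [
    ("cat:Sabigara-san", "neko:personality", "wild"),
    ("cat:Sabigara-san", "neko:visitCount", "8"),
    ("gift:Hairbrush", "neko:isGivenBy", "cat:KuroNeko-chan"),
]

def objects_of(subject, predicate, graph=triples):
    """Return all objects matching a (subject, predicate, ?) pattern."""
    return [o for s, p, o in graph if s == subject and p == predicate]

print(objects_of("gift:Hairbrush", "neko:isGivenBy"))
# ['cat:KuroNeko-chan']
```

A real system would use URIs and something like SPARQL for the pattern matching, but the shape of the data is the same.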
As I end on this light-hearted feline note, this blogging exercise prompted some interesting self-reflection on metadata, a realm I have spent a considerable amount of time working in over the last six years. It simultaneously helped me understand the work I do in the library better, and really served to expand my vision of what metadata is and where it manifests, in everyday objects and systems like photolab envelopes and adorable cat games. (This capstone project also made me finally do something with the domain I registered over five years ago…!)
Although I’ve been working with geospatial metadata for many years – both in the creation and maintenance of metadata for geospatial datasets and in the MARC fields containing spatial information that I insert into original catalogue records for maps – this course got me thinking about all of the other ways that geographic information can be held in metadata.
Over the course of INF2186, we saw geographic information stored using the vocabulary encoding scheme of ISO 3166, which allows objects to be mapped to the country level by taking an appropriate spatial dataset of countries and joining on a field containing the same country codes. If the correct syntax encoding scheme is employed for a street address stored as a metadata property (perhaps belonging to the Dublin Core class Location), its contents can be mapped using any one of a number of geocoding APIs. As I illustrated in my earlier post about using the Bounding Box Tool, employing a syntax encoding scheme to store bounding coordinates in CSV format allows for interoperability with dcterms:spatial, providing the potential to map out spatial resources in a format similar to what Scholars GeoPortal does here. These means of connecting fields in a database or repository to locations on a map are not new to me, but thinking about them through the concept of encoding schemes brings a different context to the work I perform on a regular basis. Such encoding schemes were omnipresent in the application profiles we examined in this course, and I now see the examples I just raised as yet another demonstration of interoperability.
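That ISO 3166 join is simple enough to sketch in a few lines – the records and stand-in geometries below are invented for illustration, not drawn from any real catalogue:

```python
# A toy illustration of interoperability via a vocabulary encoding scheme:
# metadata records that store ISO 3166-1 alpha-2 country codes can be joined
# to a spatial dataset keyed on the same codes. Both tables are made up.
records = [
    {"title": "Historical climate rasters", "country": "CA"},
    {"title": "Ordnance Survey town plans", "country": "GB"},
]

country_geometries = {  # stand-in for a real countries layer's geometry column
    "CA": "POLYGON((-141 42, -52 42, -52 83, -141 83, -141 42))",
    "GB": "POLYGON((-8 50, 2 50, 2 61, -8 61, -8 50))",
}

# The "join": each record picks up a mappable geometry via its country code.
mapped = [
    (r["title"], country_geometries[r["country"]])
    for r in records
    if r["country"] in country_geometries
]
```

In GIS software this would be a table join on the code field rather than a dictionary lookup, but the shared vocabulary is what makes the join possible in either case.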
In the closing weeks of this course, I learned a lot of new things about geographic data storage, like the geo: namespace for storage of locations in qnames, and the expression of spatial relationships through RDF triples (Allemang and Hendler 2011, 36-37). I’m working on another paper right now that, among other things, asks if maps of unceded indigenous territories can be housed in map libraries using Library of Congress cartographic classification, which assumes that the majority of locations fit nicely within a hierarchy of recognized administrative and political boundaries. I wonder if using semantic data modelling like RDF to express processes like colonial dispossession in larger systems of knowledge organization is possible, and if such a model could be used in map libraries.
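A minimal sketch of the geo: namespace idea, again with triples as plain tuples – the map identifier below is hypothetical, though geo:lat and geo:long are the actual W3C Basic Geo qnames:

```python
# A point location expressed with the W3C Basic Geo (geo:) qnames, stored as
# plain (subject, predicate, object) triples. "map:TorontoSheet1" is a
# made-up identifier for illustration.
point_triples = [
    ("map:TorontoSheet1", "geo:lat", "43.6532"),
    ("map:TorontoSheet1", "geo:long", "-79.3832"),
]

# Pull the coordinates back out of the little graph.
coords = {p: o for s, p, o in point_triples if s == "map:TorontoSheet1"}
print(coords["geo:lat"], coords["geo:long"])
```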
On a very different note, I was interested to read Christian Becker and Christian Bizer’s article on the infrastructure behind and the possibilities of DBpedia Mobile (2009) and the consolidation of many different kinds of spatial data resources into one interface. Of course, it never hurts to be mindful about the political ramifications of such structures for spatial data storage, particularly when locative media is overlaid with myriad data by mere virtue of their proximity. In week nine, we read Ann Cavoukian’s primer on metadata and surveillance (2013), which lays out the concerns we should all take up regarding geolocative devices and media, metadata, and privacy. Thinking through the seemingly infinitely flexible and extensible systems used to house such metadata boggles my mind from a database perspective, especially when combined with the general mistrust of “big data” analysis that I share.
The skill learned in this class that I’d most like to develop is that of abstract modelling and diagramming, which I guess I’ll get to do in other courses, as this is only the end of my first of 5.5 years in this program. While the many entity-relationship diagrams we encountered in this course demonstrate the value of communicating the core essence of a project to stakeholders in its development and use, one line in a table in Willis, Greenberg and White’s review of metadata schemas for scientific data management got me thinking about the temporal importance of such models. In a thought-provoking claim, they note that “[a] well-defined metadata scheme will likely outlive its initial rendering. Abstraction allows needs [to] be captured [in] a way that supports multiple renderings over time” (2012, 1515). Given the forward-thinking-for-backwards-compatibility chatter that goes on all around me in libraries (hello Windows!), employing such diagrammatic tools as signposts to keep us focused on the core functionality of systems is really smart.
On that note, I’m also interested in diving more into linked data in the bibliographic world – as a cataloguer, I’ve heard so much about the transition from MARC to FRBR, RDA and BIBFRAME, but I’m not sure how or when such infrastructure is going to meet the work that I do. In any case, this course has been great in helping me see metadata structures where I hadn’t thought to look for them in my life, reconceptualize the work that I do on a daily basis, and better understand the future of library and information systems. Thanks for reading!
Allemang, D., and J. Hendler. 2011. Semantic Web for the working ontologist: effective modeling in RDF and OWL. Waltham, MA: Morgan Kaufmann/Elsevier.
Becker, C., and C. Bizer. 2009. Exploring the geospatial semantic web with DBpedia Mobile. Web Semantics 7: 278-286.
Cavoukian, A. 2013. A primer on metadata: separating fact from fiction. Toronto: Privacy by Design.
Willis, C., J. Greenberg, and H. White. 2012. Analysis and synthesis of metadata goals for scientific data. JASIST 63: 1505-1520.
I came into INF2186 knowing a fair bit about metadata and its importance, but one concept that I hadn’t really considered, despite being around it in action for years, was metadata standards and interoperability. Data interoperability is something we talk about all the time in GIS (the Data Interoperability extension, which allows legacy formats and files created in other programs to be opened in ArcGIS, is a must-have!), but I didn’t realize that metadata interoperability is crucial to the catalogues we access most of our data through. It turns out I’ve been contributing to that interoperability all along at MDL: first by creating metadata simply by filling out the fields in our data inventory (not realizing at first that it was ISO 19139-compliant!), and in turn by writing documentation for our staff members clarifying what should be entered into each field, and how.
Here’s an example of a metadata record in our data inventory. This is a historical climate dataset for Canada, stored as annual, national-scale raster files for use in GIS software. These are the metadata properties we display to users:
These provide information about the producer and nature of the dataset, its spatial reference parameters, licensing details, and include keywords and a description for discoverability. These properties are set with freeform text fields, date fields compliant with W3C-DTF, and picklists of our own internal taxonomy vocabularies. There are a few more metadata properties that aren’t visible here, including one YES/NO property that allows our metadata to be harvested by Scholars GeoPortal. Here’s what the same dataset looks like over there (alas, I can’t permalink it):
Check out that bounding box created with the tool I mentioned in my previous post!
If we click on the “Details” button, we get to see the formatted metadata that was harvested from the MDL inventory.
Some of these fields, including contact information, are populated based on the fact that the metadata pertaining to this dataset was harvested from the MDL record. But hey, this is interoperability at work! I didn’t really understand how this harvesting worked before I took this course – I just knew that it did – so that’s one more thing at work I have a better understanding of thanks to INF2186.
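Circling back to those W3C-DTF date fields: a rough shape-check for the date-only levels of the profile (YYYY, YYYY-MM, YYYY-MM-DD) might look like the sketch below. It validates pattern only, not the calendar, so an impossible date like 2023-02-31 would still pass:

```python
import re

# A rough validator for the date-only levels of the W3C-DTF profile
# (YYYY, YYYY-MM, YYYY-MM-DD). Shape check only: it does not verify that
# the day actually exists in the given month.
W3CDTF_DATE = re.compile(r"^\d{4}(-(0[1-9]|1[0-2])(-(0[1-9]|[12]\d|3[01]))?)?$")

def is_w3cdtf_date(value: str) -> bool:
    return bool(W3CDTF_DATE.match(value))

print(is_w3cdtf_date("1950"))        # True
print(is_w3cdtf_date("1950-07-15"))  # True
print(is_w3cdtf_date("15/07/1950"))  # False
```

The full W3C-DTF profile also allows time components (down to fractions of a second with a time zone), which a production validator would need to handle.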
My favourite metadata tool remains perpetually open in a browser tab at work: Klokan Technologies’ Bounding Box Tool, an easy-to-use utility that generates bounding box coordinates (in other words, the latitude and longitude values that enclose a given space) for any given area on the fly. This metadata is important for capturing the spatial extent of items and describing them in a consistent manner. For every paper map I catalogue into original MARC records, I record the extent in the 034 and 255 fields, and for every geospatial dataset in the MDL data inventory, the bounding box is entered into its own field in the metadata record. Given the different storage requirements of these two databases, it is very convenient that users are offered a choice of 12 (!) different syntax encoding schemes for capturing coordinates – I personally use MARC VTLS (which pops the coordinates into the appropriate subfields for quick copying and pasting) and CSV for these respective applications.
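To make the idea of multiple syntax encoding schemes for one bounding box concrete, here’s a rough sketch of serializing the same box from decimal degrees into two shapes. The CSV ordering (west,south,east,north) and the hemisphere-plus-DDDMMSS rendering of the 034 subfields are my assumptions for illustration, not the tool’s exact output:

```python
# One bounding box, two syntax encoding schemes. The CSV field order and the
# DMS convention below are illustrative assumptions, not the Bounding Box
# Tool's exact formats.
def to_dms(value: float, positive: str, negative: str) -> str:
    """Format a decimal degree as hemisphere + DDDMMSS (no carry if seconds
    round to 60 -- a real implementation would handle that edge case)."""
    hemi = positive if value >= 0 else negative
    value = abs(value)
    d = int(value)
    m = int((value - d) * 60)
    s = round(((value - d) * 60 - m) * 60)
    return f"{hemi}{d:03d}{m:02d}{s:02d}"

west, south, east, north = -79.64, 43.58, -79.12, 43.86  # roughly Toronto

csv_bbox = f"{west},{south},{east},{north}"
marc_034 = (f"$d{to_dms(west, 'E', 'W')} $e{to_dms(east, 'E', 'W')} "
            f"$f{to_dms(north, 'N', 'S')} $g{to_dms(south, 'N', 'S')}")
print(csv_bbox)
print(marc_034)
```

Having the tool emit both (and ten more besides) is exactly what makes the same four coordinates interoperable across a MARC catalogue and a geospatial data inventory.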
While I’ve been using the Bounding Box Tool for several years, it was only in this course that I learned the term “syntax encoding scheme”, and the flexibility that Klokan continues to develop into it makes it a fantastic tool for anyone working with geospatial resources and catalogues.