I wanted to make some remarks about the “Ease of use” and “Learning curve” ratings which I gave in the ORE/POWDER comparison table that I blogged about here the other day. It may seem that I came out a little harsh on ORE and a little easy on POWDER. I just wanted to explain my rationale for calling it that way. (By the way, the revised comparison table includes a qualification to those ratings.)
My primary interest was from the perspective of a data provider rather than a data consumer. What does it take to get a resource description document (“resource map”, “description resource” or “sitemap”) ready for publication?
Following right on from yesterday’s post on ORE and POWDER, I’ve attempted to map the worked examples in the ORE User Guide for RDF/XML (specifically Sect. 3) to POWDER to show that POWDER can be used to model ORE, see
Resource Maps Encoded in POWDER
(A full explanation for each example is given in the RDF/XML Guide, Sect. 3 which should be consulted.)
This could just all be sheer doolally or might possibly turn out to have a modicum of instructional value – I don’t know.
I’ve been reading up on POWDER recently (the W3C Protocol for Web Description Resources) which is currently in last call status (with comments due in tomorrow). This is an effort to describe groups of Web resources and as such has clear similarities to the Open Archives Initiative ORE data model, which has been blogged about here before.
In an attempt to better understand the similarities (and differences) between the two data models, I’ve put up a table that directly compares the two heavyweight contenders OAI-ORE and POWDER and also (unfairly) places them alongside the featherweight Sitemaps Protocol for reference.
So the other day Noel O’Boyle made me feel guilty when he pinged me and asked about the possibility of using one of the Crossref APIs for creating a Ubiquity extension. You see, I had played with the idea myself and had not gotten around to doing much about it. This seemed inexcusable, particularly given how easy it is to build such extensions using the API we developed for the WordPress and Movable Type plugins that we announced earlier in the year.
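To give a flavour of what such an extension’s core lookup might involve, here is a minimal sketch in Python. It is an assumption on my part that something like Crossref’s public REST API (api.crossref.org, with its `query.bibliographic` parameter and `message.items[].DOI` response shape) would be used; the plugin API mentioned above may well differ. The example uses a canned response so it runs without network access:

```python
import json
import urllib.parse

def crossref_query_url(citation_text, rows=1):
    """Build a free-text lookup URL against the public Crossref REST API
    (an assumption here; the plugin API mentioned above may differ)."""
    params = urllib.parse.urlencode(
        {"query.bibliographic": citation_text, "rows": rows})
    return "https://api.crossref.org/works?" + params

def first_doi(response_json):
    """Pull the first DOI out of a Crossref 'works' response
    (structure: message.items[].DOI)."""
    items = response_json.get("message", {}).get("items", [])
    return items[0].get("DOI") if items else None

# Canned response in the REST API's shape, so no network call is needed:
canned = json.loads('{"message": {"items": [{"DOI": "10.5555/12345678"}]}}')
print(first_doi(canned))            # -> 10.5555/12345678
print(crossref_query_url("Example article title"))
```

A Ubiquity command would essentially wrap this kind of lookup: take the user’s selected citation text, query, and paste back the DOI link.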
So, Google’s Knol is now live (see this announcement on Google’s Blog). There’ll be comment aplenty about the merits of this service and how it compares to other user-contributed content sites. But one curious detail struck me. In terms of citeability, compare how a Knol contribution (or “knol”) may be linked to with how a corresponding entry in Wikipedia may be (here I’ve chosen the subject “Eclipse”):
Roy Tennant in a post to XML4Lib announces a new list of library APIs hosted at
A useful rough guide for us publishers to consider as we begin cultivating the multiple access routes into our own content platforms and tending to the “alphabet soup” that, taken together, comprises our public interfaces.
For anybody interested in the whys and wherefores of OpenURL, Jeff Young at OCLC has started posting over on his blog Q6: 6 Questions - A simpler way to understand OpenURL 1.0: Who, What, Where, When, Why, and How (note: no longer available online). He’s already amassing quite a collection of thought-provoking posts. His latest is The Potential of OpenURL (note: no longer available online), from which:
OpenURL has effectively cornered the niche market where Referrers need to be decoupled from Resolvers.
I just ran across the final report from the CLADDIER project. CLADDIER comes from the JISC and stands for “CITATION, LOCATION, And DEPOSITION IN DISCIPLINE & INSTITUTIONAL REPOSITORIES”. I suspect JISC has an entire department dedicated to creating impossible acronyms (the JISC Acronym Preparation Executive?)
Anyhoo, the report describes a distributed citation location and updating service based on the linkback mechanism that is widely used in the blogging community.
I think this is an interesting approach and is one that I talked about briefly (PDF) at the UKSG’s Measure for Measure seminar last June.
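To make the linkback idea concrete, here is a rough sketch of the TrackBack flavour of linkback (one of several linkback variants; the report itself may use a different one). Per the TrackBack specification, the citing site POSTs a small form-encoded ping to the cited item’s TrackBack URL and checks the XML reply for an error code of 0:

```python
import urllib.parse
import xml.etree.ElementTree as ET

def trackback_body(url, title=None, excerpt=None, blog_name=None):
    """Build the form-encoded body of a TrackBack ping.
    Per the TrackBack spec, 'url' is required; the rest are optional."""
    fields = {"url": url}
    if title:
        fields["title"] = title
    if excerpt:
        fields["excerpt"] = excerpt
    if blog_name:
        fields["blog_name"] = blog_name
    return urllib.parse.urlencode(fields)

def ping_succeeded(response_xml):
    """Parse the XML reply: <response><error>0</error></response>
    means the ping was accepted; non-zero means failure."""
    root = ET.fromstring(response_xml)
    return root.findtext("error") == "0"

body = trackback_body("http://example.org/citing-paper",
                      title="A citing paper", blog_name="My repository")
print(body)
print(ping_succeeded("<response><error>0</error></response>"))  # -> True
```

In the CLADDIER setting, one repository would ping another in roughly this way when one of its items cites an item held there.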
I’ve just returned from Frankfurt Book Fair and noticed that there have been some recent changes to the recommendations in The NLM Style Guide for Authors, Editors and Publishers concerning the citing of blogs.
Which reminds me of an issue that has periodically been raised here at Crossref- should we be doing something to try and provide a service for reliably citing more ephemeral content such as blogs, wikis, etc.?
The first thing to note is that this demo (the Acrobat plugin) is an application, and that comes with its own baggage: it is a Windows-only plugin targeted at Acrobat Reader 8. More broadly, the application merely bridges an identifier embedded in the media file with the handle record filed against that identifier, and delivers some relevant functionality. The data (or metadata) declared in the PDF and in the associated handle record, if rich enough and openly structured, can also be used by other applications. I think this is a key point worth bearing in mind: besides showing off new functionality, the demo also demonstrates how data (or metadata) can be embedded at the respective endpoints (PDF, handle).
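As a rough illustration of that last point, here is a sketch (mine, not part of the demo) of how any application might read the URL out of a handle record. The record shape below follows the JSON returned by the Handle.Net proxy’s API (GET https://hdl.handle.net/api/handles/{handle}); treat the specific field names as assumptions as far as the demo is concerned, and the sample values as made up:

```python
def url_from_handle_record(record):
    """Extract the URL value from a handle record in the shape returned
    by the Handle.Net proxy JSON API: a list of typed values, where the
    one with type 'URL' carries the resolution target."""
    for value in record.get("values", []):
        if value.get("type") == "URL":
            return value.get("data", {}).get("value")
    return None

# Canned record in the proxy API's shape, so no network call is needed
# (handle and URL here are illustrative, not real):
sample_record = {
    "responseCode": 1,
    "handle": "10.5555/12345678",
    "values": [
        {"index": 1, "type": "URL",
         "data": {"format": "string", "value": "http://example.org/article"}},
    ],
}
print(url_from_handle_record(sample_record))  # -> http://example.org/article
```

Any application, not just the Acrobat plugin, can consume a record like this once the identifier has been recovered from the PDF.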
Some initial observations follow below.