As posted here on the SRU Implementors list, the OASIS Search Web Services Technical Committee has announced the release of five Committee Drafts, informally known as:
Abstract Protocol Definition (APD)
Binding for SRU 1.2
Auxiliary Binding for HTTP GET
Binding for OpenSearch

Links to specific document formats are given at the bottom of the mail. A list of the TC's public documents is also available here.
Interesting post from Google, in which they say:
“Recently, even our search engineers stopped in awe about just how big the web is these days — when our systems that process links on the web to find new content hit a milestone: 1 trillion (as in 1,000,000,000,000) unique URLs on the web at once!”
Puts Crossref’s 32,639,020 unique DOIs into some kind of perspective: 0.0033%. Nonetheless, that trace percentage still seems to me reasonably large, especially given that it forms a persistent and curated set.
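For the curious, that 0.0033% figure falls straight out of the two numbers quoted above:

```python
# Crossref's DOI count as a fraction of Google's 1 trillion URLs
dois = 32_639_020          # unique DOIs (figure from the post)
urls = 1_000_000_000_000   # 1 trillion unique URLs reported by Google

share = dois / urls * 100  # as a percentage
print(f"{share:.4f}%")     # → 0.0033%
```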
Oh wow! A rather remarkable plea here from Dan Brickley on the public-lod mailing list, calling for the registrant of the dbpedia.org DNS entry to top it up with another 5+ years’ worth of clock time. Some quotes:
“The idea of such a cool RDF namespace having only 6 months left on the DNS registration gives me the worries.”
“If you could add another 5-10 years to the DNS registration I’d sleep easier at night.”
So, Google’s Knol is now live (see this announcement on Google’s Blog). There’ll be comment aplenty about the merits of this service and how it compares to other user-contributed content sites. But one curious detail struck me. In terms of citeability, compare how a Knol contribution (or “knol”) may be linked to with how a corresponding entry in Wikipedia may be (here I’ve chosen the subject “Eclipse”):
Tony’s post highlights Knol’s “service” URIs. Another issue is that many Knol entries have nice long lists of unlinked references. The HTML code behind the references is very sparse.
Might the DOI be of use in linking out from these references? I think so. Then, of course, there’s the issue of DOIs for Knols…
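To make the idea concrete, here is a hypothetical sketch of how bare DOIs in those sparse, unlinked reference strings could be wrapped in resolver links (the regex, function name, and sample reference are all illustrative assumptions, not anything from Knol's actual markup):

```python
import re

# Matches a bare DOI of the usual "10.prefix/suffix" shape
DOI_PATTERN = re.compile(r'\b(10\.\d{4,9}/[^\s"<>]+)')

def link_dois(reference: str) -> str:
    """Wrap any DOI found in a plain-text reference in an HTML anchor
    pointing at the DOI resolver (dx.doi.org, as used at the time)."""
    return DOI_PATTERN.sub(
        r'<a href="http://dx.doi.org/\1">\1</a>', reference)

# Hypothetical reference string of the kind a Knol entry might carry
ref = "Smith, J. (2007). An example article. doi:10.1234/example.5678"
print(link_dois(ref))
```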
CrossTech is two years old (less one month) and has now seen some 145 posts. Breaking the posts down by poster, we arrive at the following chart:
Note this is not any real attempt at vainglory, more a simple excuse to play with the wonderful Google Chart API. Also, above I’ve taken the liberty of putting up an image (.png), although the chart could have been generated on the fly from this link (or tinyurl here).
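For anyone wanting to play along, a chart like that is just a parameterized image URL. The sketch below builds one with the (since retired) Google Chart API; the author names and counts are illustrative stand-ins, not the real post tallies:

```python
from urllib.parse import urlencode

base = "http://chart.apis.google.com/chart"
params = {
    "cht": "p",             # chart type: pie
    "chs": "400x200",       # chart size in pixels
    "chd": "t:60,45,40",    # data series, simple text encoding
    "chl": "Tony|Ed|Geoff"  # slice labels (illustrative names)
}
print(base + "?" + urlencode(params))
```

Requesting that URL returned a rendered PNG, which is why the chart "could have been generated on the fly from this link".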
Roy Tennant in a post to XML4Lib announces a new list of library APIs hosted at
A useful rough guide for us publishers to consider as we begin cultivating multiple access routes into our own content platforms and tending the “alphabet soup” that, taken together, comprises our public interfaces.
Andy Powell has published this talk about metadata on Slideshare - see his eFoundations post for notes. It’s 130 slides long and aims
“to cover a broad sweep of history from library cataloguing, thru the Dublin Core, Web search engines, IEEE LOM, the Semantic Web, arXiv, institutional repositories and more.”
Don’t be fooled by the length, though. This is a quick flip-through and a readily accessible overview of the importance of metadata.
The PRISM metadata standards group issued a press release yesterday which covered three points:
The Cookbook provides “a set of practical implementation steps for a chosen set of use cases and provides insights into more sophisticated PRISM capabilities. While PRISM has 3 profiles, the cookbook only addresses the most commonly used profile #1, the well-formed XML profile. All recipes begin with a basic description of the business purpose it fulfills, followed by ingredients (typically a set of PRISM metadata fields or elements), and, closes with a step-by-step implementation method with sample XMLs and illustrative images.”
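As a flavour of what "a set of PRISM metadata fields" in well-formed XML looks like, here is a minimal illustrative sketch. The namespace URI, field selection, and all values are my own assumptions for demonstration, not taken from the Cookbook itself:

```python
import xml.etree.ElementTree as ET

# PRISM basic namespace (assumed; PRISM 2.0 era)
PRISM = "http://prismstandard.org/namespaces/basic/2.0/"
ET.register_namespace("prism", PRISM)

article = ET.Element("article")
for field, value in [
    ("publicationName", "Example Journal"),   # illustrative values
    ("volume", "12"),
    ("coverDate", "2008-07-01"),
    ("doi", "10.1234/example.5678"),          # hypothetical DOI
]:
    ET.SubElement(article, f"{{{PRISM}}}{field}").text = value

print(ET.tostring(article, encoding="unicode"))
```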
With PDF now handed over to ISO as keeper of the format (as blogged here on CrossTech), Kas Thomas (on CMS Watch’s TrendWatch) blogs here that Adobe should now do the right thing by XMP and hand it over too, establishing it as a truly open standard. As he says:
“Let’s cut to the chase. If Adobe wants to demonstrate its commitment to openness, it should do for XMP what it has already done for PDF: Put it in the hands of a legitimate standards body.”