CrossTech is two years old (less one month) and we have now seen some 145 posts. Breaking the posts down by poster we arrive at the following chart:
Note this is not any real attempt at vainglory, more a simple excuse to play with the wonderful Google Chart API. Also, above I’ve taken the liberty of putting up an image (.png), although the chart could have been generated on the fly from this link (or tinyurl here).
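By way of illustration, here is a rough sketch of how such a chart URL can be put together. The labels and counts below are placeholders rather than the real CrossTech figures; the query parameters (cht, chs, chd, chl) are the standard Google Chart API ones:

```python
# Sketch: assemble a Google Chart API URL for a pie chart of posts per poster.
# The labels and counts are purely illustrative, not the real CrossTech figures.
posts_per_poster = {"A": 60, "B": 40, "C": 25, "Others": 20}

chart_url = (
    "http://chart.apis.google.com/chart"
    "?cht=p3"        # 3D pie chart
    "&chs=450x200"   # chart size in pixels
    "&chd=t:" + ",".join(str(n) for n in posts_per_poster.values())  # data series (text encoding)
    + "&chl=" + "|".join(posts_per_poster.keys())                    # slice labels
)
print(chart_url)  # fetch as a .png, or drop it straight into an <img> tag
```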
Roy Tennant in a post to XML4Lib announces a new list of library APIs hosted at
https://web.archive.org/web/20080730080413/http://techessence.info/apis//
A useful rough guide for us publishers to consider as we begin cultivating the multiple access routes into our own content platforms and tending to the “alphabet soup” that, taken together, comprises our public interfaces.
Andy Powell has published on Slideshare this talk about metadata - see his eFoundations post for notes. It’s 130 slides long and aims
“to cover a broad sweep of history from library cataloguing, thru the Dublin Core, Web search engines, IEEE LOM, the Semantic Web, arXiv, institutional repositories and more.”
Don’t be fooled by the length though. This is a flip-through and a readily accessible overview of the importance of metadata.
The PRISM metadata standards group issued a press release yesterday which covered three points:
PRISM Cookbook
The Cookbook provides “a set of practical implementation steps for a chosen set of use cases and provides insights into more sophisticated PRISM capabilities. While PRISM has 3 profiles, the cookbook only addresses the most commonly used profile #1, the well-formed XML profile. All recipes begin with a basic description of the business purpose it fulfills, followed by ingredients (typically a set of PRISM metadata fields or elements), and, closes with a step-by-step implementation method with sample XMLs and illustrative images.”
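As a rough indication of what a recipe’s “ingredients” look like in the well-formed XML profile, here is a small Python sketch that emits a description using a handful of common PRISM and Dublin Core fields. The element names and namespace URIs are recalled from the PRISM 2.x basic vocabulary and should be checked against the specification (and the Cookbook itself) rather than taken as gospel:

```python
# Sketch of a well-formed XML description using a few common PRISM and Dublin
# Core fields. Element names and namespace URIs are from memory of the PRISM
# 2.x basic vocabulary; verify them against the specification before use.
import xml.etree.ElementTree as ET

PRISM = "http://prismstandard.org/namespaces/basic/2.0/"
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("prism", PRISM)
ET.register_namespace("dc", DC)

article = ET.Element("article")  # arbitrary wrapper element for the sketch
ET.SubElement(article, f"{{{DC}}}title").text = "An Example Article"
ET.SubElement(article, f"{{{DC}}}creator").text = "A. N. Author"
ET.SubElement(article, f"{{{PRISM}}}publicationName").text = "Journal of Examples"
ET.SubElement(article, f"{{{PRISM}}}issn").text = "1234-5678"
ET.SubElement(article, f"{{{PRISM}}}volume").text = "12"
ET.SubElement(article, f"{{{PRISM}}}coverDate").text = "2008-07-01"
ET.SubElement(article, f"{{{PRISM}}}doi").text = "10.1000/1"

print(ET.tostring(article, encoding="unicode"))
```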
With PDF now passed over to ISO as keeper of the format (as blogged here on CrossTech), Kas Thomas (on CMS Watch’s TrendWatch) blogs here that Adobe should now do the right thing by XMP and look to hand that over too in order to establish it as a truly open standard. As he says:
“Let’s cut to the chase. If Adobe wants to demonstrate its commitment to openness, it should do for XMP what it has already done for PDF: Put it in the hands of a legitimate standards body.”
I blogged here back in Jan. 2007 about Adobe submitting PDF 1.7 for standardization by ISO. From yesterday’s ISO press release, this:
“The new standard, ISO 32000-1, Document management – Portable document format – Part 1: PDF 1.7, is based on the PDF version 1.7 developed by Adobe. This International Standard supplies the essential information needed by developers of software that create PDF files (conforming writers), software that reads existing PDF files and interprets their contents for display and interaction (conforming readers), and PDF products that read and/or write PDF files for a variety of other purposes (conforming products).”
For anybody interested in the whys and wherefores of OpenURL, Jeff Young at OCLC has started posting over on his blog Q6: 6 Questions - A simpler way to understand OpenURL 1.0: Who, What, Where, When, Why, and How (note: no longer available online). He’s already amassing quite a collection of thought-provoking posts. His latest is The Potential of OpenURL (note: no longer available online), from which:
OpenURL has effectively cornered the niche market where Referrers need to be decoupled from Resolvers.
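For the uninitiated, the decoupling works because the Referrer only ever constructs a description of the thing wanted and hands it to whichever Resolver the user’s institution nominates. Here is a minimal sketch of the kind of OpenURL 1.0 KEV link a Referrer might emit; the resolver base URL and the rfr_id are made up for the purpose:

```python
# Sketch of an OpenURL 1.0 (KEV) link as a Referrer might construct it. The
# resolver base URL and the rfr_id are made up; the point is that the Referrer
# does not need to know which Resolver will eventually service the request.
from urllib.parse import urlencode

resolver_base = "http://resolver.example.edu/openurl"  # hypothetical Resolver
context = {
    "url_ver": "Z39.88-2004",                       # OpenURL 1.0 KEV version
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # journal article metadata format
    "rft.jtitle": "Journal of Examples",
    "rft.issn": "1234-5678",
    "rft.volume": "12",
    "rft.spage": "1",
    "rft_id": "info:doi/10.1000/1",                 # identifier for the referent
    "rfr_id": "info:sid/crosstech.example:blog",    # hypothetical Referrer ID
}
print(resolver_base + "?" + urlencode(context))
```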
This test form shows handle value data being processed by JavaScript in the browser using an OpenHandle service. This is different from the handle proxy server which processes the handle data on the server - the data here is processed by the client.
Enter a handle and the standard OpenHandle “Hello World” document is printed. Other processing could equally be applied to the handle values. (Note that the form may not work in web-based feed readers.)
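The same trick isn’t confined to the browser, of course. Here’s a rough sketch of equivalent client-side processing in Python; note that the endpoint URL and the JSON response shape are hypothetical stand-ins rather than the actual OpenHandle interface:

```python
# Rough sketch of fetching and printing a handle record on the client side.
# Both the endpoint URL and the JSON shape assumed here are hypothetical
# stand-ins for an OpenHandle-style service that returns handle values.
import json
from urllib.request import urlopen

def print_handle_values(handle: str) -> None:
    url = f"http://openhandle.example.org/handle/{handle}?format=json"  # hypothetical
    with urlopen(url) as response:
        record = json.load(response)
    # Assume the record carries a list of handle values, each with an index,
    # a type (e.g. URL) and a data field.
    for value in record.get("values", []):
        print(value.get("index"), value.get("type"), value.get("data"), sep="\t")

# print_handle_values("10.1000/1")
```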
This is a follow-on to an earlier post which set out the lie of the land as regards DOI services and data for DOIs registered with Crossref. That post differentiated between native DOI resolution through a public DOI service, which acts upon the “associated values held in the DOI resolution record” (per ISO CD 26324), and other related protected and/or private DOI services which merely use the DOI as a key into a non-public database offering.
Following the service architecture outlined in that post, options for exposing public data appear as follows:
- Private Service
  - Publisher hosted – Publisher private service
- Protected Service
  - Crossref hosted – Industry protected service
  - Crossref routed – Publisher private service
- Public Service
  - Handle System (DOI handle) – Global public service (native DOI service)
  - Handle System (DOI ‘buddy’ handle) – Publisher public service
(Continues below.)
With Library of Congress sometime back (Feb. ’08) announcing LCCN Permalinks and NLM also (Mar. ’08) introducing simplified web links with its PubMed identifier one might be forgiven for wondering what is the essential difference between a DOI name and these (and other) seemingly like-minded identifiers from a purely web point of view. Both these identifiers can be accessed through very simple URL structures:
- https://lccn.loc.gov/2003556443
And the DOI itself can be resolved using an equally simple URL structure:
- http://dx.doi.org/10.1000/1
So, why does DOI not just present itself as a simple database number which is accessed through a simple web link and have done with it, e.g. a page for the object named by the DOI “10.1000/1” is retrieved from the DOI proxy server at http://dx.doi.org/?
Essentially, the typical DOI link presents an elementary web-based URL which performs a useful redirect service. What is different about this and, say, a PURL, which offers a similar redirect service? What’s the big deal?
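To see the redirect in action, a few lines of Python (standard library only) will ask the proxy to resolve the example DOI and show the redirect it answers with; the status code and target URL will naturally depend on where the publisher currently points the DOI:

```python
# Ask the DOI proxy to resolve the example DOI "10.1000/1" and print the
# redirect it answers with. http.client does not follow redirects, so the
# Location header is visible directly.
import http.client

conn = http.client.HTTPConnection("dx.doi.org")
conn.request("GET", "/10.1000/1")
response = conn.getresponse()
print(response.status, response.getheader("Location"))  # a 3xx status and the target URL
```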
(Continues below.)