Was just reminded (thanks, Tim) of the possibility of using a special tag in bookmarking services to tag links to documents of interest to a given community. I think this is a fairly well-established practice. Note that e.g. the OAI-ORE project is using Connotea to bookmark pages of interest, tagging them “oaiore”, which can then be easily retrieved using the link http://web.archive.org/web/20160402182544/http://www.connotea.org/.
I would suggest that Crossref members might like to consider using the tag “crosstech” when bookmarking pages about publishing technology, so that links of that form might be used to retrieve documents of interest to this readership.
This D-Lib paper by Altman and King looks interesting: “A Proposed Standard for the Scholarly Citation of Quantitative Data”. (And thanks to Herbert Van de Sompel for drawing attention to the paper.) The gist of it (Sect. 3) is:
“We propose that citations to numerical data include, at a minimum, six required components. The first three components are traditional, directly paralleling print documents. … Thus, we add three components using modern technology, each of which is designed to persist even when the technology changes: a unique global identifier, a universal numeric fingerprint, and a bridge service. They are also designed to take advantage of the digital form of quantitative data.
An example of a complete citation, using this minimal version of the proposed standards, is as follows:
Micah Altman; Karin MacDonald; Michael P. McDonald, 2005, “Computer Use in Redistricting”,
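The “universal numeric fingerprint” in the proposal is, roughly, a cryptographic hash of the data after the values have been canonically rounded and formatted, so that the fingerprint survives format migrations but changes if the data changes. The sketch below is an illustration of that idea only, not the actual UNF algorithm from the paper (the rounding scheme, encoding, and `UNF:sketch:` label are all invented here):

```python
import base64
import hashlib

def unf_sketch(values, digits=7):
    """Toy numeric fingerprint: hash of values after canonical
    rounding/formatting. Illustrative only -- NOT the real UNF spec."""
    parts = []
    for v in values:
        # Canonical scientific notation with a fixed number of
        # significant digits, so 3.1415926535 and 3.14159265 agree.
        parts.append(f"{float(v):+.{digits - 1}e}")
    blob = "\n".join(parts).encode("utf-8")
    digest = hashlib.sha256(blob).digest()
    return "UNF:sketch:" + base64.b64encode(digest[:12]).decode("ascii")

# Equivalent data (to 7 significant digits) gives the same fingerprint;
# genuinely different data gives a different one.
print(unf_sketch([3.1415926535, 2.0]))
print(unf_sketch([3.14159265, 2.0]))
print(unf_sketch([3.15, 2.0]))
```

The key design point the paper makes is that the fingerprint is computed from the data's content rather than its serialization, which is what lets the citation persist across storage-format changes.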
The next Crossref Forward Linking Webinar is coming on Monday, April 30th, 2007, at 12:00pm. Registration is now available.
The agenda is coming soon.
Following up on his earlier post (which was also blogged to CrossTech here), Leigh Dodds is now exploring the possibility of using machine-readable auto-discovery type links for DOIs.
These LINK tags are placed in the document HEAD section and could be used by crawlers and agents to recognize the work represented by the current document.
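Leigh's exact markup isn't reproduced here, but assuming a hypothetical pattern along the lines of `<link rel="identifier" href="doi:10.1000/xyz123" />` in the document HEAD, a crawler could pick the DOI out with a few lines of standard-library Python (the `rel` value and `doi:` scheme below are illustrative guesses, not his actual proposal):

```python
from html.parser import HTMLParser

class DOILinkFinder(HTMLParser):
    """Collect DOIs from <link> elements in an HTML page.
    The rel value and href schemes matched here are assumptions
    for illustration, not a published auto-discovery convention."""
    def __init__(self):
        super().__init__()
        self.dois = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        href = dict(attrs).get("href", "")
        if href.startswith("doi:"):
            self.dois.append(href[len("doi:"):])
        elif href.startswith("http://dx.doi.org/"):
            self.dois.append(href[len("http://dx.doi.org/"):])

page = """<html><head>
<link rel="identifier" href="doi:10.1000/xyz123" />
</head><body>...</body></html>"""
finder = DOILinkFinder()
finder.feed(page)
print(finder.dois)  # ['10.1000/xyz123']
```

Because `<link>` elements are invisible to readers but trivial for agents to parse, the same page can serve both audiences without any change to its visible content.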
XML:UK is holding a one-day conference entitled “Publishing 2.0” at Bletchley Park on Wednesday 25th April 2007. Bletchley Park was the location of the United Kingdom’s main codebreaking establishment during the Second World War and is now a museum (and has a train station!). The event will examine some of the more cutting-edge applications of XML technology to publishing. With keynotes by Sean McGrath and Kate Warlock and a series of must-see presentations, this will be the place to be on the last Wednesday in April.
Just a quick note to mention that we’ve now set up a new mailing list email@example.com for public discussion of OTMI - the Open Text Mining Interface proposed by Nature. See the list information page here for details on subscribing to the list and to access the mail archives.
And many thanks to the Crossref folks for hosting this for us!
This post on Adobe’s Creative Solutions PR blog may be worth a gander:
“This new update, the Adobe XMP 4.1, provides new libraries for developers to read, write and update XMP in popular image, document and video file formats including: JPEG, PSD, TIFF, AVI, WAV, MPEG, MP3, MOV, INDD, PS, EPS and PNG. In addition, the rewritten XMP 4.1 libraries have been optimized into two major components, the XMP Core and the XMP Files.
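For the curious, XMP metadata is serialized as an RDF/XML “packet” embedded in the host file, delimited by `<?xpacket begin=…?>` and `<?xpacket end=…?>` processing instructions. A quick-and-dirty way to pull the packet out of many of the formats listed above, without Adobe's libraries, is simply to scan the raw bytes for those markers (a naive sketch; the real XMP Files component handles per-format embedding properly):

```python
def extract_xmp(data: bytes):
    """Naive XMP extraction: scan raw bytes for the xpacket
    processing instructions that bracket an embedded XMP packet.
    Adobe's XMP Files library does this properly per format."""
    start = data.find(b"<?xpacket begin=")
    if start == -1:
        return None
    end = data.find(b"<?xpacket end=", start)
    if end == -1:
        return None
    # Include the closing "?>" of the end marker.
    end = data.find(b"?>", end)
    return data[start:end + 2].decode("utf-8", errors="replace")

# Fabricated byte string standing in for a JPEG with an XMP packet.
sample = (b"\xff\xd8junk"
          b'<?xpacket begin="" id="W5M0"?><x:xmpmeta/><?xpacket end="w"?>'
          b"trailing bytes")
print(extract_xmp(sample))
```

This brute-force scan works because the XMP specification deliberately makes the packet locatable without understanding the enclosing file format.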
Apologies for blogging yet another of my posts to Nascent, this time on Agile Descriptions - a talk I gave the week before last to the LC Future of Bibliographic Control WG. (Don’t worry - I shan’t be making a habit of this.) But certain aspects of the talk (the PowerPoint is here) may be interesting to this readership, in particular the slides on microformats and how these are tentatively being deployed on Nature Network, and also a detailed anatomy of OTMI files.
I just posted this entry on Nascent, Nature’s web publishing blog, about Nature’s new look for web feeds. It essentially boils down to our using the RSS 1.0 ‘mod_content’ module to add a rich content description for human consumption, complementing our long-standing commitment to machine-readable descriptions. We are thus able to deliver full citation details in our RSS feeds as XHTML in CDATA sections for humans and as DC/PRISM properties for machines, the whole encoded in our feed format of choice - RSS 1.0.
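To make the dual-audience idea concrete, here is a sketch of what such an item could look like, built and parsed in Python. The `dc`, `prism`, and `content` namespaces are the real ones; the DOI, article title, and citation values are invented placeholders, and Nature's actual feed markup may differ in detail:

```python
import xml.etree.ElementTree as ET

# Sketch of an RSS 1.0 item carrying an XHTML citation in a CDATA
# section (for humans) alongside DC/PRISM properties (for machines).
# All bibliographic values below are made up for illustration.
item = """<item xmlns="http://purl.org/rss/1.0/"
      xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
      xmlns:dc="http://purl.org/dc/elements/1.1/"
      xmlns:prism="http://prismstandard.org/namespaces/1.2/basic/"
      xmlns:content="http://purl.org/rss/1.0/modules/content/"
      rdf:about="http://dx.doi.org/10.1000/example">
  <title>An example article</title>
  <dc:creator>A. Author</dc:creator>
  <prism:publicationName>Nature</prism:publicationName>
  <prism:volume>446</prism:volume>
  <content:encoded><![CDATA[<p><em>Nature</em> <b>446</b>, 1-2 (2007)</p>]]></content:encoded>
</item>"""

# A machine consumer reads the structured properties; a human-facing
# aggregator can simply render the escaped XHTML from content:encoded.
parsed = ET.fromstring(item)
ns = {"content": "http://purl.org/rss/1.0/modules/content/",
      "prism": "http://prismstandard.org/namespaces/1.2/basic/"}
print(parsed.find("prism:volume", ns).text)
print(parsed.find("content:encoded", ns).text)
```

The design point is that the same item degrades gracefully: aggregators that know nothing of PRISM still show a fully formatted citation, while metadata-aware tools ignore the CDATA blob and read the structured fields.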