2025 June 17
Evolving the preprint evaluation world with Sciety
This post is based on an interview with the Sciety team at eLife.
Last week Pablo Fernicola sent me an email announcing that Microsoft have finally released a beta of their Word plugin for marking up manuscripts with the NLM DTD. I say "finally" because we've known this was on the way and have been pretty excited to see it. We had even hoped that MS might be able to show the plug-in at the ALPSP session on the NLM DTD, but we couldn't quite manage it.
Just announced on the handle-info and semantic-web mailing lists is the OpenHandle project on Google Code. This may be of some interest to the DOI community as it allows the handle record underpinning the DOI to be exposed in various common text-based serializations to make the data stored within the records more accessible to Web applications. Initial serializations include RDF/XML, RDF/N3, and JSON.
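As a sketch of the sort of thing this enables, here is what consuming a JSON serialization of a handle record might look like from a Web application. The field names below are illustrative, not OpenHandle's actual schema:

```python
import json

# A hypothetical JSON serialization of the handle record behind a DOI,
# in the spirit of OpenHandle (field names are illustrative only).
handle_record = json.loads("""
{
  "handle": "10.1000/demo",
  "values": [
    {"type": "URL", "data": "http://example.org/article"},
    {"type": "EMAIL", "data": "admin@example.org"}
  ]
}
""")

# Pull the URL value out of the record, as a Web application
# consuming the serialization might do.
url = next(v["data"] for v in handle_record["values"] if v["type"] == "URL")
print(url)
```

The point is simply that once the record is in JSON (or N3, or RDF/XML), ordinary Web tooling can get at it without speaking the native handle protocol.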
We'd be very interested in receiving feedback on this project, either on this blog or over on the project wiki.
On March 3rd the Open Archives Initiative held a roll-out meeting for the first alpha release of the ORE specification (http://www.openarchives.org/ore/). According to Herbert Van de Sompel, a beta release is planned for late March / early April, with a 1.0 release targeted for September. The presentations focused on the aggregation concepts behind ORE and described an Atom-based implementation. ORE is the second project from the OAI, but unlike its sibling PMH it is not exclusively a repository technology: ORE provides machine-readable manifests for related Web resources in any context. For instance, DOI landing pages (aka splash pages) are human-readable resources containing links to any number of resources related to the work identified by the DOI. An ORE instance for the DOI (called a ReM, or resource map) would describe the same set of resources in a machine-friendly format. A standardized form of redirection understood by the DOI proxy would yield the ReM instead of the normal page, e.g.
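As a sketch of that redirection idea, assuming a hypothetical query parameter on the proxy (the actual convention was still being worked out at the time):

```python
from urllib.parse import urlencode

# Sketch of the redirection described above: a client asks the DOI
# proxy for the resource map (ReM) rather than the landing page.
# The query parameter and media type mentioned here are illustrative,
# not a published convention.
def rem_url(doi):
    """Build a hypothetical proxy URL that would yield the ORE ReM."""
    return "http://dx.doi.org/" + doi + "?" + urlencode({"format": "ore"})

print(rem_url("10.1000/demo"))
# A real client would then GET this URL with an Accept header such as
# "application/atom+xml" and parse the aggregation out of the Atom feed.
```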
Following on from my previous post about prism:doi: I didn't mention, or reference, the ongoing ISO work on DOI there. Indeed, I hadn't realized that the DOI site now has a status update on the ISO work:
"The DOI® System is currently being standardised through ISO. It is expected that the process will be finalised during 2008. In December 2007 the Working Group for this project approved a final draft as a Committee Draft (standard for voting), which is now being processed by ISO. Copies of the Committee Draft (SC9N475) and an accompanying explanatory document detailing issues dealt with during the standards process (SC9N474) are provided here for information."
The new PRISM spec (v. 2.0) was published this week, see the press release. (Downloads are available here.)
This is a significant development, as there is now support for XMP profiles to complement the existing XML and RDF/XML profiles. And, as PRISM is one of the major vocabularies being used by publishers, I would urge you all to go take a look at it and to consider upgrading your applications to use it.
One caveat. There's a new element <tt>prism:doi</tt> (PRISM Namespace, 4.2.13), which sits alongside another new element <tt>prism:url</tt> (PRISM Namespace, 4.2.55). Unfortunately the <tt>prism:doi</tt> element is shown taking the DOI proxy URL as its value, and not the DOI string itself, e.g.
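To make the distinction concrete, here is a small sketch contrasting the two value forms; the DOI used here is made up for illustration:

```python
# The spec's prism:doi example uses the proxy URL form, where you
# might expect the bare DOI string. Values below are illustrative.
prism_doi_as_published = "http://dx.doi.org/10.1000/demo"  # what the spec shows
doi_string = "10.1000/demo"                                # the DOI itself

# Recovering the DOI string from the proxy URL is mechanical,
# but it shouldn't be necessary in the first place.
prefix = "http://dx.doi.org/"
recovered = prism_doi_as_published[len(prefix):]
print(recovered)
```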
From the beginning our OpenURL resolver has had a non-standard feature of returning metadata in response to a request instead of redirecting to the referent. This feature returned one of our older XML formats, which is somewhat limited in the fields it contains.
Some time after our resolver was deployed we introduced a more verbose XML format for DOI metadata called "UNIXREF". This has always been available to regular queries against the Crossref system but was never introduced to the OpenURL resolver (for no particular reason).
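As a sketch of how such a metadata query might be constructed against the resolver: the parameter names below follow the resolver's general style but are shown as an illustration; check the current query documentation before relying on them.

```python
from urllib.parse import urlencode

# Build a hypothetical OpenURL request asking the resolver for
# metadata instead of a redirect. Parameter names (noredirect,
# format=unixref) are illustrative; the pid and DOI are made up.
def metadata_query(doi, pid="user@example.com"):
    params = {
        "pid": pid,                 # account identifier for the query
        "id": "doi:" + doi,         # the DOI being looked up
        "noredirect": "true",       # return metadata, don't redirect
        "format": "unixref",        # ask for the verbose UNIXREF XML
    }
    return "http://www.crossref.org/openurl?" + urlencode(params)

print(metadata_query("10.1000/demo"))
```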
OK, after a number of delays due to everything from indexing slowness to router problems, I'm happy to say that the first public beta of our WordPress citation plugin is available for download via SourceForge. A Movable Type version is in the works.
And congratulations to Trey at OpenHelix, who became laudably impatient, found the SourceForge entry for the plugin back on February 8th, and seems to have been testing it since. He has a nice description of how it works (along with screenshots), so I won't repeat the effort here.
Having said that, I do include the text of the README after the jump. Please have a look at it before you install, because it might save you some mystification.
I just ran across the final report from the CLADDIER project. CLADDIER comes from the JISC and stands for "CITATION, LOCATION, AND DEPOSITION IN DISCIPLINE & INSTITUTIONAL REPOSITORIES". I suspect JISC has an entire department dedicated to creating impossible acronyms (the JISC Acronym Preparation Executive?)
Anyhoo, the report describes a distributed citation location and updating service based on the linkback mechanism that is widely used in the blogging community.
I think this is an interesting approach, and one that I talked about briefly (PDF) at the UKSG's Measure for Measure seminar last June. I also think that, like most proponents of p2p distributed architectures, they massively underestimate the problem of trust in the network. They fully acknowledge the problem of linkback spam, but their hand-wavy-solution(tm) of using whitelists just means the system effectively becomes semi-centralized again (you have to have trusted keepers of the whitelists).
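For readers unfamiliar with linkback, here is a sketch of the pingback flavour of the mechanism: when repository A cites an item in repository B, A sends B a small XML-RPC call naming the citing and cited resources. The repository URLs below are hypothetical, and nothing is actually sent over the network:

```python
import xmlrpc.client

# Hypothetical citing and cited records in two repositories.
source = "http://repo-a.example.org/eprints/123"  # the citing record
target = "http://repo-b.example.org/eprints/456"  # the cited record

# Construct (but do not send) the XML-RPC request body for a
# standard pingback.ping call, as a citing repository would.
body = xmlrpc.client.dumps((source, target), methodname="pingback.ping")
print(body)
```

The trust problem noted above lives entirely on the receiving end: nothing in this request proves that the source actually cites the target, which is why naive linkback attracts spam.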
BISG and BIC have published a discussion paper called "The identification of digital book content" - https://web.archive.org/web/20090920075334/http://www.bisg.org/docs/DigitalIdentifiers_07Jan08.pdf. The paper discusses ISBN, ISTC and DOI amongst other things, and makes a series of recommendations which basically say to consider applying DOI, ISBN and ISTC to digital book content. The paper highlights in a positive way that DOI and ISBN are different but can work together (the idea of the "actionable ISBN" and aiding discovery of content). However, it doesn't go into much depth on any of the issues, or really explain how all these identifiers would work together and the critical role that metadata plays.
The recently discussed (announced?) Google Knol project could make Google Scholar look like a tiny blip in the scholarly publishing landscape.
I love this comment on authority:
"Books have authors' names right on the cover, news articles have bylines, scientific articles always have authors - but somehow the web evolved without a strong standard to keep authors' names highlighted. We believe that knowing who wrote what will significantly help users make better use of web content."
Highlighting our community in Colombia
2025 June 05