The Crossref Nominating Committee is inviting expressions of interest to join the Board of Directors of Crossref for the term starting in 2021. The committee will gather responses from those interested and create the slate of candidates that our membership will vote on in an election in September. Expressions of interest will be due Friday, June 19, 2020.
The role of the board at Crossref is to provide strategic and financial oversight of the organization, as well as guidance to the Executive Director and the staff leadership team, with the key responsibilities being:
After 20 years in operation, and as our system matures from experimental to foundational infrastructure, it’s time to review our documentation.
Having a solid core of education materials about the why and the how of Crossref is essential in making participation possible, easy, and equitable.
As our system has evolved, our membership has grown and diversified, and so have our tools - both for depositing metadata with Crossref, and for retrieving and making use of it.
To better support the discovery, sale, and analysis of books, Jennifer Kemp from Crossref and Mike Taylor from Digital Science present seven reasons why publishers should collect chapter-level metadata.
Book publishers should have been in the best possible position to take advantage of the movement of scholarly publishing to the internet. After all, they have behind them an extraordinary legacy of creating and distributing data about books: the metadata that supports discovery, sales and analysis.
Hello, I’m Paul Davis and I’ve been part of the Crossref support team since May 2017. In that time I’ve become more adept as a DOI detective, helping our members work out whodunnit when it comes to submission errors.
If you have ever received one of our error messages after you have submitted metadata to us, you may know that some are helpful and others are, well, difficult to decode. I’m here to help you to become your own DOI detective.
We test a broad sample of DOIs to ensure resolution. For each journal crawled, we select a sample equal to 5% of the journal's total DOIs, up to a maximum of 50. The selected DOIs span prefixes and issues.
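The sampling rule above can be sketched in a few lines. This is a minimal illustration, not Crossref's actual implementation; the exact rounding behavior and the minimum of one DOI are assumptions.

```python
def crawl_sample_size(total_dois: int) -> int:
    """Sample size for a journal crawl: 5% of the journal's total DOIs,
    capped at 50. Rounding and the floor of 1 are assumptions for
    illustration only."""
    if total_dois <= 0:
        return 0
    return min(max(1, round(total_dois * 0.05)), 50)
```

For example, a journal with 200 DOIs would yield a sample of 10, while one with 5,000 DOIs would be capped at 50.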
The results are recorded in crawler reports, which you can access from the depositor report expanded view. If a title has been crawled, the last crawl date is shown in the appropriate column. Crawled DOIs that generate errors will appear as a bold link:
Click Last Crawl Date to view a crawler status report for a title:
The crawler status report lists the following:
Total DOIs: total number of DOI names for the title in the system on the last crawl date
Checked: number of DOIs crawled
Confirmed: crawler found both the DOI and the article title on the landing page
Semi-confirmed: crawler found either the DOI or the article title on the landing page
Not Confirmed: crawler found neither the DOI nor the article title on the landing page
Bad: page contains known phrases indicating the article is not available (for example, "article not found" or "no longer available")
Login Page: crawler is prompted to log in; no article title or DOI is found
Exception: indicates an error in the crawler code
httpCode: resolution attempt results in an HTTP error (such as 400, 403, 404, 500)
httpFailure: HTTP server connection failed
Select each number to view details. Select re-crawl and enter an email address to crawl again.
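The first three confirmation statuses above depend only on whether the crawler finds the DOI and the article title on the landing page. A minimal sketch of that mapping, using a hypothetical helper (not part of any Crossref tool):

```python
def classify_landing_page(found_doi: bool, found_title: bool) -> str:
    """Map what the crawler found on a landing page to a confirmation
    status. Hypothetical helper for illustration; status names follow
    the crawler status report."""
    if found_doi and found_title:
        return "Confirmed"       # both DOI and article title present
    if found_doi or found_title:
        return "Semi-confirmed"  # exactly one of the two present
    return "Not Confirmed"       # neither found
```

The remaining statuses (Bad, Login Page, Exception, httpCode, httpFailure) describe failure modes detected before this check applies, such as error pages or failed connections.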