Some of you who have submitted content to us during the first two months of 2021 may have experienced content registration delays. We noticed; you did, too.
The time between our receiving XML from members and the content being registered, with the DOI resolving to the correct resolution URL, is usually a matter of minutes. Some submissions take longer: book registrations with large reference lists, for example, or very large files from larger publishers, can take 24 to 48 hours to process.
TL;DR: We have a Community Forum (yay!), you can come and join it here: community.crossref.org.
Community is fundamental to us at Crossref; we wouldn't be where we are, or achieve the great things we do, without the involvement of you, our diverse and engaged members and users. Crossref was founded as a collaboration of publishers with the shared goal of making links between research outputs easier, building a foundational infrastructure that makes research easier to find, cite, link, assess, and re-use.
Event Data uncovers links between Crossref-registered DOIs and diverse places where they are mentioned across the internet. Whereas a citation links one research article to another, events are a way to create links to locations such as news articles, data sets, Wikipedia entries, and social media mentions. We’ve collected events for several years and make them openly available via an API for anyone to access, as well as creating open logs of how we found each event.
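To make the API access concrete, here is a minimal sketch of querying Event Data for a DOI. The endpoint (`api.eventdata.crossref.org/v1/events`) and the `obj-id`/`mailto` parameters follow the public Event Data query API; the response-summarising helper and the mock response are illustrative assumptions, not Crossref code.

```python
# Sketch: build an Event Data query URL for a DOI and tally a response by source.
# Endpoint and parameter names are assumptions based on the public Event Data API.
from urllib.parse import urlencode

EVENT_DATA_BASE = "https://api.eventdata.crossref.org/v1/events"

def event_query_url(doi, mailto, rows=100):
    """Return a query URL for events whose object is the given DOI."""
    params = {"obj-id": f"https://doi.org/{doi}", "mailto": mailto, "rows": rows}
    return f"{EVENT_DATA_BASE}?{urlencode(params)}"

def count_events_by_source(response):
    """Tally events in a decoded API response by source (e.g. wikipedia, newsfeed)."""
    counts = {}
    for event in response.get("message", {}).get("events", []):
        source = event.get("source_id", "unknown")
        counts[source] = counts.get(source, 0) + 1
    return counts

# Example with a mock response shaped like the API's JSON envelope:
sample = {"message": {"events": [
    {"source_id": "wikipedia"}, {"source_id": "newsfeed"}, {"source_id": "wikipedia"},
]}}
print(count_events_by_source(sample))  # {'wikipedia': 2, 'newsfeed': 1}
```

In a real client you would fetch `event_query_url(...)` over HTTP and page through results; the open event logs mentioned above let you audit how each event was found.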
2020 wasn’t all bad. In April of last year, we released our first public data file. Though Crossref metadata is always openly available (and our board recently cemented this by voting to adopt the Principles of Open Scholarly Infrastructure, POSI), we’ve decided to release an updated file. This will provide a more efficient way to get such a large volume of records. The file (JSON records, 102.6GB) is now available, with thanks once again to Academic Torrents.
Continuing our blog series highlighting the uses of Crossref metadata, we talked to David Sommer, co-founder and Product Director at the research dissemination management service, Kudos. David tells us how Kudos is collaborating with Crossref, and how they use the REST API as part of our Metadata Plus service.
At Kudos we know that effective dissemination is the starting point for impact. Kudos is a platform that allows researchers and research groups to plan, manage, measure, and report on dissemination activities to help maximize the visibility and impact of their work.
We launched the service in 2015 and now work with almost 100 publishers and institutions around the world, and have nearly 250,000 researchers using the platform.
We provide guidance to researchers on writing a plain language summary about their work so it can be found and understood by a broad range of audiences, and then we support researchers in disseminating across multiple channels and measuring which dissemination activities are most effective for them.
As part of this, we developed the Shareable PDF to allow researchers to legitimately share publication profiles across a range of sites and networks, and to track the impact of their work centrally. This also helps publishers prevent copyright infringement and reclaim usage lost to the sharing of research articles on scholarly collaboration networks.
How is Crossref metadata used in Kudos?
Since our launch, Crossref has been our metadata foundation. When we receive notification from our publishing partners that an article, book, or book chapter has been published, we query the Crossref REST API to retrieve the metadata for that publication. That data allows us to populate the Kudos publication page.
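The lookup described above can be sketched in a few lines. The `GET https://api.crossref.org/works/{doi}` route and the `message` envelope are part of Crossref's public REST API; the specific fields pulled out below, and the sample record, are a simplified assumption of what a publication page might need, not Kudos's actual code.

```python
# Sketch: fetch work metadata from the Crossref REST API for a DOI.
import json
import urllib.request

def fetch_work(doi):
    """GET https://api.crossref.org/works/{doi} and return the metadata record."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["message"]

def summarise(record):
    """Pull a handful of fields a publication page might display (illustrative)."""
    return {
        "title": (record.get("title") or [""])[0],
        "container": (record.get("container-title") or [""])[0],
        "doi": record.get("DOI", ""),
    }

# Example against a mock record shaped like a REST API "message":
sample = {
    "title": ["An Example Article"],
    "container-title": ["Journal of Examples"],
    "DOI": "10.5555/12345678",
}
print(summarise(sample))
```

Note that `title` and `container-title` are arrays in the REST API's JSON, hence the defensive first-element access.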
We also integrate earlier in the researcher workflow, interfacing with all of the major Manuscript Submission Systems to support authors who want to build impact from the point of submission.
More recently, we started using the Crossref REST API to retrieve citation counts for a DOI. This enables us to include the number of times content is cited as part of the ‘basket of metrics’ we provide to our researchers. They can then understand the performance of their publications in context, and see the correlation between actions and results.
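The citation count mentioned here is exposed by the REST API as the `is-referenced-by-count` field on a `/works/{doi}` record. A small defensive reader, assuming a decoded response (the helper and sample are illustrative, not Kudos's implementation):

```python
# Sketch: read the incoming citation count from a decoded Crossref
# REST API "message" record; "is-referenced-by-count" is the real field name.
def citation_count(message):
    """Return how many times a work has been cited, per Crossref metadata."""
    return int(message.get("is-referenced-by-count", 0))

sample = {"DOI": "10.5555/12345678", "is-referenced-by-count": 42}
print(citation_count(sample))  # 42
```

Polling this field over time is one way a service could chart how citations accrue alongside dissemination actions.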
A Kudos metrics page, showing the basket of metrics and the correlation between actions and results
What are the future plans for Kudos?
We have exciting plans for the future! We are developing Kudos for Research Groups to support the planning, managing, measuring, and reporting of dissemination activities for research groups, labs, and departments. We are adding a range of new features and dissemination channels to support this, and to help researchers better understand how their research is being used, and by whom.
What else would Kudos like to see in Crossref metadata?
We have always found Crossref to be very responsive and open to new ideas, so we look forward to continuing to work together. We are keen to see an industry standard article-level subject classification system developed, and it would seem that Crossref is the natural home for this.
We are also continuing to monitor Crossref Event Data which has the potential to provide a rich source of events that could be used to help demonstrate dissemination and impact.
Finally, we are pleased to see the work Crossref is doing to help improve the quality of the metadata and to support publishers in auditing their data. If we could have anything we wanted, our dream would be to prevent “funny characters” in DOIs that cause us all kinds of escape character headaches!
Thank you David. If you would like to contribute a case study on the uses of Crossref Metadata APIs please contact the Community team.