My name is Isaac Farley, and I’m Crossref’s Technical Support Manager. This is a collective post from our technical support team - staff members and contractors - since we all have what I think will be a helpful perspective on the question: ‘What’s that one thing that you wish you could snap your fingers and make clearer and easier for our members?’ Within, you’ll find us referencing our Community Forum, the open support platform where you can get answers from all of us and from other Crossref members and users. We invite you to join us there; why not ask your next question of us there? Or simply let us know how we did with this post. We’d love to hear from you!
A little about us and what drives the team
I’m fortunate to manage a great team - Evans, Kathleen, Paul, Poppy, and Shayn - who are hardwired to guide. We have different strengths and interests, but the thing that unites us is that we are energized when we can unpick tricky problems for all of you, our members and users. In 2023, the technical support team answered around 11,000 questions from all of you. We handle one-to-one requests sent to us via email and through our support center (which runs on Zendesk, a closed-source platform). And we’ve been providing more and more support in our Community Forum, where we aim for open interactions so we can all learn from the rich exchanges with all of you (the Forum has a Zendesk integration, so posts made in the Forum are delivered to our support center and our team won’t miss any of your questions).
We established in the previous paragraph that we have a great technical support team who all pride themselves on helping you. But we’re also human; the reality is that many of those ~11,000 technical support questions asked of us in 2023 were repetitive, and there are always trends in the questions asked. That’s another important reason why we’re hoping to have more and more of these questions asked and answered within our Community Forum; again, so we can all learn from one another. We know certain parts of content registration, metadata retrieval, and everything in between are, well, complicated. The Crossref learning curve can be steep for all of us. Collectively, our technical support team has more than 25 years of Crossref experience, and we’re continuously learning new things about the Research Nexus and the scholarly ecosystem from one another and all of you.
Learning through this complexity is one of the most enriching parts of our days. Our daily stand-up - modeled on practices from software development - is where we troubleshoot tangly questions together, share ideas, and keep up to date on the latest from across the organization, and it leads to a lot of knowledge exchange. So, years ago, we decided to transform the issues we discuss in those stand-ups into public-facing posts in our Community Forum. That gave us the opportunity to share much-needed examples in a new community space; and, since these were the issues we were all discussing and learning from ourselves, we knew that many of you would also benefit from us surfacing the topics openly. We call these posts tickets of the month, since most of the topics originate from tickets in our support center.
Examples of some of the most popular topics in the last two-plus years have been:
Like I said, these posts originated from real-life questions from our community members. In most cases, we’ve been asked these questions by many of you. These Community Forum posts are our attempts to unlock understanding of our services, rich metadata, or the larger Research Nexus. Said another way: we all see value in putting in the effort to post one more example or answer that nuanced question. Perhaps one of our posts will include an example that really resonates with you and/or your work.
In that spirit, I asked Evans, Kathleen, Paul, Poppy, and Shayn to answer this question below (yes, I’m going to weigh in, too):
What’s that one thing that you wish you could snap your fingers and make clearer and easier for our members?
Evans, Technical Support Specialist
As a publisher and a Crossref member, at one point or another, you might have made a mistake in the metadata deposited for a given DOI. I’m sure after the slight ‘shock’, the next question you had in mind was, ‘How can I correct this mistake?’ Well, here is a simplified guide on how to do that correction/update!
Can I modify/ update the metadata of a registered DOI?
As my colleague Shayn explains below in this blog post, Crossref DOIs are designed to be persistent: the DOI string itself cannot be changed or deleted once registered. But YES, you can update the metadata associated with any of your registered DOIs whenever necessary, at no additional fee.
How can I perform a standard metadata update?
To add, change, or remove any metadata element from your existing records, you generally just need to resubmit your complete metadata record with the changes included. How you update a DOI metadata record depends on the content registration tool or platform you use, as described below:
OJS: Navigate to the article record you wish to update, add your new metadata (or delete the relevant fields), and deposit it again using the Crossref import/export plugin. You must be running at least OJS 3.1.2 and have the Crossref import/export plugin enabled.
Web deposit form: Open the web deposit form and re-enter all the metadata, including your changes - leave a field blank to delete it, or fill in new metadata to update it - and resubmit the form (note: there are a handful of exceptions to this for the web deposit form).
Depositing XML files with Crossref: Make your changes in the relevant XML file and resubmit it to Crossref via the admin tool. When making an update, you must supply all the bibliographic metadata for the record being updated, not just the fields that need to change. During the update process, we overwrite the existing metadata with the new information you submit, and insert null values for any fields not supplied in the update. This means, for example, that if you supplied an online publication date in your initial deposit, you’ll need to include that date in subsequent deposits if you wish to retain it. Note that the value included in the timestamp element of the deposit must be incremented each time a DOI is updated.
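For members who deposit XML directly, the full re-deposit can be scripted. This is a minimal sketch assuming Crossref’s HTTPS deposit endpoint and its doMDUpload operation - verify both against the current Crossref documentation before relying on them. The credentials and the file name are placeholders.

```python
# Sketch: scripting a metadata update by re-depositing the complete XML
# record. Endpoint and doMDUpload operation assumed from Crossref's
# documented HTTPS deposit process; credentials/filename are placeholders.
import urllib.request
import uuid

DEPOSIT_URL = "https://doi.crossref.org/servlet/deposit"

def deposit_fields(login_id: str, login_passwd: str) -> dict:
    """Form fields sent alongside the XML file."""
    return {"operation": "doMDUpload",
            "login_id": login_id,
            "login_passwd": login_passwd}

def multipart_body(fields: dict, filename: str, xml_bytes: bytes):
    """Hand-rolled multipart/form-data body (stdlib only)."""
    boundary = uuid.uuid4().hex
    lines = []
    for name, value in fields.items():
        lines += [f"--{boundary}",
                  f'Content-Disposition: form-data; name="{name}"', "", value]
    lines += [f"--{boundary}",
              f'Content-Disposition: form-data; name="fname"; filename="{filename}"',
              "Content-Type: application/xml", ""]
    body = ("\r\n".join(lines) + "\r\n").encode() + xml_bytes
    body += f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

def resubmit(xml_path: str, login_id: str, login_passwd: str) -> int:
    # The XML must contain the *complete* record - every field you want
    # to keep, not just the changed ones - with an incremented timestamp.
    with open(xml_path, "rb") as fh:
        body, content_type = multipart_body(
            deposit_fields(login_id, login_passwd), xml_path, fh.read())
    req = urllib.request.Request(DEPOSIT_URL, data=body,
                                 headers={"Content-Type": content_type})
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    print(resubmit("updated-record.xml", "USER", "PASSWORD"))
```

The key point the sketch illustrates is that an update is just a fresh deposit of the whole record, not a patch of individual fields.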
If you’re looking for real-life examples of other members who have updated their metadata, the Community Forum is a great starting point. If you have follow-up questions on any of the existing threads, I invite you to post a message today.
Kathleen, Technical Support Specialist
One of my favorite types of query to tackle is content registration problems. I love a good mystery, and getting to the bottom of why that pesky submission just didn’t succeed. Sometimes members come to us with an error message and specific questions about what has gone awry. But, in fact, two of the most common questions we receive are: 1) I deposited something; did it work? and 2) I deposited something; why isn’t it showing up?!
To address the first question - whether your submission went through - I wrote a forum post last June about how to use the admin tool to see whether your registration was successful. We know there are also email alerts and perhaps status messages within your own registration platform, but the admin tool is a great way to check concretely where your submission has ended up. If it’s not there, we didn’t get it!
The admin tool is also a great way to get more details about a submission, especially if it happened to fail. You may have had the experience of contacting us about a failed deposit and being asked for the submission ID. You can find that info in the admin tool! We ask for it because it helps us get to the bottom of those error-message mysteries.
And, as for the second question - when will your DOI be active? - my colleague Paul wrote a fantastic post on the forum (with an excellent flowchart and all!) explaining when you can expect to see your DOI up and running. Often members will submit a deposit and expect the DOI to resolve immediately. When that doesn’t happen, many think that something has gone wrong or that there is an error, but in fact our systems may still be updating and processing the metadata.
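If you’d like to check on a DOI yourself while our systems catch up, the public Crossref REST API gives a quick signal: a 200 response from the works route means the metadata record exists, while a 404 means we have no record of it (yet). A minimal sketch, assuming only the public API at api.crossref.org:

```python
# Sketch: checking whether a DOI is registered yet via the public
# Crossref REST API. 200 = record exists; 404 = no record (yet).
import urllib.error
import urllib.parse
import urllib.request

def works_url(doi: str) -> str:
    """Build the REST API URL for a single DOI's metadata record."""
    return "https://api.crossref.org/works/" + urllib.parse.quote(doi, safe="")

def is_registered(doi: str) -> bool:
    try:
        with urllib.request.urlopen(works_url(doi)) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    print(is_registered("10.5555/12345678"))  # placeholder DOI
```

Remember that a 404 immediately after a deposit may simply mean the submission is still being processed, as described above.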
I recommend giving these two posts a read if you’re at all concerned about whether you’re depositing your content correctly or not. Hopefully, they’ll help ease your content-registration worries.
Isaac, Technical Support Manager
Oh, thanks for asking! Many of our members, after receiving one of our reports, will respond to us in support with a message similar to: ‘What did I do wrong? Please help me fix this. I don’t want to be out of compliance!’
Receiving one of our reports does not necessarily mean that you’ve done anything wrong. In truth, the reports we send to our official member contacts are produced using very simple logic. They may signal larger, more complicated problems, but we really need your help to determine next steps - and, in some cases, no action is needed because there is no issue to fix (e.g., many failed resolutions within the resolution reports).
Let’s look at the conflict and resolution reports since those are the reports we get the most questions about:
Conflict reports are the most complicated of our reports to navigate. But, the reports are generated using simple logic: if you register two or more DOIs with matching bibliographic metadata, we’ll flag those DOIs as being in conflict, which will generate a warning message at the time of registration and a subsequent conflict report. When members receive this report, we often get the sense that members simply want us, the technical support team, to tell them how to fix it. The problem is we don’t know your content, so we don’t know if the two DOIs do represent a duplicate, or if both DOIs, while having very similar bibliographic metadata, are legitimate and will be maintained going forward (e.g., for errata). Paul wrote a great post in our community forum about what conflicts are and how to resolve them.
Resolution reports, like conflict reports, are generated using simple logic: a resolution is the result of a click on a DOI. If the DOI has been registered, the click results in a successful resolution; if it has not been registered, the click results in a failed resolution. Our monthly report is a count of those resolutions, successful and failed. Failures can represent content registration errors in a member’s workflow, or they can signal that an end user made a mistake when attempting to click the DOI in question. For example, an end user might add an extra period to the end of their DOI link: instead of trying to resolve https://doi.org/10.5555/cupnfcm2wj, a legitimate DOI, they try to resolve https://doi.org/10.5555/cupnfcm2wj. instead. That extra period at the end makes it a completely different DOI that is not registered with us, so they get a failed resolution. This is pretty common; for members whose content is clicked regularly, user errors will appear in the logs as failed resolutions. The first question members should ask themselves when reviewing the failed .csv report within the resolution report is: ‘are any of these DOIs legitimate DOIs that I thought we had registered?’ We have more on the basics of resolution reports over in our Community Forum.
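The trailing-period mishap above is easy to screen for on your own side. As a sketch (a hypothetical helper, not part of any Crossref tool), a little clean-up before linking or logging catches it:

```python
# Sketch: stray sentence punctuation often gets swept up when a DOI is
# copied. This hypothetical helper strips it before linking or logging.
def strip_trailing_punctuation(doi: str) -> str:
    """Remove punctuation that commonly trails a copied DOI."""
    return doi.rstrip(".,;:)")

print(strip_trailing_punctuation("10.5555/cupnfcm2wj."))  # 10.5555/cupnfcm2wj
```

Note the trade-off: a few legitimate DOI suffixes really do end in punctuation, so a clean-up like this is a heuristic for log analysis, not a universal rewrite rule.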
Paul, Technical Support Specialist & R&D Support Analyst
I know we were asked to name “one thing” but I have two that are closely related. May I snap my fingers twice and fix two issues? [Of course, Paul! Take it away!]
Paul’s first snap
One of the most common questions we get in support is “why is my DOI not working?” 90% of the time, it comes down to a failed submission. A good proportion of those failures result from title mismatches between the deposited container title and the one we have stored in our system. There are other error messages that occur too, which I wrote about back in 2020.
So, you might ask, “why do we fail submissions because of title differences?”
Well, the title, ISSN/ISBN, and/or title-level DOI act like a lock on the title record: you need the right keys to unlock the title before you can add or update records against it. So, if your deposit doesn’t match what was in the original submission, you get a failure. Without that stringent check, we would end up with far too many variants of each title, matching against them would be a nightmare, and so would sorting those DOIs into one container in the REST API.
If a title update is required due to an error with an original title deposit, then these need to be made by the support team, so get in touch with us on the Community Forum.
And, a second
Permissions against titles and DOIs: Lots of our members don’t realise that each DOI has its own permissions, tied to the prefix that currently ‘owns’ (or is associated with) that DOI in the background.
It would be fair to assume you can tell who the current publisher is just by looking at the prefix at the start of a DOI, but that’s not always the case. Things can (and often do) change: individual journals get purchased by other publishers, and whole organizations get bought and sold.
What you can tell from a DOI’s prefix is who originally registered it, but not necessarily who it currently belongs to. That’s because if a journal (or a whole organization) is acquired, the DOIs don’t get deleted and re-registered to the new owner. The change of ownership is of course reflected in the relevant metadata, but the prefix in the DOI stays the same. It never changes - and that’s the whole point; that’s what makes the DOI persistent.
Isaac has written about this in much more detail, explaining the internal Crossref processes, in his blog post “What can often change, but always stays the same?”
These permissions are very important to understand when it comes to title transfers and updating metadata for transferred DOIs, so that duplicate DOIs aren’t created for the same work.
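One way to see this in practice: the metadata record returned by the public REST API reports the prefix of the member that currently owns a DOI, which may differ from the prefix embedded in the DOI string itself. A small sketch, using an illustrative (made-up) record rather than live data:

```python
# Sketch: the prefix embedded in a DOI shows who originally registered it,
# while the REST API record's "prefix" field reflects current ownership.
# Field names follow https://api.crossref.org/works/{doi}; the sample
# record below is made up for illustration.
def transferred(message: dict) -> bool:
    """True when the prefix in the DOI string differs from the owning prefix."""
    doi_prefix = message["DOI"].split("/", 1)[0]
    return doi_prefix != message["prefix"]

# A hypothetical work originally registered under 10.1111 but since
# transferred to the member that owns prefix 10.2222:
record = {"DOI": "10.1111/example.123", "prefix": "10.2222"}
print(transferred(record))  # True: the title has changed hands
```

A check like this is handy when auditing a newly acquired title: it flags which DOIs are already under your prefix and which are still associated with the previous publisher’s.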
Poppy, Technical Support Contractor
As a researcher myself, I’d like to talk about references in a journal article, book, conference paper, etc. (I’ll just use ‘article’ going forward for simplicity). These are the references an author includes in an article. References in one article become citations for another article. Citations are the thing every author dreams of, and accruing them can be a big deal for authors, journals, and publishers.
For readers, articles with no references can be less discoverable in systems that use citation links to determine relevance - and that discoverability is of critical importance, which is a key reason our members decide to register references with us. We all want your content to be shared, cited, linked, and used far and wide.
We receive many questions from authors asking why citations don’t show up; it’s usually because the metadata deposit included no references. There may be an assumption that our process is like Google Scholar’s, which crawls full text and websites. This misunderstanding has a big impact on references and citation counts. Because we do not store a copy of the paper, our intake system does not extract references from the article, regardless of whether those references have DOIs. This is one of the main reasons that Crossref citation counts are lower than those from services that use extraction methods. We only store the data that a publisher registers and maintains with us. On deposit of a metadata record that includes references, our system performs a matching process; if there is a match, a cited-by connection is added to the metadata. With deposits that include no references, however, there is no data to match to other articles - which limits discoverability and means no increase in cited-by counts.
An article with no references has big impacts for the authors, the journal, the publisher, researchers, and ultimately, the readers. It can mean decreased distribution of the content itself, reduced citation counts for the articles it cites, lower impact metrics for journals, and ultimately reduced value for publishers. For example, researchers simply exclude articles without references from scientometric analyses.
Our documentation on references includes the elements for both structured and unstructured data. Including the DOI in the structured data is best practice, as it provides a precise location with rich data for matching. If the matcher does not see a link between the deposited DOI and the cited DOI at the time of deposit, the references are stored and crawled by other matching algorithms later. So, we’re always working to create those rich cited-by linkages between works (raising the content’s profile and overall discoverability), no matter when you register reference metadata. You can also see how your publisher is doing on depositing references by viewing their Participation Report. If you are an author, you can check whether your registered DOIs contain any references by using our REST API. Don’t see them? You can always contact the editor of the journal or the publisher that published your paper and ask them to add the references. Didn’t hear back? Just drop us a line in the Community Forum; we’re happy to help.
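As a sketch of that REST API check, here’s how you might look at the reference list in a works record. The “reference” field is only present when the publisher deposited references (and made them open), and the DOI used below is a placeholder:

```python
# Sketch: checking whether a registered DOI's metadata includes references,
# using the shape of the public Crossref REST API response. The "reference"
# list appears only when references were deposited and are open.
import json
import urllib.parse
import urllib.request

def deposited_references(message: dict) -> list:
    """References the publisher registered; empty if none were deposited."""
    return message.get("reference", [])

def fetch_message(doi: str) -> dict:
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi, safe="")
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["message"]

if __name__ == "__main__":
    message = fetch_message("10.5555/12345678")  # placeholder DOI
    print(len(deposited_references(message)), "references deposited")
```

An empty list here is exactly the situation described above: nothing for our matcher to work with, and therefore no cited-by links from that record.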
Shayn, Technical Support Specialist
Let’s ‘zoom out’ to the big picture. What are DOIs for? What makes them useful? What are we all doing here anyway?!
There are a lot of different answers to those questions. It’s a complex picture. But, way back in the late ‘90s, the DOI system was designed to allow the creation of unique, persistent identifiers. Crossref members use these identifiers to represent their research outputs and publications. This allows for reliable linking to those items, and the ability to identify and communicate the relationships between them - notably (but not exclusively!) citation relationships.
So, what do we mean when we say that Crossref DOIs should be unique and persistent? In basic terms, unique means that there is only a single Crossref DOI registered for a given citable research output. And persistent means that the DOI associated with a given research output today will continue to be associated with, and link to, that same research output indefinitely into the future.
Yes, there are some grey areas, and we know that everything doesn’t always work 100% perfectly all the time. But, the more deviations from persistence and uniqueness, the harder it becomes for end-users, publishers, Crossref, and other services which make use of our metadata to reliably find research outputs and reliably relate them to one another. It weakens the value and utility of DOIs for everyone.
So, what does this mean in practice?
Be certain that every item you register with Crossref is something you can maintain in the long-term.
Have an arrangement with an archive that can take responsibility for your content if your organization stops hosting it or ceases to exist.
Don’t register things that you know will only exist for a short time.
When you’re about to register new content, be absolutely sure that it hasn’t been registered already, either by your organization or any other organization.
If you acquire a new journal from another publisher, have a process in place to check what content has already been registered and adopt the use of the DOIs registered by the prior publisher for that content. We can always provide a list of the existing DOIs for a journal.
If you publish books, and have a co-publishing agreement with another publisher, distributor, or hosting platform, be aware that one of those other parties may have already registered DOIs for your books. Adopt the use of those DOIs rather than assigning and registering new ones. And, if you don’t want them to do that going forward, communicate that to your co-publishing partners.
When mistakes happen, inadvertently resulting in duplicate DOIs for a single item, identify them quickly. Alias the new duplicate DOI to the long-standing original DOI, and remove all instances of the new DOI from your website or platform.
Ensure that your publishing software, platform, or journal management system can accommodate DOIs with various prefixes for the same publication. You should be able to use (display, link, update metadata and URLs for) the DOIs registered for older content by any prior publishers as easily as you use the DOIs that you registered yourself for more recent content.
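On the point about acquiring a journal: besides asking us for a list of existing DOIs, you can pull them yourself from the public REST API’s journals/{ISSN}/works route. A minimal sketch - the ISSN shown is just a placeholder, and deep paging over a large journal would use the API’s cursor parameter:

```python
# Sketch: listing the DOIs already registered for a journal via the public
# Crossref REST API (journals/{ISSN}/works with select/rows parameters).
# The ISSN below is a placeholder; use cursor paging for large journals.
import json
import urllib.parse
import urllib.request

def journal_works_url(issn: str, rows: int = 100) -> str:
    """Build a works query for one journal, returning only DOIs."""
    query = urllib.parse.urlencode({"select": "DOI", "rows": rows})
    return f"https://api.crossref.org/journals/{issn}/works?{query}"

def list_dois(issn: str, rows: int = 100) -> list:
    with urllib.request.urlopen(journal_works_url(issn, rows)) as resp:
        message = json.load(resp)["message"]
    return [item["DOI"] for item in message["items"]]

if __name__ == "__main__":
    print(list_dois("1234-5678", rows=5))  # placeholder ISSN
```

Comparing that list against your own records before registering anything new is a practical way to follow the advice above and avoid creating duplicates for content the previous publisher already registered.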
Things like persistence and uniqueness can sound like theoretical abstractions, but they actually play an important role in the day-to-day grind of your publishing operations. Their impact on linking, citing, discovery, and analysis of your content is concrete and important. Thus, it’s not surprising that we often hear from members and others in the research community who share this commitment to persistence, uniqueness, and overall rich, accurate metadata. You’ll see that play out in the Community Forum where members and users get involved to troubleshoot issues, compare notes, and share ideas with us and one another. We appreciate the commitment to the Research Nexus and the overall spirit to serve in this growing community. Like we said at the top, we’re all wired to contribute in this way, so building an open, welcoming space that moves us forward excites us.
Again, we invite you to join the discussion on this and many other Crossref-related topics over in our Community Forum.