Registering a DOI (Digital Object Identifier) for scholarly content is sometimes mistaken for conferring a seal of approval or other mark of quality on that content. This is a fundamental misunderstanding.
A DOI is a tool, not a badge of honor.
The presence of a Crossref DOI on content sends a signal that:
- The owner of the content would like to be formally cited if the content is used in a scholarly context.
- The owner of the content considers it worthy of being made persistent.
Beyond the DOI
For Crossref, a DOI is just one of several types of metadata we register, albeit an important one.
Metadata about scholarly works extends beyond the DOI. In addition to bibliographic details, layers of information accompanying published works may now extend to data that describes the research, such as the source of research funding. It may also include non-descriptive information that facilitates usage, such as copyright and access permissions.
In fact, this “richer” metadata can tell you more about the context of the content deposited for a published work than you might realize.
Author data - Crossref metadata may include information specifying the author’s unique ORCID, allowing you to find other works by the same person.
Copyright and access indicators - You can view the license terms under which the full content may be available, which is very helpful for scholars who want to access the full content for research and teaching or for text and data mining.
Funding data - Metadata may also include the identity of the grant-making institution that funded the research, so that the funder and, in the case of publicly funded research, the general public and other researchers, have visibility on the resulting research outputs.
Clinical Trials data - Similarly, when research involves a clinical trial (the testing of medicines and treatments on human beings), Crossref metadata can enhance output visibility by displaying the clinical trial number and the related clinical trial registry.
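To make this concrete, here is a minimal sketch of how a machine might pull those "richer" fields out of a work record. The record below is hand-constructed for illustration (the DOI, funder name, and trial number are placeholders, and the ORCID is the well-known Josiah Carberry test identifier); it only approximates the shape of the JSON that Crossref's public REST API returns for a work, and real records carry many more fields.

```python
import json

# Hypothetical sample record, shaped loosely like a Crossref works record.
# All values are placeholders for illustration only.
record = {
    "DOI": "10.5555/12345678",
    "author": [
        {"given": "Josiah", "family": "Carberry",
         "ORCID": "https://orcid.org/0000-0002-1825-0097"}
    ],
    "license": [
        {"URL": "https://creativecommons.org/licenses/by/4.0/"}
    ],
    "funder": [
        {"name": "Example Funding Agency", "award": ["ABC-123"]}
    ],
    "clinical-trial-number": [
        {"clinical-trial-number": "NCT00000000"}
    ],
}

def summarize(work: dict) -> dict:
    """Collect the author, license, funding, and trial metadata
    discussed above from a single work record."""
    return {
        "doi": work.get("DOI"),
        "orcids": [a["ORCID"] for a in work.get("author", []) if "ORCID" in a],
        "licenses": [lic["URL"] for lic in work.get("license", [])],
        "funders": [f["name"] for f in work.get("funder", [])],
        "trials": [t["clinical-trial-number"]
                   for t in work.get("clinical-trial-number", [])],
    }

print(json.dumps(summarize(record), indent=2))
```

Each list in the summary answers one of the questions above: who wrote this (and what else have they written), under what terms can I read it, who paid for it, and which trial does it report on.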
Like the full content they describe, these metadata have become research resources in their own right. Unfortunately, too much metadata is entered into Crossref with missing, incomplete, or duplicated fields. This “bad” metadata slows the pace of discovery, confounding attempts to find and understand scholarly content and its context.
As a community, we really need to do something about that.
“The Map is not the Territory”
And the metadata is not the content. In Metadata (MIT Press), Jeffrey Pomerantz quotes Alfred Korzybski’s insight that a map is a simplified representation of a territory, a tool of abstraction that allows us to find our way. Jennifer Lin contributed the concept of the scholarly road map as a useful metaphor for the way we use metadata about scholarly works to find our way between and among them in the digital world.
Metadata deposited with Crossref amounts to pieces of information (structured, descriptive, administrative, contextual) about published works that humans can read and machines can use to automate linking and retrieval. The systematic development of such metadata allows us to make sense of complex information by finding, linking, citing, and assessing scholarly content.
If you want to understand how Crossref acts as a map of scholarly metadata, try searching for content on search.crossref.org (the human-friendly interface to our metadata). Or simply talk with us @CrossrefOrg and via email@example.com.