
Most of us rely on vetted experts, brand names, seals of approval, and other signals of trust to help us decide on matters ranging from how to treat a dental abscess to which automobile is most fuel efficient. The resources needed to distinguish trustworthy scientific findings from those that are biased, irreproducible, or even fabricated are more elusive.
That’s a problem, because in this age of misinformation the ability to make such distinctions is essential: science bears on everyday decisions such as when to vaccinate your child or whether it is safe to eat genetically engineered foods.
We believe that scientists and the journals that publish their work should do more to clearly and consistently signal to one another — as well as to the public who rely on their findings — which studies have satisfied standards that convey trustworthiness.
Scientists have time-honored criteria for deciding which research results to trust. They look for work that has been assessed by independent, expert peer reviewers who have taken on the task of scrutinizing a study’s design and modeling, as well as how the authors collected, processed, and interpreted the data. Scientists also look for evidence that the research is open to independent verification and replication. This includes ready access to underlying data — with appropriate exceptions, such as to protect the privacy of human subjects — as well as to the study’s methods, computer code, and materials.
Not only are scientists more likely to trust research that is open to such forms of interrogation, but journals and funding agencies are increasingly requiring it.
Scientists are also more likely to trust findings when they are able to scrutinize factors that might have biased how the authors approached the problem or interpreted the data. Such an assessment requires that researchers disclose their sources of funding and all relationships and interests that have the potential to influence the results of the study.
The public intuitively seeks similar standards. In a recent Annenberg Public Policy Center survey, more than 6 in 10 respondents said that when deciding whether to accept a fresh scientific finding, they would want to know whether the authors had disclosed the identities of their funders. The public also values science’s culture of critique: more than half of those surveyed reported that they were more likely to trust a result that had been peer reviewed.
While far from fail-safe, such scrutiny is an important part of the scientific ethos, which presupposes that evidence be subject to independent examination. Of course, being transparent about research funders and surviving peer review do not guarantee that the findings of a research paper are robust, but a scholarly article that meets both standards is likely to be more trustworthy than one that does not.
It is sometimes difficult, however, to know which papers have met these criteria. With the rise of so-called predatory journals that charge authors for publishing without performing peer review, even scientists sometimes have difficulty determining whether research has been independently vetted. At the same time, journals inconsistently signal whether their authors have honored transparency requirements or whether compliance with them has been verified.
When journals mandate that competing interests be revealed, it isn’t necessarily clear which outside interests should be disclosed, or over what time period: Should only past relationships be disclosed? What about future ones? And recent news stories have illustrated how difficult it is for reviewers and editors to verify the accuracy of disclosures.
Along with our colleagues Veronique Kiermer and Richard Sever, we argue in a recent issue of Proceedings of the National Academy of Sciences that now is the time to more clearly signal the trustworthiness of individual scientific articles through the transparent use of checklists and “badges” to verify that a study has met certain standards of trustworthiness.
By describing the quality controls that a journal or publishing platform expects a study to have undergone before submission, a checklist strengthens the integrity of the science while also communicating to readers the standards for trustworthiness the submission had to meet. Such checklists should require that the researchers have confirmed the nature of each author’s contributions; disclosed all potentially biasing relationships; and complied with the archiving standards of their fields to ensure access to the data, materials, and computer code needed to replicate the work.

The capacity of a signal to communicate trustworthiness is enhanced if it is unambiguous, unavoidable, hard to counterfeit, delivered by a trusted source, and clear in meaning to its intended audience. To serve these ends, we support the use of badges, such as those supplied by the Center for Open Science, to telegraph the dimensions of trustworthiness a study has earned at each stage of publication.
The center’s current badges verify that a study complies with requirements for open data and materials, or that the experimental protocol was registered before the work was conducted, a safeguard against the temptation to alter a hypothesis to fit the outcome of the experiment. Other possible badges could indicate that a journal has checked the article for plagiarism or image manipulation, or has conducted an independent statistical review.
Forward and backward hyperlinks should be used to tie replication attempts to the original study, and studies that are successfully replicated should earn additional badges. Such signaling creates incentives to follow the practices the badges celebrate.
Signals that a finding is unreliable are valuable as well. Accordingly, tying notices of retraction to a study’s metadata is a powerful means of protecting scientific knowledge. We also urge that hyperlinks be used to tie editorial “expressions of concern” and other updates to suspect articles. Then a reader who accesses a study online would get not just the study as originally published but also an alert, such as a watermark, clearly showing that the paper has been retracted, along with a link to the retraction notice.
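Part of the infrastructure for this kind of linking already exists. As a rough illustration rather than a description of the authors’ proposal, the Python sketch below asks Crossref’s public REST API whether any retraction or correction notice has been deposited against a given DOI; the “updated-by” Crossmark field appears only where a publisher has deposited the link, and the example DOI is hypothetical.

```python
import requests

CROSSREF_WORKS = "https://api.crossref.org/works/{doi}"

def linked_updates(doi: str) -> list:
    """Return retraction/correction notices recorded in a work's metadata."""
    resp = requests.get(CROSSREF_WORKS.format(doi=doi), timeout=10)
    resp.raise_for_status()
    work = resp.json()["message"]
    # "updated-by" is Crossmark metadata: later items (retractions,
    # corrections, expressions of concern) that point back at this article,
    # i.e., the backward hyperlink. It is present only where the publisher
    # has deposited the update.
    return work.get("updated-by", [])

if __name__ == "__main__":
    # Hypothetical DOI, for illustration only.
    for notice in linked_updates("10.1234/example.5678"):
        print(notice.get("type"), "->", notice.get("DOI"))
```

A reading tool that ran this kind of check on page load could render exactly the watermark-plus-link alert described above.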
As scientists, we and our colleagues are dismayed when members of the public are misled by discredited studies or don’t know how to identify trustworthy ones. It is about time for the scientific community to address this problem with consistent and meaningful signals showing which studies honor the norms that sustain trust.
Marcia McNutt is the president of the National Academy of Sciences. Kathleen Hall Jamieson is director of the Annenberg Public Policy Center at the University of Pennsylvania.
If I had the means, I would create a website dedicated to widespread post-publication review of scientific articles: a TrueReview.org, where verified reviewers (through institutional email, credit card, or proof of ID) could anonymously review published articles, assigning a compound score on scientific merit, experimental robustness, and so on, and possibly reproducibility as well. Such scores could be reinforced by a trustworthiness index based on the number of scores received. As on shopping websites, the higher the number of reviews, the more confident you can be in the assessed quality of a product. Additional checks could be implemented to prevent self-reviewing and self-promotion of articles, although the higher the number of reviewers, the lower the weight this kind of interference would carry.
In this way, published articles would be subjected to a vastly more robust peer-review process by the scientific community at large, drawing on hundreds or even thousands of reviews. It would also weed out, through a combination of low scores and a low trustworthiness index, the large amount of rubbish that is published daily in scientific journals, including the famous and highly regarded ones.
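One common way to implement the trustworthiness index this comment describes is a Bayesian (damped) average, the technique many shopping sites use: an article’s mean score is pulled toward a global prior until enough independent reviews accumulate. A minimal sketch, assuming a five-point scale; the function name and the prior parameters are illustrative choices, not part of any existing system.

```python
def trust_weighted_score(scores: list,
                         prior_mean: float = 3.0,
                         prior_weight: int = 20) -> float:
    """Mean review score, shrunk toward prior_mean when reviews are few.

    prior_weight acts like `prior_weight` phantom reviews at prior_mean,
    so a handful of real reviews cannot dominate the score.
    """
    n = len(scores)
    return (prior_weight * prior_mean + sum(scores)) / (prior_weight + n)

# Three glowing self-reviews barely move the needle...
print(trust_weighted_score([5.0, 5.0, 5.0]))   # ~3.26
# ...but two hundred independent 5s dominate the prior.
print(trust_weighted_score([5.0] * 200))       # ~4.82
```

This also captures the comment’s last point: self-promotion carries little weight precisely because few reviews move a damped average very little.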
I use pubpeer.com for some of these functions. They have a nice browser plugin that overlays journal article pages with user comments and criticisms associated with the article. Most articles do not have comments; however, PubPeer users are quite astute at finding duplicated figures and manipulated data.
PubMed used to have a comment feature, PubMed Commons, but it was discontinued.