I had the pleasure recently of writing an article with my respected colleague and Associate Editor for the Journal of Nursing Education, Dr. Teri A. Murray, on identifying the current state of considering diversity, equity, and inclusion contributions as part of academic appointment and review (Noone & Murray, 2023). In that article, we discussed the importance of expanding definitions and reconsidering metrics for evaluating academic scholarship. As always, I learned far more about the topic through writing than I knew when I began.

Although many educators, researchers, and institutions may have adopted current best practices in the evaluation of research and scholarship, I have also seen recent evidence that outdated notions of research quality, such as the journal impact factor (JIF), persist in nursing, both in nominations for research awards in a regional research society and in documents related to academic review. Look at most nursing journals' webpages and you will find the JIF prominently displayed and updated annually. Yet the JIF was never meant to be an indicator of research quality; it was developed to aid librarians in making decisions about library journal purchases (DORA, n.d.). A recent survey of academic review documents across a representative sample of universities indicates that many universities still equate the JIF with research quality (McKiernan et al., 2019); in fact, 40% of the research-intensive universities surveyed mentioned the JIF in their academic review documents. A recent analysis of over 45,000 research studies concluded that citation counts and the JIF are weak and inconsistent predictors of research quality (Dougherty & Horne, 2022).
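For readers unfamiliar with how the metric is derived, the JIF is a journal-level ratio: a journal's JIF for a given year is the number of citations received that year by items the journal published in the previous 2 years, divided by the number of citable items it published in those 2 years. As a hypothetical illustration, a journal that published 200 citable items in 2021 and 2022 combined, and whose items received 600 citations in 2023, would have a 2023 JIF of 600/200 = 3.0. Nothing in that calculation reflects the quality, rigor, or impact of any individual article.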
In 2012, journal publishers and editors wrote the San Francisco Declaration on Research Assessment (DORA, n.d.), which recommends using metrics other than the JIF to evaluate research, such as the impact of individual research on policy and practice. It is time to move away from evaluating the quality of research based on the journal in which it was published and instead evaluate the impact of the research itself. Several frameworks have been suggested for this evaluation.
Belcher and colleagues (2016) conducted a systematic review of articles that proposed or evaluated criteria for research quality. They concluded that there are four general principles to consider when evaluating research studies: (1) relevance, or the importance of the research to the problem under study or to society; (2) credibility, or robust data, methods, and findings leading to a logical interpretation; (3) legitimacy, or a fair and ethical research process; and (4) effectiveness, or research that generates knowledge contributing to a solution of the problem.
The Becker Medical Library Model for Assessment of Research Impact is another recommended framework. This framework evaluates research impact, activities, and output according to advancement of knowledge, clinical implementation, legislation and policy, and economic and community impact (Washington University School of Medicine, 2018).
What can you do to promote this reimagination? First, explore the DORA website and have conversations with your librarian and those who organize your academic review criteria about best practices for evaluating research. Ensure those who participate in reviewing candidates for appointment, promotion, and tenure are included in this conversation. Consider moving from a focus on journal metrics to a focus on article metrics, such as Altmetrics (alternative metrics), or author metrics, such as the h-index. Altmetrics track the online activity of an article, such as how many times it was downloaded, shared on social media, and mentioned in policy reports or news outlets. The h-index measures the citation impact and productivity of an author, although it may not be a reliable metric for early career researchers. Google Scholar also reports the i10-index, or the number of papers that have at least 10 citations; the short example after this paragraph illustrates how both author indices are calculated. Another author metric to assess impact is the Relative Citation Ratio (National Institutes of Health, n.d.), which measures the citations for each article relative to standards within the field in which the paper is published. Consider adopting a research impact model to support faculty in documenting the impact of their scholarly work. Additionally, if your library does not have a LibGuide on the topic, ask for one to be developed or review ones that already exist. Two LibGuides that I have found helpful are from The Ohio State University Health Sciences Library (2023) and the University of Washington Health Sciences Library (2023), both of which discuss impact factors and ways to document research and scholarly impact. Lastly, explore whether there are resources within your university to support and recognize faculty research expertise, such as Pure (Elsevier, 2023), a research information management system that can provide universities with a searchable database of the research activity and external collaborations of faculty investigators, programs, and departments.
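To make these two author metrics concrete, here is a minimal sketch of how each is computed; the citation counts are hypothetical and do not correspond to any real author's profile:

```python
# Minimal sketch: computing an author's h-index and i10-index
# from a list of per-paper citation counts (hypothetical data).

def h_index(citations):
    """Largest h such that the author has h papers with at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still has at least 'rank' citations
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations (as reported by Google Scholar)."""
    return sum(1 for cites in citations if cites >= 10)

papers = [52, 31, 18, 12, 9, 6, 4, 2, 1, 0]  # hypothetical citation counts
print(h_index(papers))    # 6: six papers have at least 6 citations each
print(i10_index(papers))  # 4: four papers have at least 10 citations
```

As the example shows, both indices depend only on an author's distribution of citations, which is why the h-index tends to understate the contributions of early career researchers who have not yet accumulated citations.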
What can you do to elevate and promote your own research and scholarly expertise? First, make sure your bibliographies in Google Scholar, PubMed, and other databases are accurate. Then, model in your professional documents and websites how to use author and article metrics rather than journal metrics. Another way to accentuate your research, identify you and your scholarly contributions, and document your contributions as a journal peer reviewer is to set up an account with ORCID (https://orcid.org/), which provides an Open Researcher and Contributor iD.
We need to move beyond proxy metrics, such as the JIF, to measure nursing's true impact on health outcomes. To do that, we need to advocate for and promote best practices in the evaluation of research and scholarly impact, locally within our academic settings as well as regionally and nationally.
Joanne Noone, PhD, RN, CNE, ANEF, FAAN
Professor and Director, Master's in Nursing Education Program
A.B. Youmans Spaulding Distinguished Professor
Oregon Health & Science University School of Nursing
Editorial Board Member, Journal of Nursing Education
References
- Belcher, B. M., Rasmussen, K. E., Kemshaw, M. R., & Zornes, D. A. (2016). Defining and assessing research quality in a transdisciplinary context. Research Evaluation, 25(1), 1–17.
- Dougherty, M. R., & Horne, Z. (2022). Citation counts and journal impact factors do not capture some indicators of research quality in the behavioural and brain sciences. Royal Society Open Science, 9(8), 220334.
- DORA. (n.d.). San Francisco Declaration on Research Assessment. https://sfdora.org/read/
- Elsevier. (2023). Pure. https://www.elsevier.com/solutions/pure
- McKiernan, E. C., Schimanski, L. A., Muñoz Nieves, C., Matthias, L., Niles, M. T., & Alperin, J. P. (2019). Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations. eLife, 8, e47338.
- National Institutes of Health. (n.d.). iCite. https://icite.od.nih.gov/
- Noone, J., & Murray, T. A. (2023). Addressing diversity, equity, and inclusivity contributions in academic review. Nurse Educator, 49(1), 25–30.
- The Ohio State University Health Sciences Library. (2023). Measuring scholarly impact. https://hslguides.osu.edu/researchimpact
- University of Washington Health Sciences Library. (2023). Impact factors. https://guides.lib.uw.edu/hsl/impactfactors
- Washington University School of Medicine. (2018). Assessing the impact of research. https://becker.wustl.edu/impact-assessment/