
Measuring Research Impact

Traditional Citation Metrics

Traditional metrics generally focus on the citation as the key measurement of impact. Impact can be measured at the journal level, the article level, or the author level. If an article or other work of intellectual output has been cited by another, the assumption is that the citation is a sign of the influence of one author's work on another.

Still, questions remain as to why and when research is cited. In a 1965 article, Norman Kaplan asks: 

"How often are the works of others cited without having read them carefully? How often are citations simply lifted from the bibliography in someone else's work without either reading or giving credit to the man who did the original search of the literature? How often are citations tacked on after the paper is completed as an afterthought and window dressing?"¹

Haustein, Bowman, & Costas (2015) bring similar questions to bear on altmetrics, prompting us to consider why a researcher might choose to save, mention, review, or cite an article online.²  

Regardless of why a particular piece of research is cited, in academia the citation is often still the most prominent mark of scholarly impact. Researchers often look to publish in journals with high citation-based rankings, and the institutions of higher education they work for often look to citation metrics when reviewing researchers' work for tenure and promotion. Citation metrics can also influence how universities are perceived by their peers and by grant-funding agencies. 

The tabs in this section of the guide will review common traditional citation metrics. 


¹ Kaplan, N. (1965). The norms of citation behavior: Prolegomena to the footnote. American Documentation, 16(3), 179. 

² Haustein, S., Bowman, T. D., & Costas, R. (2015). Interpreting "altmetrics": Viewing acts on social media through the lens of citation and social theories. Preprint arXiv:1502.05701. Published version in Sugimoto, C. R. (Ed.). (2016). Theories of Informetrics and Scholarly Communication. De Gruyter. https://ebookcentral.proquest.com/lib/uva/detail.action?docID=4426417.

Journal metrics use citations to attempt to measure the impact of a particular journal during a given year. Three journal metrics - Impact Factor, Eigenfactor Score, and Article Influence Score - are introduced below. Although some of the calculations are based on the "average article" in a journal, the metrics are still focused on ranking impact at the journal level rather than the impact of any specific article within that journal. Other example metrics used to compare journals are listed at the bottom of this tab.

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Impact Factor (IF), also known as Journal Impact Factor (JIF), is a citation metric that was devised in the 1960s by Eugene Garfield and Irving H. Sher to help them select journals for the Science Citation Index. Impact factors are calculated by Clarivate Analytics (formerly the Institute for Scientific Information, ISI) and reported in the Journal Citation Reports (JCR).

NOTE: The Impact Factor is meant to measure a journal's impact (not a particular article's impact or an author's impact). IF is calculated independently of the size of a journal, so it allows researchers to compare journals regardless of size or publication frequency.

How Impact Factor is calculated

The impact factor of a journal "is a measure of the frequency with which the 'average article' in a journal has been cited in a particular year or period."¹ A journal's IF for a given year is calculated by dividing the number of citations received that year by articles published in the two previous years by the number of articles published in those two years.

To calculate a journal's IF for 2017, for instance, you would use the following equation:

# of citations in 2017 to articles published in 2016  +  # of citations in 2017 to articles published in 2015
____________________________________________________________________________
# of articles published in 2016  +  # of articles published in 2015
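
As a concrete illustration, here is a minimal Python sketch of the same calculation, using invented citation and article counts rather than real JCR data:

```python
# Hypothetical counts for an imaginary journal (not taken from JCR).
citations_2017_to_2016_articles = 320
citations_2017_to_2015_articles = 280
articles_published_2016 = 150
articles_published_2015 = 140

# 2017 Impact Factor: citations in 2017 to the two previous years' articles,
# divided by the number of articles published in those two years.
impact_factor_2017 = (
    (citations_2017_to_2016_articles + citations_2017_to_2015_articles)
    / (articles_published_2016 + articles_published_2015)
)

print(round(impact_factor_2017, 3))  # 600 / 290 ≈ 2.069
```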

Advantages of the Impact Factor metric

  • If small review journals or specialty journals were compared with their larger peers within a discipline based only on the number of articles published or the number of citations attained, they would suffer lower rankings. Impact factor levels the playing field for journals within a discipline. 
  • Impact factor has a long history and is a well-known standard within the sciences. 

Critiques of the Impact Factor metric

  • Impact Factors vary widely by discipline. Journals in JCR's Cell & Tissue Engineering category, for example, have a 2017 median impact factor of 3.560, while the journals grouped in the category of Mathematical and Computational Biology have a 2017 median impact factor of 1.619. Some disciplines publish more and cite at a higher rate than others. Such wide variation makes comparing IFs across disciplines very difficult.
  • Only science and social science journals indexed in Web of Science are used in calculating impact factors. Humanities disciplines are not included. Citations in books (and of books) are also not measured.
  • Some critics have pointed out that IF only takes into account citations from the last two years. For researchers who want a longer period for citations, JCR also has available 5 Year Impact Factors.
  • Others worry that researchers can game the system by citing themselves, thus increasing the number of times their works have been cited. In response, Clarivate Analytics also makes available an Impact Factor without Journal Self Cites.
  • Critics have also pointed out that whether a journal piece is considered an article (vs. a review or other type of piece not counted in calculating a journal's impact factor) is subjective. Clarivate Analytics makes those decisions.
  • Like other traditional citation metrics, IF does not take into account impact of a journal beyond citations in other scholarly works. In 2006, the editors of PLoS Medicine wrote a critical piece on what they entitled "The Impact Factor Game."  It notes, in part: "[A] journal's impact factor says nothing at all about how well read and discussed the journal is outside the core scientific community or whether it influences health policy. For a journal such as PLoS Medicine, which strives to make our open-access content reach the widest possible audience—such as patients, health policy makers, non-governmental organizations, and school teachers—impact factor is a poor measure of overall impact."²

Suggested Further Reading

Garfield, E. (2005). The Agony and the ecstasy—The history and meaning of the journal impact factor. International Congress on Peer Review And Biomedical Publication. Chicago, IL, Sept. 16, 2005. http://garfield.library.upenn.edu/papers/jifchicago2005.pdf

¹ "The Clarivate Analytics Impact Factor." Clarivate. Retrieved 11 March 2019. https://clarivate.com/essays/impact-factor/ 

²The PLoS Medicine Editors (2006) The impact factor game. PLOS Medicine 3(6): e291. https://doi.org/10.1371/journal.pmed.0030291

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Eigenfactor Score

In 2007, Carl Bergstrom and Jevin West introduced the Eigenfactor score as a citation metric that measures a journal's "total importance to the scientific community."¹ Like the Impact Factor, the Eigenfactor score is calculated using journals that are indexed in Web of Science. Eigenfactor scores can be searched in Journal Citation Reports (JCR).

Key Characteristics of the Eigenfactor Score

  • Takes into account the citation network of articles (similar to Google's PageRank).
  • Uses 5 years of citation data (as opposed to the 2-year window used by the Impact Factor).
  • The size of a journal matters -- the more articles published in a journal, the higher the Eigenfactor score for that journal.
  • All Eigenfactor scores in JCR added together equal 100. 

Understanding a Journal's Eigenfactor Score

  • A journal's Eigenfactor score denotes the percentage of influence that journal has compared to other indexed publications. For example, in JCR, the highest 2017 Eigenfactor score belongs to the journal PLoS One. Its Eigenfactor score for 2017 is 1.86200. We can understand that to mean that PLoS One has 1.862% of the total influence of all JCR-indexed publications. 
  • With all of the scores so low (the highest, as noted, being 1.862), it can be confusing to make comparisons. Therefore, a Normalized Eigenfactor is also available, which rescales the scores so that the average journal has a score of 1. PLoS One has a Normalized Eigenfactor of 217.41500, which means it has about 217.4 times the influence of the average journal in the JCR (see the sketch below).
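
Because all Eigenfactor scores in JCR sum to 100, the average journal's score is 100 divided by the number of indexed journals, and a journal's Normalized Eigenfactor is its score divided by that average. The short Python sketch below illustrates the arithmetic; the journal count is an assumption chosen for the example, so the result only roughly matches the published 217.4 figure:

```python
# Assumed number of JCR-indexed journals, for illustration only; the real
# count for a given year comes from JCR itself.
total_journals = 11_700

# PLoS One's 2017 Eigenfactor score, as quoted above.
plos_one_eigenfactor = 1.86200

# All Eigenfactor scores sum to 100, so the average journal's score is:
average_score = 100 / total_journals

# Normalized Eigenfactor: influence relative to the average journal.
normalized_eigenfactor = plos_one_eigenfactor / average_score
print(round(normalized_eigenfactor, 1))  # ≈ 217.9 with this assumed journal count
```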

Suggested Further Reading

About the Eigenfactor project. Eigenfactor.org. http://eigenfactor.org/about.php

¹ About the Eigenfactor project. Eigenfactor.org. http://eigenfactor.org/about.php

____________________________________________________________________________________________________________________

Article Influence Score 

The Eigenfactor Project, which created the Eigenfactor Score, also has calculated a metric called the Article Influence Score. The Article Influence Score "measures the average influence, per article, of the papers in a journal,"²  making it comparable to the Impact Factor.  

Key Characteristics and Understanding the Article Influence Score 

  • The Article Influence Score for a journal is calculated using 5 years of citation data.
  • Article Influence Scores are normalized so that the mean article in the JCR has an article influence of 1.00.
  • CA: A Cancer Journal for Clinicians has the highest 2017 Article Influence Score, at 41.156. We can understand this to mean that the average article in CA has about 41 times the influence of the mean article in the JCR (see the sketch below). 
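
For readers who want to see how the normalization works, the Eigenfactor/JCR documentation describes the Article Influence Score as 0.01 times a journal's Eigenfactor score divided by the journal's share of all indexed articles over the 5-year window. The Python sketch below uses that formulation with invented counts; both the numbers and the exact formula should be treated as illustrative assumptions rather than a reproduction of JCR's calculation:

```python
# Invented figures for an imaginary journal (not real JCR data).
journal_eigenfactor = 0.05000    # the journal's Eigenfactor score
journal_articles_5yr = 2_000     # articles the journal published over 5 years
all_articles_5yr = 8_000_000     # articles across all indexed journals over 5 years

# The journal's share of all indexed articles over the 5-year window.
article_share = journal_articles_5yr / all_articles_5yr

# Article Influence Score: 0.01 * Eigenfactor score / article share.
# A score of 1.00 means the journal's average article has the same
# influence as the mean article in the JCR.
article_influence = 0.01 * journal_eigenfactor / article_share
print(article_influence)  # 2.0 -> the average article has twice the mean influence
```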

Suggested Further Reading

About the Eigenfactor project. Eigenfactor.org. http://eigenfactor.org/about.php

² About the Eigenfactor project. Eigenfactor.org. http://eigenfactor.org/about.php

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Other Journal-Level Metrics 

The following is a list of examples of other citation metrics that are currently being used or that have been suggested, along with links to sites or articles where you can find out more.

Cabell's Classification Index (CCI) & Difficulty of Acceptance (DA) https://www2.cabells.com/metrics 

CiteScore (Scopus) https://journalmetrics.scopus.com/index.php/Faqs 

SCImago Journal Rank https://www.scimagojr.com/aboutus.php 

Y Factor (combination of impact factor and PageRank) https://arxiv.org/abs/cs/0601030

Article metrics use citations to attempt to measure the impact of a particular article over time. Dimensions Badges are a relatively new way to measure article impact and are introduced below.  

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Beginning in 2018, Dimensions badges have offered citation metrics for publications (the Dimensions badges complement the altmetrics found in Altmetric donut badges, which measure online activity). Created by Digital Science, whose portfolio also includes the company Altmetric, Dimensions badges and their accompanying metrics are article-level citation metrics, as opposed to journal-level metrics like IF and Eigenfactor or author-level metrics such as the h-index. Sometimes Dimensions metrics are calculated for books rather than articles.

The number in the middle of the Dimensions badge indicates the number of times that the article has been cited by other publications indexed in the Dimensions database (as of April 2019, Dimensions claims to have "more than 96m articles/books/chapters in the Dimensions index and we are continually adding more" -- Dimensions Support Center). Other metrics currently included in the Dimensions citation metrics include:

  • Recent citations - citations received within the last two years
  • Field Citation Ratio (FCR) - "the relative citation performance of an article, when compared to similarly-aged articles in its Field of Research subject area."¹ 
  • Relative Citation Ratio (RCR) - "the relative citation performance of the article when compared to other articles in its area of research"²

Like the Altmetric badge, the Dimensions badge has an alternate icon that may also be used to indicate that Dimensions citation metrics are included. Here is an example from the University of Michigan Press:


Example book record from University of Michigan Press showing Dimensions icon.

Dimensions offers other content types in addition to publication information: grants, patents, clinical trials, and policy documents. However, at this time only the Publications information is freely available - other content is restricted to organizational subscriptions.  Find out more about the Dimensions badge for citations at https://dimensions.freshdesk.com/support/solutions/articles/23000012817-what-is-the-dimensions-badge-details-page- and read more about Dimensions at  https://www.dimensions.ai/ and  https://dimensions.freshdesk.com/support/solutions

 

¹ ²  "What is the Dimensions Badge Details Page?" Dimensions Support Home. Accessed 9 April 2019. https://dimensions.freshdesk.com/support/solutions/articles/23000012817-what-is-the-dimensions-badge-details-page- 

Author metrics use citations to attempt to measure the impact of a researcher's scholarship or productivity. The h-index is introduced below, followed by examples of other author-level metrics at the bottom of this tab. 
____________________________________________________________________________________________________________________

The h-index, short for Hirsch index, is a measure of author impact rather than journal or article impact (although some vendors have adapted it to measure journal impact as well). Jorge E. Hirsch, professor of physics at the University of California San Diego, proposed the h-index in 2005.

How h-index is calculated

In a 2005 article in PNAS (Proceedings of the National Academy of Sciences of the United States of America), Hirsch proposed the h-index "defined as the number of papers with citation number ≥h, as a useful index to characterize the scientific output of a researcher."¹ He explains how h-index is calculated:

"A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np – h) papers have ≤h citations each."²

In other words, a scientist has an index of h if he or she has written at least h papers that have each been cited at least h times. An author's h-index therefore cannot be higher than the citation count of their most-cited paper, nor can it be higher than the total number of papers they have published.

Here are a few h-index examples to help explain further: 

1. Researcher A has written 1 paper that was cited once. Researcher A h-index = 1

2. Researcher B has written 1 paper that was cited once and a second paper that was cited 10 times. Researcher B h-index = 1

3. Researcher C has written 5 papers. The first was cited 10 times, the second 6 times, the third 15 times, the fourth 11 times, and the fifth 9 times. Researcher C h-index = 5 
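
The definition translates directly into a few lines of code. Here is a minimal Python sketch that reproduces the three examples above:

```python
def h_index(citation_counts):
    """Return the h-index for a list of per-paper citation counts."""
    # Sort citation counts from highest to lowest, then find the largest
    # rank h at which the h-th paper still has at least h citations.
    sorted_counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(sorted_counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

print(h_index([1]))                 # Researcher A -> 1
print(h_index([1, 10]))             # Researcher B -> 1
print(h_index([10, 6, 15, 11, 9]))  # Researcher C -> 5
```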

Advantages of the h-index metric

  • Unlike impact factor and the Eigenfactor metrics, h-index attempts to measure an author's relevance and impact.
  • An h-index is available for authors in a variety of databases, including Web of Science, Scopus, and Google Scholar. (Note that the h-index reported for a researcher in one database may not match the h-index for the same person in another, since different databases may index different papers by that author). 

Critiques of the h-index metric

  • As with many other metrics, comparing the h-index for researchers in different disciplines is difficult, because citation frequency varies widely. 
  • The h-index favors researchers who have been in a field longer (and thus had more time to publish).
  • The h-index is based only on a researcher's articles, and weighs single- and multi-authored articles the same. 
  • As noted above, h-index varies by database. One example researcher had an h-index of 80 according to Google Scholar, but only an h-index of 56 according to Web of Science. 

Suggested Further Reading

Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. PNAS 102(46): 16569-16572. https://doi.org/10.1073/pnas.0507655102 

Impactstory Team. (2014). Four great reasons to stop caring so much about the h-index. Impactstory blog. http://blog.impactstory.org/four-great-reasons-to-stop-caring-so-much-about-the-h-index/

¹ ² Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. PNAS 102(46): 16569-16572. https://doi.org/10.1073/pnas.0507655102 

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Other Author Metrics 

The following is a list of examples of other citation metrics that are currently being used or that have been suggested, along with links to sites or articles where you can find out more.

The g-index was introduced by Leo Egghe in 2006 as an alternative to the h-index. An author's g-index is the largest number g such that their g most highly cited papers have received, between them, at least g² citations; this gives more weight to highly cited papers. An author's g-index is always equal to or higher than their h-index (see the sketch below).

The i-10 index, introduced by Google in 2011, is the number of articles with at least 10 citations in Google Scholar. The figure is calculated as part of one's Google Scholar profile but has little applicability outside that context.  
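
As a rough illustration of how these two variants behave, the Python sketch below computes a g-index and an i10-index from a list of per-paper citation counts. Note that this sketch caps the g-index at the actual number of papers, whereas some formulations allow it to go higher by padding the list with uncited "fictitious" papers:

```python
def g_index(citation_counts):
    """Largest g such that the top g papers have at least g^2 citations in total."""
    sorted_counts = sorted(citation_counts, reverse=True)
    running_total, g = 0, 0
    for rank, citations in enumerate(sorted_counts, start=1):
        running_total += citations
        if running_total >= rank ** 2:
            g = rank
    return g

def i10_index(citation_counts):
    """Number of papers with at least 10 citations (Google Scholar's i10-index)."""
    return sum(1 for citations in citation_counts if citations >= 10)

counts = [10, 6, 15, 11, 9]  # Researcher C from the h-index tab (h-index = 5)
print(g_index(counts))       # 5 -- never lower than the h-index
print(i10_index(counts))     # 3 papers with at least 10 citations
```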

  • Web of Science (UVA subscription) is a citation database that allows you to create a Citation Report and calculate an h-index.
  • Google Scholar (free) allows scholars to create a profile that calculates h-index, i-10 index, and total number of citations, for the past 5 years and for all available dates. Google Scholar Citations also allows authors to keep track of citations to their articles.
  • Publish or Perish (free) draws from Google Scholar citations to calculate a scholar's h-index along with variants such as g-index and e-index. The software program must be downloaded and installed.
  • Scopus is a large abstract and citation database that provides traditional citation metrics, along with altmetric data, in an effort to provide comprehensive article-level metrics that demonstrate the impact and reach of a particular work.