
This month brings the customary June release of the annual update to Journal Citation Reports™ (JCR), the longstanding resource for journal evaluation. Published annually for the last four decades, JCR listings are based on citations recorded in the Web of Science™; put another way, they reflect the collective judgment of scientists and scholars themselves as they select the journal literature they deem most significant and useful to their ongoing research.

Although JCR offers a suite of evaluative tools, the most scrutiny has inevitably fallen on one of its indicators, the Journal Impact Factor. Indeed, following their publication in JCR, Journal Impact Factors often take on a life of their own, beyond the control of Thomson Reuters, sometimes leading to misinterpretation and misuse of the data.

One Measurement Out of Several

First made available to the public in 1975, Journal Impact Factor figures have been a source of controversy for almost as long as they have been published. In a 2005 article, Dr. Eugene Garfield, creator of what is now the Web of Science and co-inventor of the Journal Impact Factor, discussed the measure’s long history and the misconceptions arising from its improper use.

Throughout the JCR’s existence, Thomson Reuters has not swerved from its initial stance: the Journal Impact Factor is a specific measurement, a ratio between citations received by a given journal and the number of citable items (confined to substantive reports of scientific or scholarly results) published by the journal during the same time period. In other words, it is a measure of the frequency with which the average article in a given journal has been cited.
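In concrete terms, the ratio behind the standard score can be sketched as follows. The standard Journal Impact Factor uses a two-year window: citations in the JCR year to items published in the two preceding years, divided by the citable items published in those years. All figures below are hypothetical.

```python
# A sketch of the standard two-year Journal Impact Factor ratio.
# All figures are hypothetical.

def impact_factor(citations, citable_items):
    """Citations received in the JCR year by items published in the two
    preceding years, divided by the citable items published in those years."""
    return citations / citable_items

# A journal whose 2014-2015 output drew 1,200 citations in 2016, having
# published 400 citable items across 2014-2015, scores 3.0:
print(impact_factor(1200, 400))  # 3.0
```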

Originally designed as a tool to help librarians manage their collections and acquisitions, the Journal Impact Factor serves best in such a carefully defined role: pointing to journals that, according to citations, wield significant impact in their respective specialty areas. This information is of use not only to librarians but to researchers seeking the most visible and prestigious journals in which to publish, and historians and sociologists of science who are interested in publication patterns and the concentration and diffusion of knowledge.

What the Journal Impact Factor is not is a measure of a specific paper, or any kind of proxy or substitute metric that automatically confers standing on an individual or institution that may have published in a given journal.

It bears repeating that the Journal Impact Factor is a single measurement that must not be considered in isolation, but which must instead be weighed in the context of the specific specialty area and other factors. For example, citation patterns differ by field: in one subject area, citations might accrue comparatively rapidly, while in other fields a paper might require several years to accumulate citations. Therefore, the basic Journal Impact Factor score is not a useful measurement by which to compare journals in different fields. Fortunately, the JCR now offers several refinements and advanced metrics that provide a more detailed view of journal impact.

Additional Measures

In addition to the original two-year measurement reflected in the standard Journal Impact Factor, JCR includes a five-year figure, providing a fuller, more retrospective view of a journal’s impact.

A different aspect is captured by another JCR metric, the Eigenfactor® score. As noted in a recent State of Innovation article, the Eigenfactor score reflects citations to a given journal, but also factors in the journals in which those citations were recorded: citations from influential (i.e., highly cited) journals carry more weight toward a higher Eigenfactor score. The standard Eigenfactor score represents the percentage of time that a hypothetical researcher in a hypothetical library, endlessly following the network of citations through the literature, would spend with a given journal. Eigenfactor scores are scaled so that the scores of all 11,000-plus journals covered in JCR sum to 100; each journal’s score is thus its share of that total.
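The "random reader" model behind the Eigenfactor score can be illustrated with a toy power iteration over a made-up three-journal citation network. This is only a sketch of the principle, not the actual Eigenfactor algorithm, which differs in important details (for example, it excludes journal self-citations and incorporates article counts).

```python
import numpy as np

# Toy power-iteration sketch of the "random reader" behind the Eigenfactor
# score. The reader repeatedly follows citations from journal to journal;
# the long-run share of time spent at each journal is its score. This is
# NOT the exact Eigenfactor algorithm (which, among other refinements,
# excludes journal self-citations and incorporates article counts).

# Hypothetical citation matrix: C[i, j] = citations from journal j to journal i.
C = np.array([[0., 3., 5.],
              [2., 0., 1.],
              [4., 6., 0.]])

# Column-normalize so each column gives the probability of the reader's
# next destination when leaving journal j.
P = C / C.sum(axis=0)

# Power iteration: converge to the stationary distribution of the walk.
v = np.full(3, 1 / 3)
for _ in range(200):
    v = P @ v

scores = 100 * v / v.sum()  # scaled so all scores sum to 100, as in JCR
print(scores.round(2))
```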

A recent refinement added to the Eigenfactor score, and also included in JCR, is a “normalized” Eigenfactor figure, which compares the influence of journals against a benchmark of 1.00. By this measure, for example, a journal’s score of 2.00 would denote an influence that is twice the world average.

To gauge the influence of journals on a per-article basis, JCR also provides the Article Influence® Score. Via a calculation that draws on a journal’s Eigenfactor score and its overall article count, article influence is compared against a mean Article Influence benchmark set at 1.00. As with the normalized Eigenfactor score, a mark of 3.00, for example, indicates that articles published in the journal are, on average, three times as influential as the average article.
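Both benchmark-of-1.00 measures described above amount to simple arithmetic. The sketch below uses made-up Eigenfactor-style scores and article counts; the exact formulas are illustrative assumptions rather than JCR's published computation.

```python
# Illustrative arithmetic for the two benchmark-of-1.00 measures above.
# All journal names and figures are hypothetical; the exact formulas JCR
# uses may differ in detail.

eigenfactor = {"A": 0.040, "B": 0.010, "C": 0.004}  # raw Eigenfactor-style scores
articles = {"A": 400, "B": 100, "C": 50}            # citable items per journal

# Normalized-Eigenfactor-style score: rescale so the average journal is 1.00.
mean_score = sum(eigenfactor.values()) / len(eigenfactor)
normalized = {j: s / mean_score for j, s in eigenfactor.items()}
# Journal A, at roughly 2.2, would read as about twice the average influence.

# Article-Influence-style score: a journal's share of total influence
# divided by its share of total articles, so that the article-weighted
# mean lands at the 1.00 benchmark.
total_influence = sum(eigenfactor.values())
total_articles = sum(articles.values())
article_influence = {
    j: (eigenfactor[j] / total_influence) / (articles[j] / total_articles)
    for j in articles
}
print({j: round(v, 2) for j, v in article_influence.items()})
```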

Another measurement relatively new to JCR, the JIF Percentile, affords comparison between journals in different specialty areas. This measurement considers a journal’s impact factor within its specific field, while also controlling for the size of the field. With the resulting normalized scores, journals can be meaningfully compared across categories (a chemistry journal, perhaps, against a title in mathematics) in a far more telling way than simply comparing their respective Journal Impact Factors.
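The category-normalization idea can be sketched as a percentile rank: place a journal's Impact Factor within its own category, then convert the rank to a percentile so differently sized categories compare fairly. The figures below are hypothetical, and the (n − rank + 0.5) / n convention is one common percentile formula, used here as an illustrative assumption rather than JCR's exact computation.

```python
# Percentile-rank sketch of the JIF Percentile idea: rank a journal's
# Impact Factor within its own category, then convert the rank to a
# percentile so differently sized categories can be compared. The
# (n - rank + 0.5) / n convention and all figures here are illustrative
# assumptions, not necessarily JCR's exact formula.

def jif_percentile(jif, category_jifs):
    n = len(category_jifs)
    rank = sorted(category_jifs, reverse=True).index(jif) + 1  # 1 = highest JIF
    return 100 * (n - rank + 0.5) / n

chemistry = [12.1, 6.3, 4.0, 2.2, 1.1]  # hypothetical category JIFs
mathematics = [3.5, 2.0, 1.4, 0.9]

# A chemistry journal at 6.3 and a mathematics journal at 2.0 land at
# comparable percentiles despite very different raw Impact Factors:
print(jif_percentile(6.3, chemistry))    # 70.0
print(jif_percentile(2.0, mathematics))  # 62.5
```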

Now embarking on its fifth decade, the Journal Impact Factor remains a usefully simple, graspable measurement of journal influence. For a fuller picture of impact, however, librarians, administrators, researchers and others owe it to themselves to consult the full range of metrics available in JCR.