
Evaluating Journals: Beyond the Impact Factor

Imagine you’re a young researcher, eager to make your mark on the world, steeped in the age-old academic imperative to “publish or perish.” You’re seeking the most appropriate outlet for the manuscript containing your latest findings. But where to publish? How can you be certain your work will achieve the highest possible visibility and influence?

Attempting to ascertain the current state of academic and scholarly publishing is less likely to provide guidance than to induce dizziness, as you confront an ever-increasing torrent of information. By one oft-cited estimate, every two days humanity creates as much information as it did from the dawn of civilization through the year 2003. That’s about five exabytes of data.

From the pages of long-established scientific and scholarly journals, as well as from e-journals that seemingly spring up overnight boasting nothing more than a new URL, research papers pour forth, numbering in the millions annually.

For the aspiring researcher, as well as for librarians, academics, students and others who need to keep track of the data onslaught, the dilemma is clear: how does one determine the most trustworthy and proven sources of information?

For the last four decades, the Intellectual Property & Science business of Thomson Reuters has offered an answer – the Journal Citation Reports, a compendium of metrics for gauging the influence of scientific and scholarly journals. One measurement in particular has been renowned as a mark of a journal’s significance: the Journal Impact Factor. Rather than being a subjective rating imposed by an outside observer, the Journal Impact Factor has the advantage of directly reflecting the judgments that scientists and scholars themselves make regarding the most noteworthy and useful research.

The Journal Impact Factor, however, is only a single measurement—one which, due to instances of misuse and misinterpretation by users, has occasioned some controversy over the years. Nevertheless, its longevity and prominence as a simple, easily understood metric continue to ensure its significance, despite the advent of other analytic tools. The ongoing story of the JCR is one of new refinements and more extensive, more telling measurements. But the story, inevitably, begins with impact, and with citations.

Following the Citations

When researchers and scholars publish the results of their experiments and studies, academic tradition dictates that they follow a cardinal rule: explicitly footnoting, or citing, the previously published literature on which their own work is based. By marking their points of departure and specifying the previous work that they’re pushing forward or steering in new directions, authors acknowledge an intellectual debt to the earlier research and its creators. This debt is “paid,” in effect, with a citation, which accrues to the previous publication and all its listed authors.

Citations can be tracked and tallied over time, serving as markers of activity, concentration and influence in the literature. In the mid-1950s, recognizing the value of citations as a means of organizing and clarifying the sprawling world of scholarly research, information scientist Eugene Garfield first proposed the idea of a citation index. In 1964, the Science Citation Index (SCI) was released by the company Garfield founded, the Institute for Scientific Information, the forerunner of the IP&S business of Thomson Reuters. Today, the SCI has been updated and expanded into an online incarnation known as the Web of Science, which covers more than 12,000 scientific and scholarly journals and other materials. Along with indexing the contents of the literature, the Web of Science tabulates every citation recorded in every published item.

More than 1 billion citations have now been recorded—each one a measurement of the significance of a given paper. When analyzed in aggregate, citations serve as a concrete data point—albeit only one of many possible evaluative measures—by which to gauge the influence of researchers, institutions, nations, regions, etc.

In the mid-1950s, Garfield also contemplated a citation-based metric that would assess the influence of an individual journal. His initial aim, in highlighting journals on the basis of citation impact, was to assist librarians and administrators in building and maintaining library collections. Twenty years later, the idea reached fruition in the 1975 publication of the first Journal Citation Reports (JCR) as a component of the SCI. The Journal Impact Factor officially entered the scene and has been published annually ever since, along with a widening array of other, newer measurements.

In each year’s JCR listings, nearly 12,000 journals are ranked in more than 230 subject areas representing the sciences and social sciences.

Citations to Citable Items

The Journal Impact Factor answers a simple question: how many times has the average article in a given journal been cited in a particular year or period? In other words, it represents a ratio between citations and recent citable items published. Specifically, the impact factor is calculated by considering all citations in one year to a journal’s content published in the prior two years, divided by the number of substantive, scholarly items published in that journal in those same two years. The “substantive” content constituting the “citable items” refers to articles and conference proceedings that announce original work, along with review articles that summarize key research in a given area.

Meanwhile, items that generally do not communicate substantial research findings—editorials, letters, news items and meeting abstracts—are excluded from the calculation. Determining which material to include as citable items is a matter of careful scrutiny by staff specialists in bibliographic policy.
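To make the arithmetic concrete, here is a minimal sketch of the two-year calculation using invented figures that describe no real journal: the numerator counts citations received in one year to any of the journal’s content from the prior two years, while the denominator counts only the citable items.

```python
# Minimal sketch of the two-year Journal Impact Factor arithmetic.
# All figures are invented for illustration; they describe no real journal.

# Citations received in 2015 to any of the journal's content published in 2013 and 2014
citations_2015_to_2013_content = 310
citations_2015_to_2014_content = 290

# Citable items (substantive articles, proceedings papers and reviews) published in those years
citable_items_2013 = 120
citable_items_2014 = 130

impact_factor_2015 = (
    (citations_2015_to_2013_content + citations_2015_to_2014_content)
    / (citable_items_2013 + citable_items_2014)
)

print(f"2015 Journal Impact Factor: {impact_factor_2015:.2f}")  # 600 / 250 = 2.40
```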

Essential to the validity of the JCR and the impact factor is the process by which Thomson Reuters selects and maintains its collection of indexed journals. To be included for coverage in the Web of Science and all its component databases, candidate journals must pass a range of tests for timely publishing, novel content and international diversity, among other criteria. Given the impossibility of covering all the world’s journals, the goal is to cover those that produce significant research.

By capturing all the citations recorded within its population of covered journals (including citations that point to material the database does not cover), the Web of Science and the JCR can convey the collective judgment of the scientific community on the significance and utility of published work. After 40 years, and even with the development of different metrics, citations retain a singular power as the strongest and most reliable indicator of scholarly value. Newer metrics, such as web-page visits, have significance in some circumstances, but they simply don’t carry the explicit weight that scholars assign to cited references. The cited reference is king.

Proper Application

Despite its original design as a tool for assisting librarians in evaluating their collections, the Journal Impact Factor soon took on a life of its own, beyond the control of Thomson Reuters. Journal publishers, understandably, have been keenly interested in the impact factors of their respective titles, often featuring the metric in their advertising. Critics, meanwhile, argue that the Journal Impact Factor has gained undue prominence and emphasis as a measurement, and even, in some cases, as a driver of editorial content in pursuit of a higher score.

One such editorial strategy involves increasing a journal’s publication of review articles, in contrast to accounts of original research. Because they summarize significant findings in a given area, reviews serve as useful references for authors to provide background to their work, and therefore tend to be highly cited. Consequently, a high number of review articles can have the effect—at least for a given year—of boosting the Journal Impact Factor.

Other forms of intentional manipulation have been more blatant. Particularly infamous is the deliberate insertion into a given paper of additional footnotes, all of which cite previous articles in the same journal. These citations, in turn, increase the journal’s overall citation tally toward its Journal Impact Factor.

The practice of citing one’s own work, or “self-citation,” is, to an extent, normative and accepted in scholarly publishing, as authors legitimately cite selections of their previous work to provide background and context. Similarly, a certain degree of self-citation on the part of journals is normal and expected. Excessive self-citation, however, is usually considered a red flag—a means of artificially inflating citation rates, whether for an individual author or a journal.

Vigilance against untoward levels of self-citation is now part of the routine evaluation of journals in the Web of Science and the JCR. If analysts, after careful study, determine that a journal’s self-citations are excessive, remedial steps might include temporary de-listing of the journal from the JCR, or dropping the journal from coverage in Thomson Reuters products. In order to provide transparency, each listed journal’s level of self-citation is now part of the standard data presentation in the JCR.

Another criticism aimed at the Journal Impact Factor—and another phenomenon outside the control of Thomson Reuters—is that users sometimes inappropriately and incorrectly apply the measurement beyond its original scope and intent.

Specifically, authors, or those tasked with evaluating authors, sometimes misguidedly invoke the Impact Factor of the journal in which a paper was published as a judgment on the paper or its author. This misinterpretation of the data has also extended to the evaluation of institutions and academic departments.

“What’s problematic with the impact factor is the use that some organizations outside of Thomson Reuters have put it to, in extending it to evaluate a single paper or author,” says James Hardcastle, Senior Research Executive at the publishing firm of Taylor & Francis. “Whether it’s actually the case or not, many authors perceive themselves to be evaluated by the impact factor of the journals in which they publish. And perceptions are hard to change.”

In fact, Thomson Reuters has always specified and emphasized the proper application of the Journal Impact Factor as a general measure of a journal’s influence, rather than as a proxy or surrogate for the evaluation of authors or institutions. Furthermore, the Journal Impact Factor is but a single data point, which must be considered advisedly and in context, given that many factors influence rates at which individual papers in various disciplines are cited.

To provide just one example: a typical paper in one field may start to accrue citations relatively soon after publication, while in another field a delay of a year or more is common before citations begin to accumulate. For this reason, among others, the Journal Impact Factor does not serve as a meaningful comparator between journals in different fields.

Of course, authors make legitimate use of the Journal Impact Factor in targeting their manuscripts at high-impact journals. But subsequently letting the journal’s impact factor substitute as a measure of the paper itself, or the author, is not an appropriate application.

Beyond the Journal Impact Factor

In recent years, the tools of journal evaluation have grown more sophisticated and nuanced than the comparatively simple Journal Impact Factor, and the JCR reflects this expanding set of metrics.

For example, the two-year Journal Impact Factor has now been joined by a five-year measure, which, as the name implies, assesses a journal’s impact over a five-year window, affording a longer view of influence that suits fields in which citations typically take longer to accumulate.
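The arithmetic mirrors the two-year sketch above, simply widening the window; again, the figures below are invented.

```python
# Five-year variant of the same citations-to-citable-items ratio, with invented figures.
citations_2015_to_2010_2014_content = 1450   # citations in 2015 to content published 2010-2014
citable_items_2010_2014 = 600                # citable items published 2010-2014

five_year_impact_factor_2015 = citations_2015_to_2010_2014_content / citable_items_2010_2014
print(f"2015 five-year Impact Factor: {five_year_impact_factor_2015:.2f}")  # about 2.42
```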

Another comparatively new metric, known as the Eigenfactor score, also considers articles published over a five-year period. But the Eigenfactor goes beyond simple citation counts, also factoring in the influence of the journals in which those citations are recorded. Therefore, citations from comparatively influential journals contribute more to a journal’s Eigenfactor score. The Eigenfactor measurement also eliminates the effect of self-citations.

The Eigenfactor itself has recently been refined, and that refinement will be reflected in new editions of the JCR. A “normalized” Eigenfactor score compares journals against a benchmark of 1.00, so that a score of 2.00, for example, represents influence double the world mark.
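The sketch below illustrates only the underlying idea of eigenvector-style weighting, in which influence flows along citations so that a citation from an influential journal counts for more; it is not the official Eigenfactor algorithm or its data. The tiny citation matrix is invented, self-citations sit on the zeroed diagonal, and the final rescaling mirrors the normalized score’s benchmark of 1.00.

```python
# Toy sketch of the idea behind the Eigenfactor: a citation from an influential
# journal counts for more than one from an obscure journal, and self-citations
# are ignored.  This illustrates eigenvector-style weighting only, not the
# official Eigenfactor algorithm or its actual data.

journals = ["A", "B", "C"]

# cites[i][j] = citations FROM journal i TO journal j (invented numbers);
# the diagonal (self-citation) is zeroed out.
cites = [
    [0, 40, 10],
    [5,  0, 20],
    [5, 10,  0],
]

# Repeatedly redistribute each journal's influence across the journals it
# cites, in proportion to its citation counts (a simple power iteration).
influence = [1.0 / len(journals)] * len(journals)
for _ in range(100):
    new = [0.0] * len(journals)
    for i, row in enumerate(cites):
        total = sum(row)
        for j, c in enumerate(row):
            new[j] += influence[i] * c / total
    influence = new

# Normalized view: rescale so the average journal scores 1.00, mirroring how
# a normalized score of 2.00 means double the world mark.
mean = sum(influence) / len(influence)
for name, score in zip(journals, influence):
    print(f"{name}: {score / mean:.2f}")
```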

A further measure, the Article Influence Score, gauges the relative importance of a given journal on a per-article basis. The calculation relates the journal’s Eigenfactor score to its overall article count, and the resulting per-article figure is compared against a mean Article Influence benchmark. Scores above 1.00 indicate that a journal’s articles generally wield above-average influence.
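As a hedged sketch of the general idea, a journal’s share of overall influence can be divided by its share of all articles and compared against an average of 1.00; the figures below are invented and the calculation is illustrative rather than the exact JCR formula.

```python
# Hedged sketch of the Article Influence idea: a journal's share of overall
# influence divided by its share of all articles, so that an average journal
# scores 1.00.  All figures are invented; this is not the exact JCR formula.

journal_influence_share = 0.020   # hypothetical share of total Eigenfactor-style influence (2%)
journal_articles = 400            # articles the journal published in the window
all_articles = 40_000             # articles published across all covered journals

article_share = journal_articles / all_articles            # 1% of all articles
article_influence = journal_influence_share / article_share

print(f"Article Influence (1.00 = average): {article_influence:.2f}")  # 2.00, above average
```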

And still another new measurement is the JIF Percentile. By factoring in a journal’s impact factor within its specific field, as well as controlling for the size of the field, the JIF Percentile allows normalized comparison between journals in different specialty areas. This permits, for example, assessing the impact of a medical journal against an engineering journal in a deeper way than simply comparing their respective Journal Impact Factors. The metric also provides specific comparative data (based, as the name implies, on percentiles) on a journal’s standing in its own field.
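A minimal sketch of the underlying rank-to-percentile step follows, using invented journals and values; the (n - rank + 0.5) / n convention shown here is one common way to express a rank as a percentile.

```python
# Illustrative rank-to-percentile conversion of the kind the JIF Percentile
# relies on.  Journal names and impact factors are invented.

category = {                  # a hypothetical engineering category
    "Journal W": 4.1,
    "Journal X": 2.6,
    "Journal Y": 1.9,
    "Journal Z": 0.8,
}

def jif_percentile(journal: str, jifs: dict) -> float:
    """Percentile standing of a journal's impact factor within its own category."""
    n = len(jifs)
    rank = sorted(jifs.values(), reverse=True).index(jifs[journal]) + 1  # 1 = highest JIF
    return (n - rank + 0.5) / n * 100

print(f"Journal X: {jif_percentile('Journal X', category):.1f}th percentile")  # 62.5
```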

In all, the JCR offers a range of metrics that provide a more detailed, varied and contextual view of journal influence.

Decades after originally conceiving the Journal Impact Factor, Eugene Garfield later wrote, “In 1955, it did not occur to me that ‘impact’ would one day be so controversial. Like nuclear energy, the impact factor is a mixed blessing. I expected it to be used constructively while recognizing that in the wrong hands it might be abused.”

Sixty years later, at the contemporary incarnation of the company Garfield founded, the effort to refine and deepen the evaluation of journals, and to see the evaluation appropriately applied, goes on. The JCR remains a valid and credible source for researchers and others looking to assess the impact of academic journals.