Journal Impact Factors Are Inflated
ALAN E. WILSON

Scientific articles produced by for-profit publishers cost consumers five times more, per page, than articles published by nonprofit presses (Bergstrom and Bergstrom 2006). This finding is disturbing, given that library budgets are constrained (Frazier 2001) and librarians have to make hard decisions about which journals to order, maintain, and cancel. In addition to feedback from institutional users about their specific journal interests, many librarians use information provided by Thomson Scientific's Journal Citation Reports (JCR) to rank the value of specific journals. Since 1975, Thomson Scientific has produced the annual JCR, which includes the Institute for Scientific Information's index of journal impact factors.

A journal's impact factor for any given year—say, 2005—is determined by calculating the number of times that the journal's articles published in the previous two years (2003 and 2004) are cited in 2005 in all indexed journals, and dividing that number by the total number of articles that appeared in the journal in 2003 and 2004; in other words, the impact factor is a ratio between citations and recent citable items published (see  http://scientific.thomson.com/free/essays/journalcitationreports/impactfactor/). The impact factor is a widely anticipated and discussed indicator of journal quality, because it can influence a journal's subscribership, profits, and perceived scientific importance, as well as its contributors' credibility and reputation. However, the value of the impact factor as an objective way to compare journals has come into question because of miscalculations in its computation and dubious editorial practices that may influence a journal's impact factor (Gowrishankar and Divakar 1999, Agrawal 2005).
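
In symbols (the notation and the worked numbers below are illustrative, not Thomson Scientific's):

\[ \mathrm{IF}_{2005} = \frac{C_{2005}}{N_{2003} + N_{2004}} \]

where C_2005 is the number of citations received in 2005 by the journal's 2003 and 2004 articles, and N_2003 and N_2004 are the numbers of citable items the journal published in those years. A hypothetical journal that published 40 articles in each of 2003 and 2004, and whose articles drew 200 citations in 2005, would have a 2005 impact factor of 200/80 = 2.5.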

I compared changes in annual impact factors for 95 nonreview ecology journals to evaluate the influence that publisher type may have on those impact factors. Three types of publishing company were included in this analysis: nonprofit (nonprofit publisher of a nonprofit group's journal), joint (for-profit publisher of a nonprofit group's journal), and for-profit (for-profit publisher of a for-profit group's journal). The change in each journal's annual impact factor was calculated by regressing annual impact factor estimates over time (from 1996 to 2005, when available). When necessary to conform to the assumptions of parametric statistics, the data on annual impact factor change and on year of journal founding were log transformed as log(impact factor slope + 0.15) and log(2005 − year of journal founding), respectively.
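
A minimal sketch of this per-journal procedure, using invented impact factors and assuming base-10 logarithms (the text does not specify the base):

import numpy as np
from scipy import stats

# Hypothetical annual impact factors for one journal, 1996 through 2005
years = np.arange(1996, 2006)
impact_factor = np.array([1.1, 1.2, 1.2, 1.4, 1.3, 1.5, 1.6, 1.6, 1.8, 1.9])

# The journal's annual impact factor change is the slope of this regression
slope, intercept, r, p, se = stats.linregress(years, impact_factor)

# Transforms used to meet the assumptions of parametric statistics
founding_year = 1985  # hypothetical
log_slope = np.log10(slope + 0.15)
log_age = np.log10(2005 - founding_year)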

For nonreview ecology journals listed by Thomson Scientific, the change in the annual impact factor was significantly greater than zero (mean ± 1 standard error = 0.115 ± 0.022; one-sample t-test against a mean of 0, P < 0.001, n = 95) and was negatively correlated with journal age (r = −0.386, P < 0.001, n = 95). In other words, ecology journals' impact factors increased over time, with newer journals' impact factors exhibiting greater annual increases than those of older journals (figure 1a).
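
A sketch of these two tests, run on simulated slopes and founding years rather than the study's data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
slopes = rng.normal(0.115, 0.22, size=95)           # simulated per-journal slopes
founding_years = rng.integers(1900, 2006, size=95)  # simulated founding years

# One-sample t-test: is the mean annual impact factor change different from 0?
t_stat, t_p = stats.ttest_1samp(slopes, popmean=0.0)

# Pearson correlation between annual change and journal age
ages = 2005 - founding_years
r, r_p = stats.pearsonr(slopes, ages)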

Figure 1.

(a) Relationship between year of journal founding (year of first issue) and annual journal impact factor change (calculated as slope of impact factor over time from 1996 through 2005, when available) for Thomson Scientific–listed, nonreview ecology journals. Journals were categorized as nonprofit (white squares), joint (gray triangles), or for-profit (black circles). (b) Mean annual journal impact factor change for nonprofit (white bars), joint (gray bars), and for-profit (black bars) Thomson Scientific–listed, nonreview ecology journals founded before 1976 (the year after Thomson Scientific began producing Journal Citation Reports; ANOVA [analysis of variance], P = 0.028), before 1991 (the year after interest in journal impact factors began exponentially increasing; ANOVA, P < 0.001), or before 2006 (full data set; ANOVA, P = 0.046). Error bars indicate 1 standard error. Inset numbers represent sample size.


The positive correlation between annual impact factor change and year of journal founding (figure 1a) is not surprising, given that the Thomson Scientific database, which does not include all journals, may exclude newer journals with low impact factors while retaining older, historically important journals with declining impact factors. Moreover, the annual impact factor change differed across the three publisher types for ecology journals founded before 2006 (ANOVA [analysis of variance], P = 0.046; figure 1b). Nonprofit journals' impact factors increased 35 and 40 percent less each year than the impact factors of joint and for-profit journals, respectively. Furthermore, nonprofit journals listed with Thomson Scientific were older (mean founding year = 1957) than joint journals (mean founding year = 1974) and for-profit journals (mean founding year = 1979). Thus, one explanation for the differences in impact factor changes observed across the three publisher types (using data for journals founded before 2006; figure 1b) could be that the higher slopes for joint and for-profit publishers were an artifact of their journals being newer, combined with Thomson Scientific's tendency to exclude newer journals with low impact factors.
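
The comparison across publisher types is a one-way ANOVA on the per-journal slopes; a sketch with simulated groups (the group sizes and means here are invented):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated annual impact factor changes for the three publisher types
nonprofit = rng.normal(0.08, 0.15, size=30)
joint = rng.normal(0.12, 0.15, size=40)
for_profit = rng.normal(0.13, 0.15, size=25)

# One-way ANOVA: do the mean changes differ across publisher types?
f_stat, p_value = stats.f_oneway(nonprofit, joint, for_profit)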

Reanalysis of the data on changes in annual impact factors for journals founded before 1976 (the year after Thomson Scientific began producing JCR) or before 1991 (the year after interest in journal impact factors began to increase exponentially) provided results similar to those observed for journals founded before 2006 (ANOVA, P = 0.028 and P < 0.001, respectively; figure 1b). Yet neither reduced data set revealed a significant correlation between annual impact factor change and year of journal founding (P = 0.323 and P = 0.107, respectively). Consequently, the finding that nonprofit journals' impact factors increased less each year than the impact factors of journals from joint and for-profit publishers was robust. However, it is unclear whether this phenomenon is consistent across all journals or is specific to the ecological literature.

Authors are motivated to publish in high-quality, well-read journals, so their interest in journal impact factors will most likely continue to grow and to guide manuscript submissions and journal subscriptions. If this occurs, publishers will be encouraged to adopt strategies to enhance their journals' impact factors by increasing the number of citations per article published or by publishing fewer articles while maintaining the current number of citations. Several publishers of peer-reviewed journals are (a) shortening the review and publication process; (b) posting accepted, unedited manuscripts or in-press papers online before the print publication date; (c) providing free online access to papers; (d) including regular review articles; (e) inviting submissions of topical, popular interest; and (f) e-mailing information on their current table of contents to subscribers. More disturbing strategies that take place during the review process—editors' encouraging journal self-citation or limiting the number of citations that can be made to competing journals, for example—are not well documented but may be fairly common; such practices should be banned to protect scientific integrity (Agrawal 2005).

Although the goal of some of these strategies may be to cut costs and to improve the quality and availability of scientific publications, the impact factor inflation inherent in them should make consumers of the peer-reviewed literature wary of the impact factor's usefulness as a quality index for scientific journals. I therefore suggest that authors, patrons, and librarians consider alternative measures to rate the value of peer-reviewed literature, such as a journal's scope, its intended audience, its cost, and the quality of the research it publishes, in lieu of journal impact factors. For those interested in direct quantitative measures of citation, the Eigenfactor score ( www.eigenfactor.org; Bergstrom 2007) provides an intriguing alternative. Eigenfactor ranks journals much as Google ranks Web sites, using an iterative algorithm that gives more weight to citations from high-quality journals and adjusts for differences in citation patterns across fields. Because Eigenfactor disregards self-citation at the journal level, it is not vulnerable to many of the manipulations to which Thomson Scientific's journal impact factors are sensitive.
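
The published Eigenfactor method includes details not described here (for example, weighting by article counts and a specific teleportation vector), so the following is only a simplified, PageRank-style sketch of the core idea on a hypothetical citation matrix: self-citations are zeroed out, and citations from highly scored journals count for more.

import numpy as np

# Hypothetical citation matrix: cites[i, j] = citations from journal j to journal i
cites = np.array([[0.0, 4.0, 2.0],
                  [3.0, 0.0, 1.0],
                  [1.0, 2.0, 0.0]])

np.fill_diagonal(cites, 0.0)    # disregard journal self-citation
P = cites / cites.sum(axis=0)   # normalize each journal's outgoing citations to sum to 1

alpha = 0.85                    # damping constant, borrowed from PageRank
n = cites.shape[0]
score = np.full(n, 1.0 / n)
for _ in range(100):
    # a journal's score is driven by citations from highly scored journals
    score = alpha * (P @ score) + (1.0 - alpha) / n

print(score / score.sum())      # relative journal scores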

Acknowledgments

I am grateful to Ted Bergstrom and Carl Bergstrom for providing their raw data on journal publishing companies, as well as for their thoughtful criticisms regarding my data analysis. Constructive comments from Orlando Sarnelle, Tomas Höök, Jay Lennon, Jean Palange, Desiree Tullos, and Gretchen Gerrish improved earlier versions of the manuscript.

References cited

Agrawal AA. 2005. Corruption of journal impact factors. Trends in Ecology and Evolution 20: 157.

Bergstrom CT. 2007. Eigenfactor: Measuring the value and prestige of scholarly journals. College and Research Libraries News 68(5). (18 May 2007; www.ala.org/ala/acrl/acrlpubs/crlnews/backissues2007/may07/Eigenfactor.htm)

Bergstrom CT, Bergstrom TC. 2006. The economics of ecology journals. Frontiers in Ecology and the Environment 4: 488–495.

Frazier K. 2001. The librarians' dilemma: Contemplating the costs of the "big deal." D-Lib Magazine 7(3). (2 May 2007; www.dlib.org/dlib/march01/frazier/03frazier.html)

Gowrishankar J, Divakar P. 1999. Sprucing up one's impact factor. Nature 401: 321–322.
ALAN E. WILSON. "Journal Impact Factors Are Inflated." BioScience 57(7): 550–551 (1 July 2007). https://doi.org/10.1641/B570702