Factors Behind the Impact

Author: Sarah Millar

In 1961, publication of the Science Citation Index (SCI) made it possible to sort the author citation index into a journal citation index, and a few years later the journal impact factor (JIF) was born [1].

Today, the Journal Citation Reports (JCR) are published annually by the Institute for Scientific Information (ISI) and cover every journal citation in more than 5000 journals: about 15 million citations from 1 million source items per year.

The definition of source item varies from journal to journal, and it is not always possible to predict whether a given article will be counted in the ISI’s analysis. In general, the following are counted as source items:

  • Original articles
  • Review articles
  • Case reports
  • Articles in symposium supplements

and the following are not:

  • Letters (except where they function as articles, e.g. Nature)
  • Abstracts
  • Commentaries
  • Editorials

Through analysis of these data, the ISI claims to offer a systematic, objective means of critically evaluating journals, with quantifiable, statistical information based on citation data: that is, the all-important impact factor.

Calculation

Impact factors are calculated using two sets of data (see the formula below) and can be considered, at their most basic, as the average number of citations per article for each journal: the more citations a journal’s articles receive, the higher the impact of those articles, and thereby of the journal.
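Concretely, the two-year JIF reported for a year Y takes the citations received in Y by material the journal published in the two preceding years and divides by the number of source items published in those years (the symbols C and S are simply shorthand introduced here):

\[
\mathrm{JIF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{S_{Y-1} + S_{Y-2}}
\]

where \(C_{Y}(y)\) is the number of citations received in year \(Y\) by items the journal published in year \(y\), and \(S_{y}\) is the number of source items the journal published in year \(y\).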

The precision of JIFs is questionable: manipulation is possible, and debate over what constitutes a source item gives rise to discrepancies. Other criteria, such as the time window over which citations are averaged, have little effect on journal rankings but can make a significant difference to the individual numbers.

Manipulation of the figure is closely tied to what is regarded as a substantive contribution. An editorial may not count towards the denominator in the above equation, yet it may cite several articles, increasing the numerator. Likewise, a news item may be heavily cited but excluded from the denominator, while the citations it receives are included in the numerator. Publishing many review articles, which tend to be heavily cited, can also help to increase a journal’s IF.
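A toy calculation makes this asymmetry concrete. All figures in the sketch below are invented purely for illustration:

    # Toy illustration of the numerator/denominator asymmetry in the JIF.
    # All figures here are invented for this example.
    source_items = 200               # articles and reviews in the two census years
    citations_to_source = 500        # citations received this year by those items
    citations_to_front_matter = 100  # citations to editorials, news items, etc.

    # Front matter is excluded from the denominator, but citations to it
    # still count in the journal-level numerator.
    jif_strict = citations_to_source / source_items
    jif_reported = (citations_to_source + citations_to_front_matter) / source_items

    print(f"Counting only source-item citations: {jif_strict:.3f}")   # 2.500
    print(f"With front-matter citations added:  {jif_reported:.3f}")  # 3.000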

Reporting to three decimal places reduces the number of journals with identical impact factors, but it matters little whether, for example, the impact factor of Angewandte Chemie is quoted as 11.8 or as 11.848.

Controversy and Comparison

Since their inception, impact factors have been controversial, with authors and editors on both sides of the argument (see, for example, the interview with Peter Gölitz, editor of Angewandte Chemie). While most acknowledge the usefulness of impact factors in assessing a journal’s standing and associated prestige within its field, considerable caution should be exercised when using JIFs to assess the quality of individual articles or researchers: a value averaged over an entire journal gives almost no information about individual cases.

Likewise, comparisons cannot always be made between different areas of research, as sociological factors come into play, such as the size of the journal, the size of the subject area, and the typical number of authors per paper. Generally, fundamental and pure subject areas have higher impact factors than specialized or applied ones. For example, fundamental life-science journals can have an average impact factor of 3.0, while materials science and engineering journals average 0.6 [2]. The same holds, to a lesser extent, within a field such as chemistry: organic synthesis and methodology journals tend to have higher impact factors than niche journals such as Biofuels, Bioproducts and Biorefining.

Part of this disparity comes from the relative sizes of the journals under discussion. Because the JIF is an average, it is subject to statistical variation, which is more pronounced in a small sample such as a highly specialized journal. The impact factors of very small journals (those publishing fewer than 35 articles per year) can vary by up to ±40 % from one year to the next because of the small, inherently biased samples involved.
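A quick simulation illustrates the size effect. The distribution and parameters below are assumptions chosen only to mimic the skewed nature of per-article citation counts, not real JCR data:

    import random
    import statistics

    random.seed(1)

    def simulated_jifs(n_articles, years=10):
        """Year-to-year 'JIFs' for a journal publishing n_articles per
        census window, with per-article citation counts drawn from a
        skewed log-normal distribution (an assumption for illustration)."""
        return [
            statistics.mean(random.lognormvariate(0.5, 1.0)
                            for _ in range(n_articles))
            for _ in range(years)
        ]

    for n in (30, 300):
        jifs = simulated_jifs(n)
        spread = (max(jifs) - min(jifs)) / statistics.mean(jifs)
        print(f"{n:>3} articles/window: range spans {spread:.0%} of the mean")

The smaller journal’s average typically swings far more from year to year, even though the underlying article-level citation behavior is identical.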

Some of these fluctuations can be ironed out if the window of study is increased from two years to five, and five-year impact factors are steadily gaining popularity, along with a range of other metrics. These include Journal Performance Indicators, in which each source item is linked to its own unique citations, and immediacy indices, which measure how quickly a journal’s articles are cited.
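The immediacy index follows the same average-citations pattern as the JIF, but with no time lag: it divides the citations a journal receives in a year by the number of items it publishes in that same year (notation as introduced above):

\[
\mathrm{Immediacy}_{Y} = \frac{C_{Y}(Y)}{S_{Y}}
\]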

With the advent of the internet, an alternative method of measuring impact became available: the download count. However, Eugene Garfield, creator of the impact factor, points out that a distinction must be made between readership and actual citations, as readership can also depend on the size of the respective communities. For example, organic chemistry is a much larger field than inorganic, so an organic paper would be expected to receive more downloads than an inorganic one, yet it may not necessarily be cited more. The distinction becomes clearer still when one considers that downloads and citations do not reflect the same thing: downloads capture only the immediate interest in an item, rather than its long-term impact on the chemical community.

Use with Care

These fluctuations, manipulations, and statistical anomalies mean that extending IFs to individual authors should be avoided, not just because of the ethical considerations of tying financial remuneration to a pre-set standard, but because the error margins can render the values for individuals meaningless. Similarly, the headline numbers for journals should always be treated with caution.

Finally, there is no consensus as to what a citation really means. Does the fact that an article has been cited mean that it contains fundamental chemistry, or has it just been included for the sake of completeness? And what about the forgotten papers, those that seem unimportant at the time, only to be rediscovered years later as the trends in chemistry shift back in that direction? So it is not all about quality: impact does not necessarily equal quality.

But whether or not you like or agree with impact factors, it appears they are here to stay.

As C. Hoeffel eloquently wrote in Allergy [3]:
“Impact Factor is not a perfect tool to measure the quality of articles but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation. Experience has shown that in each specialty the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high impact factor. Most of these journals existed long before the impact factor was devised. The use of impact factor as a measure of quality is widespread because it fits well with the opinion we have in each field of the best journals in our specialty.”



References
  1. The History and Meaning of the Journal Impact Factor
    E. Garfield
    J. Am. Med. Assoc. 2006, 295 (1), 90–93.
    DOI: 10.1001/jama.295.1.90
  2. Impact Factors: Use and Abuse
    M. Amin, M. Mabe
    Perspectives in Publishing, Elsevier, October 2000 (reissued 2007 with minor revisions).
    http://www.elsevier.com/framework_editors/pdfs/Perspectives1.pdf
  3. Journal Impact Factors
    C. Hoeffel
    Allergy 1998, 53 (12), 1225.
    DOI: 10.1111/j.1398-9995.1998.tb03848.x
