Monday, 24 August 2015

Citations and Impact Factors

Recently, I was engaged in a debate with a few colleagues about the importance of citation metrics. Is the Journal Impact Factor (IF) really an important factor to consider when choosing a journal in which to publish one's paper? Is a scientist with more citations or a higher h-index really doing better-quality research than his or her colleagues? Many people, when asked, say that citations are an imperfect way of measuring scientific output; yet they rely on those very same metrics when making decisions about promotions and employment.
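For readers unfamiliar with the h-index mentioned above, here is a minimal sketch of how it is computed, assuming one simply has a list of per-paper citation counts (the function name and data are illustrative, not from any particular bibliometric tool):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with at least h citations each."""
    h = 0
    # Sort citation counts from highest to lowest, then walk down the
    # ranked list; rank i qualifies as long as paper i has >= i citations.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers with these citation counts give an h-index of 4,
# because four papers have at least four citations each.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note how the metric compresses a whole citation distribution into one number: an author with one paper cited 1,000 times still has an h-index of 1, which is part of why no single metric tells the whole story.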

In some fields not yet fully swayed by the seduction of the IF, people weigh other factors, such as the quality of the editorial board, the reach and popularity of the journal, and its cost, before deciding where to publish.

A recent article in the journal Nature Methods makes some pertinent comments:
... the IF of journals in which a scientist publishes should not be the criterion on which his or her scientific contributions are judged, for instance when making hiring or funding decisions ...
IF varies by field, is affected by editorial policies - publishing a lot of reviews can have a positive effect, for example - and reflects citation practices, good and ill.
The IF also does not report on other aspects of impact - whether a method is commercialised, for instance, or whether it has other societal effects.
and the article finally concludes:
It is a truism, which nonetheless bears repeating, that no metric should be wielded without judgment. This depends, in turn, on knowing what the metric reports and what its assumptions and biases are. Just as for any other method. 


  2. From a Nature report:
    "Faked peer reviews prompt 64 retractions. The cull follows a similar discovery earlier this year."

    Ewen Callaway's piece is worth reading.