Saturday, October 25, 2008

"The Misused Impact Factor"

Kai Simons makes an interesting point in an editorial in Science (10 October 2008).
Each year, Thomson Reuters extracts the references from more than 9,000 journals and calculates an impact factor for each journal: the number of citations that year to articles the journal published in the previous two years, divided by the number of articles it published during those same two years.
The data are used widely; researchers and research institutions are judged not only by the number of their publications but also by the impact factors of the journals in which they publish. Money hangs on the judgment, in terms of both salaries for researchers and endowments for institutions; so do prestige and promotion.
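To make the arithmetic concrete, here is a minimal sketch of that two-year calculation. The journal and all of the counts below are hypothetical, chosen only to illustrate the formula.

```python
# Minimal sketch of the two-year impact factor calculation described above.
# All figures are hypothetical, purely for illustration.

def impact_factor(citations_to_prior_two_years: int,
                  articles_in_prior_two_years: int) -> float:
    """Citations received this year to items published in the previous
    two years, divided by the number of items published in those years."""
    return citations_to_prior_two_years / articles_in_prior_two_years

# Hypothetical journal: 150 articles in 2006 and 170 in 2007,
# cited 1,280 times during 2008.
print(impact_factor(1280, 150 + 170))  # -> 4.0
```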

The problem to which Simons refers is the growing effort by journal editors to inflate their journals' impact factors. They do this not merely by choosing the best and most important articles they can find, but by publishing more review articles and other kinds of pieces that tend to attract many citations.

This is an example of a general problem with indicators. They are often imperfect measures of what is actually important, and when people work to maximize the indicator rather than to do what is most important, the indicators become dangerous.

I recall a friend telling me long ago that, as an Indian Health Service physician, he discovered that their production index gave more points for physician-supervised preventive services than for the same services provided by unsupervised nurses. So he moved the nurses and his desk into a common area and did his paperwork while the nurses provided their services. This now counted as supervision, and his productivity numbers were greatly increased.

The lesson is that one should not confuse what is measured with what is important. This is not a criticism of the people designing indices, but of the bureaucrats who misuse the data they produce.
