The Impact Factor is the most popular numerical measure of a scientist’s work. Despite
many well-documented flaws, the Impact Factor is commonly used in recruitment, appointment,
and funding decisions. A diverse group of stakeholders is now making a concerted effort
to combat misuse of the Impact Factor and is calling for the development of more accurate
measures to assess research. The group has issued the San Francisco Declaration on
Research Assessment. You too can join the campaign.
It is in the nature of us scientists to measure things—even things that are difficult
to quantify such as an individual scientist’s performance and impact. A commonly used
metric to describe scientific impact is the journal Impact Factor (IF). The IF is
a journal-specific number, calculated as the number of citations received in a given
year by the articles a journal published in the preceding two years, divided by the
number of citable articles it published in those two years. Each paper in a given
journal is then described not by its own citation tally but by the journal-wide
Impact Factor.
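As a small illustration, the two-year IF calculation can be sketched as follows; all figures here are invented for demonstration, not drawn from any real journal:

```python
# Illustrative sketch of the two-year Impact Factor calculation.
# All numbers below are hypothetical, not real journal data.

def impact_factor(citations_in_year, citable_items_prior_two_years):
    """Citations received this year to articles the journal published
    in the preceding two years, divided by the number of citable items
    it published in those two years."""
    return citations_in_year / citable_items_prior_two_years

# Hypothetical journal: 1,200 citations in 2012 to its 2010-2011
# output, during which it published 400 citable items.
print(impact_factor(1200, 400))  # 3.0
```

Note that every article in this hypothetical journal would be tagged with the same 3.0, regardless of how often it was actually cited.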
The IF is pervasive in the scientific community. Scientists refer to it casually in
conversation to convince colleagues of the importance of their own papers, or they
wonder how a paper ended up in “a journal with such a high Impact Factor.” Students
and postdocs want to publish only in “high Impact Factor” journals, and the IF is
frequently used in recruitment, tenure, and granting decisions when a candidate’s
past publication performance is assessed.
The IF was never meant to be used in that way! It was introduced in the early 1960s
to aid librarians in stocking their shelves with the journals that were most important
to their constituents. It was not intended to assess the research quality or impact
of a single paper, let alone an individual scientist’s performance.
Numerous flaws in the IF have been pointed out over the years. Some of the more troublesome
shortcomings are: a journal’s IF can be driven by a few extremely highly cited articles,
yet all articles published in a given journal, even those that are never cited, are
presumed to have the same IF; the IF does not say anything about an individual article,
yet conclusions about a particular paper are often drawn; the IF can be manipulated
by journals in many ways, for example by publishing more review articles, which are
generally more highly cited, thus distorting the perceived impact of the journal’s
primary research articles; and the IF is sensitive to the nature of the scientific
content and the size of a given field, with smaller communities naturally generating
fewer citations.
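The first of these shortcomings can be made concrete with a small sketch. The citation counts below are invented, but the skewed shape is typical: a single heavily cited paper dominates the journal-wide average, while the typical article fares very differently:

```python
from statistics import mean, median

# Hypothetical citation counts for 11 articles in one journal:
# one heavily cited paper dominates the tally.
citations = [110, 5, 3, 2, 1, 0, 0, 0, 0, 0, 0]

print(mean(citations))    # 11  -> the journal-wide average an IF reflects
print(median(citations))  # 0   -> the typical article in the same journal
```

An IF-style average of 11 says nothing about the six articles that were never cited at all, yet all eleven would carry the same journal-level number.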
Fortunately, awareness of the many flaws of the IF has grown over the last few years.
Now, a group of prominent journal editors and publishers of scholarly journals, as
well as representatives from major funding agencies and research institutions, is
speaking up as one voice to highlight the limitations of the IF and to call for a
concerted effort to improve the ways scientific output is assessed by funding agencies,
academic institutions, and scientists themselves. The group has developed a set of
specific recommendations and published them in the San Francisco Declaration on Research
Assessment. The Declaration bears the signatures of about 200 institutions and individuals
and is available at http://www.ascb.org/SFdeclaration.html for public signature by
any party wishing to express support.
The key points of the declaration are simple, yet profound. The central recommendation
calls for the elimination of the use of the IF, and all other journal-level metrics,
in funding, appointment, award, and promotion decisions. We need to return to a culture
where these often life-changing decisions are made by careful, in-depth consideration
of a candidate’s work and future potential rather than by merely adding up numerical
values. This effort will require that funding agencies and institutions explicitly
define, and adhere to, criteria they will use for evaluation of scientific productivity.
A second broad recommendation is to refrain from using publications and citations as
the primary indicators of impact. Scientists produce much more than just publications.
All research outputs—minable datasets, software, equipment and technology development,
contributions to large-scale collaborative efforts, and reagents made available to
the community—should be considered when assessing a scientist’s contributions. In
addition, an individual’s influence on policy and on scientific or clinical practice
should be included in any evaluation.
Although initiated by a group of editors and publishers, the declaration is also self-critical
and challenges publishers not to use the IF for promotional purposes. This includes
removing mention of the IF from their websites and refraining from using it in advertising
materials. In addition, rather than promoting a single metric, publishers are urged
to provide a range of publication metrics, which will allow readers to more accurately
assess the strengths and weaknesses of a given article or journal. Given that most
journals are now published electronically, extracting a diverse set of publication
metrics is straightforward.
A final important recommendation is to call on scientists to do their part in eliminating
inappropriate use of the IF. Active scientists should refrain from buying into the
IF frenzy. When serving as a member of a recruitment or tenure committee, scientists
should not consider IF-based information in their decisions. More importantly, we
must teach our students and postdocs about the limitations of the IF and not promote
the notion that only work in high Impact Factor journals is worth reading and building
on for future research. Finally, scientists must challenge others when faced with
inappropriate use or interpretation of journal-based metrics, be it on formal committees
or in casual conversation with colleagues.
The IF was created to assess a journal as a whole. But it is now often inappropriately
used to assess the quality of individual articles and scientists. We scientists are
not entirely innocent in bringing about the misuse of the IF. We like to measure,
we like to compete, and we like numbers. The IF was a tempting way to satisfy all
those inclinations despite its inappropriateness and its flaws in assessing individual
impact. Scientists often express disdain for the IF, but most play along, because
everyone else does. The San Francisco Declaration on Research Assessment is a chance
to break this Catch-22. Make your voice heard to eliminate the impact of the Impact
Factor by signing the San Francisco Declaration on Research Assessment.