Journal Impact Factor is a term you have probably heard before, maybe in a situation like this: a colleague comes to you and says, “Hey, I’ve got this new paper to read that was published in a really good journal; it has a high impact factor.” Or maybe the opposite: “Oh, I saw this paper that’s relevant to our research, but it’s not in a very good journal; it’s a low impact factor journal, so I’m not sure how excited I am about it.” Or maybe a friend says, “I would really love to publish in this particular journal because it has such a high impact factor.” Or maybe you’ve heard someone say, “I wish we could hire this person because they publish in good journals with high impact factors,” or the opposite: “We shouldn’t hire that person because they publish in low impact factor journals.”

 

So, let’s talk about what the Journal Impact Factor is. First, we will define it; then we will talk about its purpose, its uses, and its misuses; then how to find the Journal Impact Factor for specific journals; and lastly how it’s calculated.

 

The Journal Impact Factor is a metric that is released in the annual Journal Citation Reports by Clarivate Analytics. The calculation is based on the number of citations received in a given year by the articles that journal published in the previous two years. The Journal Impact Factor can change from year to year because it is recalculated annually and is based on past performance. Basically, the Journal Impact Factor is the journal’s average citation rate over the previous two years. The higher the Journal Impact Factor, the more prestigious that journal is considered to be, because it is having a greater impact (when impact is measured as citations).
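Written as a general formula, the average described above has this structure (the detailed counting rules for citations and “citable items” are Clarivate’s):

\[
\mathrm{JIF}_{Y} = \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]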

 

Uses and misuses of the Journal Impact Factor. The Journal Impact Factor was started in 1975. Its original purpose was to help university librarians decide which journals to subscribe to for their libraries, and it is still used for that purpose. But it is also used for other purposes. For example, as researchers we have to decide where to publish our articles, and the Journal Impact Factor can be one consideration when deciding where to submit our manuscripts.

But the Journal Impact Factor is also being misused. The first misuse is treating it as a single metric for evaluating the quality of an individual publication. You should not take that lazy approach and assume a paper is good just because it was published in a particular journal. You have to read the paper and evaluate it carefully to judge whether it is actually good.

 

A second way the Journal Impact Factor is misused is to evaluate individual researchers. You might look at someone’s track record and see that they publish mostly in low impact journals, or mostly in high impact journals. That fact alone cannot tell you whether that person is a good researcher; the Journal Impact Factor is not a good single metric for evaluating people. For evaluating individual researchers there is another metric, the h-index, that is much better suited.

 

These misuses of the Journal Impact Factor are widely known, and if you look on the internet you will see a lot of discussion about them. For example, a 2016 article in Science is titled “Hate journal impact factors? New study gives you one more reason.” A quote from that article: “scientists have come to use it (Journal Impact Factor) not only for deciding where to submit research papers, but for judging their peers, as well as influencing who wins jobs, tenure, and grants. All that from a single easy-to-read number. … And yet a journal’s impact factor is dismissed by many as useless or even destructive.” Deeper in the article is another quote: “The Journal Impact Factor is a reflection of the citation performance of a journal as a whole unit, not as an assembly of diverse published items.” In other words, you should not use the Journal Impact Factor to evaluate single articles; it only measures the journal itself.

Another article from Science is titled “The misused impact factor”. Clarivate Analytics itself, on its website, has a section called “Using the Journal Impact Factor Wisely”, which advises that the Journal Impact Factor should be used together with informed peer review, not by itself. And recently the European Research Council has banned applicants from citing the Journal Impact Factor in materials for bids (grant applications). Some organizations are taking this seriously, rather than lazily using the Journal Impact Factor as the ultimate number for making evaluations.

 

Let’s now discuss how you find the Journal Impact Factor for a journal. There are two main ways to do that.

The first way: most universities subscribe to Clarivate Analytics products, such as the Journal Citation Reports and Web of Science. If you are a student, staff member, or faculty member at a university, or if you otherwise have access to a university library, you can browse or download the Journal Citation Reports. It lists all of the journals Clarivate has evaluated, along with their impact factors. You can search it by journal name, sort it by Impact Factor, or browse by category to see rankings of journals within each category.

 

The second way to find the Journal Impact Factor is to go to the journal’s website. Most journals display their Journal Impact Factor prominently there. As a few examples, I readily found the Journal Impact Factor on the websites of the first three journals I checked: Journal of Experimental Biology has a 2020 Impact Factor of 3.312, the Journal of Plant Physiology has an Impact Factor of 3.549, and Geology has a 2020 Impact Factor of 5.399.

 

How is the Journal Impact Factor calculated? It is hard to get the raw data to do the calculation yourself, because it is proprietary information collected by Clarivate Analytics, but this is how they use that information. The Journal Impact Factor for 2020 is calculated (during 2021) as the number of citations in 2020 to items the journal published in 2019 and 2018, divided by the number of citable items the journal published in 2019 plus the citable items it published in 2018. Here are three hypothetical examples:
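The sketch below works through three made-up journals; the names and counts are hypothetical, not real Clarivate data, and are there only to show the arithmetic:

```python
# Hypothetical numbers only -- illustrating the 2020 Journal Impact Factor
# arithmetic, not real data from Clarivate's Journal Citation Reports.

def impact_factor_2020(citations_2020, citable_2019, citable_2018):
    """2020 JIF = citations in 2020 to items published in 2018 and 2019,
    divided by the number of citable items published in 2018 and 2019."""
    return citations_2020 / (citable_2019 + citable_2018)

examples = {
    "Journal A": (900, 150, 150),  # 900 citations to 300 items -> 3.00
    "Journal B": (500, 120, 130),  # 500 citations to 250 items -> 2.00
    "Journal C": (60, 40, 40),     # 60 citations to 80 items   -> 0.75
}

for name, (citations, items_2019, items_2018) in examples.items():
    jif = impact_factor_2020(citations, items_2019, items_2018)
    print(f"{name}: 2020 Impact Factor = {jif:.2f}")
```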

This way of calculating the Journal Impact Factor is the most basic way to calculate a mean, or average. It does not tell you anything about the distribution of the citations. And this point about the distribution of citations across articles has not escaped the notice of researchers. An article in Nature in 2016, “The publishing elite turns against impact factor”, discusses the problem of distribution and the assumptions readers may make about the meaning of the average.

That article has a graphic showing the distribution of citations to articles in Nature, Science, and PLoS ONE. It is clear that a tiny number of papers had more than 100 citations, a tiny number had zero citations, and a lot of papers had citations in the range of 15 to 20. Many of the articles, approximately 75%, received fewer citations than the average. In PLoS ONE the distribution is tighter than in Science or Nature, but there are still many articles, approximately 72%, with fewer citations than the average. So citation distributions are skewed toward the low-citation side.

The takeaway from this graphic is that publishing an article in Science or Nature does not guarantee that it will get a lot of citations. A more extensive report was published on bioRxiv, and again, the distributions are skewed toward lower citation rates. So keep that in mind when you are thinking about the Journal Impact Factor.
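As an illustration of why the distribution matters, here is a minimal sketch with invented citation counts (not the data behind the Nature graphic) showing how a handful of highly cited papers can pull the journal-level average above what most papers actually achieve:

```python
# Invented citation counts for one hypothetical journal -- not real data.
# A few highly cited papers pull the mean (the impact-factor-style average)
# well above what most papers achieve.

citations = [0, 0, 1, 2, 2, 3, 3, 4, 5, 6, 8, 10, 15, 60, 180]

mean = sum(citations) / len(citations)
below_mean = sum(1 for c in citations if c < mean)

print(f"Mean citations per paper: {mean:.1f}")            # ~19.9
print(f"Papers cited less than the mean: {below_mean}/{len(citations)} "
      f"({100 * below_mean / len(citations):.0f}%)")       # 13/15 (~87%)
```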

 

In summary, the Journal Impact Factor is an important metric, but it is just a single metric. Don’t use it alone to evaluate individual journal articles or individual researchers. You need to evaluate these things based on multiple factors.

 

Do you have any thoughts or questions about Journal Impact Factor? Leave a comment below! And check out our YouTube video!
