Frontiers | Science News

56 news posts tagged "Impact factor"

Impact analysis

03 Jul 2016

Quality and Impact Analysis: Frontiers in Immunology

Coming soon: 2017 analysis based on the most recent Journal Citation Reports by Clarivate Analytics (formerly published by Thomson Reuters). Frontiers in Immunology, launched in 2010, received its first official Impact Factor of 5.695 in 2016. In just five years, it became the largest and most cited open-access journal in Immunology, and the 7th most cited among all journals in Immunology. The Impact Factor (IF), defined as the number of citations received in a given year by articles published in the previous two years, divided by the number of citable articles published in those two years, is the most widely accepted metric of journal quality (though not of an individual paper or researcher). It was formally established by the Institute for Scientific Information (ISI) in 1975. Because the IF can be heavily skewed by a few highly cited papers, total citations generated over the same two-year period provide a more accurate indication of the overall influence of the articles a journal publishes in its field. Frontiers is a pioneer in the use of article-level and author-level metrics and encourages every author to use them to track the development of their readership at a more granular level. Analysis within the category of Immunology: there are 150 journals listed in the category of Immunology in […]
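The IF definition reduces to a one-line calculation. A minimal sketch, assuming hypothetical citation and article counts chosen only so that the ratio matches the 5.695 figure above (they are not Frontiers' actual numbers, and the function name is ours):

```python
def impact_factor(citations, citable_items):
    """Impact Factor for year Y: citations received in Y by articles
    published in Y-1 and Y-2, divided by the number of citable items
    published in Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical counts chosen to reproduce the 5.695 figure above:
print(impact_factor(1139, 200))  # -> 5.695
```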

Impact analysis

03 Jul 2016

Quality and Impact Analysis: Frontiers in Neurology

Coming soon: 2017 analysis based on the most recent Journal Citation Reports by Clarivate Analytics (formerly published by Thomson Reuters). Frontiers in Neurology, launched in 2010, received its first official Impact Factor of 3.184 in 2016. In just five years, it became the most cited and the 3rd largest open-access journal in Neurology. The Impact Factor (IF), defined as the number of citations received in a given year by articles published in the previous two years, divided by the number of citable articles published in those two years, is the most widely accepted metric of journal quality (though not of an individual paper or researcher). It was formally established by the Institute for Scientific Information (ISI) in 1975. Because the IF can be heavily skewed by a few highly cited papers, total citations generated over the same two-year period provide a more accurate indication of the overall influence of the articles a journal publishes in its field. Frontiers is a pioneer in the use of article-level and author-level metrics and encourages every author to use them to track the development of their readership at a more granular level. Analysis within the category of Neurology: there are 192 journals listed in the category of Neurology in the 2015 Journal Citation Reports (JCR) provided by Thomson Reuters in […]

Open science and peer review

04 Mar 2016

New Data Debunks Old Beliefs: Part 2

Our original "New data debunks old beliefs" blog post plotted the impact factors of 570 randomly selected journals indexed in the 2014 Journal Citation Reports (Thomson Reuters, 2015) against their publicly stated rejection rates. The goal was to understand the relationship between rejection rates and journal quality. Despite a widespread belief that high rejection rates secure high impact factors, no significant correlation was found. That study was preliminary, intended to start a discussion, because it suggested that such an entrenched belief may be wrong. This blog post goes one step further by removing what could be the main reason we found no correlation: varying citation rates across academic fields. It is well known that articles in some fields typically attract more citations than articles in others. Perhaps a correlation would become evident once we removed this variable. In Figures 1-7 below, we normalized the impact factors by field, effectively removing this variable from the results (data accessible here). We did this by calculating the ranking of each journal within its own Journal Citation Reports category (or field). As an example, a journal that has the 4th largest IF among 200 journals in its category will […]
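The normalization described above, ranking each journal within its own JCR category, can be sketched in a few lines. A minimal illustration, where the function name, the percentile convention, and the sample figures are ours, not the post's:

```python
def category_percentile(category_ifs):
    """Convert each journal's IF into a percentile rank within its own
    JCR category, so fields with different citation norms become comparable.
    Convention (ours): the top-ranked journal gets the highest percentile.
    category_ifs: list of (journal_name, impact_factor) pairs for one category."""
    ranked = sorted(category_ifs, key=lambda pair: pair[1], reverse=True)
    n = len(ranked)
    return {name: 100.0 * (n - rank) / n
            for rank, (name, _) in enumerate(ranked, start=1)}

# A journal with the 4th-largest IF among 200 lands in the top 2%:
demo = [(f"J{i}", 200.0 - i) for i in range(200)]   # descending IFs
print(category_percentile(demo)["J3"])  # -> 98.0
```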

Open science and peer review

24 Dec 2015

Article Processing Charges: Open Access could save global research

The total number of peer-reviewed research articles published each year increases by approximately 4% [Scopus]. In 2014, nearly 400,000 published research articles were Gold open-access papers. This represents approximately 20% of all research articles, and the number is growing at an astonishing rate of 20% per year (Lewis, 2013). If this rate continues, open-access papers will outnumber subscription papers within just a few years. This, and similar observations, have led some commentators to predict that traditional subscription journals will soon be a thing of the past (Lewis, 2012). But is this a credible prediction? Is open access capable of disrupting the entire scholarly publishing industry? Can it replace traditional publishing or force it to adopt new business models? The answers depend on whether open access satisfies two fundamental criteria for disruption: an increase in efficiency and a decrease in costs. The new generation of open-access publishers is "born digital", which is undoubtedly far more efficient, but how much will universities, institutes and scientists save by switching to open access? Brief history of the evolution of open access: from the 1950s to the 1990s, nearly all scientific papers were published in subscription journals that were paid for by individual readers or […]
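The growth figures above make the "few years" claim easy to check. A back-of-the-envelope projection, assuming (as the post does) a 20% open-access share, 20% annual OA growth, and 4% annual growth of total output; the compounding model itself is our simplification, not the post's:

```python
def years_until_oa_majority(oa_share=0.20, oa_growth=0.20, total_growth=0.04):
    """Compound the post's figures forward: OA output grows 20%/yr while
    total output grows 4%/yr, starting from a 20% OA share.
    Returns the number of years until OA passes 50% of all articles."""
    oa, total, years = oa_share, 1.0, 0
    while oa / total <= 0.5:
        oa *= 1.0 + oa_growth
        total *= 1.0 + total_growth
        years += 1
    return years

print(years_until_oa_majority())  # -> 7, i.e. "just a few years"
```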

Open science and peer review

22 Dec 2015

Born Digital: building the ultimate open-access publisher

By Pascal Rocha da Silva, Frontiers

The digital disruption of analog film started in 1975 with the invention of the digital camera by Steven Sasson and ended with the bankruptcy of Kodak in 2012, 37 years later. The digital disruption of publishing started in the late 1990s with the first online archiving of articles, but it is still far from complete, nearly 20 years into the transition. However, as over 30% of peer-reviewed papers are now published in some form of open access [1], the industry has technically crossed the tipping point for disruption. This is the point at which more than just the innovators and early adopters begin using a product or service. Figure 1: Projection of open access versus subscription articles, 2000-2021. Disruptions are driven by economic models that lower costs and process models that increase efficiency. In 2014, the revenue per subscription article was around $7,000 (calculated from $14 billion in revenue for about 2 million articles [2] – see our article on the cost of publishing), while the average Article Processing Charge (APC) for an open-access article was estimated in our sample at $2,700. This means that, as open-access articles grow to dominate the market, the cost of publishing will eventually drop 2-3 fold, saving libraries and research departments $5 to 10 […]
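The revenue arithmetic above can be reproduced directly. A sketch using the post's own round numbers, which are estimates rather than audited industry data:

```python
# The post's 2014 round numbers (estimates, not audited industry data):
subscription_revenue = 14_000_000_000   # ~$14 billion total subscription revenue
subscription_articles = 2_000_000       # ~2 million articles per year
avg_apc = 2_700                         # average APC in the post's sample

cost_per_subscription_article = subscription_revenue / subscription_articles

print(cost_per_subscription_article)                      # -> 7000.0
print(round(cost_per_subscription_article / avg_apc, 1))  # -> 2.6, a 2-3 fold drop
```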

Open science and peer review

21 Dec 2015

Selecting for impact: new data debunks old beliefs

One of the strongest beliefs in scholarly publishing is that journals seeking a high impact factor (IF) should be highly selective, accepting only papers predicted to be highly significant and novel, and hence likely to attract a large number of citations. The result is that so-called top journals reject as many as 90-95% of the manuscripts they receive, forcing the authors of these papers to resubmit to more "specialized", lower-impact-factor journals, where they may find a more receptive home. Unfortunately, most of the 20,000 or so journals in the scholarly publishing world follow their example. All of which raises the question: does the strategy work? The evidence suggests it doesn't. In Figure 1, we plotted the impact factors of 570 randomly selected journals indexed in the 2014 Journal Citation Reports (Thomson Reuters, 2015) against their publicly stated rejection rates. Figure 1: 570 journals with publicly stated rejection rates (for sources, see below; for the complete data, click here). Impact factors from the Thomson Reuters Journal Citation Reports (2014). (The y-axis is on a log scale.) As Figure 1 shows, there is no correlation between rejection rates and impact factor (r² = 0.0023; we assume the sample of 570 journals is sufficiently random to represent the […]
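The r² statistic quoted above is the coefficient of determination of a simple linear fit. A minimal re-implementation; the sample data below are made up for illustration and are not the 570-journal dataset:

```python
import math

def r_squared(xs, ys):
    """Coefficient of determination (r^2) for a simple linear fit of ys on xs;
    here xs would be rejection rates and ys the log10 impact factors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

# Made-up sample, not the 570-journal dataset:
rejection = [0.30, 0.55, 0.70, 0.90, 0.95]
log_if = [math.log10(v) for v in (2.1, 0.8, 4.0, 1.2, 3.5)]
print(round(r_squared(rejection, log_if), 4))
```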