Journalology #34: Curious

Published 5 months ago

Hello fellow journalologists,

The 2022 impact factors were released this week; many of you will have been assessing how your journals performed against competitors. I don’t have access to the complete JCR dataset, so I can’t provide any useful analysis I’m afraid. Suffice it to say, it looks as though the Nature and Lancet journals are dominating the top 50 list again. I wrote about the power of brand this time last year in The Brief.

Talking of The Brief, the latest issue was sent to subscribers earlier this week. We covered the cOAlition S transformative journals announcement, the growth of academic research in China, and the creation of supercontinents of research.

I’ve spent some time this week reading Springer Nature’s 2022 annual report. This is only the second year that Springer Nature has published an annual report. Presumably BC Partners, the private equity firm that owns 47% of Springer Nature, will need to sell its stake in the not-too-distant future; the annual report is essentially a sales pitch to future investors, but it’s also a source of useful information for other scholarly publishers and their consultants.

As a reminder, Springer Nature is split into four divisions: Research (72% of revenues), Education (12%), Health (10%), and Professional (6%). Curiously, all of the journals are part of the Research division, bar one. At the end of last year Springer Nature acquired Cureus, which published 12,452 articles in 2022 according to Dimensions (up from 8,482 the previous year). For reasons that are not entirely clear, Cureus is part of the newly formed Health division, not part of the Research division.

A few months ago I wrote about peer review systems and author experience in The Brief. A peer review system is one of the most important pieces of technology for a publisher because it powers growth for OA journals. SNAPP is the Springer Nature Article Processing Platform. Page 18 of the report provides an overview of SNAPP, which notes:

To help authors find the best journal in our portfolio for their research, SNAPP includes a one-click transfer function, saving time.

Transfer cascades are incredibly important for publishers and a core part of any portfolio strategy. In a previous Journalology newsletter, I argued that revenue per submission is a KPI that publishers should be measuring.

By the end of 2022, 640 of Springer Nature’s 3,000 journals were using SNAPP, so there is still a long way to go in terms of migrating journals off other peer-review platforms (e.g. Editorial Manager and eJP).

Revenues for the Research division grew by 4% to €1,312 million, but the growth in publications was far more modest. Submissions grew by 6.6% from 1,436,320 (in 2021) to 1,530,608 (in 2022), but the number of published articles grew by only 1% from 406,495 (in 2021) to 410,769 (in 2022). 115,000 articles (28%) were published via the transfer cascade. MDPI and Frontiers are growing significantly faster, which is problematic for a company that’s looking to change owners.
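As a sanity check, the growth figures quoted above can be reproduced from the raw counts. A minimal Python sketch (the counts are the ones reported above; the helper function is mine):

```python
# Figures from Springer Nature's 2022 annual report, as quoted above.
submissions_2021, submissions_2022 = 1_436_320, 1_530_608
articles_2021, articles_2022 = 406_495, 410_769
transferred_2022 = 115_000  # articles published via the transfer cascade

def growth(old: float, new: float) -> float:
    """Year-on-year growth as a percentage."""
    return (new / old - 1) * 100

print(f"Submission growth: {growth(submissions_2021, submissions_2022):.1f}%")  # 6.6%
print(f"Article growth:    {growth(articles_2021, articles_2022):.1f}%")        # 1.1%
print(f"Transfer share:    {transferred_2022 / articles_2022:.0%}")             # 28%
```

The gap between the two growth rates is the point: submissions are rising far faster than published output, which is exactly why transfer cascades (and revenue per submission as a KPI) matter.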

Traditional publishers, such as Springer Nature, have one significant advantage over born-OA publishers and that’s their large institutional sales teams, which are being deployed to sell transformative agreements (TAs) to large consortia (and even countries). Unsurprisingly, TAs were a headline act in the Springer Nature annual report and also in a speech that Frank Vrancken Peeters, the Springer Nature CEO, gave at the 16th Berlin OA Conference earlier in the month. The accompanying blog post is worth reading.

Last year we published three times as many gold OA articles in our Springer hybrid titles via TAs as we did via our author choice options. In countries where we have a TA, up to 90% of articles we publish are now published OA. In Germany, open access articles have grown by a factor of almost nine and if you look at HSS articles, it’s over 15 - going from 120 articles to over 2000 in a single year! In fact, industry wide, the picture is the same. STM recently published data which showed that TAs accounted for over 200,000 of the over 1m gold OA articles last year (up from just under 18,000 in 2018). This means that a fifth of all OA articles are already being generated by TAs. Overall, gold OA articles now account for over 31% of all articles while subscription articles have fallen to 52% and the number of green OA articles stayed flat at only about 8%.

Last week I wrote about the usage of Springer Nature’s transformative journals and the significant contribution that the Nature journals make to those numbers. Page 31 of the Springer Nature annual report provides the average usage and citations per article for the portfolio as a whole. Based on the new cOAlition S data set, I suspect that the Nature journals are skewing the data and that those numbers would not look quite so rosy if the Nature journals were excluded.

As we enter July, the Frontiers 2022 annual report is yet to be published. The foreword to last year’s report was dated January 2022, so the report has been delayed for some reason. It will be interesting to see how Frontiers positions itself against MDPI and more established players, such as Springer Nature, when the 2022 annual report is published.


Unveiling the Journal Citation Reports 2023: Supporting research integrity with trusted tools and data

While the primary change in this year’s release is the extension of the JIF to more than 9,000 additional journals from more than 3,000 publishers, users will also notice a few changes to the user interface. First and foremost, the JIF will now display for journals in ESCI and AHCI. Where we previously displayed N/A, you will now see a value. Note that AHCI and ESCI journals will not be ranked or receive a quartile or percentile until 2024. This is reflected in the product with an N/A, as in previous years. Additionally, the JIF now displays only one decimal place instead of three. This means we will see an increase of rank position ties in many categories – i.e., multiple journals with the same JIF – which you can read about in more detail on our blog. This change encourages users to consider other indicators and descriptive data in the JCR when comparing journals.

Clarivate (Kate Heaney)

Sensus Impact: Visualizing Your Value

Silverchair and OUP are piloting a new community platform that delivers a standardized and centralized way to report the differential impact of funded research publications – across all publishers and venues. There are scenarios where it benefits everyone for publishers to come together and adopt community-level solutions to common needs, and this seems like one of those scenarios. Already, some of the biggest funders are starting to try to assemble this impact reporting on their own. However, many funders don’t have the resources to do this independently. Plus, different methods / approaches from each funder would result in an uneven and incomplete reporting environment – and could have knock-on effects that led to publishers having to create custom reporting / data submissions to 10s or 100s of funders.

Silverchair (press release)

Moving away from APCs: a multi-stakeholder working group convened by cOAlition S, Jisc and PLOS

cOAlition S, in partnership with Jisc and PLOS, are seeking to establish a multi-stakeholder working group to identify business models and arrangements that enable equitable participation in knowledge-sharing. The aims of this working group and the eligibility criteria that interested parties must meet in order to apply are outlined below. We anticipate that the group will consist of a maximum of twelve individuals and will represent the three key stakeholders – funders, institutions/library consortia and publishers – in roughly equal proportions. Once established, the working group is expected to convene up to six times. The key outcome from this collaborative effort will be the development of a model (or multiple models) that, if implemented, would enable equitable participation in knowledge sharing.

Plan S blog

JB: The moral of this story is “be careful what you wish for”. After all, PLOS and cOAlition S have done more than most organisations to push the APC economy. It was obvious right from the start that paying publishers per article would create a system in which publishers seek to publish as many articles as possible. The publish or perish culture within academia has exacerbated the situation. The PLOS blog is worth reading too.

Tracking Global Access: the move to OpenAlex and inclusion of 2022 data

The Open Access dashboard provides information on the OA status of research outputs by country and by institution. At the core of this is assigning research outputs to institutions and MAG and now OpenAlex are our core source for this. The top level message is that OpenAlex is offering a big jump forward in coverage and completeness, and we know the team there are working hard on making it even better. Currently we’ve seen a huge improvement in the tracking of open access outputs in our dashboard, with 14,477 institutions covered, up from 7,701 previously. The big good news story is the increase in the countries covered, with 221 now included, up from 189 previously. In particular this has seen a big increase in our coverage of African countries with an additional 16 countries now included, which is exciting given the inclusion of the COKI dashboard as a source on the AfricArXiv country pages.

COKI (Kathryn Napier, Cameron Neylon and Jamie Diprose)

A first look at Open Science Indicators for articles published in 2023

This week, PLOS shares the latest update to our Open Science Indicators (OSIs) dataset. We’ve added an additional three months of data, bringing the data current through the end of the first quarter of this year. The complete dataset now covers the period from January 1, 2019 to March 31, 2023. PLOS Open Science Indicators are a large public dataset—the latest version has data on 82,298 articles—that uses Natural Language Processing to identify and measure key Open Science practices in the published literature, including the entire PLOS corpus, plus a smaller comparator dataset drawn from PubMed Central. The current indicators include data sharing, code sharing, and preprint posting.

The Official PLOS Blog (Lauren Cadwallader, Lindsay Morton, and Iain Hrynaszkiewicz)

Announcing new resources from the FORCE11-COPE Research Data Publishing Ethics Working Group: flowcharts!

However, we recognized that an important aspect of normalizing best practices relates to the day-to-day handling of cases that the journals and data repositories may encounter, and that resources that would guide team members in this process would be particularly useful. With this goal, the Working Group set out to develop flowcharts for each of the four categories of issues outlined in the recommendations. Several group members worked collaboratively to produce flowcharts for each of the four categories outlined in the recommendations; for each of the categories, they created one flowchart for cases raised prior to the publication of the dataset, and another for the handling of issues involving published datasets. The group iterated on the drafts while carefully considering the format that would best guide step-by-step follow up on individual cases, while maintaining the consistency in guidance across the four categories. We are now pleased to share the eight finalized flowcharts to guide the handling of ethical concerns related to data publication.

Upstream (Iratxe Puebla)

Launch of New Open Access Toolkit to Empower Scholarly Publishers and Researchers

The launch of the Toolkit marks a significant milestone in the efforts of OASPA and DOAJ to promote transparency, accessibility, and inclusivity in scholarly publishing. The Toolkit answers a need for an online resource to support new and established open access journals in navigating the rapidly changing landscape of open access publishing. The Open Access Journals Toolkit design process began in November 2022 and ended in June 2023 with this launch.

OASPA press release

Fostering collaboration: a study of scientific publications with authors in G20 countries

In the last 30 years, some of the G20 countries saw enormous growth in their scientific capacity, especially the ones that belong to the Global South. In the period 1999 to 2022, the number of scientific publications with authors in India grew at a CAGR of 11.2% per year. For China the rate was 14.7% per year, for Saudi Arabia 16.0% per year, and for Indonesia 20.1% per year.

Elsevier (Carlos Henrique de Brito Cruz)
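For context, CAGR (compound annual growth rate) is the constant yearly rate that would take a quantity from its start value to its end value over the period. A minimal Python sketch; the publication counts below are hypothetical, for illustration only (the rates quoted above come from Elsevier’s own data):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate, as a percentage per year."""
    return ((end / start) ** (1 / years) - 1) * 100

# Hypothetical example: a country's annual output grows from
# 10,000 to 100,000 publications over the 23 years 1999-2022.
rate = cagr(10_000, 100_000, 23)
print(f"CAGR: {rate:.1f}% per year")  # CAGR: 10.5% per year
```

The compounding is the striking part: even a 10% annual rate implies roughly a tenfold increase over two decades, which is the scale of growth the Elsevier study reports for several Global South countries.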


The PLOS Union

Arguably the true measure of inclusion is not whether an organization is willing to give a voice, but whether it is willing to give power. As a democratic complement to managerial hierarchies, trade unions are the only legally empowered mechanism for employee participation in their workplaces. It is out of recognition of the power that unions give employees that unionization is often actively (and often illegally) resisted, even by apparently progressive corporations like Apple, Amazon, Tesla, and Starbucks. In the context of PLOS, which is neither a membership organization nor a for-profit corporation, its approach to the new union is therefore potentially an opportunity for the publisher to express its values.

The Scholarly Kitchen (Charles Whalley)

JB: Charles kindly gives a shout out to the Journalology newsletter in his article (its first ever citation, perhaps?). I wrote about the PLOS financial picture in Issue 27.

SSP Conference Debate: AI and the Integrity of Scholarly Publishing

Faced with Generative AI, each publisher has a choice to make. You can either invest heavily in ensuring that the work presented in your journals is real research that actually happened, or you can carry on as normal in the hope that the majority of work you publish is still real. But here’s a warning: journals that don’t want to certify their research as real will steadily become repositories of fabricated junk, fatally undermined by AI. Will that be all of us? Or just most of us? That’s up to you.

The Scholarly Kitchen (Tim Vines)

Journal club

Bibliometrics Methods in Detecting Citations to Questionable Journals

This paper intends to analyse whether journals that had been removed from the Directory of Open Access Journals (DOAJ) in 2018 due to suspected misconduct were cited within journals indexed in the Scopus database. Our analysis showed that Scopus contained over 15 thousand references to the removed journals identified. The majority of the publications citing these journals came from the area of Engineering. It is important to note that although we cannot assume that all the journals removed followed unethical practices, it is still essential that researchers are aware of the issues around citing journals that have been suspected of misconduct. We suggest that research libraries play a crucial role in training, advising and providing information to researchers about these ethical issues of publication malpractice and misconduct.

The Journal of Academic Librarianship (Barbara S. Lancho Barrantes, Sally Dalton and Deirdre Andre)

Enhancing Partnerships of Institutions and Journals to Address Concerns About Research Misconduct: Recommendations From a Working Group of Institutional Research Integrity Officers and Journal Editors and Publishers

The working group identified 3 key recommendations to be adopted and implemented to change the status quo for better collaboration between institutions and journals: (1) reconsideration and broadening of the interpretation by institutions of the need-to-know criteria in federal regulations (ie, confidential or sensitive information and data are not disclosed unless there is a need for an individual to know the facts to perform specific jobs or functions), (2) uncoupling the evaluation of the accuracy and validity of research data from the determination of culpability and intent of the individuals involved, and (3) initiating a widespread change for the policies of journals and publishers regarding the timing and appropriateness for contacting institutions, either before or concurrently under certain conditions, when contacting the authors.

JAMA Network Open (Susan Garfinkel et al)

Content and form of original research articles in general major medical journals

Substantial differences between journals were also observed for the mentioning of methods, the patient population, the geography, the interventional treatment, and the use of an abbreviation in the title. In addition, there were substantial differences in the use of a study name in the title. For example, while no article published in the NEJM used a study name, almost half (45%) of the studies in the Lancet used one. Some content criteria were mainly not or rarely used in all considered journals, such as a dash, mentioning of results, using a declarative title, or a question mark.

PLOS ONE (Nicole Heßler and Andreas Ziegler)

And finally…

My favourite article of the week was Daniel Hook’s essay on the problems he had generating an image of a single banana using AI. Daniel is CEO of Digital Science and somehow also manages to find the time to publish on theoretical physics and write excellent essays like this one.

I began to suspect that bananas, like quarks in the Standard Model of physics, might not naturally occur on their own as through some obscure binding principle they might only occur in pairs. I checked the kitchen. Experimental evidence suggested that bananas can definitely appear individually. Phew! But, the fact remained that I couldn’t get an individual banana as an output from the AI. So, what was going on?

You should read the essay in full, but here’s the take home message:

The danger here is that due to the convincing nature of our interactions with AIs, we begin to believe that they understand the world in the way that we do. They don’t. AIs, at their current level of development, don’t perceive objects in the way that we do – they understand commonly occurring patterns. Their reality is fundamentally different to ours – it is not born in the physical world but in a logical world. Certainly, as successive generations of AI develop, it is easy for us to have interactions with them that suggest that they do understand. Some of the results of textual analysis that I’ve done with ChatGPT definitely give the impression of understanding. And yet, without a sense of the physical world, an AI has a problem with the concept of a single banana.

Thank you for reading to the end. Please feel free to forward this email to colleagues if you think it would be helpful to them.

Until next time,



The Journalology newsletter helps editors and publishing professionals keep up to date with scholarly publishing, and guides them on how to build influential scholarly journals.

Read more from Journalology