
Journalology

Journalology #70: REF 2029



Hello fellow journalologists,

This week the UK Research Excellence Framework (REF) published its proposed open access policy for REF 2029; there was uproar about how the proposal could affect academics working in the humanities who publish books, but very little noise about the effects the OA proposal could have on UK researchers who publish in journals.

But before we get to that story, here’s a message from Cassyni, which is kindly supporting the newsletter over the next 4 weeks.

Thank you to our sponsor, Cassyni

What’s the secret to sustainable submission growth in 2024? Thriving journal communities.

Learn more about why publishers are using Cassyni seminars to build journal communities, delight authors and grow submissions.

News

Research Excellence Framework 2029 open access consultation

UK Research Excellence Framework (announcement)

JB: The consultation runs until 17 June, 2024, with the final REF 2029 OA Policy expected “in the summer or autumn of 2024”.

This proposal, if implemented, could significantly impact UK researchers, especially those working in certain subject areas.

The embargo period of 6 (or 12) months is helpful, but a CC BY or CC BY-ND license is not offered by many publishers (for either the version of record or the author-accepted manuscript).

For example, many leading US clinical journals do not offer a publishing option that’s compatible with this proposal. This means that clinical researchers based in the UK may not be able to publish in leading US journals, if they want their work to be considered as part of their institution’s REF submission (REF provides £2 billion in public funding to UK universities each year).

The major US funding agencies have, so far, not mandated a CC BY license in their response to the Nelson OSTP memo. It seems unlikely that US journals will change their licensing options just to accommodate UK researchers, although they may be willing to ‘bend the rules’ on an ad hoc basis.

It’s unclear what this might mean for international research collaborations. Presumably membership organisations, such as the Academy of Medical Sciences, will be making their thoughts on this proposal known. I suspect the nuance of the REF proposal will be lost on most academics, but their voices need to be heard too.


The impact of Plan S: a discussion on findings so far

Following a tender process, cOAlition S has selected scidecode science consulting to conduct a study assessing the impact of Plan S on the global scholarly communication ecosystem. Five months after the launch of this project, scidecode science consulting will be presenting the early results of this work to the OASPA network during a webinar on Tuesday, 9 April, 2024, from 3.00 pm to 4.15 pm BST.
The webinar may be particularly interesting to the publishing community and will include a presentation from the Royal Society of Chemistry as an example roadmap for a full journal portfolio flip to a fully Open Access business model.

Plan S (announcement)

JB: I’ll add this webinar to the master list, but the topic is an important one for editors and publishers and I wanted to flag it here.


PLOS ONE to correct 1,000 papers, add author proof step

The megajournal PLOS ONE will be correcting about 1,000 papers over the next few months, Retraction Watch has learned, and will add an author proof step – a first for the journal.
The corrections are for “errors in author names, affiliations, titles and references; to make minor updates to the acknowledgements, funding statements, and data availability statements, among other minor issues,” PLOS ONE head of communications David Knutson told us.

Retraction Watch (Ivan Oransky)


Open Access Charges – Continued Consolidation and Increases

It seems that the world of open access pricing is not immune to the effects of inflation. We have seen price increases most years, but going into 2024 they are noticeably higher than in previous years.
While outlying price changes are important general indicators, they do not necessarily shift the market on their own. Changes in the spread and emphasis of popular price bands play key roles too. The increasing proportion of higher-priced journals serves to increase the average prices actually paid.

Delta Think (Dan Pollock and Heather Staines)

JB: 20,000 titles are included in this survey, so the sample size is significant. The take-home message is:

Fully OA list prices across our sample have risen by around 9.5% compared with those set a year ago. Hybrid list prices have risen by an average of 4.2% over the same period.

STM welcomes landmark EU AI Act vote

Today, the European Parliament made a significant stride in the governance of artificial intelligence with the final approval of the Artificial Intelligence Act, a world-first piece of legislation intended to regulate the development and use of AI and to set an example for responsible AI governance.
STM welcomes the vote and endorses the Act’s positioning on the necessity to develop responsible and sustainable AI use, protecting the interests of rightsholders and creators alike.
Together with European rightsholders and creators, we have prepared a formal statement detailing this support, available here.

STM Association (announcement)


MDPI Joins the STM Integrity Hub

We are pleased to announce that MDPI is now part of the STM Integrity Hub. This affiliation underscores our steadfast dedication to STM initiatives aimed at safeguarding the integrity of science. MDPI has long been a supporter and partner within the STM community, with involvement ranging from sponsoring and attending events to helping organize event programs.
MDPI fully aligns with the values of the STM Integrity Hub, emphasizing the sharing of data and experiences. We strongly advocate for collaboration and open exchange to establish a comprehensive approach in supporting research integrity at MDPI and throughout the entire academic publishing industry.

MDPI (announcement)

JB: MDPI is the 35th organisation to join the STM Integrity Hub.


Supercharge your PDF reading: Follow references, skim outline, jump to figures

Today, we are launching the Google Scholar PDF Reader to enhance your paper reading. It brings the familiar ease and seamlessness of Scholar to reading PDF papers. In-text citations are now links – with one click, you will see a preview of the cited article and often a version you can read. All of this without losing your place in the paper.
Scholar PDF Reader displays an automatically computed table of contents. Want to go first to the methods section? Click on its link in the outline. Want to drill down to a specific subsection? Expand sections to quickly find your way there.

Google Scholar Blog (Sam Yuan et al)

JB: This is a Google Chrome extension that works with a PDF on any website. The reviews are good so far.


21 De Gruyter journals to be published open access under Subscribe to Open (S2O) model in 2024

De Gruyter is pleased to confirm that 21 De Gruyter journals are publishing this year's volumes in open access via Subscribe to Open.
The subscriber threshold for 16 existing Subscribe to Open journals has been reached. Five new journals are being transformed via Subscribe to Open in 2024, in line with the plan to transform about 80 per cent of De Gruyter’s journal portfolio into open access via the Subscribe to Open model by 2028.

De Gruyter (press release)

JB: The news here is that the S2O experiment continues for De Gruyter. The big question in my mind is what this means for Brill. (De Gruyter recently acquired Brill.)

One of the highlights of the APE (Academic Publishing in Europe) meeting earlier this year was the discussion about S2O, which, unlike many open-access discussions, was measured and reasonable.


Collaboratively seeking better solutions for monitoring Open Science

Research by PLOS and Research Consulting has found there is a growing need for Open Science Indicators (Open Science monitoring solutions) among some funders and institutions but implementation of monitoring solutions may be limited unless Open Science practices are a strategic priority for organisations. Research data sharing, and code and software sharing, are among the most important Open Science practices to monitor but organisations need information that is compatible with their own structure and nomenclature to be usable, which is not available currently. In the future Open Science Indicators need to monitor not just prevalence but also the effects or qualities of Open Science practices.

The Official PLOS Blog (Iain Hrynaszkiewicz and Chris Heid)


Here’s Proof You Can Train an AI Model Without Slurping Copyrighted Content

Two announcements Wednesday offer evidence that large language models can in fact be trained without the permissionless use of copyrighted materials.
A group of researchers backed by the French government have released what is thought to be the largest AI training dataset composed entirely of text that is in the public domain. And the nonprofit Fairly Trained announced that it has awarded its first certification for a large language model built without copyright infringement, showing that technology like that behind ChatGPT can be built in a different way to the AI industry’s contentious norm.

WIRED (Kate Knibbs)


Is AI ready to mass-produce lay summaries of research articles?

Andy Shepherd, scientific director at global technology company Envision Pharma Group in Horsham, UK, has in his spare time compared the performances of several AI tools to see how often they introduce blunders. He used eight text generators, including general ones and some that had been optimized to produce lay summaries. He then asked people with different backgrounds, such as health-care professionals and the public, to assess how clear, readable and useful lay summaries were for two papers.
“All of the platforms produced something that was coherent and read like a reasonable study, but a few of them introduced errors, and two of them actively reversed the conclusion of the paper,” he says. It’s easy for AI tools to make this mistake by, for instance, omitting the word ‘not’ in a sentence, he explains. Ratcliff cautions that AI summaries should be viewed as a tool’s “best guess” of what a paper is about, stressing that it can’t check facts.

Nature Index (Kamal Nahas)


Short article format comes to Cell

Our flagship journal Cell, known for its rich and comprehensive long narrative article format, now offers one more format option to publish exciting research: the short article format. In a March 14 editorial entitled “Exciting science in all formats,” the journal team explains the thinking behind the decision to offer a formal 4000-word, 4-figure option to prospective authors. The first article in this format was published in the same March 14 issue.
The change comes at a pivotal moment for Cell, as the journal celebrates its 50th anniversary in 2024. In addition to publishing a series of special issues this year reflecting on the past and looking ahead in fields new and established, the editorial team sees this anniversary as an opportunity to consider how the journal can better serve the scientific community over the next 50 years.

Cell Press (announcement)

JB: Now THIS is what I call news!


Other news stories

Call for Panel Session Proposals for the 2024 OASPA Conference (OASPA)

ASM Joins UN Sustainable Development Goals Publishers Compact (The American Society for Microbiology)

At London Book Fair, Publishers Urge Permission for AI Training (CCC)

Open Access as a Means to Equity: Progress, Challenges, and the Continued Role for the BOAI (SPARC)

Theme for Open Access Week 2024 Continues Call to Put “Community over Commercialization” (SPARC)

ACM partners with Morressier to bolster research integrity in conference proceedings (Morressier)

Papermills prefer Open Access (Adam Day on Medium)

Progress report 2023 (Think. Check. Submit.)

JAMA Network names new editor in chief of JAMA Network Open (JAMA Network)

AI image generators often give racist and sexist results: can they be fixed? (Nature)

Ending profiteering from publicly-funded research (The Australia Institute) JB: Times Higher Education covered this story here: Reserve grants for open access researchers, says report

Gender, Work & Organization: mass walkout from top journal (Times Higher Education) JB: I linked to the resignation letter in last week’s newsletter.

Thank you to our sponsor, Morgan Healey

Global Executive Search Specialists in STM/Scholarly Publishing, Open Research & Digital Content.

Opinion

Publishing Fast or Slow: How Speed Varies for Similar Journals

The emergence of MDPI and Frontiers (and the soon-to-be-defunct Hindawi) has meant that mainstream publishers now care about speed more than ever before. At a minimum, they care enough to keep track of their performance and that of their competitors. The savviest publishers have dedicated speed programs or analytics teams that monitor performance and support their publishing and editorial teams to perform in a timely manner.

The Scholarly Kitchen (Christos Petrou)

JB: This is the comment that I left underneath Christos’ article on The Scholarly Kitchen website:

Transparency is always a good thing, although there are some significant challenges in interpreting turnaround time (TAT) data across publishers.
I’d be interested to hear your thoughts on what the unintended consequences of this initiative might be. Is it possible, perhaps even likely, that some publishers would game the system by rejecting papers after review, while strongly encouraging a brand new submission that addresses the referees’ comments? I suspect that this already happens for some papers that need “major revisions”.
To make the data truly useful the TATs need to be accompanied by information on the number of rounds of review. Taking 180 days to publish a paper after one round of review is a very different proposition from reviewing the paper three times in that period. This information is generally not available publicly, but it should be. Transparent peer review would help in that regard.
As much as I would like to see more granular data on TATs across publishers (e.g. submission to first decision, first decision to final decision, acceptance to publication etc), I can’t see how this would work in practice. For example, I had an email conversation with a publisher recently where we came up with three different definitions of “submission to first decision”. Getting publishers to agree on a common terminology would be very hard; getting them to consistently apply those definitions would be nigh on impossible.

To be clear, I’m supportive of this experiment. Researchers should be able to see how long a journal is likely to take to process their paper. However, there are certainly some data interpretation challenges that need to be considered. You can read Christos’ reply on the TSK website.


Techniques for supercharging academic writing with generative AI

Many researchers wish that ‘letting the data speak for itself’ — rather than wrestling with each word, sentence and paragraph — would suffice. In this Comment, I have outlined a collaborative framework, techniques and caveats for integrating generative LLMs into academic writing. The framework highlights the versatility of LLMs in stimulating ideas and in assisting with language tasks throughout the writing process, and the prompting examples in Box 1 show how to effectively apply different levels of AI assistance, from basic editing to higher-order content generation and critical feedback. The use of LLMs in scientific writing promises to ease communication bottlenecks and thereby to accelerate scientific progress.

Nature Biomedical Engineering (Zhicheng Lin)

JB: This is one of the most comprehensive articles I’ve read about how academics can use AI to improve their writing.


The Latest "Crisis" - Is the Research Literature Overrun with ChatGPT- and LLM-generated Articles?

Retraction Watch has a larger list of 77 items (as of this writing), using a more comprehensive set of criteria to spot problematic, likely AI-generated text, which includes journal articles from Elsevier, Springer Nature, MDPI, PLOS, Frontiers, Wiley, IEEE, and Sage. Again, this list needs further sorting, as it also includes some five book chapters, eleven preprints, and at least sixteen conference proceedings pieces. Removing these 32 items from the list suggests a failure rate of 0.00056%.

The Scholarly Kitchen (David Crotty)


On the tragic fate of PeerJ

None of this is true. We know it’s not true. Pete and Jason know it’s not true. We know they know it’s not true. They know that we know they know. Why even insult us with this nonsense?
I suppose it’s part of the contract they signed with their new bosses, that they have to make public statements about how excited they are. But, seriously, who is buying this?

Sauropod Vertebra Picture of the Week (Mike Taylor)

JB: Er, Taylor & Francis is buying (has bought) this. PeerJ was always a commercial enterprise owned by shareholders, who have every right to sell their stake in the business and move on. PeerJ pitched itself as a community journal, presumably at least in part because it thought it was a good commercial strategy.


Lack of experimentation has stalled the debate on open peer review

The scarcity of evidence surrounding open peer review practices stems from its limited implementation. Our review sought to gauge the extent of adoption, revealing a gradual uptake of various elements of open peer review, notably open identities and open reports, albeit with variations across disciplines. We found open peer review remains far from common. Only a small fraction of journals, between one and five percent, have adopted these practices, although this includes several prominent outlets.

Impact of Social Sciences (Tony Ross-Hellauer and Serge P.J.M. Horbach)

JB: You can read their PLOS Biology paper, published in October last year, here: Open peer review urgently requires evidence: A call to action. Researchers need access to data sets. Editors and publishers need to carefully consider the need to maintain confidentiality, but on the other hand without access to data researchers can’t test hypotheses. Shouldn’t we, as an industry, be doing a better job of opening up our peer review datasets for scrutiny?


Understanding the provenance and quality of methods is essential for responsible reuse of FAIR data

FAIR data evaluations typically focus on the question: “Can I reuse these data?” We argue that it is time to also ask, “Should I re-use these data?” and “How should I reuse these data responsibly?”. These questions allocate responsibilities between the data depositor and the prospective data user. This shift should include several elements.

Nature Medicine (Tracey L. Weissgerber et al)

JB: There’s a great deal of wisdom contained in this article.


Other opinion articles

Desirable Characteristics of Persistent Identifiers (Upstream)

Open Access as a Means to Equity: Progress, Challenges, and the Continued Role for the BOAI (SPARC)

Bonfire Launches Open Science Network for Academics and Researchers (We Distribute)

How Can We Solve the Challenges Faced by Authors from the Global South? (Original Thoughts Blog)

Transitional agreements may not be the whole route to open access (Wonkhe)

Happy anniversary, MIT faculty open access policy (MIT Libraries)

Only 25% of publishers use ALT text on social media (The International Bunch)

PEER REVIEWING FOR CASH: Are Paid Error Spotters the Future of Scientific Accountability? (SocialWebBranding)

A Roadmap for Developing a US National PID Strategy (The Scholarly Kitchen)

Publisher Day 2024: The road ahead for scholarly publishing (Digital Science) JB: This was an enjoyable conference. Thanks again for the invitation to take part in a panel discussion, Digital Science.

Correcting the record: retracting papermill articles (UKSG)


Webinars

Here are the webinars related to scholarly publishing that are being held this week.

GetFTR: General Update and Information for Librarians
March 25 (STM)

AI and Beyond: A vision for the future of publishing technology
March 26 (ALPSP; members only)

Scopus AI: How curated, enriched and connected data enhance research insights Webinar
March 26 (Elsevier)

Delta Think APC Update Webinar
March 27 (Delta Think)

CHORUS Forum: 12 Best Practices for Research Data Sharing
March 27 (CHORUS)

How to Be a Peer Reviewer
March 28 (Sage)

Promoting Research Visibility in the Digital Age
March 28 (Council of Science Editors)

Journal Club

Controlled experiment finds no detectable citation bump from Twitter promotion

Thus, by the broader measure of alternative metrics, tweeting by Twitter-influential scientists raised the profile of the tweeted articles compared to the controls. In other words, more people (including scientists and non-scientists) became aware of, downloaded, and possibly even read these papers than would have otherwise.
However, tweeting did not result in significantly higher citation counts—one indicator of the scholarly impact of a scientific paper—within three years. Three years is generally sufficient for citations of articles to approach asymptotic annual values. Although citations for tweeted articles were 7% higher in Web of Science, and 12% higher in Google Scholar, these differences were not statistically significant whether based on raw counts or after normalizing to ensure that all journals counted equally.

PLOS ONE (Trevor A. Branch et al)

JB: This intuitively makes sense to me. More awareness about a paper seems unlikely to drive citations, unless, perhaps, it’s a review article.


The challenges of open data sharing for qualitative researchers

We have argued above that rigid requirements for qualitative researchers to make full data sets publicly available are inappropriate, but that there are some convincing reasons to consider whether qualitative data could be made available for secondary analysis (given appropriate consent and vetting procedures).
We suggest that researchers, journals and funders can improve the ways in which qualitative data is accessed and used.

Journal of Health Psychology (Danielle Lamb et al.)


And finally...

I enjoyed reading this article in The Conversation, written by qualitative researchers based in New Zealand. In their words:

… we are qualitative researchers who are interested in the messy, emotional, lived experience of people’s perspectives on dating. We were drawn to the thrills and disappointments participants originally pointed to with online dating, the frustrations and challenges of trying to use dating apps, as well as the opportunities they might create for intimacy during a time of lockdowns and evolving health mandates.

During the pandemic they asked participants to “develop stories in response to hypothetical scenarios”:

Participants described characters navigating the challenges of “Zoom dates” and clashing over vaccination statuses or wearing masks. Others wrote passionate love stories with eyebrow-raising details. Some even broke the fourth wall and wrote directly to us, complaining about the mandatory word length of their stories or the quality of our prompts.

More recently, this work has encountered some new challenges, however:

But in the latest round of our study in late 2023, something had clearly changed across the 60 stories we received.
This time many of the stories felt “off”. Word choices were quite stilted or overly formal. And each story was quite moralistic in terms of what one “should” do in a situation.
Using AI detection tools, such as ZeroGPT, we concluded participants – or even bots – were using AI to generate story answers for them, possibly to receive the gift voucher for minimal effort.

In other words, the study participants were using ChatGPT to construct their answers to the researchers’ questions, rather than writing their own responses. This might be something to bear in mind the next time you do some market research.

Until next time,

James



Journalology

James Butcher

The Journalology newsletter helps editors and publishing professionals keep up to date with scholarly publishing, and guides them on how to build influential scholarly journals.
