Elsevier journal under fire over 'AI-generated' review comments

Researcher who waited nearly two years for rejection claims 'nonsensical' criticisms of his paper must have been chatbot-generated
July 11, 2025
[Image: storm damage to a transmission tower. Source: iStock/sakakawea7]

A UK academic has criticised the suspected use of chatbots in peer review after he was given lengthy instructions on improving his statistical analysis – despite not including any statistics in his rejected paper.

Keith Baker, a researcher on energy policy, submitted a review paper to the open-access journal Heliyon almost two years ago with the aim of highlighting how his proposals for a state-owned energy company in Scotland, suggested in 2014, had eventually been adopted in Wales.

The paper, co-authored by academics from several Scottish universities, set out to explain the "unexpected success story" of the proposals from the thinktank with which he is affiliated. The ideas share some similarities with current UK government plans to invest £8 billion in the state-owned power company GB Energy.

Almost a year after submitting the paper to Heliyon, Baker finally received a lengthy list of recommendations from reviewers, including 14 suggestions on how to improve the paper's statistical methods and reporting.

The authors were asked, for instance, to describe the algorithms used in the statistical analysis, show 95 per cent confidence intervals to guard against p-hacking and include "forest plots and funnel plots" to help visualise their results.

"The only statistics we mentioned in the paper were some figures from energy company accounts – the comments just didn't make sense," Baker told Times Higher Education.

"At best we were dealing with a reviewer who was incapable, but we strongly suspect this was an AI-generated review," he continued, suggesting that a human reviewer was unlikely to make dozens of detailed recommendations, most of which were "nonsensical" or unreasonable because they would require years of further work.

"Many of the comments urged us to improve the quality of English in the paper, which was frankly insulting given that the authors include a professional journalist and several journal editors," said Baker.

He said he decided to speak out after months of receiving automated responses to his attempts to contact the Shanghai-based section editor at Heliyon dealing with his paper.

After Baker and his co-authors responded robustly to the comments, the paper was rejected – again, nearly a year later.

The journal, which was founded by Cell Press in 2015 but is now owned by Elsevier, charges an article processing fee of $2,270 (£1,670), which the authors' institutions had agreed to cover because of the paper's relevance to UK domestic energy policy.

"We just wanted to get the damn thing out there," said Baker, explaining why he approached Heliyon, a mega-journal which, according to its website, "considers research from all areas of the physical, applied, life, social and medical sciences".

In October 2024, Clarivate's Web of Science put its indexation of new content at Heliyon on hold, apparently because of the quality of its articles, as was reported at the time. According to Elsevier, the Clarivate investigation is still ongoing.

"We'd tried Energy Policy [another journal] but they batted it back – which is their right – but we wanted the results out there because we have done a lot of work on this," said Baker.

"We've spent months trying to get in touch but the best we've had is an email from customer services which refuses to engage with the complaint that the review was AI-generated."

In a statement, Elsevier said it "will investigate and if needed determine how we can better prevent the use of AI going forward as well as training our editors to identify this during the peer review process".

"Reviewers are not permitted to upload any submitted papers into a large language model (LLM)," it added, referencing its policy prohibiting the use of LLMs in peer review.

jack.grove@timeshighereducation.com

Reader's comments (4)
In more than 50 years of scholarly publishing, I have received a number of "nonsensical" reviews, most of them long before AI. An author needs specific evidence, not allegations of "nonsensical". That is as old as peer reviewing.
It's sad after working so hard on a paper. I remember a reviewer asking how it was possible that online data was collected in Ghana. The ignorance of the reviewer.
A dozen years ago I received a review that was clearly not about our paper. This happened before the introduction of AI, and will happen after it.
AI is far more objective and fairer than subjective humans, who often know whose paper it is and block it due to professional jealousy. Certain Oxbridge journals are biased towards their own staff, who have a 1,000-fold chance of publishing in them due to editorial gatekeeping.