My paper was probably reviewed by AI – and that’s a serious problem

Our paper was rejected on the basis of reviewer comments that were vague, formulaic, often irrelevant and occasionally inaccurate, says Seongjin Hong
June 24, 2025
[Image: a robot reading, illustrating AI peer review. Source: PhonlamaiPhoto/iStock]

As an environmental scientist with over 15 years of experience and more than 150 peer-reviewed publications, I am familiar with the ups and downs of academic publishing. But there was something distinctly odd about the rejection decision that I received from a prominent international journal last month.

After an initial major revision decision, we had carefully addressed each of the reviewers’ concerns and submitted a thoroughly revised manuscript. The first-round comments were reasonable, and we responded in detail to further improve the clarity and scientific rigour of the work. Yet our paper was ultimately rejected, primarily because of one reviewer’s unexpectedly negative second-round report.

What troubled me was not just the tone, but the nature of the critique. The reviewer introduced entirely new concerns that had not been previously raised. Moreover, the comments were formulaic, vague, often irrelevant and occasionally inaccurate, with little engagement with the actual content of our manuscript. Remarks such as “more needed” and “needs to be validated” lacked technical rationale or data-based feedback.

Our study is in the field of environmental chemistry, focused on the field application of a novel environmental analysis method. However, the reviewer criticised it for failing to provide a “comprehensive ecological assessment” and for “not examining the effects on animal behaviours such as feeding or mating” – as if it were a behavioural ecology paper. The reviewer also claimed that “repeatability of chemical analysis isn’t fully explained” even though this was addressed in multiple sections.

Moreover, the review even contradicted itself. It began by acknowledging that “the authors replied to the questions raised”, but then concluded, without coherent reasoning, that “I cannot recommend this work.”

At that moment, I began to suspect that the review had been written, at least in part, by an AI tool such as ChatGPT. As an associate editor for an environmental science journal myself, I am seeing an increasing number of reviews that appear to be written by AI – though this is rarely disclosed upfront. They often sound superficially articulate, but they lack depth, context and a sense of professional accountability.

Specifically, in my experience, AI-generated reviews often suffer from five key weaknesses. They rely on vague, overly general language. They misrepresent the paper’s scope through abstract criticisms. They flag issues that have already been addressed. They exhibit inconsistent or contradictory logic. And they lack the tone, empathy, or nuance of a thoughtful human reviewer.

To confirm my suspicions, I compared the reviewer’s comments to a sample review that I generated with a large language model. The similarity was striking. The phrasing, once again, was templated and disengaged from the actual content of our manuscript. And, once again, the review contained keyword-driven summaries, baseless assertions and flawed reasoning. It felt less like a thoughtful peer review and more like the automated response that it was.

As an editor, I also know how difficult it can be to recruit qualified reviewers. Many experts are overburdened, and the temptation to use AI tools to speed up the process is growing. But superficial logic is no substitute for scientific judgement. So I raised my concerns with the editor-in-chief of the journal, providing detailed rebuttals and supporting evidence.

The editor replied courteously but cautiously: “It is highly unlikely the reviewer used AI,” they said. “If you can address all concerns, I recommend resubmitting as a new manuscript.” After three months of effort invested in revision and response, we were back at the starting line.

The decision – and the possibility that it was influenced by inappropriate use of AI – left me deeply disappointed. Some might dismiss it as bad luck, but science should not depend on luck. Peer review must be grounded in fairness, transparency and expertise.

This is not a call to ban AI from the peer review process entirely. These tools can assist reviewers and editors by identifying inconsistencies, spotting plagiarism or improving presentation. However, using them to produce entire peer reviews risks undermining the very purpose of the process. Their use must be transparent and strictly secondary.

Reviewers should not rely uncritically on AI-generated text, and editors must learn to recognise reviews that lack substance or coherence. Publishers, too, have a responsibility to develop mechanisms for detecting AI-generated content and to establish clear disclosure policies. Nature’s announcement on 16 June that it will begin publishing all peer review comments and author responses alongside accepted papers represents one potential path forward for publishers to restore transparency and accountability.

If peer review becomes devalued by undisclosed and substandard automation, we risk losing the trust and rigour that scientific credibility depends on. Science and publishing must move forward with technology, but not without responsibility. Transparent, human-centred peer review remains essential.

Seongjin Hong is a full professor at Chungnam National University, South Korea.

Reader's comments (5)
You need evidence to make such claims. Any experienced academic has had manuscripts rejected based on much less than you describe. We can't blame undefined "AI" for everything!
Further: is it "his" paper or "our" paper? "Associate editor" or "editor"? Was this written by AI and not fact-checked?
AI has advanced significantly, but at least for now, it still falls short compared to human reviewers. Reviewers and editors must take greater responsibility and should not accept AI-generated feedback uncritically.
Thank you for this insightful piece. It’s a timely reminder of the importance of recognizing both the role and limits of AI in scientific publishing.
I believe that more voices need to speak out about both the light and the shadow sides of this emerging trend. While AI has undoubtedly brought us many advantages, we must not overlook the potential harms and unintended consequences it can also bring. Some may question this article by asking, “Is there concrete proof that the review was generated by AI?” Of course, evidence based on facts is important, but I also believe that insights gained through years of experience are equally valuable and should not be dismissed. There is a reason we call such individuals veterans in their field. Thank you for this thoughtful piece. It reminded me of the importance of using AI tools with greater caution, transparency, and responsibility.