As an environmental scientist with over 15 years of experience and more than 150 peer-reviewed publications, I am familiar with the ups and downs of academic publishing. But there was something distinctly odd about the rejection decision that I received from a prominent international journal last month.
After an initial major revision decision, we had carefully addressed each of the reviewers’ concerns and submitted a thoroughly revised manuscript. The first-round comments were reasonable, and we responded in detail to further improve the clarity and scientific rigour of the work. Yet our paper was ultimately rejected, primarily because of one reviewer’s unexpectedly negative second-round report.
What troubled me was not just the tone, but the nature of the critique. The reviewer introduced entirely new concerns that had not been raised previously. Moreover, the comments were formulaic, vague, often irrelevant and occasionally inaccurate, with little engagement with the actual content of our manuscript. Remarks such as “more needed” and “needs to be validated” lacked any technical rationale or data-based feedback.
Our study is in the field of environmental chemistry, focused on the field application of a novel environmental analysis method. However, the reviewer criticised it for failing to provide a “comprehensive ecological assessment” and for “not examining the effects on animal behaviours such as feeding or mating” – as if it were a behavioural ecology paper. The reviewer also claimed that “repeatability of chemical analysis isn’t fully explained” even though this was addressed in multiple sections.
The review even contradicted itself. It began by acknowledging that “the authors replied to the questions raised”, but then concluded, without coherent reasoning, that “I cannot recommend this work.”
At that moment, I began to suspect that the review had been written, at least in part, by an AI tool such as ChatGPT. As an associate editor for an environmental science journal myself, I am seeing an increasing number of reviews that appear to be written by AI – though this is rarely disclosed upfront. They often sound superficially articulate, but they lack depth, context and a sense of professional accountability.
Specifically, in my experience, AI-generated reviews often suffer from five key weaknesses. They rely on vague, overly general language. They misrepresent the paper’s scope through abstract criticisms. They flag issues that have already been addressed. They exhibit inconsistent or contradictory logic. And they lack the tone, empathy, or nuance of a thoughtful human reviewer.
To confirm my suspicions, I compared the reviewer’s comments to a sample review that I generated with a large language model. The similarity was striking. The phrasing, once again, was templated and disengaged from the actual content of our manuscript. And, once again, the review contained keyword-driven summaries, baseless assertions and flawed reasoning. It felt less like a thoughtful peer review and more like the automated response that it was.
As an editor, I also know how difficult it can be to recruit qualified reviewers. Many experts are overburdened, and the temptation to use AI tools to speed up the process is growing. But superficial logic is no substitute for scientific judgement. So I raised my concerns with the editor-in-chief of the journal, providing detailed rebuttals and supporting evidence.
The editor replied courteously but cautiously: “It is highly unlikely the reviewer used AI,” they said. “If you can address all concerns, I recommend resubmitting as a new manuscript.” After three months of effort invested in revision and response, we were back at the starting line.
The decision – and the possibility that it was influenced by inappropriate use of AI – left me deeply disappointed. Some might dismiss it as bad luck, but science should not depend on luck. Peer review must be grounded in fairness, transparency and expertise.
This is not a call to ban AI from the peer review process entirely. These tools can assist reviewers and editors by identifying inconsistencies, spotting plagiarism or improving presentation. However, using them to produce entire peer reviews risks undermining the very purpose of the process. Their use must be transparent and strictly secondary.
Reviewers should not rely uncritically on AI-generated text, and editors must learn to recognise reviews that lack substance or coherence. Publishers, too, have a responsibility to develop mechanisms for detecting AI-generated content and to establish clear disclosure policies. Nature’s announcement on 16 June that it will begin publishing all peer review comments and author responses alongside accepted papers represents one potential path forward for publishers to restore transparency and accountability.
If peer review becomes devalued by undisclosed and substandard automation, we risk losing the trust and rigour that scientific credibility depends on. Science and publishing must move forward with technology, but not without responsibility. Transparent, human-centred peer review remains essential.
Seongjin Hong is a full professor at Chungnam National University, South Korea.