
Have chatbots killed the student essay?

<网曝门 class="standfirst">This year’s marking season has confirmed for many academics that, less than three years since the launch of ChatGPT, AI use by students has become so rife that their submitted writing is no longer a reliable indicator of what they have learned. Three scholars offer their views on where to go from here
July 7, 2025
Image: a student’s brain unravelling into a brain on a computer screen, illustrating students outsourcing their thinking to AI. Source: Malte Mueller/Getty Images (edited)

‘Higher education is clearly at an inflection point’

May and June’s “eat-sleep-mark-repeat” routine is always gruelling for academics. This summer’s marking season has, however, been particularly tough. Because while some essays I’ve read have been genuinely outstanding, for the first time in my 15 years in academia, the majority submitted in some areas have clearly used generative AI.

At the University of Surrey, I convene modules both in politics and international relations and for our social science foundation year. Most of my work this year has been with students in their first year of university. Earlier in the academic year, we faced concerns after about 40 pieces of work (out of roughly 900) were pulled for academic misconduct, the large majority being for passing off obviously AI-generated work as the students’ own.

Our response was to work closely with students throughout the year, with a real focus on building critical skills and stressing the importance of developing students’ own voices as they engage in the fascinating arguments that suffuse our discipline. We also have a clear policy on AI use – but are running to catch up as the tools evolve.

That had an impact, but misuse of AI resurfaced at foundation level – those students who are determined to gain degrees in politics, sociology or, predominantly, law, but have joined the university with lower grades and are often less experienced in what it takes to thrive in a university-style learning environment. With the final round of assignments, I spent the best part of a fortnight mining their work for insight and originality and all too often was met with banality.

To be fair, it was not the case across the board, but more than half of my foundation students had used AI to provide sources, offer up case studies and, in many cases, do the writing as well. Some had used it effectively as a supporting research tool, but for too many it was the silver bullet to do all the work – and around 25 per cent of students ended up firing blanks, earning very low marks.

Students who did well mainly did so by picking up on all the cues we’d fed them through the year, demonstrating an ability to create a clear and well-evidenced case. In a paper on law and the media, strong essays might spend time identifying and commenting on theories behind why the media acted as it did and analytically chewing over its impact. When AI was detectable, however, I had far too many descriptive accounts of the Amanda Knox case, the tribulations of Boris Johnson or, strangely, the sexting exploits of a former New York congressman (not covered in any of our modules). While it was always fascinating to see the likes of Gramsci and Althusser introduced to the argument, more often than not any “insight” came from the hive mind of ChatGPT or one of its cousins.

The point of an essay assignment is to allow academics to see how students find and manage material and articulate their thoughts to answer a question. At levels 3 and 4, I look at their ability to use what they learn in class, build on it and start their own research journey. I know what essays at each level sound like, the language students use and the common errors they make. But this time, too often, I was reading smooth, generic work that simply didn’t sound like anyone I’d met in class this year.

Image: a student surfing a wave of binary code while a teacher runs from it, illustrating that some are still ignoring the AI tidal wave while others try to surf it. Source: Malte Mueller/Getty Images/iStock montage (edited)

AI is going to become more prevalent in everything we do, but students outsourcing their brain to the internet is not the point of university study. For me, we’ve probably reached the end of the line for the straightforward research essay and in my future courses I’ll be looking to introduce more in-class work and more alternative assessments, rather than asking questions that can simply be fed into the AI machine.

When I spoke to the students after mark release, I found that they fell into a number of camps. Some – generally the transactional ones – put their hands up straight away and conceded that they were trying to get words on a page as quickly as possible and at the minimum intellectual cost. Most didn’t give much thought to any negative impacts of such cognitive offloading. Others lacked confidence in their own abilities. They simply thought ChatGPT would produce a better result. And some were angry at their peers for using AI rather than doing the hard yards in the first place.

This assignment marking round has certainly been more of a learning experience for me than for those intent simply on gaming the system. Higher education is clearly at an inflection point – but where next? A lot of colleagues are still ignoring the AI tidal wave, while others are trying to surf it. I’m somewhere in the middle.

Somehow, we need to embrace the benefits of a revolution every bit as big as when the internet first launched. But students need to learn the core skills of critical thinking, communication, collaboration, problem solving and the rest first. Then they can use AI as a support tool – but not as a replacement for their own skills.

The author is associate professor of political engagement at the University of Surrey.


‘The only solution I can see is a return to in-person exams’

Assessments in higher education as we know them are broken. The shift to in-course assessments and take-home exams, which reached its apotheosis during the pandemic, was popular with both students and staff. But the launch of ChatGPT in late 2022 has made this situation untenable.

AI is endemic in higher education. It is too tempting and too free for it not to be. It is also increasingly difficult to detect. As a law lecturer, I have noticed a marked shift in the quality and style of essays and assessments submitted by students. Out of curiosity, I recently marked a chatbot’s answer to one of my own essay questions. I gave it an unremarkable 2:1. But its utterly forgettable answer is precisely what makes it so dangerous and difficult to detect. It is a herd answer, and there is safety in the herd.

This is not to say that there are no indicators of an AI-generated essay. My experience with AI is that it tends to produce answers in the guise of an expert asserting a brief point. Answers jump from one assertion to the next, never dwelling too long on a specific point. This brevity appears to exude a confidence borne from years of knowledge accumulation and research experience – certainly not typical of an undergraduate student.

In addition, AI sometimes draws on sources or concepts outside a module’s core reading list – but so do students conducting their own research or trying to demonstrate their keenness.




AI answers to legal questions almost always conclude with suggestions for reform. It’s not enough, these large language models believe, to unpack and critique: one must also fix. But, again, there is nothing inherently wrong with this: problem-solving is, after all, supposedly a key skill we seek to instil in law students.

The overall result, however, is that while AI answers can be impressively broad, they are ultimately shallow and devoid of any deep analytical insight. If an expert is somebody who knows a lot about very little, AI is the opposite, engaging with academic literature only superficially while falling over itself to show you the breadth of its knowledge. The result is 1,500 words of grammatically impeccable mediocrity, the most impressive aspect of which is the speed at which such slop can be churned out.

Image: a robot hiding behind a mask of a human face, illustrating that the use of AI in higher education is becoming increasingly difficult to detect. Source: Malte Mueller/Getty Images

But legal academics are, unsurprisingly, obsessed with burdens of proof and fairness. All these indicators of AI use are merely “sniff tests”: they raise suspicion, but are they enough to secure a conviction? I cannot flag an academic integrity issue simply because a student appears to know how to use a semicolon or has tried to cram too much into their essay. I can’t even raise an academic integrity issue if they bring in material from outside the module’s core content. As a colleague of mine pithily lamented: “AI is everywhere…but we have no evidence to prove it.”

The most concrete evidence of AI usage is the case of “hallucinations” – fictitious information conjured up by AI. When I asked a chatbot to compile a list of sources, only two of the 10 it returned were completely accurate in terms of author, title, relevance and publication information. Some were complete fabrications. But students now know all about the peril of hallucinations and the examiner’s gaze can easily be averted by a simple check of their sources to remove any non-existent articles or legal cases.

It gives me no joy to remark that the only solution I can see is a return to in-person exams. I was assessed almost exclusively through exams as an undergrad and I hated them. And I’m sure my lecturers hated my exams too, written in a barely legible scrawl, smudged and smeared by my obstinate left hand. Only through that rare module assessed partly by coursework did I realise that I actually quite enjoyed research and writing, and that a career in law and academia may be for me. A complete shift to in-person exams may deny somebody a similarly life-changing realisation.

There is also a first-mover problem: since students hate exams, any university or course that assesses primarily through exams is likely to be less popular – including with the international students whose high fees underwrite them. And then there is the practical problem that student number expansion means that many institutions no longer have exam venues large enough to accommodate in-person exams.

One solution may be to hold a viva or in-class presentation to test a student on what they’ve written. But this would result in an increased workload for both students and staff and would be simply unviable for courses with large intakes.

Ultimately, a sector-wide problem demands a sector-wide solution, whose formulation properly falls to those lucky enough to enjoy the salary of those whose job it is to come up with sector-wide solutions. But, until their verdict is delivered, I will stick with recommending in-person exams.

Yes, they often reward rote learning, and, yes, strict time limits restrict the opportunity for students to demonstrate critical thinking and analysis. But at least we can be sure that the person sitting in that chair in the exam hall is actually doing the writing.

The author is reader in constitutional law and human rights at Birmingham Law School.


‘AI challenges higher education to put more emphasis on writing, not less’

This spring, traditional authorship became passé. Human authorship had had a good run, beginning with the first writing systems of ancient Mesopotamia around 3,200 BCE. But it is now clear to anyone paying attention that people prefer composing with AI rather than working alone.

Nowhere is this more evident than universities, where lecturers collapsed under the weight of bland and voiceless student writing “co-authored” with ChatGPT, Claude, Perplexity and other large language models. Even those diligent teachers who had tried AI detectors or other policing measures, such as asking students to record their composing sessions, realised that students could anticipate and nullify their efforts.

Such concerns were also voiced last summer, as ChatGPT use went mainstream. However, this spring felt more significant, making abundantly clear how Google’s shift from a search company to an answer company has undermined economic incentives for human authorship. Now, when people search Google, AI summarises the key insights from multiple websites, making it unnecessary for users to actually read the news sites, blogs and encyclopedias. Even my own site, Writing Commons, set up to support academic writing, has seen traffic fall dramatically.

Image: a student watching a robot write on a computer screen, giving a ChatGPT answer as to whether chatbots have killed the student essay. Source: Malte Mueller/Getty Images montage (edited)

Primary academic sources have also taken a major hit. For a long time, teachers have complained that students do not read required articles, reference materials, course assignments or books. This has resulted in some pretty quiet classroom conversations: so quiet that some teachers give students time to do the reading in class. But this spring, thanks to the improved abilities of GenAI to summarise primary texts, the idea of reading a course assignment became positively quaint. Instead, students fed the URLs or PDFs for the assigned readings into GenAI tools and asked them to summarise the key points, reducing texts they deemed too long to read (anything over a page) into a bulleted outline. Savvy students even remediated readings as podcasts thanks to NotebookLM. The professor’s reading quiz wilted on the vine.

For generations, writing teachers invoked Isaac Newton’s admission that “if I have seen further, it is by standing on the shoulders of giants” to inspire students to enter the scholarly conversation, to learn by engaging with the minds that came before. But that metaphor has given way to a new reality: humans now stand on the shoulders of algorithms: GPT-5, Claude Opus 4, Gemini 2.

So what’s the alternative? We could require students to handwrite their assignments. Yet that’s impractical given the large teaching loads of higher education faculty – no one has time to decipher student handwriting. Alternatively, we could “air-gap” the classroom, shutting off students’ access to the internet. But universities aren’t set up that way: we cannot shut down the campus network for individual classrooms.

Moreover, reverting to handwriting or pre-internet composing processes won’t prepare students for the workplace. After all, today’s knowledge workers are expected to use AI tools critically to improve their thinking and communication processes. According to one survey of business leaders, 78 per cent are already planning to add AI-specific roles, and nearly half (46 per cent) are automating entire workflows or business processes with AI. Students rightly fear that without adapting, their careers might disappear before they even begin, as humans are replaced by digital colleagues who never tire, never sleep and never stop learning.

Given this context, higher education must accept the new reality. Human authorship has given way to AI-assisted authorship, and we must develop new teaching practices that recognise that student writing is no longer a proxy for learning, reasoning or communicative competence.

Today’s students need to master not just writing but strategic prompting, critical synthesis and human judgement. Employers’ expectations for the quality of graduates’ work are growing with increased AI usage, but to fully benefit from AI as a thought partner, writing coach and editor, students need to understand the concepts that underpin how these tools work.

The key point is that students must use AI to enhance their own thinking, not to replace it. Hence, beyond checking their work for depth and flow, we may also need to check their chat logs when the text seems machine-authored and we sense students have engaged in excessive cognitive offloading. For important assignments, we also need to sit down with students and ask them to discuss their decision-making in detail and to explain what they learned in the process about using AI to achieve their writerly goals.

Ultimately, writing remains the fundamental way we come to understand our world and ourselves. It’s how we resist becoming passive recipients of machine-generated knowledge. We must write, not despite the presence of generative AI but precisely because it challenges us to keep thinking, keep learning and keep asserting what makes us human.

AI challenges higher education to put more emphasis on writing, not less.?

The author is professor of English at the University of South Florida and founder of Writing Commons, an open access resource to help students to improve their writing.

<网曝门 class="pane-title"> Reader's comments (11)
There has been some good work on interactive oral assessments (and how to implement them without going crazy) at Griffith University and, I think, City University Dublin. It is worth having a look at what is being done in this space https://app.secure.griffith.edu.au/exlnt/entry/9569/view
Translate this into English: " AI use by students has become so rife" What does "so rife" mean? This is a collection of one-sided views with no thought to teaching students to use AI responsibly. Why?
Having spent seven years as an Academic Conduct Officer (a.k.a. Plagiarism Officer), I can agree that there has been a steady increase over the period in the use of AI, and this year it has gone up exponentially. It will soon be the new normal. This is, plainly, not good for ACOs! Neither is it good for the students, since they cannot develop by that route; but it's so easy for them. The solution is training them in good working practices and in working independently, alongside assessment practices such as the ones mentioned above. These are to be commended. They must learn to think independently; otherwise, AI will leave them jobless - or, rather, they'll never get a graduate job in the first place. Written examinations are the definitive solution, but not all HEIs can implement them easily. That is to say, students in some places would need to get used to them and that might well be difficult. Perhaps it would help us - that is to say, help us by reminding us that we don't need to look wholly at the difficult side of the question - to bear in mind that many students don't use AI in their essays.
The AI, ChatGPT, issue is affecting academic journal articles too. Recently saw one on local cottage industry in the global South that looked impressive, mentioned (too many) statistical models and theories - but nothing linked to the substance of the research (of which there was very little). Worst of all, it included an amazing flow chart with arrows and numbers (to 3 decimal places), but what those numbers might refer to was nowhere mentioned in the rest of the text. And the conclusion was basically 'empowering home workers and improving their pay and conditions makes them happier'. Well, whodathunkthatthen? Re student essays, maybe the answer is to give them a subject that requires a bit of local research or at least observation in the university city or their home town. This may not be feasible for all subjects but can work well in e.g. economics, geography, criminology etc. That might fox ChatGPT a little bit.
Excellent article. What all the authors describe rings very true. This has created an existential threat to universities like few other things, and the software is developing further all the time. We have no time to waste - nothing else is more important in education.
Good story and great comments. Exams are a solution but I don't think we are ever going back there, tbh. It would need a complete reversal of how we prepare students throughout their education, from school onwards. Exams are difficult and to be successful you need to develop a technique and hone it. To be honest, they are just too difficult intellectually for the vast majority of our students. And many of our students could not cope with the stress of an exam in terms of their mental health. Essay deadlines are bad enough but you can give them extensions etc; with exams it's resits! University administrators hate exams as they are so difficult to manage (timetable, rooms, invigilation etc), hence they firmly steered us to continuous assessment in the first place. And with the reduction in staffing (remember that story?) it's not going to get easier. So forget them.
Teaching computer science, I tell students that generative AI (gAI) is a tool and like any tool they need to learn how to use it properly. If they want to use it in their work, they need to say so and present the prompts they gave the gAI. Then they should state how good the result was for their purposes, analysing it critically, and editing it as necessary. Those gAI snippets Google keeps shoving at anyone running a search are useless. I waste far too much time giving them feedback out of sheer fury at how wrong it gets things even when the correct information is readily available on the World Wide Web for it to access!
Following on from the first comment on Interactive Oral assessments, Dublin City University has worked with Griffith University on the Interactive Oral assessment approach and it has a website on Interactive Oral assessments, available at: https://www.dcu.ie/teu/interactive-oral-assessment.
AI essays also beg the question: why should lecturers mark them? If the assignments were written by a bot, why offer feedback?
I think this: "The point of an essay assignment is to allow academics to see how students find and manage material and articulate their thoughts to answer a question" is part of the problem. Is the goal of an essay assignment to measure student performance, or is it to give the students a task through the completion of which they are forced to engage intellectually with the material, which they will then almost certainly have a better grasp of afterwards? When I was a student, we wrote many essays, but none of them received grades (only qualitative feedback), and they didn't count towards anything. Because it was the process of writing that mattered, not the end product. Perhaps we need to return to the idea that it's the journey that matters, not the destination. This year when I meet my new tutees, I will tell them we are moving into a world where AI is normal. They could use that AI to try to make their path through university less effortful. But they will be emerging into a world where many of the things they could have done will be done by AI. University is their opportunity to find areas they can develop themselves in, where they are better than the AI. Because that is what they will need to succeed in the new world - they will need to be better than AI. If they take the easy route, then so be it, but it will be their three years, and their £28k, they'll be wasting. But if they choose to engage, we will do the best we can to help them find where they can be the people, the intellect, the thinker, the writer, that AI cannot be.
The student essay was useless long before AI. With few exceptions it is formulaic and boring to write and mark. There are other ways to teach and assess writing and knowledge.