Everyone knows the old rhyme about the little girl who had a little curl ‘right in the middle of her forehead’. When she was good, ‘she was very very good, / But when she was bad, she was horrid’. So it is with many aspects of life. Take the Internet, for example. As the author of several books, written over as many decades, I have seen my research change profoundly. In the 1980s it meant time-consuming and often fruitless hours in libraries, here and overseas, hunting for what I needed to resolve this or that knotty problem as I developed a thesis and gathered examples in support of it. Today, in 2025, I rarely have to leave home and computer: with access to the Internet, the answer is often found in a matter of minutes.

Certainly, you need to have an idea of what you are looking for and where you might find precisely what you want; and – most importantly of all, as every seasoned scholar working online today knows – you need to be as sure as you can be that what you have accessed is reliable and accurate. But all this was true when you were laboriously searching through the card catalogues and shelves of libraries – as often as not finding that the book you needed had already been borrowed.

I would estimate that writing a 120,000-word (or 400-page) book forty years ago took me probably twice as long as my most recent book of the same length, completed in the last couple of years. This significant saving of time and effort is substantially due to the advent, accessibility and seemingly boundless resources of the Internet. Whoever said that this revolution in access to knowledge is equivalent in its impact to the invention of the printing press, centuries ago, is surely correct.

So the first point to be made about the Internet generally – before we turn to the well-named Artificial Intelligence (hereafter, ‘AI’) component of it, and its implications for the future of learning – is this: as school pupils and university students have now been using the Internet extensively for years, teachers and academics need to spend time (no doubt many do) constructively warning them, and demonstrating to them, the need to be healthily sceptical about what they access online in preparing their essays, assignments and other work. If today’s media are worm-eaten with fake news, there are mountains of misinformation on the World Wide Web, too, not to mention the endless time-wasting distractions that are a mere flick of a finger away whenever you go online.

Wikipedia is a classic example of the wariness that needs to be cultivated. While some of its entries are thoroughly well-researched, scholarly and suitably detailed, often with copious and useful referencing and sources, others are decidedly not. Further research should always be undertaken when one is tempted to rely on a single site for what is assumed to be the complete gospel truth about this or that subject. I have generally (not always) found that the more I know about a topic, the less satisfactory Wikipedia can be in dealing with it.

*                                        *                                        *

More recently, the advent of AI on the Internet has inevitably entered the world of student learning, too, and brought further complex issues and problems to that domain. Here, the rhyme about the little girl – and its alert – is immediately applicable. AI can be very good and it can be very bad.

I first encountered it – and continue to, daily – unbidden, when doing a Google search to answer some question (often, again, in the course of my writing, but even for mundane queries such as how long to boil an egg). Lo and behold, up comes AI with the first of what are usually many responses – typically well written, clearly expressed and, so it seems, factually correct, providing me with exactly what I wanted to know, and in a flash. In this it can differ from other Internet searching, which sometimes brings up material that is off the precise mark of the inquiry, or misses the point entirely. Refining one’s search will usually sort out that problem.

So, again, AI can be a boon for seekers after truth and knowledge, but it should be treated with a good dose of scepticism, too, and should routinely lead to further checking and research in matters of any complexity or subtlety. As with the Internet at large, schoolchildren and tertiary students need to be wary, and teachers and parents must be vigilant in urging this caution against automatic dependence on what AI tells them. Often AI will provide a brief and rather superficial response to some search (such as an aspect of the life and times of an author, or the subject of a text) which will always call for further nuancing if that author or text is the subject of protracted discussion in an essay or assignment.

As I was saying earlier about some Wikipedia entries, I have found that the more I know about a topic, the less satisfactory the AI response has been – even though, as far as it goes, it is routinely correct.

*                                        *                                        *

Now we come to the horrid aspect of AI, and this has deeply disturbing implications for the learning and education of young people and, even more profoundly and enduringly, for the development of their minds and reasoning powers, generally – and, therefore, for the well-being of a free society, at large. As often, in this domain of thought – its expression and cherished freedom – George Orwell has put the matter succinctly:

If people cannot write well, they cannot think well, and if they cannot think well, others will do their thinking for them.

In addition to asking AI for answers to questions or for detail on some topic, it is now possible, through an AI chatbot such as ChatGPT, to ask it to do your work (that is, your thinking) for you:

The language model can… compose various written content, including articles, social media posts, essays, code and emails.

I was reading Katherine Mansfield’s short story, ‘The Garden-Party’, so I typed into Google the author’s name and the story’s title, mainly to get its date and its place in her corpus. Instantly, a 500-word account of the text popped up as an ‘AI Overview’, with sub-sections dealing with such central issues of the story as irony and symbolism, social class and superficiality, and so on – the sorts of details that would figure in the kind of exercise a Year 11 or 12 student, or an undergraduate, might be preparing for an assignment, tutorial or seminar paper. So, already, they are presented with a ready-made framework for their essay or presentation, supplied with the topics that should be addressed and, moreover, with some addressing of those topics. Abracadabra, the essay is done, free of any independent thought. Copy and paste and submit.

From time immemorial, students have consulted sources for their work – in English literature, for example, the vast libraries of literary scholarship and criticism – and we should never forget, before we become over-agitated about AI, that plagiarism from those sources often occurred. With the advent of the Internet age, similarly, though with less effort, a simple cut-and-paste from some online document could easily be incorporated into an essay and passed off as the student’s own thought and writing.

But the difference is that with both of those earlier kinds of plagiarism – just a fancy academic term for theft and cheating – we teachers and academics, marking such work, could as often as not spot the theft; and, as dire consequences could follow from that discovery (including failure in a course, even expulsion), most students were aware of the risks involved and were wary of chancing it. Turnitin has proved effective, although not fool-proof, in identifying online plagiarism, and is now applied, along with tools such as Winston AI, to AI-generated material, too:

While these tools are not perfect and can sometimes misclassify human-written text as AI-generated, they are becoming increasingly sophisticated in detecting patterns associated with AI writing.  

In fact, studies have shown that these programs are only about 75% effective in detecting AI plagiarism. So the more skilled the AI plagiarist, the more likely it is that he or she can ‘get away with it’. The irony is that the effort and ingenuity expended on ensuring non-detection could surely have been as easily – but ethically – applied to thinking about and writing the essay oneself. Recent graduates have told me that they often had to run a topic through AI several times to get, in toto, the material they wanted and in the correct form. I wondered why it might not have been easier – not to mention more ethical, and more satisfying and enriching, intellectually – to do the original research and write the piece themselves.

Appeals to students’ better natures are well-intentioned, as these online warnings indicate:

When using AI tools, it's crucial to use them ethically and responsibly, focusing on using them as a tool to enhance your own writing rather than relying on them to produce the entire essay. Relying on AI to write your essays can hinder your own development of critical thinking, research, and writing skills. 

But at what point, precisely, does the student move beyond AI’s ‘tools’, which merely enhance their writing, to writing for themselves – and sustain that quality? And how powerful, for the harassed student with an essay due tomorrow, or a topic that is driving them to distraction, is the monitory message that their ‘critical thinking’ skills will not be well honed if they rely on AI to do their thinking for them? This is Pollyanna territory. AI itself tells us what ‘well-honed’ means, even as the misuse of AI thwarts that accomplishment:

"Well-honed" means sharpened, perfected, or highly developed. It describes something that has been improved through practice and effort, making it more effective or precise.

A study by MIT (reported in Time magazine in June this year) has shown – surely to nobody’s surprise – that ChatGPT, the AI tool used for essay-writing, is ‘eroding critical thinking skills’:

The study divided 54 subjects — 18 to 39 year-olds from the Boston area — into three groups, and asked them to write several SAT essays using OpenAI’s ChatGPT, Google’s search engine, and nothing at all, respectively. Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.

*                                        *                                        *

To address and solve the formidable problems that AI misuse is posing in the school and university classroom – in addition to appealing to young people’s sense of honesty, and to the inestimable benefits, for them and for society, of using their own brains (rather than a robot’s) for their brain-work – it is necessary to bring back supervised, closed-book examinations with no access to online resources, alongside regular in-class writing assignments.

They need to be closed-book, as it is impossible to police open-book exams. If there is assessment material that demands a textbook for reference, then that can be set as submitted work.

The recovery of what was once the taken-for-granted, central, non-negotiable assessment regime in education should be initiated in half-yearly and yearly exams in all subjects, from primary school through to the undergraduate years.

There is no other guaranteed way of ensuring that the learning the students are revealing is indeed their own and not somebody else’s. In any case, it is the public role and duty of institutions such as universities, funded by the taxpayer, to guarantee to the community and to prospective employers that their graduates have indeed earned the degrees conferred on them, acknowledging their mastery of a discipline, and that they are not simply serial online cheaters.

Another huge advantage of exams is from the marker’s perspective: they eliminate the need to write an extended account on each paper as to why you have given this one a D and that one a CR or – dare I say it – failed this one. Over my forty years as an academic, I saw a sea-change (for the worse) in this area, in the very period when exams were increasingly abandoned. When I was an undergraduate, it was not uncommon to receive even a 3,000-word essay back with just the mark, or a one-sentence comment giving a compliment or suggesting some improvement. Later, it became mandatory for the marker to write an extensive explanation – an essay in itself – as to why this or that mark was given, as a ‘culture’ had developed of students coming to complain about the mark they had received, to query the assessment and even to demand a second opinion, as the student tail began to wag the academic dog. Exam scripts all but eliminate this phenomenon.

*                                        *                                        *

So, AI is not going away. When it is good, it is very good, but its potential for negative impact on the future of students’ learning is deeply concerning. The restoration of supervised, closed-book examinations (supplemented, where possible, with viva voce tests) to their central, non-negotiable place across the entire spectrum of education is essential, to the point where they form at least 50% of assessment at every level. This will not only assist in curbing cheating but, positively, will encourage students, both in their years of education and in their lives at large, to develop and exercise their own thinking skills and so become the constructive, intelligent and articulate citizens who are the bedrock of a truly free society.

 

_________________________________

Barry Spurr, Australia’s first Professor of Poetry, is Literary Editor of Quadrant. His latest book, Language in the Liturgy: Past, Present, Future (James Clarke, Cambridge, 2025), is a critical study of contemporary liturgical language in the Anglophone world, in both the Anglican and Roman Catholic Churches. It is available from the publisher’s website or, in Australia, from Quadrant Books.