The Double-Edged Sword of A.I. and its Implications for Higher Education

As covered in Part 1 of this blog series, the recent case of R(Ayinde) v Haringey LBC stands as a watershed moment, and a stark cautionary tale, in the use of A.I. to aid writing.

It is a dilemma that many in higher education now face, and one that will likely produce increasing issues in professional practice.  The spread of A.I. is not merely inexorable; far from advancing at a glacial pace, it has moved like wildfire through the later cohorts of Generation Z and through Generation Alpha.  There are regular articles in the higher education press in which professors report that students have requested extensions of time for assessments because ChatGPT was down, or in which re-wording assignments to be A.I.-proof has prompted students to complain that their ‘learning styles’ are being interfered with.

A recent anonymised survey published by the Higher Education Policy Institute (Feb. 2025) revealed that 92% of students now use A.I. in some form as part of their studies.  One must wonder how many of the 8% who disclaimed any use of A.I. were false negatives, perhaps not trusting that their answers would remain anonymous.  At the same time, a growing number of articles report students ‘catching out’ their professors for using A.I. to create teaching and assessment materials, and even to frame feedback, with those professors proving just as careless as the students we regularly see reported in the media.

The first and most natural fallback has been to employ A.I. detectors to check various types of assessment; and yet, in the same way that the A.I. potentially used in Ayinde created fake cases, the MIT Sloan School has noted a number of reports showing that A.I. detectors have “…high error rates and can lead instructors to falsely accuse students of misconduct”.  Some institutions have adopted what appears to be a progressive approach: permitting the use of A.I. in certain assessments, provided a declaration is filed indicating that A.I. was used and setting out exactly the nature of its use.  Yet all too frequently students are still accused, and found guilty, of academic misconduct over the use of A.I. in an assessment, despite having filed a declaration explaining exactly how they used A.I. to complete the work.  These findings often follow the use of A.I. detectors and/or services such as Turnitin to review the work, and despite the declaration there appears to be a reluctance on the part of institutions to accept either that the A.I. detector has produced a false result or that the conduct was openly disclosed in compliance with the rules.

Student life is often very stressful for those with ambitions of real achievement.  But these new risks have led some students to record their study and drafting processes as they go, placing themselves under various forms of ‘self-surveillance’ in the hope of being able to fend off any false accusation of A.I. use.

These waters are murky and of unknown depth, both for students desperate not to have their efforts called into question and for institutions that must manage students and staff alike to protect their academic standards and quality assurance against scrutiny.  There is no clear answer to these issues in this brave new world.  But the need for flexibility in considering them screams loud from those unfortunate cases that have fallen between the margins.  The best approach to a field as fluid as A.I. will never be to impose rigid frameworks that could never hope to stem the tides.