Academic misconduct: Moral panics and academic arms races
Neil Lent*
It can be argued that in the last few years we have seen several moral panics centred on perceived opportunities for academic misconduct by students. These panics include the use of essay mills for contract cheating, the shift to remote assessment during the pandemic and, now, the use of technology such as ChatGPT. The responses to these concerns often seem to be a retreat to ‘traditional’ assessment methods, especially invigilated, unseen, in-person exams (we can be sure the examinees are doing the work…), along with the use of similarity checkers and online proctoring.
There are problems with this ‘policing’ approach to tackling academic misconduct. Privileging assessment methods that are seen as ‘safe’ can come at the expense of more authentic assessments that enhance learning and are more valid measures of the kind of learning we want from our students. It can also give us a false sense of security: no assessment method is safe from cheating. There does appear to be a risk gradient, though. The research is limited, but the work of Harper et al. (2020), for example, offers evidence that cheating may be hardest to detect in programming tasks, MCQs (Multiple Choice Questions) and group assignments. MCQs and short answer exams may be most at risk of third-party cheating, while vivas and supervised timed essays seem much lower risk. Among coursework assessments, reports and essays seem most at risk, and portfolios, placement reports and research theses least at risk. Nevertheless, all assessment formats carry some level of risk.
According to research, risk factors for third-party cheating include:

- High stakes and short turnaround times, which are associated with more misconduct
- Dissatisfaction with the learning and teaching environment
- Perceived opportunities to cheat
- English as a second language
- Prevailing disciplinary culture: where students feel disempowered and undervalued, misconduct may be more likely; where they feel included and understand what is expected of them, it is less likely (this is under-researched, however)
There is a place for detection and sanctions. It is probably fair to say that ‘cheaters gonna cheat’, but how widespread is cheating likely to be, and how fair is it to apply blanket methods that affect all students? Simply knowing that there is potential for cheating and actively looking out for it is quite effective: in Phill Dawson and Wendy Sutherland-Smith’s work (2019), markers achieved around a 60% detection rate with no further training.
The prominence of professional contract cheating may well be overstated. Evidence seems to suggest that most third-party cheating is carried out by fellow students, friends and family members and is not usually done on a paid basis (Bretag et al, 2019). It may even be possible that public concern over essay mills alerts potential cheaters to the availability of the ‘service.’
We now seem to be moving away from a concern with contract cheating towards a new moral panic: the use of AI writing tools such as ChatGPT. These tools are not designed specifically for cheating; they are intended to write like real humans, in a range of styles and formats. Clearly there is potential to use them for academic writing, thus creating our latest moral panic.[1] The nature of the threat posed by ChatGPT is still being discussed, and some academics are even talking about how it might find legitimate uses in learning and teaching (for more on this, you can watch a recording of a recent talk by Professor Mike Sharples from the OU here: Generative AI).
This regular appearance of new threats to academic integrity may mean that taking a policing approach is likely to fuel an academic misconduct ‘arms race’. New ways of detecting and stopping cheating will be countered by new cheating technologies and services.
An alternative would be to consider the mixture of ‘push’ and ‘pull’ factors that influence students’ decisions about cheating. What factors push students towards cheating, and which factors pull them away from it? Such solutions are likely to be holistic and integrated into course and programme design in ways that make sense to students, so they understand why they are being assessed in particular ways and how they benefit from this. Teaching and assessment are likely to be coherent and aligned to understandable and achievable learning outcomes in ways that motivate students to participate and learn. Tim Fawns and Jen Ross (2020) suggest some features of assessment that may help students meaningfully engage and provide some elements of misconduct prevention:
- Non-anonymous, open book, over an extended period of time, and potentially collaborative (students are allowed to use any available resources)
- Requires significant intellectual input from every student
- Shows the learning process and provides rich opportunities for feedback (from peers and/or tutors)
- Provides opportunities for creativity, personalisation and contextualisation
- Covers the key aims/knowledge of the assessed course
- Is manageable for staff and students
If we treat our students as potential cheaters, what does that imply about the kinds of pedagogical relationships they will be part of? How does it help them learn to work effectively, independently, in ways they can be trusted, if their behaviour is shaped through external policing and assessment methods that are not fit for purpose?
References
Bretag, T., Harper, R., Burton, M., Ellis, C., Newton, P., Rozenberg, P., Saddiqui, S. & van Haeringen, K. (2019). Contract cheating: A survey of Australian university students. Studies in Higher Education, 44(11), 1837-1856. https://doi.org/10.1080/03075079.2018.1462788
Dawson, P., & Sutherland-Smith, W. (2019). Can training improve marker accuracy at detecting contract cheating? A multi-disciplinary pre-post study. Assessment & Evaluation in Higher Education, 44(5), 715-725. https://doi.org/10.1080/02602938.2018.1531109
Fawns, T., & Ross, J. (2020, June 3). Spotlight on alternative assessment methods: Alternatives to exams. Teaching Matters. https://www.teaching-matters-blog.ed.ac.uk/spotlight-on-alternative-assessment-methods-alternatives-to-exams/
Harper, R., Bretag, T., & Rundle, K. (2020). Detecting contract cheating: Examining the role of assessment type. Higher Education Research & Development, 40(2), 1–16. https://doi.org/10.1080/07294360.2020.1724899
[1] It does seem to have limits. I asked ChatGPT to write an 800-word blogpost on academic misconduct, and it refused.