Is ChatGPT spelling the end of take-home essays as a form of assessment? Part 1: The principles

Person writing by a laptop
Image credit: StockSnap, Pixabay CC0

In this post, Dr Matjaz Vidmar explores the future of the take-home essay as a form of assessment in the era of generative large-language models. Matjaz is Lecturer in Engineering Management and Deputy Director of Learning and Teaching overseeing the interdisciplinary courses at the School of Engineering. This post is Part 1 of 2, and belongs to the Hot Topic theme: Critical insights into contemporary issues in Higher Education.


Since the global launch of chat-based access to generative large language models (most notably, ChatGPT), a sense of technological and moral panic has set in across the education sector. In particular, despite the well-documented existence of “essay mills”, where assignments could be written by hired “professional writers”, we nonetheless relied extensively on take-home written work as a form of formative and summative assessment. With these ubiquitous and freely available writing tools now in the hands of students, a number of colleagues became worried that marks on their courses were unsafe, and considered the (re)introduction of in-person timed exams as a way to stop what was perceived as a widespread possibility of “cheating”.

However, could it be that the issue with take-home written assessment is not a sudden crisis arising from a new technological development, but rather lies with some of our teaching and assessment practices?

In many ways, the sector has been struggling for years with parallel challenges: the need to grow student numbers to maintain expensive real estate and facilities, whilst at the same time questioning the role of (higher) education in a modern, information-driven, digital society. As up-to-date knowledge becomes increasingly available online, the traditional (formal) forms of higher education have resorted to honing graduate skills and attributes in critical thinking, effective professional practice, and cross-cultural citizenship. Thus, it has become less clear how to identify the most effective and appropriate teaching and learning models.

One promising direction emerged when grappling with more complex and advanced concepts: an emphasis on experiential learning and group / project work. This also follows leading pedagogical frameworks on practice-based learning and greater integration of individual ways of knowing and learning. The emphasis here is placed on critical personal reflection, connecting the practical experience of challenges, and the strategies adopted to address them, with the academic understanding of the issues at stake.

The schematic shown below brings together some of the critical contributions mentioned above into a more holistic approach to learning and teaching, framing experiential learning as the grounding of students’ development through experimentation and experience. However, crucial to the attainment and assessment of deep personal understanding of the learning process is meaningful reflective observation, connecting (individual) practice with (collective) sense- and knowledge-making.


This schematic image by the author shows a holistic approach to learning and teaching, combining the two key pedagogical theories of Experiential Learning Cycle and 6 Learning Types, contextualised with Edinburgh Graduate Attitudes and Attributes.

When it comes to the assessment of advanced (Honours) courses, the learning outcomes examined usually comprise a combination of core (theoretical) knowledge, relevant applied practices, and some transferable skills. These are most often examined through essay-style exams or take-home reports, in which students are asked to produce a critical analysis of the subject matter at hand with reference to academic literature. However, as the emphasis on experiential learning varies, so do the details of such assessment.

In more traditional learning environments, the assessment structure focuses on problem-solving in the direct sense: working through a challenge example and testing the application of an appropriate theoretical framework to propose a solution. In this context, the challenge of generative large language models is very real. With take-home written assessment testing the average best solution to an abstract problem, students’ work can indeed be “outsourced” to these tools, thus making their attainment of learning objectives questionable. Though the flowery, over-the-top modes of expression generated by these algorithms were initially easy to spot, with further advances in “prompt engineering” and response personalisation, distinguishing human from machine writing is becoming harder.

However, apart from the ability to clearly communicate complex concepts, writing in and of itself is most often not the object of assessment. What educators examine is the understanding and application of course concepts and frameworks. For this, assessment briefs can be designed that are harder for generative chat-bots to respond to effectively. In particular, if the assessment is grounded in personal experiences of operationalising theory in practice, based on a specific in-course (group) exercise, it is so far impossible for generative writing tools to do the critical thinking involved in framing and editing the narrative.

As the language models underpinning such tools by definition average statistical correlations between features of language, they cannot generate context-specific cases. Hence, even if generative writing tools are used, the critical analysis and propositional logic being assessed have to be developed by the students themselves as part of prompt engineering in order for their submission to appear coherent.

As it happens, this is one of the hardest skills for students to master, and a fundamental mark of developing scholarship, which reinforces the critical role of essays in assessing this dimension of a learner’s performance. It seems that until generative writing tools are able to learn in this particular experiential way (and all evidence shows that this is especially challenging, if not impossible), experiential-learning-based take-home essays are a safe and robust form of assessment[1].

In fact, I have been applying these principles in practice in the design of assessment on a number of Honours and Masters level courses at the School of Engineering and Edinburgh Futures Institute, as discussed in Part 2 of this article.

[1] This is by and large borne out in the first studies of student use of generative writing tools, which note their extensive use as part of the learning process, but less so in final assessment, even where it would be effective. Some of that may also stem from a lack of clear policies and students’ assumption of punitive action for apparent use of such tools, akin to other forms of plagiarism.


Matjaz Vidmar

Dr Matjaz Vidmar is Lecturer in Engineering Management at the University of Edinburgh and Deputy Director of Learning and Teaching overseeing the interdisciplinary courses at the School of Engineering. He researches collaborations within Open Engineering, bridging the technical and social dimensions of innovation processes and (eco)systems as well as futures, strategies, and design. In particular, he co-leads The New Real programme, a collaboration between the Edinburgh Futures Institute and the Alan Turing Institute, experimenting with new AI experiences, practices, infrastructures, business models and R&D methodologies, including the flagship Open Prototyping. He is also the Deputy Director of the Institute for the Study of Science, Technology and Innovation and is involved in many international initiatives to develop the future of these fields, including several start-up companies and an extensive public engagement programme on the interplay of STEM, arts, and futures literacy. More at www.blogs.ed.ac.uk/vidmar.