Is ChatGPT spelling the end of take-home essays as a form of assessment? Part 2: The practice

Person writing in notebook holding a mobile phone
Image credit: Tung Nguyen, Pixabay, CC0

In this post, Dr Matjaz Vidmar offers Part 2 of his exploration of the future of the take-home essay as a form of assessment in the era of generative large language models. Matjaz is a Lecturer in Engineering Management and Deputy Director of Learning and Teaching, overseeing the interdisciplinary courses at the School of Engineering. This post is Part 2 of 2, and belongs to the Hot Topic theme: Critical insights into contemporary issues in Higher Education.


As explained in Part 1 of this article, generative writing tools need not be feared as the end of take-home writing assignments, provided those assignments are grounded in students’ critical reflection connecting theory and practice. On the contrary, with the rise of group-work and experiential learning models, take-home assessment has become more common in fields beyond the social sciences and humanities, such as engineering. For example, in the final assessment set in a number of courses I designed and now deliver across the management, systems engineering, and futures design domains, 60-70% of the final mark comes from a final, take-home essay.

However, for this to work at a time when chatbots can read and write much faster than humans, I have developed an explicit assessment brief that asks students to discuss practical experience with respect to the core course concepts and literature. This assessment structure pre-dates the advent of generative writing tools, and is based on the pedagogy of reflexive critical thinking as a major marker of experiential learning and knowledge-making. Most importantly, students are also required to offer some original insight inspired by the practical experience. This takes them beyond the well-rehearsed generic principles of the course literature and requires them to examine unique points of view (a dimension where generative algorithms struggle).

In my experience, whilst it is possible for students to articulate a well-structured generative writing prompt that identifies all three key components of the assessment answer (underlying theory, practical experience, and new insight), in doing so they have, by and large, already demonstrated the critical understanding and skills outcomes of the course. Thus, the writing of the text in itself is not as pedagogically relevant (especially in an era when such tools are widely used in any case). Furthermore, keeping the assignment reasonably short (<1,500 words) limits the scope and demands conciseness, so the writing skills needed to ensure clarity of message again demonstrate critical thinking and academic practice.

These considerations and observations have also been borne out in practice. While marking take-home essay assignments across all of my courses with this mode of assessment, we could detect only a small number of submissions (<5%) where more extensive use of generative writing tools was likely. Furthermore, these submissions did suffer from the tell-tale signs of algorithmic syntax, both in the artificial linguistic forms and in the predictably poor integration of different sections, even where the content was edited to be more or less on point.

Overall, there was no detectable advantage in terms of the quality of writing output. Comparing the mark distributions for the final take-home essay assignment in the course Technology and Innovation Management (5 / MSc), for which we have the required comparable longitudinal data[1], there is no discernible change in students’ relative performance between academic years 2021-22 (pre-ChatGPT; n=74/63) and 2023-24 (post-ChatGPT; n=77/65). If anything, within the normal boundaries of cohort differentiation, the results from the more recent academic year(s) are slightly worse, despite the teaching team’s clear policy that generative writing tools may be used to improve grammar, syntax, and text flow.

Comparison of final take-home essay assessment results for the course Technology and Innovation Management 5 (undergraduate) / MSc (postgraduate) for the academic years 2021-22, 2022-23, and 2023-24. There is no noticeable change in relative overall performance, nor any shift in the absolute quality of submitted work. Source: Author.
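For readers who want to make a similar comparison concrete, below is a minimal sketch of how two cohorts’ mark distributions might be checked for a shift. The post reports the comparison descriptively, so the choice of a Mann-Whitney U test, and the placeholder mark lists, are illustrative assumptions rather than the author’s method or data:

```python
# Illustrative sketch only: the post does not state which statistical test
# (if any) was used. A Mann-Whitney U test is one plausible, distribution-free
# way to check whether two cohorts' mark distributions differ.
from scipy.stats import mannwhitneyu

marks_2021_22 = [62, 68, 71, 55, 74, 66, 59, 70]  # hypothetical pre-ChatGPT cohort
marks_2023_24 = [64, 65, 69, 58, 72, 63, 61, 67]  # hypothetical post-ChatGPT cohort

# Two-sided test: is either cohort's distribution shifted relative to the other?
stat, p_value = mannwhitneyu(marks_2021_22, marks_2023_24, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
# A large p-value is consistent with "no discernible change" between cohorts.
```

A rank-based test like this makes no normality assumption about the marks, which suits bounded percentage scores, though any such test should of course be read alongside the cohort sizes and marking context.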

Overall, this model demonstrates that, when course delivery focuses on experiential learning and the take-home essay is framed as an examination of critical reflection, the current generation of generative writing tools poses no serious threat to the robustness and integrity of the take-home essay as a form of assessment. In addition, if the assessment objectives target the explicit linkage of theoretical concepts to a managed, in-class experience, then the intellectual work required to construct a writing prompt for a generative writing tool already meets the core learning outcomes examined by such an assessment.

Having said that, it is nonetheless important that we educate learners about the ethical and epistemological issues surrounding large language models, within the context of in-class exercises and take-home writing support. On the epistemic side, it is critical to stress that language models are based on statistical patterns of language use, and as such cannot serve as unchecked sources of knowledge. For many widely accepted theories, the statistical patterns and the scholarly record can be strongly aligned; but with newer or more peripheral bodies of literature, the margin of error in accurately representing scholarly insights increases significantly.

On the matter of ethics, two critical issues need to be communicated:

  1. The models have been constructed with inherently biased and often exploitative data practices.
  2. Inputting one’s own original ideas into publicly available tools can lead to those ideas being used for future model development and training, without credit to their originators.

Colleagues have already set the scene for making the most of generative writing tools, proposing both more adaptive teaching practices and assessment innovation. Given that this technology is here to stay, we should focus on using it to enhance the experiential learning process, and resist the temptation to revert to outdated assessment practices, which would inevitably make our teaching less relevant to students.

[1] Apart from lowering the word count (which was planned ahead of time and unrelated to the arrival of generative writing tools), there was no change to the assessment brief. Three other courses which I organise with comparable assessment models (Building Near Futures; Systems Engineering: Thinking and Practice; and Social Dimensions of Astrobiology and Space Exploration) also show no discernible advantage to students, though they are new, so there is no pre-generative-writing-tools data. Colleagues teaching Technology Entrepreneurship 5 / MSc, which I set up in the 2020-2022 academic years with a similar final assessment, also report no change in results and little use of generative writing tools.


Matjaz Vidmar

Dr Matjaz Vidmar is a Lecturer in Engineering Management at the University of Edinburgh and Deputy Director of Learning and Teaching, overseeing the interdisciplinary courses at the School of Engineering. He researches collaborations within Open Engineering, bridging the technical and social dimensions of innovation processes and (eco)systems, as well as futures, strategies, and design. In particular, he co-leads The New Real programme, a collaboration between the Edinburgh Futures Institute and the Alan Turing Institute, experimenting with new AI experiences, practices, infrastructures, business models, and R&D methodologies, including the flagship Open Prototyping. He is also Deputy Director of the Institute for the Study of Science, Technology and Innovation and is involved in many international initiatives to develop the future of these fields, including several start-up companies and an extensive public engagement programme on the interplay of STEM, the arts, and futures literacy. More at www.blogs.ed.ac.uk/vidmar.