As explained in Part 1 of this article, generative writing tools need not be feared as the end of take-home writing assignments, provided the assignments are grounded in students’ critical reflection on connecting theory and practice. On the contrary, with the rise of group-work and experiential learning models, the take-home assessment has become more common in fields beyond the social sciences and humanities, such as engineering. For example, in the final assessment set in a number of courses I designed and now deliver across the management, systems engineering and futures design domains, 60-70% of the final mark comes from a final, take-home essay. However, for this to work at a time when chatbots can read and write much faster than humans, I have developed an explicit assessment brief that asks students to discuss their practical experience with respect to the core course concepts and literature. This assessment structure pre-dates the advent of generative writing tools and is based on the pedagogy of reflexive critical thinking as a major marker of experiential learning and knowledge making. Most importantly, the students are also required to offer some original insight that the practical experience inspired. This takes them beyond the well-rehearsed generic principles of the course literature and requires them to examine unique points of view (a dimension where generative algorithms struggle).

In my experience, whilst it is possible for students to articulate a well-structured generative writing prompt that identifies all three key components of the assessment answer (underlying theory, practical experience, and new insight), in doing so they have, by and large, already demonstrated the critical understanding and skills outcomes of the course. Thus, the writing of the text in itself is not as pedagogically relevant (especially in an era where such tools are widely used in any case). Furthermore, by keeping the assignment reasonably short (<1500 words), the limited scope and required conciseness demand writing skills that ensure clarity of message, again demonstrating critical thinking and academic practice.

These considerations have also been borne out in practice. While marking the take-home essay assignments in all of my courses with this mode of assessment, we could detect only a small number of submissions (<5%) where more extensive use of generative writing tools was likely. Moreover, these submissions did suffer from the tell-tale signs of algorithmic syntax, both in their artificial linguistic forms and in the predictably poor integration of different sections, even where the content had been edited to be more or less on point. Overall, there was no detectable advantage in the quality of the written output. Comparing the mark distributions for the final take-home essay assignment in the course Technology and Innovation Management (5 / MSc), for which we have the required comparable longitudinal data[1], there is no discernible change in student relative performance between academic years 2021-22 (pre-ChatGPT; n=74/63) and 2023-24 (post-ChatGPT; n=77/65). If anything, within the normal boundaries of cohort differentiation, the results from the more recent academic year(s) are slightly worse, despite the teaching team’s clear policy that generative writing tools may be used to improve grammar, syntax and text flow.
Overall, this model demonstrates that, when course delivery focuses on experiential learning and the take-home essay is framed as an examination of critical reflection, the current generation of generative writing tools does not pose any serious threat to the robustness and integrity of the take-home essay as a form of assessment. In addition, if the assessment objectives target an explicit linkage of theoretical concepts to a managed, in-class experience, then the intellectual work required to construct a writing prompt for a generative writing tool already meets the core learning outcomes examined by such an assessment. That said, it remains important to educate learners about the ethical and epistemological issues surrounding large language models, both in in-class exercises and in take-home writing support. On the epistemic side, it is critical to stress that language models are based on statistical patterns of language use, and as such cannot serve as unchecked sources of knowledge. For well-established and widely accepted theories, those statistical patterns and the scholarly record can be strongly aligned; but with newer or more peripheral bodies of literature, the margin of error in accurately representing scholarly insights increases significantly. On the matter of ethics, two critical issues need to be communicated:
- The models have been constructed through inherently biased and often exploitative data practices.
- Inputting one’s own original ideas into publicly available tools can lead to those ideas being used for future model development and training, and thus to a lack of credit for the originators.