Collegiate commentary: Five conundrums from the ChatGPT series

AI-generated image: six different people working together to solve a puzzle
In this post, we share with you the Collegiate Commentary from our latest Teaching Matters newsletter↗️: Five conundrums from the Moving forward with ChatGPT series↗️. In this commentary, Sue Beckingham, Associate Professor (Learning and Teaching) at Sheffield Hallam University, presents a comprehensive overview of AI in academia.


The public launch of ChatGPT in November 2022 has, without doubt, opened a floodgate of questions about its use. It quickly emerged that this was just one of many Generative AI (GenAI) tools. Both Microsoft and Google have competed to release their own versions, alongside a swathe of others providing tools not only to generate text but also to create images, presentations and videos, and to debug code. Suffice it to say, when we talk about generative technology, ChatGPT will be just one of a number of tools available in our digital toolbox.

Many concerns have been raised, from academic integrity, copyright, misinformation, disinformation and bias, to the unethical practices involved in training these large language models to refuse inappropriate requests, and the environmental impact of developing and using the tools. On the flip side, many have been waxing lyrical about the potential of these technologies in terms of productivity (saving time) and performance (improving quality).

Whilst the discussion has been illuminating, and indeed helpful, it is evident that many educators feel completely overwhelmed. Having facilitated several webinars in the last year, it is clear to me that we come with very different experiences: polls conducted during these sessions indicate a 50:50 split, with half of participants having never used GenAI. We will all benefit, staff and students alike, from engaging in ongoing CPD – on what GenAI is, how it can be used, and how it shouldn’t be used in the context of learning and teaching.

We need to engage in conversations with our students about the appropriate and ethical use of GenAI, and must not assume that all students will know how it works and what its shortfalls are.

So if we are to use these technologies, and prepare staff and students to do so safely, then the five conundrums identified in the series are an excellent starting point. Here are some of my thoughts to add to the discussion.

1. Rethinking our academic culture

We need to reconsider what we are assessing and why. There is potential to bring GenAI into formative assessment: to scaffold assessment for learning, to support documenting and reflecting on the assessment process, and to develop valuable skills.

Whilst we know that every new technology, from the printing press to modern-day digital technology, has the potential to be a disruptor, it is important to acknowledge that GenAI output needs to be fact-checked. Nature (24 January 2023↗️) made it clear in their article that LLM tools should not be cited or attributed as an author. Furthermore, whilst attempts have been made to put guardrails in place, these should not be assumed to be secure, and inappropriate content may still be presented. Sam Altman, CEO of OpenAI, the company that created ChatGPT, stated in an interview that although the new version (GPT-4) was “not perfect”, it had scored 90% on the US bar exams, achieved a near-perfect score on the high school SAT math test, and could “write computer code in most programming languages” (Guardian, 17 March 2023↗️). At the same time, he also highlighted concerns, including the perpetuation of disinformation.

Supporting our students to develop critical fact-checking skills is vital, so that they can learn to identify misinformation (inaccurate information) and disinformation (information deliberately intended to cause malicious damage).

2. Grappling with the difference between AI and human intelligences

We need to examine and discuss the ethical, legal, and social implications of using GenAI with our students. Privacy, GDPR compliance↗️ and copyright infringement are all concerns. Claude↗️, for example, is described as an AI assistant; it allows you to upload a PDF and ask it to summarise the document. Consideration needs to be given to who holds the copyright of that document (see Copyright policies of academic publishers↗️). The environmental impact is another area to discuss, and one where we need to find human solutions.

3. Disrupting our teaching and assessment practices

Universities provide their students with Microsoft Office. If we are to use other technology, we need to be sure there is equitable access. If you are going to consider introducing GenAI tools into learning and teaching, it is vital that students are directed to use tools that are free to access. Whilst Microsoft 365 Copilot promises enticing enhancements, how many will be able to afford to commit to $30 per user, per month↗️?

4. Honouring transparency and honesty

We need to be transparent about how we are using these tools. As Tracy Madden points out in her post↗️, if academics are going to use generative technologies (which they are), then why wouldn’t we teach our students to use such tools to enhance their productivity and performance? Whilst academics might use them to draft or enhance learning outcomes, assessment briefs and criteria, and class activity outlines, what uses might we consider that would be beneficial for students?

5. Embedding ethics in and beyond the classroom

The conversation around ethics needs to take place in the classroom, with ground rules established, discussed, and agreed. How might these be shared and developed further for the benefit of all? The University of Sydney has worked with students as partners to develop a useful resource, Supporting students to use AI responsibly and productively↗️.

At my own university, we have updated the Academic Conduct regulations and provided new guidance for our students. Given the fast pace of developments in this area, I am sure we will continue to update this guidance. The regulations now explicitly refer to artificial intelligence:

Contract cheating/concerns over authorship: This form of misconduct involves another person (or artificial intelligence) creating the assignment which you then submit as your own. Examples of this sort of misconduct include: buying an assignment from an ‘essay mill’/professional writer; submitting an assignment which you have downloaded from a filesharing site; acquiring an essay from another student or family member and submitting it as your own; attempting to pass off work created by artificial intelligence as your own. These activities show a clear intention to deceive the marker and are treated as misconduct.

New guidance provides examples of how generative artificial intelligence might be used. For example:

    • Answering questions where answers are based on material which can be found on the internet.
    • Drafting ideas and planning or structuring written materials.
    • Generating ideas for graphics, images, and visuals.
    • Reviewing and critically analysing written materials to assess their validity.
    • Helping to improve your grammar and writing structure – especially helpful if English is a second language.
    • Experimenting with different writing styles.
    • Getting explanations.
    • Debugging code.
    • Getting over writer’s block.

However, the guidance also highlights the limitations and drawbacks of using AI. Whilst these tools are easy to use, it is important to remember that they can provide misleading or incorrect information.

They can offer shortcuts that reduce the need for critical engagement, which is key to deep and meaningful learning. Students need to be aware of the difference between reasonable use of such tools and the point at which their use might be regarded as a way of avoiding necessary thinking.

The guidance also emphasises that artificial and human intelligence are not the same. AI tools do not understand anything that they produce, nor what the words they produce mean when applied to the real world.

To support this, we are also working on a new online academic integrity mini-module that can be embedded within a course or signposted as a self-directed activity. This video, aimed at students, talks about ChatGPT and academic integrity↗️.

Sue Beckingham

Sue Beckingham is an Associate Professor (Learning and Teaching), a National Teaching Fellow, Principal Lecturer in Digital Analytics and Technologies, and a Learning and Teaching Portfolio Lead at Sheffield Hallam University. She is also a Certified Management and Business Educator, a Senior Fellow of the Higher Education Academy, a Fellow of the Staff and Educational Development Association, and a Visiting Fellow at Edge Hill University. Her research interests include social media for learning and digital identity, groupwork, and the use of technology to enhance learning and teaching; she has published and presented this work nationally and internationally as an invited keynote speaker. She is a co-founder of the international #LTHEchat ‘Learning and Teaching in Higher Education Twitter Chat↗️’ and the Social Media for Learning in HE Conference @SocMedHE↗️.
