This is a blog post that’s been brewing for a while. The landscape is changing so much that every time I go to write it, I’ve heard something new that has changed my perspective. So I’m writing this knowing full well that in a month’s time I’ll realise it’s a load of nonsense. Please don’t judge me.
If you haven’t been living under a rock (see AI-generated image below), you’ll have noticed there’s a lot of chat about AI in HE at the moment. Both the vast increase in available compute power and the massive increase in the creation and collection of data have contributed to the boom in artificial intelligence developments in recent years. Since the end of 2022, however, the launch of ChatGPT (a chatbot built on the AI language models developed by OpenAI) has made AI tools far more accessible to the general public. ChatGPT is just one of many AI tools now widely available. There are many others with many other purposes, including DALL-E 2, which I used to create my schnauzer-themed pictures. If you don’t know why they are schnauzer themed, see my Flickr account.
ChatGPT is a conversational chatbot built on top of GPT-3.5. I asked ChatGPT what GPT was and it said ‘GPT stands for “Generative Pre-trained Transformer,” which is a type of deep learning algorithm used for natural language processing tasks such as language generation, translation, and understanding.’
There’s now a GPT-4, but it’s currently only available to chat with if you have a ChatGPT Plus account, a monthly paid subscription (see the useful comparison article ‘GPT-4 vs. ChatGPT-3.5: What’s the difference?’).
I can think of so many good things which have started to emerge, or could be made possible by AI, in the teaching, learning and assessment space: marking assistance and automation, improved robot captioners, and the ability to provide summaries or alternative formats for just about any kind of teaching/learning materials (summaries of lecture recordings, for example). But there are also risks.
In a time when it’s already quite hard to tell fake news from real news, AI can create deepfakes which are almost impossible to tell apart from the real thing, including images, audio and video (e.g. a bot that can mimic a learned voice). There’s also a worry that both students and staff could try to pass AI creations off as their own, and that this may be hard to detect. This has generated a massive knee-jerk reaction in HE, where we are all scurrying around trying to work out what it means for our assessment practices, while edtech suppliers scurry to create products that help us detect AI use.
There are also dangers around privacy. If you read my occasional blog posts, you may remember one I wrote during the pandemic about the dangers of buying and using random edtech tools without doing the compliance thinking/paperwork. This is another one of those situations. There is the potential for these AIs to take the data you give them and use it for other purposes (perhaps ones you weren’t aware of), which is one of the reasons Italy has banned it. When you are playing with it, please consider the risk: don’t give it personal information about you or your colleagues, and leave out confidential information about your institution.
And finally on the doom-and-gloom side of things, you may have seen that key figures in AI have called for a pause in AI development in an open letter, warning of potential risks including AI automating away all the human jobs, which, I must admit, does appear to be becoming a more viable risk all the time. Still, I can’t help but see the irony of Elon Musk, who has been developing driverless vehicles for many years with the threat of making lorry drivers redundant, participating in an open letter about the dangers of AI automating people’s jobs. Although I’ve been joking about the idea of a Virtual Karen Bot, allowing me to play with dogs more hours every day, I’m starting to wonder if it might not be that far off, and maybe I’m finding it less funny than I did even in January.
With all of that said, and the doom-and-gloom out of the way, there are massive possibilities here, and ones I’ve been wondering about for many years. What I’m hoping is that all of this AI panic, once the dust settles, will encourage us in HE and FE to look at our assessment and teaching practices and think about whether we are actually assessing the right things, and whether there are some big, fundamental changes we can make that will make assessment an actual part of the learning process rather than just a rush job at the end as a measurement of achievement. I can but dream…