Today I had an almost meeting-free day. I spent it doing some shortlisting, catching up with emails, and for the last hour I picked up where I left off with my AI dabbling/reading. As you can see, I’ve been playing with Adobe Firefly for 20 mins and its outputs are pretty good. Photos of people are a bit more ropey, but I’m only really interested in schnauzer-based imagery and it handles that well. ‘Oh no, not another schnauzer-based blog post’, I hear you say…
I’ve had lots of conversations with lots of people about AI. Some are particularly memorable, like one with Lorna Campbell about the ethics (or lack of ethics!) in AI. Lorna wrote a blog post describing some of the dangers of AI which is worth a read, and discussions with her have opened my eyes to risks I hadn’t considered, like how the filtering/moderation of AI training data is done and the impact on the human beings who do it (Lorna’s blog post explains this really well).
Recently, I’ve been thinking about proof in AI and how muddy it all is. Stephen in my team created a video this week with an almost perfect version of him speaking fluent French, with the right mouth movements (ok, my French is pretty bad so I can’t vouch for the fluency, but it looked amazing!). It was generated from a video where he spoke in English, and it was almost impossible to tell that the French was fake. It had his voice. After being totally amazed, I also realised how terrifying it is. This type of video shows AI can make you appear to say almost anything, and it’ll be hard to tell fake from real.
If a student is accused by an AI detector of using AI to write an assignment when in fact they haven’t, how do they prove their innocence? I noticed that Turnitin have an ‘Ethical AI use checklist for students’, which effectively provides suggestions on how to prove you didn’t use AI while you are writing your assignment (i.e. save and keep multiple drafts as you go, use Draft Coach and ask to keep multiple submissions, and make an effort to ensure your writing style and voice are evident in the work). There’s an interesting Washington Post article which gives advice on what to do if you’ve been accused of using AI to write an assignment when you haven’t. Terrifyingly: ‘If all else fails and you need to pass a class to graduate, you or your parents could talk to a lawyer.’ We don’t have the AI detector turned on in Turnitin here at Edinburgh.
Ironically, we also don’t know whether AI has stolen or plagiarised other people’s work. I’ve heard rumours that ChatGPT was trained on GitHub code. I don’t know if that’s true, and I can’t find anything that confirms it for definite, but I asked ChatGPT itself and it said: ‘The specific details of ChatGPT’s training data, including whether it was trained on content from GitHub, are not publicly disclosed by OpenAI. ChatGPT is trained on a mixture of licensed data, data created by human trainers, and publicly available data from the internet.’ If ChatGPT has stolen or plagiarised your code, your research paper or your idea, how can you prove it?
I don’t have an answer to that question, so I’ll just finish here with more pictures of AI-generated schnauzers.