By Karen Howie (Head of Digital Learning Applications and Media in Information Services)
AI is exciting, the risks are boring… so why should we care?

A schnauzer correcting captions for a piece of media (AI-generated by Adobe Firefly, in keeping with my enjoyment of Schnauzer-based AI art).

I’ve not blogged for a while, but I went to an event today which inspired me to write a post I’ve been meaning to write for some time. I’ve got another blog post coming about the event itself (which was run by JISC and was about Accessibility in Procurement – very thought-provoking). One of the things discussed was what AI might do to the accessibility landscape – the opportunities and risks around this, and what we need to be aware of.

If you know me, you know that I am the service owner for a large number of our teaching, learning, assessment and media services at the University. Many of these services are provided by third parties, and something I’ve noticed is how many of the suppliers are super excited about AI and are starting to push out features powered by it. Looking at the leaps and bounds in the AI space, even in the last year, it’s thrilling and terrifying at the same time. There is so much potential for good with AI, but also perhaps an equal potential for harm.

I’m excited about the improvements to natural language processing which are coming: better automated captions for accessibility, translation to help those facing language barriers, and summaries of all sorts of audio/video to help people consume large amounts of content more efficiently (a summary can also help you decide whether a full video is worth your time). I’m interested in, but slightly wary of, things we might ask AI to do for us, like writing alternative text for images (this could be great or it could be awful!), and absolutely terrified of deepfakes and political meddling.

Given these tools are coming at us faster than we can hope to keep up, we need to think about how to evaluate them before they arrive, or at least before we decide to turn them on (luckily, some of our suppliers give us the option to leave new features switched off until we are happy to enable them).

I’ve been working with other staff in LTW (Kendal in particular) to think about the AI that’s hurtling towards us.

The types of things we are thinking about are as follows.

  • Is it obvious that the tool is powered by AI? Would a user know or understand that? We want our users to be aware so they know they have to double-check the outputs. Is there any way to confirm the output has been checked by a human?
  • Does the AI mean data is processed in another location and/or by a new subprocessor? If so, it likely changes our Data Processing Agreement, and our Data Protection Impact Assessment will need to be reviewed (plus, likely, some of our other compliance paperwork). This will depend on whether the supplier is creating their own AI or integrating an AI from another supplier (like ChatGPT or Azure AI, for example). Does its behaviour do anything that might impact our compliance paperwork, such as automated decision-making (e.g. AI marking with no human intervention)? How is the data protected?
  • If the AI comes from another supplier, are there any contract clauses we might disagree with, or new terms of use we’d find impossible to comply with? This could be around whether they use the requests/data collected for training the AI or for some other purpose, whether there are any age restrictions on use (this is something I hadn’t considered until it came up at the event today!), or whether there are rules around what it can be used for. Are there guardrails restricting what it’s possible to use the AI for – e.g. blocking conversations that encourage law-breaking, or swearing and profanity?
  • Do we know how the AI has been trained? How has the supplier mitigated bias, given the AI will have been trained on biased data? How ethical was the training process? Was it trained on copyrighted data? Do we have any choice in where it’s trained or what it’s trained on? How does the AI work, and what algorithm does it use? Is it possible to audit its decision-making?
  • Can we report on usage of the AI features so we can see what people are doing with it and how it’s being used?

It’s not as simple as just switching it on. We need to make sure it does what we expect and the supplier has developed it in a way which matches our values.

And of course, there’s one question missing from this list… is the feature actually useful in terms of teaching/learning/assessment at HE level? Or is it just a flashy gimmick?
