I’m sorry, could you repeat that?
Have you ever had a moment while watching the news, or an advert, and felt like you missed what was said? It’s an odd sensation and can totally change your understanding of what you see. Now imagine if that happened over and over again. What things might you miss out on? What context would you lose and how might it negatively influence your overall engagement? Now imagine if that was happening while you were trying to revise for a university course or while watching a video to find out how to access important services. You would likely feel at a disadvantage compared to your peers.
The good news is that subtitles can help mitigate scenarios like these. Even better, subtitled media benefits everyone. Whether you are a member of the Deaf community, a non-native English speaker, or simply watching a video with lots of important but technical language, being able to listen and read at the same time can make information easier to process. It means everyone gets the opportunity to fully engage.
For many, experiences of inaccessible media are unfortunately quite common, contrary to long-existing equality legislation. This is why it is important that institutions take action to address content accessibility. In March 2019, The University of Edinburgh launched a pilot programme to help address gaps in accessible multimedia hosted on our media platform (Media Hopper Create). This student-led subtitling service consists of ten graduate and post-doc students from various academic backgrounds, and it has helped to improve our multimedia for all who use it. So far, the team has subtitled over one hundred pieces of multimedia available to the public.
But as amazing as our student-led service has been over the past few months, there is still more work to do across the University to address accessibility. It is important that we start to really understand the technology landscape and what resources are available to aid this mission. Research and innovation have long been part of the University’s heritage. So, there is ample opportunity to leverage this innovation to help drive accessible design and create cultural change.
Over the past few months, the Subtitling for Media Project team has identified some interesting projects looking into transcription and speech technology within the University. On May 10th, the University hosted an event featuring Prof Steve Renals, Professor of Speech Technology at the School of Informatics, and Nick Rankin, CEO and co-founder of Quorate Technology. The team hoped to use the event as a way of highlighting the innovation happening around transcription and automation. Questions that shaped the event included:
– What state-of-the-art technology is available?
– What might the future look like for accessible media?
– What are the challenges around subtitling media?
Prof Renals presented on The Centre for Speech Technology Research – an interdisciplinary research centre linking Informatics with Linguistics and English Language. Renals spoke about companies and organisations that apply speech technology to a wide range of uses: from voice reconstruction for people with motor neurone disease to broadcasters, like the BBC World Service, who use synthetic speech for multilingual spoken content production. Renals also spoke about challenges in speech recognition, a technology that can be used to expedite the transcription and subtitling process. Issues include vocabulary size, speaker characteristics and acoustic conditions. The biggest takeaway for the project team was that transcription is not cheap, and it would be difficult to have human-transcribed data for all teaching and public-facing materials. Still, the opportunity to use technology and data to create automatic subtitles is a rich area worth exploring.
Nick Rankin, CEO of Quorate Technology, presented on three projects that his company had recently worked on:
1) An EU-funded research project looking into making meetings more productive by analysing speech from meetings to produce transcripts.
2) A project analysing calls from the financial sector – around 1,000 calls per day.
3) A House of Commons project looking at committee meeting recordings to build a transcription service that could recognise natural speech patterns within politics.
Rankin highlighted the challenge of obtaining a sizeable training dataset for building systems that would benefit a subtitling service based in the University. But what he demonstrated would be of huge benefit to any future subtitling team, especially the ability to quickly search for words or tags within a subtitled file.
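To illustrate why searching within subtitles is so useful, here is a minimal, hypothetical Python sketch (not Quorate's actual system, whose internals were not described) that parses a simple SubRip (.srt) file – a common subtitle format – and finds the cues containing a given word:

```python
import re

def parse_srt(srt_text):
    """Parse SubRip (.srt) content into a list of cues.

    Each cue is a dict with 'index', 'start', 'end', and 'text'.
    A simplified parser for illustration only: it ignores
    formatting tags and many real-world edge cases.
    """
    cues = []
    # Cues in an .srt file are separated by blank lines.
    for block in re.split(r"\n\s*\n", srt_text.strip()):
        lines = block.splitlines()
        if len(lines) < 3:
            continue
        index = int(lines[0])
        start, end = [t.strip() for t in lines[1].split("-->")]
        text = " ".join(lines[2:])
        cues.append({"index": index, "start": start, "end": end, "text": text})
    return cues

def search_cues(cues, word):
    """Return cues whose text contains the word (case-insensitive)."""
    pattern = re.compile(r"\b" + re.escape(word) + r"\b", re.IGNORECASE)
    return [c for c in cues if pattern.search(c["text"])]

example = """1
00:00:01,000 --> 00:00:04,000
Welcome to this lecture on speech technology.

2
00:00:04,500 --> 00:00:08,000
Today we will discuss automatic transcription.
"""

cues = parse_srt(example)
for cue in search_cues(cues, "transcription"):
    print(cue["start"], cue["text"])
```

Because each cue carries its timestamps, a match immediately tells you where in the recording a word was spoken – which is what makes subtitled media so much easier to navigate than audio alone.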
Projects like the student-led subtitling service are helping the University to meet its responsibilities in creating accessible learning spaces. But more can be done, and if the event on May 10th is any indication, there are many great researchers and innovators ready to bring information and media well into the 21st century in an accessible and engaging way.
This blog post was originally published on the Digital Learning Applications and Media team blog.