AI, Heritage, and the Future of Education: My Reflections on the ArCH Conference
Introduction
On March 16th, I attended the AI for Cultural Heritage Hub (ArCH) Conference at Clare College, University of Cambridge, and I am very grateful to the organisers for their generous invitation. Hosted by Cambridge University Libraries in collaboration with the Department of Applied Mathematics and Theoretical Physics, and funded by ai@cam, the event served as a vital forum for exploring how artificial intelligence is reshaping the Galleries, Libraries, Archives, and Museums (GLAM) sector. The conference focus aligned with something I truly believe – that Generative AI can be used responsibly to support important aspects of Higher Education.
As a member of the Digital Skills, Design and Training Team at The University of Edinburgh, whose day-to-day work involves close collaboration with the ELM team at EDINA, I found the day an extraordinary opportunity to compare how different institutions tackle meaningful adoption of Generative AI technologies. My role involves designing and delivering training sessions on the responsible and effective use of Generative AI as part of our broader Digital Skills Programme, and the conference provided perfect case studies to inspire my future sessions.
The ArCH Project and Institutional Infrastructure
The conference opened with a formal welcome from Jessica Gardner, the University Librarian at Cambridge, followed by an in-depth introduction to the ArCH project by Amelie Roper and Jennie Fletcher. ArCH is a proof-of-concept initiative designed to create a secure workspace where researchers can analyse cultural heritage data using AI without the risks associated with public-facing tools. This closely mirrors the mission of ELM, the AI innovation platform we have built at EDINA. ELM provides a central, secure gateway to Generative AI via access to Large Language Models (LLMs), ensuring that all chat histories, prompts, and uploaded documents are hosted securely on university servers. The live demo of ArCH made clear that the project’s focus is on providing a secure environment for AI-supported work. This is exactly what we advocate for in our training: a protected environment where users can experiment with AI while maintaining control over their data.
Unlocking Inaccessible Data with AI
The second session, led by Tuan Pham, moved into the practical challenges of “dark data” – records that exist but are effectively invisible because they are not machine-readable. Amparo Gimeno-Sanjuan presented a fascinating look at the conversion of legacy card catalogues into structured digital formats using LLMs. Mathew Lowe from the Museum of Zoology then emphasised that AI is “time-critical” for biodiversity research: we simply do not have the human capacity to manually transcribe centuries of handwritten registers. The session highlighted a fundamental shift in digital literacy: understanding what AI is capable of, and identifying how it can be applied responsibly across university operations, is becoming an increasingly important skill. I can certainly see this influencing my training sessions going forward.
Technical Precision and the Human-in-the-Loop
After lunch, the focus shifted to the technical nuances of computer vision, led by Irene Galandra. Huw Jones and Wallace Peaslee showcased their work on the Darwin/Henslow project, where they trained a model to distinguish between the handwriting of Charles Darwin and his mentor, John Stevens Henslow. Their results – three to four times faster than manual work – highlighted the efficiency of modern AI solutions. This was complemented by Anna Breger’s presentation on reconstructing the papyrus of Ramose (the Book of the Dead). Using bespoke autoencoders and fibre pattern matching, her team is digitally reassembling ancient fragments that have been physically separated for millennia. These presentations provided a masterclass in the “human-in-the-loop” approach, where AI handles the heavy lifting of pattern recognition while the human expert provides the final verification and context. Later, over dinner, I asked Wallace whether the aim is to automate the entire process. He stressed that human researchers must remain essential: the technology should support their work, not aim to replace them. It was great to see how ethical the approaches to AI are in our community. This is something I continually highlight in the training sessions I design and deliver, and I believe these issues will remain vital as time goes on.
Ethics, Representation, and Colonial Legacies
The fourth session, chaired by Siddharth Soni, addressed the ethical footprint of AI, particularly concerning colonial history and excluded knowledge. Josh Fitzgerald explored the use of AI-infused approaches to analyse sixteenth-century Nahuatl-Latin lectionaries. This was followed by a collaborative presentation by Maya Indira Ganesh, Joel McKim, and David Waterhouse, who discussed experiments at the Scott Polar Research Institute. They examined how generative AI handles sensitive items like scrimshaw holdings and indigenous records. The session was a stark reminder that AI is not neutral; it carries the biases of its training data. For those of us in the education sector, this means we have a profound responsibility to share this knowledge with staff and students who might not yet be aware of it, and to teach them how to interrogate AI outputs for cultural and historical inaccuracies.
This ethical approach to understanding bias and spreading awareness is a crucial element of our Digital Skills Programme, and it mattered greatly to the 1,400 participants of my courses last year: each session includes space to discuss the ethical issues associated with Generative AI, and bias was the most frequently raised concern. We will continue our mission of spreading awareness in the years to come, making a real impact on our community.
Strategic Learnings for the ELM Platform
The conference reinforced that the architectural choices we made for ELM at EDINA directly address the most pressing concerns of the Higher Education community. One of the most significant takeaways was the validation of our Secure and Responsible pillars. Many speakers expressed concern about the “data leakage” associated with commercial models, particularly when handling rare or sensitive cultural artefacts. ELM’s zero-data-retention agreements and university-hosted servers provide exactly the secure “sandbox” that researchers are calling for.
Furthermore, the discussion of the environmental impact of AI during the final panel with Anna Breger and Anna-Maria Sichani aligned with our Climate Sensitive principle. ELM’s focus on locally hosted, greener, open-source optimised LLMs is not just a secondary feature; it is an essential requirement for institutions aiming to align their digital innovation with their sustainability goals. The ArCH project’s goal of providing a “gateway” for non-technical users also mirrors our Inclusive and Responsive goals. We must continue to develop ELM as a platform that “levels the playing field,” ensuring that the most powerful LLMs are accessible to all students and staff regardless of their departmental funding or technical background.
Implications for Digital Skills and Training Delivery
As a member of the Digital Skills, Design and Training Team, this conference has shaped how I will design and deliver the Generative AI and ELM training sessions within the Digital Skills Programme. The ArCH conference was an amazing opportunity to speak with academics, librarians and technical staff from other universities and to understand how different institutions approach meaningful, responsible and safe Generative AI adoption. AI is at its most powerful when it augments human expertise rather than replaces it. In my training sessions, I will continue to move the conversation beyond simple text generation and into exploring how this technology can strategically fit into our work in a safe and responsible way.
Moreover, the session on colonial legacies provided many important insights. My goal is to ensure that every student and staff member at the University of Edinburgh understands that being “AI literate” involves more than knowing how to write a prompt; it requires an understanding of data provenance, environmental impact, and the ethical implications of the tools they use. By using ELM as the primary tool in our training, I can demonstrate a responsible model for AI engagement in all of these respects.
Final Thoughts
The ArCH conference was a powerful opportunity to connect with other higher education institutions and to learn more about what they are doing in relation to Generative AI adoption. Through the combination of our secure ELM platform and our team’s comprehensive Digital Skills Programme, we are well positioned to help our community adopt Generative AI in a meaningful way.

