Educated Prompting: Coding Without Writing a Single Line

Introduction
In my second month as the DLAM Digital Accessibility Intern, I’ve been contributing key accessibility and UX improvements to an open-source sheet music application. This post describes how a GenAI workflow fast-tracked my understanding of Python and allowed me to make impactful changes to an open-source project, through a process I call “Educated Prompting”.
Closer to code than we might think…
Working with computers day-to-day, many of us are closer than we think to writing a line of code or two. Some might have gone no further than using the “inspect element” feature to make their social media following look far larger than it really is, while others write extensive amounts of code daily.
My daily experience with code
In my work testing website accessibility, I often look at HTML or CSS code, although I never have to write any myself. However, my research into tools for visually impaired musicians – such as my best friend – led me to an amazing open-source project called Talking Scores, which did get me coding.
Discovering Talking Scores
Talking Scores lets users upload MusicXML files (the standard format for digital sheet music) to its website and converts them into readable HTML. This output can be read aloud by screen readers, creating truly “talking” scores. However, at this stage, the output was far more detailed than my friend needed – he only wants the note names and the ability to hear the music. Additionally, the site’s bright interface was difficult for him to navigate.
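To give a sense of what the converter works with: even a single note is verbose in MusicXML. The simplified snippet below encodes one quarter note, which a talking score can boil down to as little as the word “C”:

```xml
<note>
  <pitch>
    <step>C</step>
    <octave>4</octave>
  </pitch>
  <duration>1</duration>
  <type>quarter</type>
</note>
```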
For these reasons, I set out to implement the following:
- Adding a native dark mode switch
- Removing the clunky repetition information displayed in the output
The Challenge
I discussed these improvements with Stewart, my line manager, who supported the idea. He connected me with Tallulah Thompson, the DLAM Lecture Recording Data intern. She is familiar with Python – the language in which most of Talking Scores is written – and I spoke with her about the feasibility of my planned improvements. Our discussion highlighted the value of Generative AI in programming and she introduced me to InfPALS resources about Git and GitHub, which I realised were tools I would need to become familiar with to contribute to Talking Scores.
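For anyone new to these tools, the contribution workflow those resources cover boils down to a handful of commands – the repository URL and branch name below are just placeholders:

```
# Clone your fork of the project (placeholder URL)
git clone https://github.com/your-username/talking-scores.git
cd talking-scores

# Create a feature branch for your changes
git checkout -b dark-mode-switch

# ...edit files, then stage, commit and push them...
git add .
git commit -m "Add native dark mode switch"
git push origin dark-mode-switch

# Finally, open a pull request on GitHub from this branch
```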
Even after working through some of these materials, I still had to find a way of actually writing the first line of code to implement the features I wanted. While part of the dark mode switch would only require a bit of HTML, I would still need to add JavaScript logic for it to function properly, and I was not familiar with JavaScript either.
So I turned to NotebookLM (Google’s Large Language Model (LLM) interface, which responds almost exclusively from the information you give it), because it can handle a huge amount of uploaded data at once, and asked it to annotate the main project files so that I could understand the code’s structure. Reading through these annotations did help me get to grips with Python syntax – all the more useful because learning Python is part of one of my courses next semester, so this was a great kick-start. However, there was not much JavaScript on the site at this point, and I still lacked the confidence to write that first line of code, as I had no idea where to begin.
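If you want to try the same approach, an annotation prompt along these lines works well (the wording here is illustrative):

```
I have uploaded the main source files of an open-source project. For each file,
explain its purpose, annotate the key functions and classes, and describe how
the files fit together, so that a beginner can follow the code's structure.
```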
That was the point at which I had the idea of getting Generative AI to write the code for me: it let me get started immediately, without first having to learn programming properly – something I simply didn’t have time to do.
Developing My AI-Powered Workflow
Knowing I wanted to use Generative AI was one thing, but figuring out how to start was another. My approach wasn’t random; it evolved into a deliberate workflow with several key steps. It began with the most fundamental decision: what tools was I actually going to use?
Choosing the Right Tools
There certainly is a lot of choice when it comes to how to work with Generative AI for programming. Specifically, two main questions need to be answered:
1. What LLM should be used?
This is arguably the more complex of the two questions. I used Gemini 2.5 Pro for the bulk of my work, as it is one of the best models available for the task and, as a student, I have almost unlimited access to it for a six-month period. However, at a workshop run by the AI Adoption Interns, they mentioned that they personally use Claude, as it is essentially built for programming – which prompted me to explore which models would be most effective for my needs. This brings me to the second question:
2. How should the LLM be accessed?
As a University of Edinburgh employee, the obvious answer is ELM (Edinburgh access to Language Models). However, obvious is not always correct. For me, ELM was not the best solution: although it has many privacy advantages (not actually relevant to this open-source project), the per-chat usage cap was too low for the amount of iteration this task required, and I didn’t want to keep restarting chats either. Another easy answer is the model’s purpose-built interface, e.g. ChatGPT for OpenAI’s LLMs. However, the best models and highest usage tiers cost a lot of money – with the exception, for me, of Gemini, which I get for free. The final option I found is GitHub Copilot: GitHub’s own portal for interacting with LLMs, similar to ELM but built for GitHub integration, with access to different models (such as Claude) and higher usage limits under the GitHub Education plan.
So, after some research, I settled on Claude Sonnet 4 through GitHub Copilot. That said, I ran out of prompt tokens towards the end and finished off with GPT-4.1 – so getting Gemini 2.5 Pro for free, with far higher usage limits than anything else available to me, really is a big advantage.
Now that we know the ‘what’ of coding with LLMs, let’s look at the ‘how’: how should we work with our chosen LLM?
My Method: Educated Prompting
I call my workflow “Educated Prompting”. I quickly learned that it’s not as simple as asking for a feature and copy-pasting the code (although copy-pasting is still how the code made it into the project). You have to guide the AI and find workarounds when it gets stuck. For instance, while I didn’t need to understand the code to ask for a dark mode switch, I did need to recognise that the icons weren’t centring because of their styling, not because of the icons themselves. That is a simple example, but an AI can get stuck on issues like that for a long time. It’s remarkable how often you need to steer the AI back onto the right path – but even so, this is still far easier than learning a whole programming language.
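To give a concrete sense of the kind of code this process produces, here is a minimal sketch of a dark mode toggle – an illustrative example only, not the exact code now in Talking Scores, with element and class names that are my own placeholders:

```html
<button id="theme-toggle" aria-pressed="false">Dark mode</button>

<script>
  const toggle = document.getElementById('theme-toggle');

  // Reapply the user's saved preference when the page loads
  if (localStorage.getItem('theme') === 'dark') {
    document.body.classList.add('dark-mode');
    toggle.setAttribute('aria-pressed', 'true');
  }

  toggle.addEventListener('click', () => {
    // Flip a class that the dark stylesheet rules target, and remember the choice
    const isDark = document.body.classList.toggle('dark-mode');
    toggle.setAttribute('aria-pressed', String(isDark));
    localStorage.setItem('theme', isDark ? 'dark' : 'light');
  });
</script>
```

The aria-pressed attribute keeps the switch usable with a screen reader – fitting, given what the site is for.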
The main drawback is that I can’t say with 100% certainty that I understand everything I’ve put into my code. However, thanks to the comments the AI added, the fact that LLMs are pretty good at simple coding tasks, and the simple reality that it works, I’m content with this workflow.
Below are a few tips I picked up to maximise the efficiency of coding with LLMs, as well as some ways to make prompts into “educated prompts”:
Key Tips and Tricks
- Provide Full Context. Context is essential. Always start by giving the AI the relevant files. In tools like GitHub Copilot, you can select or upload files directly. In other scenarios, copy and paste the entire file’s contents into your first prompt – or again at any later point – to make sure you and your AI assistant are on the same page.
- Ask for a Plan First. Instead of asking an AI to immediately “implement a dark mode switch”, ask it to strategise first. A good prompt is: “First, develop a detailed implementation plan without writing any code. Wait for me to approve your plan before you begin the implementation.” This ensures the AI’s approach aligns with your goals before it wastes time on code, and allows you to spot flaws in its planned implementation, if you understand how your application works.
- Set Ground Rules. It’s useful to add a few rules at the start of a conversation. A good one is: “Throughout this conversation, use British English spelling and ensure all output is UTF-8 compatible.” Requesting UTF-8 helps prevent strange copy-paste errors caused by non-standard characters.
- Don’t Be Afraid to Start a New Chat. Sometimes an LLM gets stuck in a loop, fixing one issue but recreating an old one. When this happens, I have found it extremely useful to open a new chat window. This allows the AI to see the problem with a fresh pair of eyes, often breaking the cycle.
The following template incorporates all of these tips and is a great starting point for any AI-assisted coding project – adapt the bracketed placeholders to your own task:
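```
You are helping me add a feature to an existing open-source project.

Ground rules for this conversation:
- Use British English spelling and ensure all output is UTF-8 compatible.
- Add explanatory comments to any code you write.

Context: the relevant project files are attached/pasted below.
[attach or paste files here]

Task: [describe the feature, e.g. "implement a native dark mode switch"].

First, develop a detailed implementation plan without writing any code.
Wait for me to approve your plan before you begin the implementation.
```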
My Progress
After about two weeks of on-and-off work on Talking Scores – some of it in my own time, as this project is of personal interest and is listed as the Personal Goal of my internship: “Find out more about and acquire relevant accessibility skills in order to further help my visually impaired best friend” – I’ve managed to make some pretty significant improvements. Below is a video showcasing my new version of the site and comparing it to the state I found it in:
(N.B. No audio on this video)
Next Steps
I have now created a pull request on the Talking Scores GitHub page: if the contributors accept it, my version will become active on the live site, which was my goal all along. I’ll share an update in my next blog post if there is any news by then.
Thank you to Tallulah for helping me get started on this whole project. She has been doing amazing work in DLAM, so definitely check out her blog, where she recently published a really interesting article on Identifying movement in lecture recordings.
Tallulah Thompson’s blog
(Header image generated with Gemini)