Experiments in GenAI: Excel vs ELM for constructing and merging content
Our team has been experimenting with AI over the past few months. As part of this, we recently held a head-to-head test between an old technology (Excel) and a new one (ELM, our in-house large language model) for a time-consuming annual task: preparing summaries of our programmes for UCAS. Spoiler: Excel won, demonstrating its usefulness for predictable, rule-based content manipulation and editorial tasks.
Since 2024, we’ve run a number of experimentation sessions to see whether we could use GenAI to manipulate and edit content.
Related post: Experimenting with GenAI: why we won’t be using it to help with degree finder editing for now
As part of this, I planned a session focused less on AI’s generative capabilities and more on its analytical and content-manipulation abilities.
The task: creating programme overviews for UCAS
Each year, the Content Ops team has to ‘create’ short overviews of our undergraduate degree programmes for our listings on the UCAS site. Our main issues with this task are:
- UCAS has a 4,000-character limit for summaries.
- It uses markdown for formatting.
- It’s a single field (no subheadings).
In previous years, colleagues simply pulled an export from the system, grabbed the text in the ‘Description’ field, trimmed it back in Excel to meet the UCAS character limit, then did a laborious ‘tidying’ job to catch anything truncated mid-sentence or that didn’t match the system formatting. It technically did the job, but it wasn’t the most elegant solution, and was time-consuming when repeated for over 350 programmes.
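To give a sense of why that tidying was needed, the blunt trim looks something like the formula below (a sketch only – B2 standing in for wherever the ‘Description’ text sat in the export). LEFT simply keeps the first 4,000 characters, with no regard for where a sentence ends.

```
=LEFT(B2, 4000)
```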
We also had some feedback from School editors that they wanted their overviews to be a bit ‘better’ – more comprehensive and descriptive, maybe pulling in some detail from our new ‘Programme benefits’ field. To do this, we’d need to merge a couple of fields and ideally add a heading in between.
It looked like a good test case for automation, so we decided to test how both ELM and Excel could handle it.
What we learned: Excel can do most of this task
Excel could do pretty much everything we needed to do for this task. With an export of the degree finder, it could:
- concatenate (join) the content from the two fields into one.
- flag which cells were over 4,000 characters and would need editing down (using conditional formatting).
- remove some special characters for us, such as link formatting, using the Find and Replace function (a rough sketch of these steps follows this list).
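As a rough sketch of those three steps (the column letters are illustrative – A and B standing in for the two source fields, C for the merged result, and the string to remove just a placeholder):

```
Merge the two fields:       =CONCAT(A2, B2)
Flag over-length cells:     =LEN(C2) > 4000
Strip an unwanted string:   =SUBSTITUTE(C2, "string to remove", "")
```

The LEN() check works either in a helper column or directly as a conditional formatting rule, and for one-off clean-ups Find and Replace (Ctrl+H) does the same job as SUBSTITUTE without a formula.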
What Excel couldn’t do: reformat markdown
The tricky bit was getting Excel to reformat any subheading markdown (hashes) into asterisks, which is what the UCAS CMS uses. Unfortunately, it wasn’t an easy find-and-replace because:
- we were dealing with unique subheadings within the content (that is, the subheadings schools added under ‘about’ and ‘benefits’)
- the hashes were only at the start of the heading, and we needed asterisks on either side
So we couldn’t just replace #### with **, for example; we also needed to add ** at the other end of each of these unique headings.
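To illustrate with a made-up heading, the transformation we needed was:

```
#### Why study this programme?   →   **Why study this programme?**
```

Replacing the hashes only gets you halfway; the closing asterisks have to land at the end of a heading whose wording is different every time, which a simple find-and-replace can’t locate.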
…But ELM couldn’t do it reliably either
We then turned to ELM to help with this task. It could successfully turn the hashes into asterisks, but we had to explicitly ask it to output the content as raw markdown we could copy and paste. Otherwise, it rendered the asterisks as bold text in the interface.
In contrast, other team members who tried ELM exclusively for this task had a much harder time. ELM:
- struggled with larger datasets, and could only edit a few programme overviews at a time
- varied in how it trimmed over-length content depending on how the request was phrased (in one example, it took the character limit too literally and stopped a piece of content in the middle of a sentence!)
In essence, ELM was unreliable, inconsistent, and not easily exportable for our purposes.
How we solved it
In the end, we came up with a different Excel-based solution. We concatenated (merged) the ‘about’ and ‘benefits’ fields, and had the formula insert a heading between them, formatted in the style UCAS expects.
This gave us a column of formulas rather than plain text, so to search and further manipulate the output we used ‘Paste values’ to copy the results into another column and worked with them as paragraph text. Then all we had to do was trim the fields with over 4,000 characters (only two of these this year!), run our Find and Replace for anything incompatible with UCAS, and we were done.
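For anyone who wants to reuse the approach, the core formula looked something like this (a sketch only – the column references and the heading wording are illustrative rather than our exact setup):

```
=CONCAT(A2, CHAR(10), CHAR(10), "**Programme benefits**", CHAR(10), CHAR(10), B2)
```

CHAR(10) adds a line break inside the cell, and the hard-coded heading arrives already wrapped in the asterisks UCAS expects. Copying the column and using Paste values then gives you plain text you can check with LEN(), trim, and Find-and-Replace as normal.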
And the winner is…Excel
A clear win for Excel. It could carry out most of this task on its own, faster and more reliably than ELM. There was no danger of it ‘inventing’ content, so for a straightforward content manipulation task, it did exactly what we needed it to.
ELM, on the other hand, was unreliable, particularly with large datasets, and seemingly unable to follow instructions consistently. It also couldn’t export its output in a format we could easily share with colleagues – we would have had to paste it into a Word document.
Always use the best tool for the job
This experiment was a good reminder that just because a new tool exists doesn’t mean you have to use it if it doesn’t fit the task at hand. I was quite strict that I didn’t want the solution to invent anything, so I was relying on the purported analytical properties of GenAI, not its creative ones. It performed OK with small test cases, but wasn’t suitable for repeated actions across large, structured datasets.
Conversely, repeated actions across large, structured datasets are exactly what Excel is built for. It might seem odd to use a spreadsheet for editorial tasks, but depending on what you’re doing it may be just what you need.
At the risk of sounding like an advert, if it’s consistency and scalability you’re looking for – even for content manipulation and editorial exercises – Excel is the tool for you.
How are you using GenAI?
If you are experimenting with AI tools, or have just rediscovered a love for Excel, we’d love to hear about it in the comments below.