Prompt-Off AI Challenge
Summary
Our Digital Skills Trainer interns, Aretha Foo, Nandita Soma Shekar and Farhad Aghayev, reflect on a section meeting that they recently led.
During the Digital Skills and Design (DSDT) section meeting on the 12th of June 2025, we had the pleasure of organising an exciting and innovative activity. Dubbed the “Prompt-Off” AI Challenge, it was designed to dive deep into the art and science of prompt engineering in AI.
Challenge Explained: Collaboration Meets AI Creativity
The activity’s core idea was simple: participants were divided into groups of 2 or 3, and each group was assigned a unique task requiring them to generate an effective AI prompt. The challenge wasn’t just to create any prompt but one that would inspire AI models to produce the best possible response relevant to the given tasks.
To add an extra layer of challenge and insight, we intentionally inserted pitfalls within some tasks designed to make it very likely for the AI to generate inaccurate or misleading responses. This was a deliberate move to showcase the possibility of AI generating non-factual information, and to encourage participants to think critically about prompt clarity and potential pitfalls.
Once the groups finalised their prompts, the real twist came: prompt swapping. Each group exchanged their prompt with another group, which then fed that prompt into the Edinburgh (access to) Language Models (ELM) to generate a response. This swap introduced fresh perspectives on prompt creation and allowed participants to see how different prompts could influence AI responses in unexpected ways.
Evaluating AI Responses: The Art of Critical Review
After the AI generated responses based on the swapped prompts, participants evaluated the results against a provided list of criteria. This evaluation was key, as it encouraged critical thinking about several dimensions:
- Accuracy – Does the AI-generated answer provide correct and reliable information based on the prompt?
- Ethical Considerations – Is the response free from harmful, offensive, or inappropriate content?
- Clarity and Understandability – Is the prompt clearly written and easy to understand?
- Bias and Fairness – Does the response avoid stereotypes or unfair assumptions?
- Privacy and Safety – Does the prompt or response avoid requesting or revealing sensitive personal data?
- Relevance and Usefulness – Does the AI-generated answer effectively address the prompt?
- Responsible Prompt Design – Is the prompt designed to encourage positive, responsible AI use?
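For readers who want to try a similar review exercise themselves, the checklist above can be turned into a simple scoring sheet. The sketch below is purely illustrative: the criterion names come from our list, but the 1–5 rating scale and averaging are our own invention, not the actual scoring used in the session.

```python
# Illustrative rubric: rate an AI response 1-5 on each workshop criterion
# and average the ratings into an overall score. The scale and weighting
# are hypothetical, not the session's real scoring sheet.

CRITERIA = [
    "Accuracy",
    "Ethical Considerations",
    "Clarity and Understandability",
    "Bias and Fairness",
    "Privacy and Safety",
    "Relevance and Usefulness",
    "Responsible Prompt Design",
]

def score_response(ratings: dict[str, int]) -> float:
    """Average a 1-5 rating for each criterion into an overall score."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"Unrated criteria: {missing}")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

# Example: a group's review of one swapped prompt's output, where a
# planted pitfall tripped the model on accuracy.
ratings = {c: 4 for c in CRITERIA}
ratings["Accuracy"] = 2
print(round(score_response(ratings), 2))
```

Even a rough sheet like this makes disagreements visible: two groups can rate the same response and compare where their scores diverge.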
By analysing how the AI dealt with the intentional loopholes, participants gained first-hand experience of AI’s limitations and the importance of crafting clear, unambiguous prompts. This cycle of prompt creation, response generation, and evaluation helped participants appreciate the delicate balance between human input and AI interpretation.
Responsible Use of AI: The Four-Legged Chair Analogy
To help participants reflect on the importance of ethical and effective AI prompting, we introduced the Four-Legged Chair Analogy, a metaphor that illustrates the foundational pillars required for responsible AI use. Just like a chair needs all four legs to be stable, an AI response needs four key elements to be truly dependable:
- Clarity – The prompt must be specific, unambiguous, and complete. Vague inputs are like loose chair legs—leading to instability in output.
- Context – Without sufficient background or situational cues, the AI may fill gaps inaccurately. Providing context ensures the model understands what’s being asked.
- Intent – A well-formed prompt reflects the user’s actual goal. Misaligned or misleading intent can skew AI responses.
- Verification – The user must critically review and validate the AI’s output. Trusting an AI response blindly is like sitting on a wobbly chair without checking the legs.
Through this analogy, we emphasized that responsible prompting isn’t just about clever wording—it’s about ensuring all foundational elements are present to support the output we expect. When any of these legs is weak or missing, the AI response risks being incomplete, biased, or factually incorrect. This simple yet powerful metaphor encouraged participants to think not only about what they were asking, but how and why they were asking it—and what responsibility they bore in reviewing the answers. We wrapped up the session with a LinkedIn Learning video that reinforced responsible prompt design and real-world applications of the four-legged chair analogy.
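Three of the four legs (clarity, context, intent) can be built directly into how a prompt is written. The sketch below is a hypothetical template of our own devising, just to show what "making the legs explicit" might look like in practice; the field names are not from the session materials, and the fourth leg, verification, remains a human step after the response comes back.

```python
# Illustrative only: a hypothetical prompt template that states context
# and intent alongside the task, so the model has less to guess at.
# Field names and wording are our own invention.

def build_prompt(task: str, context: str, intent: str) -> str:
    """Assemble a prompt that makes context and intent explicit."""
    return (
        f"Context: {context}\n"
        f"Goal: {intent}\n"
        f"Task: {task}\n"
        "If any information is missing or uncertain, say so instead of guessing."
    )

prompt = build_prompt(
    task="Summarise the attached meeting notes in five bullet points.",
    context="Notes from a university digital-skills team meeting.",
    intent="A quick recap for colleagues who were absent.",
)
print(prompt)
```

The closing instruction nudges the model away from the confident-but-wrong answers our planted pitfalls were designed to provoke, though checking the output (the verification leg) is still on the reader.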
What We Learned
- Collaboration sparks innovation: Working in groups and swapping prompts revealed diverse ways to approach a single challenge.
- Prompt engineering is a skill: Crafting a good prompt directly affects the quality of AI output.
- AI is fallible and context-sensitive: Deliberate loopholes showed how AI can produce inaccurate or misleading responses.
- Critical human judgement is essential: Understanding AI’s limitations helps us guide it better.
- Feedback loops improve outcomes: Evaluating AI results helps refine prompts and improves future interactions with AI systems.
Why This Matters
As AI tools become more embedded in our daily lives and workplaces, understanding how to communicate effectively with them is crucial. Activities like this empower individuals not just to use AI, but to harness it creatively and responsibly – recognising its strengths and limitations.
Written By: Aretha Foo, Nandita Soma Shekar, Farhad Aghayev