
How can AI play a role in the development of OSCEs?

AI touching human
Credit: Jingjing Wang

In this extra post, 4th year medical student, Jess McKenzie, explores how AI can play a role in the development of OSCEs (Objective Structured Clinical Examinations) in medical education.


Introduction

OSCEs (Objective Structured Clinical Examinations) represent an integral way of assessing skills key for success as a medical professional. OSCEs typically involve students rotating through a series of standardised stations where, by observing student interactions with patient actors (PAs), examiners can evaluate a student’s performance in domains such as professionalism and competency in clinical and communication skills.

Creating and running OSCEs successfully is time-consuming and challenging: it requires medically accurate and realistic cases, alongside scripts and training for the patient actors. These represent just a few of the difficulties faced by medical educators and examiners. Could artificial intelligence (AI) help resolve some of these challenges?

In this blog post, I will draw from the research and my own use of AI tools as a medical student to help answer the question: How can AI play a role in the development of OSCEs?

How can AI augment OSCE development?

Increasingly, generative AI is being used in universities by students, posing a threat to academic integrity and challenging exam regulators. But how could it be used to the educator’s advantage? Drawing from the literature, I have identified three key ways AI may enhance the OSCE examination process.

  1. Case development in ‘History-taking’ stations

Creating cases that are representative of real-life patients in all their complexity can be a difficult and time-consuming task. Patients present with a diverse range of clinical presentations and unique concerns, which OSCE case scenarios attempt to capture. AI has the potential to rapidly create diverse cases from a single, simple prompt. Using OpenAI’s Chat Generative Pre-trained Transformer (ChatGPT), I wanted to try this for myself. The AI generated a detailed case which included clear instructions for the patient actor. For something created in less than a minute, the result was rather impressive. I am certain that this will come in useful for my own studies when preparing for OSCEs, and I would encourage others to try it for themselves.
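For readers curious how such a prompt might be assembled programmatically, here is a minimal Python sketch. The station details, prompt wording, and model name are my own illustrative assumptions, not real exam content or a fixed recipe:

```python
# Sketch: assembling a one-line case-generation prompt for a
# history-taking station. All station details here are my own
# illustrative assumptions, not real exam content.

def build_case_prompt(presenting_complaint: str, setting: str) -> str:
    """Build a single, simple prompt for an OSCE history-taking case."""
    return (
        f"Write an OSCE history-taking case set in {setting}. "
        f"The patient presents with {presenting_complaint}. "
        "Include the patient's background, their ideas, concerns and "
        "expectations, and clear instructions for the patient actor on "
        "which details to reveal only when asked."
    )

prompt = build_case_prompt("acute chest pain", "an emergency department")

# To send this to ChatGPT programmatically (requires the openai package
# and an OPENAI_API_KEY environment variable):
#
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(
#       model="gpt-4o",  # model name is an assumption
#       messages=[{"role": "user", "content": prompt}],
#   )
#   print(response.choices[0].message.content)
```

In practice, pasting an equivalent sentence into the ChatGPT web interface achieves the same result; the script simply makes it easy to generate many varied cases at once.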

  2. Assessment criteria and feedback

When assessing OSCE performance, examiners often use checklists outlining the key expectations of the student. These, again, are typically time-consuming to create manually, yet could similarly be generated rapidly by AI.

  3. Standardisation and improvement of patient actor training

Typically, patient actors are given a script to memorise. Multiple factors may affect how this script is interpreted and acted out: some PAs are trained actors, whilst many are patients themselves who are passionate about giving back to the NHS. Some PAs may have been assisting in OSCEs for years, whilst for others it is their first time. Students also ask questions in many different ways, posing a further challenge of how best to reveal information. Together, these factors can make it difficult to maintain consistency between OSCE stations. So how could AI help tackle these challenges?

In an attempt to overcome this challenge, a group of researchers used AI to create mock-OSCE scenarios in which ChatGPT posed as the student, engaging in the clinical scenario to help train the PA. This prepares the PA for the actual exam, allowing them to practise how they might respond to a range of different questions and prompts. It has the potential to save on the time and costs associated with PA training, while also giving the PA more freedom in how and when they practise. As a consequence, it is hoped that this would increase patient actor confidence and lead to more consistent performances.
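To make this rehearsal set-up concrete, here is a minimal Python sketch of the conversation scaffolding, with the model instructed to play the student and the patient actor typing the replies. The role wording and script text are my own illustrative assumptions, not the researchers’ exact method:

```python
# Sketch: conversation set-up in which the AI plays the *student*
# so a patient actor (PA) can rehearse their answers. The wording
# below is illustrative, not taken from the study.

def student_roleplay_messages(case_script: str) -> list:
    """Build the opening messages: the model asks, the PA answers."""
    return [
        {
            "role": "system",
            "content": (
                "You are a medical student in an OSCE history-taking "
                "station. Ask one question at a time, as a student "
                "would. The user is the patient actor, answering from "
                "this script:\n" + case_script
            ),
        },
        {
            "role": "assistant",
            "content": "Hello, I'm a medical student. What has brought "
                       "you in today?",
        },
    ]

messages = student_roleplay_messages("58-year-old with central chest pain")

# In a practice session, each reply the PA types is appended as a
# {"role": "user"} turn and the list is re-sent to the API, e.g.:
#
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(model="gpt-4o",
#                                          messages=messages)
```

Because the loop runs entirely in chat, a PA could rehearse from home at any time, without an examiner or student present.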

What are the main challenges surrounding the use of AI in OSCE development?

One of the most powerful arguments against the use of AI in medical education is the potential for medical inaccuracy in AI-generated content. As academics we value exams highly, often committing the content examined to memory and using it to guide future revision. As a medical student, I can still remember how patients presented and what mistakes I made in my own OSCEs, even two years later. If medically inaccurate AI-generated information were to be included in OSCEs, students may remember this inaccurate content and believe it to be true. Not only could this negatively impact performance in future examinations, but it also has the potential to cause serious harm in future patient care. A system in which AI-generated content is closely inspected by clinicians and medical educators would reduce the likelihood of medically inaccurate information slipping through the cracks, but could this risk ever be completely eliminated?

Further challenges include the question of who owns AI-generated content, and the risk of a student being able to generate a case identical to one used in exams. Finally, and perhaps most importantly: how well is AI really able to capture the unique essence of what makes patients human?

Conclusions

Whilst challenges remain, the use of AI in collaboration with clinicians and medical educators could help reduce the time and costs associated with designing and implementing OSCEs. This represents just a snapshot of AI’s potential uses within medical education, and I would encourage medical educators to look at the emerging pool of research out there. For many, there remains a feeling of discomfort surrounding AI, yet, as beautifully captured by a reflection on this topic, one should be encouraged to “complement rather than resist the tides of change”.

References

Misra SM, Suresh S. Artificial Intelligence and Objective Structured Clinical Examinations: Using ChatGPT to Revolutionize Clinical Skills Assessment in Medical Education. Journal of Medical Education and Curricular Development [Internet]. 2024 Jan 1 [cited 2024 Sep 5];11. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11273588/

Soong TK, Ho CM. Artificial Intelligence in Medical OSCEs: Reflections and Future Developments. Advances in Medical Education and Practice. 2021 Feb;12:167–73.

Read another Teaching Matters blog post about the development of OSCEs from a student-staff co-creation perspective: Student assessors in Objective Structured Clinical Examinations (OSCEs).


Jess McKenzie

Jess is a 4th Year Undergraduate Medical Student passionate about Medical Education. She has always been interested in digital tools and resources, frequently using them to aid her own studies. With the emergence of AI taking the world by storm, she immediately became fascinated by its potential uses within Medicine.


Acknowledgement: Jingjing Wang

Jingjing Wang is a final year medical student at the University of Edinburgh with an intercalated degree in BMedSci (Hons) Surgical Sciences with elective in Inflammation and Tissue Repair. She is an Associate Fellow of the Higher Education Academy with a keen interest in both teaching and art. Jingjing is an aspiring Orthopaedic Surgeon, particularly interested in surgical education, widening participation in medicine, and collaborative research, as demonstrated by her work in the Edinburgh Student Surgical Society (2019-2024), Edinburgh Accessibility in Medicine (2022-2024), and the Student Audit and Research in Surgery Collaborative (2020-present). Jingjing is very pleased to illustrate for this blog post on AI which contributes to the important and exciting discussions regarding AI in OSCE development.
