
In this post, Femke Morrison and Dr Magdalena Cerbin-Koczorowska address survey fatigue, and argue that we need to recognise the limitations of student surveys and find other ways to gather student data. Femke is the eProgramme Support Officer for the MSc Clinical Education Programme at Edinburgh Medical School, and Magdalena directs the MSc programme. This post belongs to the Student Voice in Practice series.
“I have a lack of faith in feedback forms”*
Struggling with low response rates to student surveys is neither a new nor an uncommon phenomenon. While some authors (Adams & Umbach, 2012) have suggested that the reason is survey fatigue, resulting, among other factors, from the many points in the academic year at which different people try to obtain data from the same student cohort, others (Fass-Holmes, 2022) have identified insufficient evidence to support this claim.
In a recently published article, ‘Data-Driven Approaches to Reduce Survey Fatigue and Enhance Student Engagement in Surveys’ (Halper et al., 2025), the authors state that “valuable data in higher education is entirely dependent on students actively participating in surveys”. Through this blog post, we would like to acknowledge that data about student experiences is dispersed, and that, if we hope to truly make use of it, we may have to rethink our approach to educational data collection.
We would like to begin by emphasising that, while discussions about students’ experiences are often quickly reoriented toward response rates, the goal we are pursuing is not a high number of completed surveys, but rather the collection of information about students’ learning experiences. There is room for debate about whether or not this is the best approach (Marley et al., 2021), and whether a data-guided quality improvement approach is more ‘robust’ than pedagogically-sound initiatives. For this blog post, however, and despite our personal views, we acknowledge that the higher education sector is not isolated from the datafication trend (Williamson et al., 2020). With the ever-growing use of technology, we are witnessing an expanding number of “digital traces” that can undergo further analysis, attracting the interest of researchers and decision-makers alike.
Assuming our community agrees on the goal of gathering students’ insights in a formal and transparent way, it becomes necessary to critically examine the existing survey-centred mechanisms. If we were to adopt a researcher’s perspective and the measurement tool did not provide us with the data we needed, we would probably consider changing the method. Since we are truly curious about student experiences (the data), and not surveys (the tool), we should open ourselves to other sources of information. We believe, therefore, that recognising the limitations of student surveys and finding other ways to gather the data is the key step to moving forward.
Is feedback expressed in an email less valuable than that reported in centralised surveys? Is a student’s spontaneous appreciation of an “absolutely brilliant [textbook] – one of the most useful references [they’ve] ever been steered towards!” less important just because the same feedback was not reflected in the official programme evaluation three months later? We believe that our inability (as yet) to skilfully analyse data provided outside formally-defined quality assurance mechanisms does not make that data less valuable; still, the answer to this question probably depends on who plans to use the data and for what purpose.
After reading the blog post recently published by Haolan Tu (2025), we were struck by the fact that access to the student voice may not be provided to those who have a direct and extremely significant influence on educational experiences. This brings us to another area that requires community-wide discussion: how do we respect and respond to the diverse needs of the different stakeholders in the educational process who wish to evaluate learning experiences? A tutor, a programme director, a dean of education, a programme administrator, the head of the student experience unit – each of them may have different needs. However, we hypothesise that some of these needs can be met using the same data set, or by supplementing existing data sets with single variables. For example, how does the value of NSS/PTAS/PRES survey results differ from that of the end-of-programme survey used for each separate programme?
There is a tension between the lack of access to useful feedback and data from students, and the strong need felt by educators to use students’ opinions to improve education. Increasingly, this tension leads to additional, more or less formal, ways of obtaining feedback being organised at programme or course level. This should raise concerns not only about optimising the use of existing resources, but, above all, about the ethical aspects of increasing the feedback burden on our learners.
The British Educational Research Association (2024) calls on researchers to, “recognise concerns relating to the time and effort that participation in some research can require […]”, and even highlights, “the repeated involvement […] in survey research or in testing for research or evaluation purposes”. At the same time, other nationally approved guidelines for collecting and analysing educational data caution against collecting the same type of data unnecessarily, and call for optimising the use of existing datasets (Finnish National Board on Research Integrity TENK, 2019). So, is it ethical to multiply “measurement points” if we are not fully utilising the data we are already collecting, perhaps informally?
In this short text, we have asked many questions without really providing any answers. We are not calling for radical solutions or a complete abandonment of student surveys. However, we encourage you to open up to other aspects of the dialogue between us and our learners, valuing the opinions they want to share with us more than the format in which those opinions are expressed. Survey fatigue does not have to mean that students lack the strength or willingness to share their feedback. Perhaps it is time to rethink our approach to the mechanisms by which we collect it.
*Quote provided as part of a formal nationwide survey aimed at evaluating student learning experiences.
References
Adams, M. J. D., & Umbach, P. D. (2012). Nonresponse and Online Student Evaluations of Teaching: Understanding the Influence of Salience, Fatigue, and Academic Environments. Research in Higher Education, 53(5), 576–591. https://doi.org/10.1007/s11162-011-9240-5
British Educational Research Association. (2024). Ethical Guidelines for Educational Research (5th ed.). https://www.bera.ac.uk/publication/ethical-guidelines-for-educational-research-2024
Fass-Holmes, B. (2022). Survey Fatigue—Literature Search and Analysis of Implications for Student Affairs Policies and Practices. Journal of Interdisciplinary Studies in Education, 11(1), 56–73. https://www.ojed.org/jise/article/view/3262
Finnish National Board on Research Integrity TENK. (2019). The ethical principles of research with human participants and ethical review in the human sciences in Finland. https://tenk.fi/sites/default/files/2021-01/Ethical_review_in_human_sciences_2020.pdf
Halper, L. R., Szeyller, E., Freggens, M. J., Edmunds, C., & Regan, E. P. (2025). Data-Driven Approaches to Reduce Survey Fatigue and Enhance Student Engagement in Surveys. Intersection: A Journal at the Intersection of Assessment and Learning. https://doi.org/10.61669/001C.131924
Marley, C., Faye, A. D., Hurst, E., Moeller, J., & Pinkerton, A. (2021). Moving Beyond ‘You Said, We Did’: Extending an Ethic of Hospitality to the Student Feedback Process (pp. 1–19). Springer. https://doi.org/10.1007/978-3-030-77673-2_1
Tu, H. (2025, March 25). Gathering student feedback as a postgraduate tutor. Teaching Matters Blog, The University of Edinburgh. https://blogs.ed.ac.uk/teaching-matters/gathering-student-feedback-as-a-postgraduate-tutor/
Williamson, B., Bayne, S., & Shay, S. (2020). The datafication of teaching in higher education: Critical issues and perspectives. Teaching in Higher Education, 25(4), 351–365. https://doi.org/10.1080/13562517.2020.1748811
Femke Morrison
Femke works as an eProgramme Support Officer on the MSc in Clinical Education programme, where she supports students and staff with their learning and teaching. She is also currently a student on the MSc in Digital Education at Moray House.
Magdalena Cerbin-Koczorowska
Dr Magdalena Cerbin-Koczorowska directs the MSc Clinical Education Programme at Edinburgh Medical School.