Questionnaire overload or an indicator of relative importance?
Those of you who have read my previous posts on this blog will know that I am creating student progress ‘dashboards’ as part of a learning analytics pilot for one of our online Masters programmes. Alongside the release of these dashboards to students, a questionnaire was sent to all first and second year students on-programme (around 230 individuals) in an effort to gauge the value of such a resource. With only a 10% response rate to date, I am in real danger of non-response bias!
I’ve followed the golden rules of how to avoid survey fatigue…well, at least those elements within my control. I have kept my questionnaire short (12 questions on a single page), easy to take (online via any device, mostly ‘yes/no/comments’ answer options), anonymous (no personal details requested), and relevant (questions relate directly to aspects of the students’ study). I’ve incentivised students by informing them that I will feed back the findings of the survey in due course.
So far, so good. However, the first rule when carrying out any questionnaire is not to over-survey the same people. Unfortunately, one can only guess at the number of questionnaires/surveys/feedback requests that our students receive via their university e-mail accounts (and that’s assuming all of our students even check their accounts!). Co-ordinating with your institution’s survey schedules is therefore key, but in my experience not very feasible, so questionnaire overload is a real possibility.
Unreliability of responses
With only a tenth of the class responding to the survey to date, how reliable are the findings? Perhaps I shouldn’t be too surprised at the low response rate. With the growing use of online surveys across all sectors, survey fatigue is a genuine problem. Higher Education is not exempt. As a result of the sheer number of survey solicitations in an academic year, they are perceived by some students to be ‘ubiquitous artifacts’. Tschepikow (2012) suggests that the more students consider that their responses will engender change by the institution, the more likely they are to take part in a survey, hence why it is vital to close the feedback loop by sharing with students the survey findings and informing them of actions to be implemented as a result.
Biersdorff (2009) argues that it is not response rate per se we should be anxious about, but rather the representativeness of respondents. Obviously, the more respondents there are, the less chance of a non-representative sample, but confidence in your data depends on the nature of your survey and the population being surveyed. With usability testing, for example, Nielsen (2000) states that you only need 5 users; adding any more does not mean that you will glean more information, just a repeat of what’s already been found. If the population being surveyed is large and homogeneous, you can achieve reliable data from a lower response rate than if it were highly heterogeneous.
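To put some rough numbers on this reasoning, here is a minimal sketch (my own illustration, not from Biersdorff or Nielsen) of the 95% margin of error for a proportion estimated from my survey’s figures, using the standard formula with a finite population correction. It assumes the worst-case proportion of 0.5 and, crucially, that respondents are a random sample, which non-response bias may well violate:

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """95% margin of error for a proportion from n respondents
    out of a finite population of N, with finite population correction."""
    se = math.sqrt(p * (1 - p) / n)        # standard error of the proportion
    fpc = math.sqrt((N - n) / (N - 1))     # finite population correction
    return z * se * fpc

# Illustrative figures from this pilot: ~23 respondents out of ~230 students
print(round(margin_of_error(23, 230), 2))  # → 0.19, i.e. roughly ±19 percentage points
```

So even before worrying about who responded, a tenth of the class leaves any percentage from the survey with a margin of error of around 19 points; and sampling theory alone cannot correct for respondents who self-select.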
According to the Higher Education Academy, 25% is an average response rate for an online survey (HEA: Guide to Raising Response Rates, 2015). A look at the responses to our programme’s end-of-semester online student surveys shows a decline since their inception in 2007/08:
Note: after a sharp decline in student responses in 2009/10, we introduced the incentive of the chance to win an author-signed book. This tack appeared to have some beneficial effect, but it has not been long-lasting!
So, rather than being disheartened by the current response rate, perhaps I should accept it as indicative of the times and take the figure as a form of feedback in itself. If the data represented in the ‘dashboard’ were provocative, presumably the response rate would have been much higher; the fact that only 1 in 10 were provoked into providing their views suggests that the topic is not especially important to students. This is borne out in the answers given to the survey questions. Learning analytics data are viewed as a form of feedback, with students seeing the dashboard of progress metrics as a useful adjunct for assessing where they sit in relation to their peers, but the majority of respondents did not wish the information to be made available to them in real time or on a daily basis.
Of course, I’m interested to hear what the other ninety percent of the class think…
Tschepikow, W. K. (2012). Why Don’t Our Students Respond? Understanding Declining Participation in Survey Research Among College Students. Journal of Student Affairs Research and Practice, 49(4), 447–462.

Biersdorff, K. K. (2009). How Many Is Enough? The Quest for an Acceptable Survey Response Rate. Bright Ideas Blog.

Nielsen, J. (2000). Why You Only Need to Test with 5 Users. Nielsen Norman Group Blog.
Paula Smith, IS-TEL Secondee