In this post, Professor Tim Drysdale, Chair of Technology Enhanced Science Education in the School of Engineering, reflects on why people may fail to take up a new learning and teaching technological initiative…
Don’t expect people to beat a path to your door to buy your fancy new mousetrap.
This advice from Business 101 was ringing in my ears as I waved goodbye to a technically difficult teaching innovation project of mine that had had zero uptake – not even from me. What went wrong?
Well, where do I start? It was the first decade of the new millennium, and in-class voting with clickers seemed like a great way to cope with large classes. I should have known something was up when the colleague who was championing them volunteered a whole evaluation team to come along to my first lecture with them, for “you know, support.” That was great, I thought to myself, because the evaluation team’s outputs would help me get a bit more credit for all the work I had put into preparing the session (dozens of cross-linked slides so I could run a diagnostic quiz covering the whole course). But their presence obscured a critical problem with the system: the hardware/software interface was flaky, and one of the evaluation team was there simply to babysit it in case it hiccupped.
The quiz was a great success, but I decided I didn’t like the multiple-choice restriction and wanted the system to handle complex-number questions, to better suit open-ended engineering problems. A complex number is actually two numbers in one, but the clickers could only send one answer at a time. “No problem, we can code around that,” I thought.
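For anyone wondering what “coding around that” might look like, here is a minimal sketch – in Python, and emphatically not the project’s actual code, which was tied to the vendor’s own software – of one way to split a complex-number answer into two single-value submissions and stitch them back together. The submission format here is entirely hypothetical.

```python
# Illustrative sketch only: recombine two single-value clicker submissions
# (real part and imaginary part) into one complex answer per student.
from collections import defaultdict

def combine_submissions(submissions):
    """Pair up (student_id, part, value) tuples into complex answers,
    where part is "re" or "im" (a made-up format for illustration)."""
    parts = defaultdict(dict)
    for student_id, part, value in submissions:
        parts[student_id][part] = value
    # Keep only students who sent both halves of their answer.
    return {sid: complex(p["re"], p["im"])
            for sid, p in parts.items() if "re" in p and "im" in p}

# Two students each send their answer in two halves.
subs = [("s1", "re", 3.0), ("s1", "im", -4.0),
        ("s2", "re", 1.5), ("s2", "im", 2.0)]
print(combine_submissions(subs))   # {'s1': (3-4j), 's2': (1.5+2j)}
```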
Fast forward a few months, and I’d got the cash, hired a valiant programmer, and finally, after much back and forth with the company, we’d got the documents signed so we could obtain the software we needed to develop with their hardware. On our first look, we could see why there had been problems. A huge chunk of the project resource then went into trying to combat those reliability issues, so we’d have something more usable afterwards (we hoped). Our complex-number voting feature received less time and effort than originally planned, but it worked just fine. It presented the results in an intelligible way with bubble graphs, and seemed to deliver, more or less, what we had said we’d do. Great!
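By way of illustration only – this is an assumed sketch, not the project’s actual plotting code – a bubble graph of complex-number answers can be as simple as a scatter plot on the complex plane, with bubble area scaled by how many students gave each answer.

```python
# Illustrative sketch (assumed, not the original code): plot each distinct
# complex answer as a bubble on the complex plane, sized by vote count.
from collections import Counter
import matplotlib.pyplot as plt

def bubble_plot(answers):
    """answers maps student id -> complex answer."""
    counts = Counter(answers.values())
    xs = [z.real for z in counts]
    ys = [z.imag for z in counts]
    sizes = [200 * n for n in counts.values()]  # bubble area grows with votes

    plt.scatter(xs, ys, s=sizes, alpha=0.5)
    plt.xlabel("Real part")
    plt.ylabel("Imaginary part")
    plt.title("Class answers (bubble size = number of votes)")
    plt.show()

bubble_plot({"s1": 3 - 4j, "s2": 3 - 4j, "s3": 1.5 + 2j})
```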
We presented the results at a few internal conferences. Everyone was nice about it, but none of my co-investigators on the grant could be persuaded to try it out in their lectures, let alone anyone not previously connected with the project. Looking back now, I suspect my heart wasn’t really in promoting it because I knew, deep down, that there was no way it would be sustainable to have a ‘system babysitter’ attend every lecture. The base system we had bought simply needed to be more reliable out of the box, regardless of any mitigations we might bolt on afterwards or any fancy features we might add.
And boxes were the other problem. There was also the tedious logistical overhead of distributing hardware to students to contend with. So, I learned a few lessons from that project that were painful at the time, but worth their weight in gold on a future occasion when I once again decided to combine teaching innovations and coding (but that is a story for another day).
As a coda to that episode, I did use the basic commercial clicker system once more, a few years later at a British Science Festival. It totally crashed two minutes before my Award Lecture started. I was relying on the audience’s answers to set the scene for what came next – oh dear! Fortunately, anticipating that these sorts of antics might occur, I had browbeaten the School into funding an overnight trip for a technical support person, who set about intensive troubleshooting while I ad-libbed until the system came back online.
Before I finish, I do have one beef with the business advice about mousetrap sales. A good few years after the clicker project, I was partway through the semester when I was told that my second-year maths class would be increasing from 150 students that year to 400 the following year. Marking overload! Panic!
Solution? To obtain automated electronic exam marking and try it out immediately on the smaller class. Together with the teaching office, we brought the entire process in from another School in just a few weeks. Due to the timing of the relevant boards, we were proceeding without formal permission, so it wasn’t a risk-free manoeuvre. Even with the extra work and risk involved, colleagues got wind of it and demanded it for themselves too. Several courses implemented the electronic exam-marking process alongside mine that semester, all successfully. I left the University shortly after but, on a subsequent visit, was pleased to learn that the majority of first- and second-year courses were using the system for marking exams.
What?! Great uptake with no selling needed?! Why?
The process was reliable, and it converted a low-value task (marking) into a high-value task (intellectual effort in designing the assessment). It didn’t seem to matter that it added some risk and the need to learn a new way of doing things. This was the supposedly impossible mousetrap – its value proposition had people beating a path to the door. Another lesson quietly stored away for a rainy day ….