Automatic online assessment in Mathematics

“It was the best of times, it was the worst of times…” is a good opening to a story, and an appropriate theme for a discussion of the state of contemporary computer aided assessment (CAA).

Automatic online assessment has moved a long way beyond multiple choice and other similar question types. In my own discipline of mathematics it is now relatively standard practice to accept answers from students in the form of an algebraic expression. The assessment software contains an expert (computer algebra) system which can establish whether a student’s answer satisfies relevant objective properties. For example, the system can establish if the student’s answer is (1) equivalent to the correct answer and (2) expressed in a conventional written form. The system can then generate automatic feedback. A demonstration is available on the STACK demonstration site.
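To make these two properties concrete, here is a minimal sketch in Python using the SymPy library. STACK itself is built on the Maxima computer algebra system, so this is an illustration of the idea rather than STACK’s implementation, and the function names are my own.

```python
import sympy as sp

x = sp.symbols("x")

def assess(student_src: str, teacher_answer: sp.Expr):
    """Return (equivalent, conventional) for a student's answer string."""
    student = sp.sympify(student_src, locals={"x": x})
    # Property 1: algebraic equivalence -- the difference simplifies to zero.
    equivalent = sp.simplify(student - teacher_answer) == 0
    # Property 2: a conventional written form -- here, taken to mean fully
    # expanded, checked by structural (not algebraic) comparison.
    conventional = student == sp.expand(student)
    return equivalent, conventional

# (x+1)^2 is equivalent to the model answer but not in expanded form.
print(assess("(x+1)**2", x**2 + 2*x + 1))        # (True, False)
print(assess("x**2 + 2*x + 1", x**2 + 2*x + 1))  # (True, True)
```

Feedback can then be tailored to whichever property fails, for example telling a student their answer is correct but not fully expanded.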

There are many examples of sophisticated, subject-specific assessments in a wide variety of other disciplines. For example, in computer programming, students’ code fragments can be marked automatically. The internet also facilitates techniques such as comparative judgement, which would be infeasible to implement on paper.
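As a flavour of how code fragments can be marked automatically, here is a minimal unit-test style sketch. The task (a `factorial` function), the test cases and the marking scheme are all hypothetical, and a real system would sandbox the student’s code rather than call exec() on it directly.

```python
def safe_call(f, args):
    """Call the student's function, treating any exception as a wrong answer."""
    try:
        return f(*args)
    except Exception:
        return None

def mark_submission(source: str, cases):
    """Award a fraction of the marks for each passing test case."""
    namespace = {}
    try:
        exec(source, namespace)          # WARNING: unsafe on untrusted code
        f = namespace["factorial"]
    except Exception:
        return 0.0                       # code does not even load: no marks
    passed = sum(1 for args, want in cases if safe_call(f, args) == want)
    return passed / len(cases)

cases = [((0,), 1), ((1,), 1), ((5,), 120)]
student = "def factorial(n):\n    return 1 if n <= 1 else n * factorial(n - 1)"
print(mark_submission(student, cases))   # 1.0
```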

Assessment drives students’ learning, and immediate feedback has the potential to improve students’ performance, although we cannot take for granted that feedback is automatically helpful. In their meta-analysis, Kluger and DeNisi (1996) found that about a third of feedback interventions actually lowered performance!

An important educational impediment is that, even with more sophisticated question types, the kinds of tasks which can be automatically assessed tend to require only procedural skills. Such skills are not the whole story, but they do remain important, and since staff (and graduate tutor) time is much better spent not doing repetitive marking, I think we should make judicious use of CAA tools.

A significant practical drawback is the time it takes to author the questions. By this I don’t mean the normal time invested in creating interesting and challenging assessments, but rather the additional work needed to automate them. This is a serious barrier to wider use. The difficulty of automation grows with the sophistication of the tool, and this is especially true of adaptive learning systems. It appears to me to be almost impossible for a regular teacher to author questions on a week-by-week basis.

I’d be interested to hear of a counterexample to this problem. The investment needed is only repaid when courses run in a stable form for a number of years with large groups of students. Mathematics is fortunate in that a significant proportion of students need, and will continue to need, to learn calculus and algebra methods across all STEM disciplines.

Another serious practical difficulty is the inevitable fact that the most innovative features are found only in disparate systems. Even where communities of practice exist, such as amongst developers of online assessment systems for mathematics, one centrally provided system is unlikely to offer the facilities required to cater to the specific needs of each discipline. The solution to connecting this functionality together lies in technical web standards (e.g. SCORM and LTI). This is all deeply unglamorous plumbing, but it is essential to interconnecting systems reliably and securely in a way which provides students and teachers with a decent experience.
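To give a flavour of that plumbing, here is a minimal sketch of verifying an LTI 1.3 launch token with the PyJWT library. The platform URL and client id are placeholders, and a production tool must also validate the nonce, state and deployment id before trusting the launch.

```python
import jwt  # pip install "pyjwt[crypto]"

# Placeholder values: a real tool receives these when it registers
# with the learning platform.
JWKS_URL = "https://platform.example.edu/.well-known/jwks.json"
CLIENT_ID = "my-assessment-tool"

def verify_launch(id_token: str) -> dict:
    """Check the platform's signature on an LTI 1.3 launch and return its claims."""
    signing_key = jwt.PyJWKClient(JWKS_URL).get_signing_key_from_jwt(id_token)
    claims = jwt.decode(id_token, signing_key.key,
                        algorithms=["RS256"], audience=CLIENT_ID)
    # LTI 1.3 carries its payload in namespaced claims, e.g. the message type.
    if claims["https://purl.imsglobal.org/spec/lti/claim/message_type"] \
            != "LtiResourceLinkRequest":
        raise ValueError("not a resource link launch")
    return claims
```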

There is a lot happening in automatic online assessment. What used to be an innovative project is becoming mature and mainstream. The next few years are likely to see a convergence of features and better integration, together with a deeper appreciation of how best to use the tools and data generated to help students learn more effectively.

Reference: Kluger, A. N. and DeNisi, A. (1996) Effects of feedback intervention on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254-284. DOI:10.1037/0033-2909.119.2.254.

Chris Sangwin

Chris Sangwin is Professor of Mathematics Education in the Centre for Technology Enhanced Science Education. He is interested in the automatic assessment of mathematics using computer algebra, in particular the development of the STACK system; in mathematical problem solving using the Moore method and similar student-centred approaches; and in curriculum development.
