Learning the language of assessment: Assessment literacy and assessment guidance for students


How do we provide the best guidance for students on their assessment? This issue was highlighted recently by students taking courses in Politics and International Relations (PIR). Staff were going to great lengths to prepare help and resources for students on how to tackle their assignments, especially when introducing new forms of assessment, such as policy briefs, literature reviews or group presentations. Yet we heard consistently from students that they felt anxious and confused, and wanted something more. What was going wrong here?

As part of a renewed focus on teaching development in PIR, we held a special staff development workshop on providing guidance to students on assessment. The session built on further consultations with students, colleagues in the Institute for Academic Development (IAD), and our reading of the pedagogical literature on assessment. Our goal was to distil some basic principles and practical strategies staff could implement to improve the student experience of guidance on assessment.

Implicit and Explicit Understanding of Assessment: what does our assessment say to students?

We were fascinated to discover the importance of distinguishing between the explicit and implicit elements of assessment (Sambell and McDowell 1998). University students are normally adept at following explicit guidance such as ‘include a bibliography’, but much of the confusion seems to occur around the implicit dimensions of assessment. Implicit elements include things such as the norms embedded in the various sub-disciplines in which we work. For example, in PIR there will be implicit differences in our expectations around notions such as ‘critical analysis’: in analytical political philosophy this will refer to an emphasis on the logical analysis of argument; in a more qualitative field (such as comparative politics) it may refer to interpretive judgment of evidence. Students may be taking such courses simultaneously, but this distinction, and certainly such a comparison, would seldom be made explicit in assessment guidance.

The form of assessment itself will communicate messages about values and expectations which may run counter to the course leader’s explicit expectations. For example, we may offer a text-focused, reading-heavy course, say in the history of political thought. We may explicitly communicate the importance of reading very widely: ‘read the whole of Rousseau’s Social Contract and Discourse on the Origin of Inequality to do well’. However, the course may be assessed primarily by an unseen exam in which students have to write two essays on different topics in two hours. Whatever this form of assessment communicates, it is probably not ‘we value comprehensive understanding based on profound engagement with a wide range of key readings, and your ability to demonstrate this in a considered manner.’ No wonder students may feel confused about what we expect from them, with various implicit and explicit messages about assessment being communicated, not all of which may be consistent.

What is Assessment Literacy?

If you want to learn a language, you need to do more than read a book about grammar and vocabulary. You need to understand the context in which the language is used, and how this relates to your mother tongue and personal experience. Learning about assessment is much like learning a language. The explicit rules of the ‘grammar’ and ‘vocabulary’ of assessment may be clear (‘provide recommendations to policy makers’, ‘compare and contrast theories’, etc.). But the cultures of our disciplines create implicit meanings within our assessment practices. Our disciplines act like dialects, or even languages, and to help students do well in assessment we must help them learn the language and culture of assessment within our disciplines.

We learn languages best through dialogue and practice, and learning the language of assessment is likewise best done in dialogue with our students. Assessment literacy highlights that all the parties involved in assessment have implicit assumptions (Stiggins 1991). Key to cutting through confusion is eliciting these assumptions and building a shared understanding of the meaning of assessment.

Good Guidance: building assessment literacy progressively

The primary message we gathered from the literature was that written guidance on explicit expectations is useful as a starting point, but it should be enhanced with activities in which students engage actively with the explicit and implicit expectations within the assessment (Rust et al. 2003; Price et al. 2012). In our session we focused on building student assessment literacy through increased opportunities for dialogue with students. These could include opportunities to discuss assessment and its criteria, focusing in particular on eliciting how students understand the key terms within the assessment guidelines.

Once students’ initial interpretations of the assessment criteria are more fully articulated, it may be useful to develop activities that help students internalise the explicit and implicit requirements of assessment. Asking students to mark a sample exercise individually and then compare results with their peers, or designing a group-marking exercise of exemplars against the marking criteria, would both promote assessment literacy effectively. When students have a clearer understanding of the explicit and implicit aspects of the assessment, it can then be useful for them to complete either a practice version of the assessment, or at least a short segment or condensed version, as a low-stakes formative exercise.

Fear and Resistance to learning the language of assessment

Course leaders might respond with some trepidation. Each of these activities takes time, and in an already crowded course it can appear challenging to build in such opportunities for students and staff to engage in active deconstruction of sample work and marking descriptors. We would respond that these activities need not take very long. Indeed, evidence suggests that brief, regular engagement is optimal, and thus short regular slots over a period of weeks (perhaps 10 minutes each at the end of a seminar or tutorial) would help deepen students’ engagement with the assessment and enhance their literacy in its explicit and implicit elements (see Nicol and Macfarlane-Dick 2006; Gibbs and Simpson 2004-5).

Course leaders might also worry about the time required to mark practice or mock assessments, but these need not create further marking and feedback for the tutor: a peer-marking activity on these formative pieces would be a fruitful way to further clarify and internalise the goals of the assessment.

These may not be the only reservations course leaders may have about integrating assessment literacy into courses. Is it always appropriate to respond to student demand for more guidance on how to tackle assessments with further support and resources? Three doubts can be highlighted. First, given that student anxiety does not seem to decrease when additional guidance is provided, does the continuous provision of guidance risk feeding student anxiety rather than alleviating it? Second, are we ‘spoon-feeding’ our students by responding to requests for more guidance, and so failing to prepare them for the real world? Surely part of the challenge of university assessment, just like the challenge of a working environment, is working out what one is expected to do; the ability to do so should therefore be among the skills students develop in the course of their studies. Third, there is the fear that providing more guidance offers a hostage to fortune, as when sample essays lead to student complaints of ‘well, I did exactly that, and didn’t get the same mark.’

Through engaging with the literature, and trialling some of the above ideas in our own work, we think these fears are somewhat misplaced. It may be that increased guidance has not alleviated anxiety because of its form: written guidance, samples posted online, and lecture time devoted to answering questions are all somewhat useful, but they do not provide the ongoing active engagement and dialogue that we highlight above as required.

There is a risk in these didactic modes that students read guidance through their misconceptions and get the wrong end of the stick. In response to the charge of spoon-feeding, a few points can be made. First, in the ‘good old days’ of our own undergraduate careers, when guidance was thin on the ground and we were just expected to get on with it, nearly all courses in political science and international relations were assessed by essay and exam alone; there was very little variety of assessment for us to get our heads around. Second, in most entry-level jobs, graduates will not be expected to produce ‘high-stakes’ work alone; they will receive training and guidance, and will be working in teams. Third, in our work as researchers we often work in teams, discuss our work with peers, co-author publications, and build on referees’ comments. It is therefore perhaps hypocritical to expect students to perform, and to showcase their talents as we would like, without active dialogue around the standards we expect them to meet.

Conversing about Assessment

We perhaps need to reframe giving good guidance on how to tackle assignments: rather than thinking of it as additional work that takes up time and ‘panders’ to students, we should see assessment literacy as part of good learning and teaching. It highlights the importance of conversation between staff and students about assessment. Written guidance can provide a helpful basis for student learning about assessment, but it is best used as a starting point for an ongoing dialogue in which we all learn to communicate better about our expectations and confusions around assessment; and this conversation has greatest meaning within the context of our disciplinary practices. Assessment literacy is a valuable framework for us all to become more fluent in the language of assessment.

——————————————–

References:
Gibbs, Graham and Claire Simpson (2004-5), ‘Conditions Under Which Assessment Supports Students’ Learning’, Learning and Teaching in Higher Education, 1, pp. 3-31.

Gibbs, Graham (2010), Using Assessment to Support Student Learning, Leeds Metropolitan University.

Nicol, David and Debra Macfarlane-Dick (2006), ‘Formative assessment and self-regulated learning: A model and seven principles of good feedback practice’, Studies in Higher Education, 31/2, pp. 199-218.

Price, Margaret, Chris Rust, Berry O’Donovan, Karen Handley and Rebecca Bryant (2012), Assessment Literacy: The Foundation of Improving Student Learning, OCSLD.

Rust, Chris, Margaret Price and Berry O’Donovan (2003), ‘Improving Students’ Learning by Developing their Understanding of Assessment Criteria and Processes’, Assessment & Evaluation in Higher Education, 28/2, pp. 147-164.

Sambell, Kay and Liz McDowell (1998), ‘The Construction of the Hidden Curriculum: messages and meaning in the assessment of student learning’, Assessment and Evaluation in Higher Education, 23/4, pp. 391-402.

Stiggins, Richard (1991), ‘Assessment Literacy’, Phi Delta Kappan, 72/7, pp. 534-539.

——————————————–

Philip Cook

Philip Cook is Lecturer in Political Theory in Politics and International Relations, School of Social and Political Science, and Editor-in-Chief of the journal Res Publica. His research focuses on the moral and political status of children.

Claire Duncanson

Claire Duncanson is a Senior Lecturer in International Relations in Politics and International Relations, School of Social and Political Science. Her research is situated at the intersection of gender politics and global politics, and focuses in particular on peacebuilding.
