In this extra post, Steven Loughnan reflects on the shifting currency of knowledge in the era of AI, and the impact on teaching practices and assessment. Steven is a Professor of Social Psychology and the Director of Undergraduate Studies for Philosophy, Psychology, and Language Sciences.
At the heart of the university is the dissemination of knowledge, primarily through teaching. Of course, we don’t simply teach any old thing, rather we try to give our students valuable knowledge. How do we determine if knowledge is valuable? Well, typically, it’s something that’s both rare and useful — quite unlike my extensive knowledge of Pokémon, for example.
Over time, the value of knowledge can shift dramatically. I’m old enough to have been taught long division manually and remember vividly being told by my teachers that I wouldn’t always have a calculator at my disposal (take that, Mr. White, with my smartphone always within reach!). Although I’m not quite old enough to recall a time when literacy was a rare commodity in the Middle Ages, historians confirm that once upon a time in Europe, being able to read and write was indeed a coveted skill. Today, thanks to mass education and digital technology, basic literacy and complex arithmetic no longer carry the same value—they’ve become common rather than exceptional. Financially speaking, their “currency value” has plummeted.
This idea of knowledge as a currency — its value in the market and how it can be exchanged for other benefits — is potentially helpful for understanding the shifts we are seeing today with the advent of AI. Skills that were highly prized and extensively taught, such as summarising information, comparing and contrasting concepts, translating scientific articles for a general audience, simple coding, and structuring arguments, are now declining in perceived – and potentially actual – value. AI systems can accomplish these tasks quickly and, in many cases, with a level of proficiency comparable to people with an average university education. In currency terms, these skills are depreciating at a painfully rapid rate.
This transformation directly interacts with common concerns about AI and teaching. Conversations around AI with colleagues often quickly narrow to concerns around assessment integrity or security, making assessments ‘AI resilient’ or ‘AI proof’. Security matters, but only when it is protecting something valuable. Consider the very British example of tea. For the Victorians, tea leaves were zealously guarded in locked tea caddies. Today, if you visited a friend and found their teabags secured with a padlock, you’d probably raise an eyebrow. What changed? Advances in agricultural technology and the forces of global trade have lowered the cost and thus the value of securing tea. When something becomes more accessible or less valuable, the security around it can seem disproportionate and unnecessary.
This realisation should prompt us to reconsider not just how we secure assessments, but what we are assessing in the first place. If an AI can easily replicate a task, perhaps we don’t need to assess it with the rigour we used to. Just as we’ve adjusted to no longer needing to lock away tea or practise arithmetic by hand, we might need to recalibrate our educational objectives and assessment strategies to better match the capabilities and roles of AI.
Despite the challenges this shift presents, we should be measured in our concern. The value of knowledge and skills has always been in flux, and educational institutions have historically adapted to these changes; you can’t change the wind, but you can adjust your sails (or engine). What’s crucial, however, is that we continue to evolve. If we fail to adopt new teaching and assessment methods that reflect the current and likely future value of different types of knowledge, we risk short-changing our students. They could be left holding skills that are outdated by the pace of artificial intelligence innovation. Adapting to this new reality doesn’t mean abandoning our core mission but redirecting our resources and strategies to build on the changes brought about by AI. By doing so, we ensure our students’ knowledge stays valuable in a rapidly changing world.
Steve Loughnan
Prof. Steve Loughnan is a Professor of Social Psychology and the Director of Undergraduate Studies for Philosophy, Psychology, and Language Sciences. He publishes on the psychology of AI and morality. This piece was informed by conversations in the AdvanceHE workshop on AI and Education, generously funded by the Institute for Academic Development.