Marta Sukhno

Blog for the course "Critical Issues in Digital Education" (2023/2024, Semester 2)

Three reasons I am not excited about the future of AI in education (and one reason why I am!)

In week 7 of the Critical Issues in Digital Education course, we are talking about the hype of AIEd (the fusion of AI and EdTech) and challenging the claims and assumptions of AIEd supporters. In this post, I will summarise some of these claims and explain why I think we should be critical of them instead of blindly embracing the technology, and I will highlight one reason why I think we should be excited about the future of AI in education despite all of its downfalls.

The enthusiasts of AIEd make a lot of bold claims about the potential of AI to transform and democratize education. When prompted to summarize them, ChatGPT does a rather good job, citing “personalized learning, enhanced teaching efficiency, and equity and inclusion” as the main ways in which the technology can be used to improve education. Indeed, these are the main reasons why AIEd vendors and their supporters say we should be embracing the use of AI technology by teachers and students. We are promised that more students will be able to receive access to education, while simultaneously education becomes more tailored to individual students’ needs and more of teachers’ time is freed up by automating assessments and reducing the number of administrative tasks. But do these claims hold any truth?

The promise of AI to bring more equity and inclusion is the one I find least reassuring. Before we even start talking about the future of AI in education, I would like to take a step back and discuss my main concern about the future of AI in general: the politics of AI and how it is reinforcing our pre-existing power structures and biases at a much faster pace. In her recent book, Kate Crawford (2021), a researcher of the social and political implications of artificial intelligence, examines how AI technology amplifies the racist and sexist biases of its creators, as well as those found in the data used to train AI algorithms. The examples include racial profiling by facial recognition software, or Zoom turning down female voices when they overlap with those of male colleagues because the software was trained mostly on male voices. With so many pre-existing social injustices built into AI logic, I believe it is naive to claim it has the potential to solve inequalities in education.

Looking closer at AI use in education, one of the reasons I am concerned about the promise of more efficiency is the threat of reducing academic assessment to mere low-stakes quizzes that are easier to score with automated methods. We have already seen an emerging trend of using multiple-choice question exams because they are “more efficient”, and I am concerned that by claiming to improve the efficiency of teaching, this is precisely what AI enthusiasts have in mind. While it is an appropriate evaluation method for some types of knowledge, I believe that relying solely on automatable quizzes to evaluate students’ progress would be detrimental to the quality of their education.

The third and final reason I would like to challenge the hype around AIEd is my concern about the potential implications of students’ regular interaction with generative AI, and how it might influence their critical thinking and foster over-reliance on technology. This is an old educational debate going back to the use of the calculator in the classroom. I strongly believe in the importance of teaching students how to do manual calculations (or, in the case of AI, to use their own critical thinking and research) before allowing them to rely on a calculator (in our case, ChatGPT or similar GenAI tools).

Having voiced my main concerns about AIEd supporters’ promises to improve education, I would like to articulate one reason I am still positive about certain uses of AI in an educational context. In his latest book, AI researcher Toby Walsh (2023) examines how AI’s main task is to fake human intelligence. This prompts him to ask many questions about what human intelligence and creativity really are. This is precisely the application of AI in education that I am most excited about: leveraging the discussions and encouraging experiments with AI in the curriculum to help students reflect on their own intelligence and creativity by asking questions on how it is different (or similar) to that of the machines they are interacting with. In short, how might we be using artificial intelligence to help us better understand what being human really means?

To conclude, there are many reasons why I believe claims about AI improving education should be approached critically and with caution. These include AI’s tendency to reinforce pre-existing power structures, to prioritize automated assessments, and to endanger our critical thinking. At the same time, the potential of AI as a tool for students to examine their own intelligence and creativity is, I believe, a promising application of AI in education that we should be talking about more.

References
  1. Crawford, K. (2021) Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
  2. Walsh, T. (2023) Faking It: Artificial Intelligence in a Human World. Black Inc Books.

3 replies to “Three reasons I am not excited about the future of AI in education (and one reason why I am!)”

  1. s2342859 says:

    Hi there, again, Marta!

    This blog post has a couple of improvements over the previous one. You develop your argument clearly throughout the text. Well done. You have also crafted an excellent introduction, though you could add a brief summary of the four topics you will discuss; that would help the reader. Finally, this blog post has a catchy title. Remember to repeat these things in your next blog post.

    Let’s go to your text. The text has four big ideas that support your main argument. The first idea, related to equity and inclusion, is well-developed. I agree with you (and with Crawford): AI tends to replicate the biases of its creators. Timnit Gebru has been a vocal activist against AI. You can check her work here: https://scholar.google.com/citations?hl=en&user=lemnAcwAAAAJ&view_op=list_works&sortby=pubdate. Regarding this first idea, I would recommend weaving a bit more about the issue of equity and inclusion in relation to education. Perhaps you can discuss here Williamson’s paper on the bio-datafied child.

    Regarding your second idea, I think you can expand it, returning to Pelletier’s paper. Pelletier discusses personalized learning, one of the big promises of AI. I think this promise (personalized learning) is bundled with the promise of efficiency in the assessment process. Again, try coming back to prior readings when possible.

    Your third big idea can be expanded. I believe the topic is interesting, but your point needs more examples and explanations. Why do you think it is better to learn manual calculations before relying on a calculator? And how does AI resemble a calculator when we discuss critical thinking?

    Finally, I find your reflection on why you are excited about AI interesting. Indeed, all technologies can be a focus of scrutiny inside a classroom. And AI, which seems to imply consciousness or human-like intelligence, allows interesting discussions to emerge.

    Keep up the effort! Please, for the next blog post, return to the images and make it a bit longer so you can discuss thoroughly/expand your ideas.

    1. s2342859 says:

      Hi Marta,

      I must correct myself! You are just reading Pelletier’s paper. This week, you will have the opportunity to expand your reflections on assessment (and other tropes of education) through the critique of personalised learning. Please consider weaving both blog posts through these topics.

      1. Marta Sukhno says:

        Hi Nicolás,

        It’s perfect timing as I’ve just finished reading Williamson’s and Pelletier’s papers for this week’s topic of personalization and both resonated a lot and helped to add more context to the topic of potential uses of AI in education. I’ll be sure to weave in some of their examples and arguments in my next post!
