How Will the Future of Education Be Shaped by AI?

While public discussion often fixates on AI’s potential to enhance teaching and learning, AI deserves as much critical examination as any other technological advance in the field of education. As Williamson (2024, p. 98) points out, “AI remains a hugely slippery term” and its implementation in education should therefore not be viewed simply as “a series of technical developments following a path towards an inevitably beneficial future.” The future of higher education will most likely be shaped by AI along three dimensions: the automation of educational processes, the reconfiguration of social and economic relationships within academia, and the practical transformation of teaching and learning practices. These shifts will usher in both exciting opportunities and significant challenges, particularly as universities grapple with what Selwyn et al. (2023, p. 1) describe as “the absorption of small (often imperceptible) automations into everyday educational practises and processes.” This essay argues that while AI will reshape higher education through automation and technological innovation, its most profound impacts will emerge from the social, economic, and institutional transformations it demands: transformations that require us to weigh both AI’s remarkable potential and its inherent risks.

The automation of higher education isn’t arriving in a single dramatic wave, but through a gradual accumulation of technological changes that are transforming how universities operate. These changes span a broad spectrum, from seemingly minor shifts like automated plagiarism detection and assignment grading to comprehensive systems that monitor student engagement and predict students’ academic performance. As Selwyn et al. (2023, p. 3) caution, “we need to avoid making the mistake of framing these discussions purely in terms of teachers and teaching,” as the most pervasive forms of educational automation actually relate to administration, management, and governance: the behind-the-scenes operations that keep universities running.

Selwyn et al. (2023) describe the AI trend as a gradual integration of minor, often unnoticeable, automated tools into routine educational tasks. These systems manage essential functions like enrollment, course scheduling, resource distribution, and student support services, functions that seem innocuous yet play a fundamental role in institutional success. While Jianzheng and Xuwei (2023, p. 1) promote AI-driven systems for providing “organizational guidance and support” with broad benefits for students, they overlook the key observation made by Selwyn et al.: automated decision-making risks oversimplifying educational processes. For example, while automated grading systems can assess spelling, grammar, and structure, they lack the contextual understanding and nuance that a human grader brings. A human teacher might recognize a student’s improvement over time and adjust feedback to encourage further growth, a judgment that requires empathy and situational awareness that automation lacks.
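
To make that contrast concrete, here is a minimal, purely illustrative sketch of the kind of surface-feature grading described above. Nothing in it reflects any real product: the function name, word list, and weightings are all invented for illustration. The point is that such a grader scores only what is easy to measure and is structurally blind to meaning.

```python
# Hypothetical surface-feature essay "grader" (illustrative only; not any
# real system). It rewards measurable proxies and cannot see meaning.
import re

# Toy word list standing in for a spell-check dictionary.
KNOWN_WORDS = {
    "the", "a", "an", "and", "or", "but", "is", "are", "was", "were",
    "in", "on", "of", "to", "it", "this", "that", "education", "learning",
    "students", "teachers", "ai", "will", "human", "judgment", "matters",
}

def surface_score(essay: str) -> float:
    """Score an essay 0-100 using only surface features."""
    words = re.findall(r"[a-z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    if not words or not sentences:
        return 0.0

    # "Spelling" proxy: fraction of words found in the toy dictionary.
    spelling = sum(w in KNOWN_WORDS for w in words) / len(words)

    # "Structure" proxy: average sentence length near ~20 words scores best.
    avg_len = len(words) / len(sentences)
    structure = max(0.0, 1.0 - abs(avg_len - 20) / 20)

    # "Development" proxy: longer essays score higher, capped at 500 words.
    development = min(len(words) / 500, 1.0)

    return round(100 * (0.3 * spelling + 0.3 * structure + 0.4 * development), 1)

if __name__ == "__main__":
    print(surface_score("AI will reshape education. But human judgment matters."))
```

Notice what the sketch cannot register: that an argument has improved since a previous draft, or that an unusual sentence is insightful rather than malformed. That gap is precisely the empathy and situational awareness discussed above.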

Consider, for example, the increasingly common practice of using AI systems for admission decisions. While Essa (2024) presents this as an opportunity for more efficient and objective decision-making, Selwyn et al. (2023, p. 3) argue that we must question “how and why the realities of these automated technologies might not always be fully meeting expectations.” The promise of increased efficiency through automation must be weighed against the risk of oversimplifying complex educational processes that have traditionally relied on human judgment and contextual understanding.

The implementation of AI in higher education extends beyond technological innovation—it will change the economic and social fabric of educational institutions in ways that will have lasting implications for how knowledge is created, shared, and valued. Williamson (2024, p. 99) emphasizes that “education technology (edtech) is now a massive multibillion dollar global industry, powered by private venture capital investors.” This commercialization of educational technology raises critical questions about who controls and benefits from AI implementation in higher education. Essa (2024, p. 2) frames higher education’s embrace of AI as an attempt to supply affordable and effective learning to adult learners, but this market-oriented analysis significantly understates the power dynamics Williamson identifies in the relationship between venture capital, Big Tech, and educational institutions.

These economic dynamics, often obscured by discursive strategies of closure, require particular scrutiny. Williamson (2024, p. 99) notes that “many of the wealthiest edtech firms have made substantial commitments to developing AI-based approaches in recent years, with the backing of investors’ financial support.” Investors are not only funding innovation; they are changing the way educational services are delivered and monetized. Major technology companies are becoming increasingly central to educational infrastructure, with Selwyn et al. (2023, p. 10) pointing out that the companies that have perfected the platform business model are Big Tech firms such as Google, Microsoft, and Amazon. This concentration of power in the hands of a few large technology companies raises concerns about the future autonomy of universities, as platform-based business models often prioritize scalability and standardization over pedagogical effectiveness or educational equity.

These economic transformations could also have profound social implications. The integration of AI technologies may either exacerbate or help address existing educational inequalities, depending on how they are implemented and governed. Essa’s (2024) vision of AI as a force for “creative destruction” in education appears overly optimistic when considered against Williamson’s careful analysis of the power dynamics at play. The risk isn’t just that some students might have better access to AI-enhanced learning than others; it’s that the very nature of education might be transformed in ways that primarily serve commercial rather than pedagogical interests.

The practical implementation of AI in higher education reveals both promising opportunities and significant challenges that must be carefully navigated as institutions move forward. Ivanashko et al. (2024, pp. 126-128) identify several key applications that are already transforming educational practices, including personalized learning, automated assessment, enhanced teacher-student collaboration, and more customized learning experiences. However, their analysis, while valuable for identifying potential benefits, lacks the critical engagement with power dynamics and institutional complexities that characterizes the work of Williamson and Selwyn et al. Indeed, their enthusiasm for these technological solutions appears to exemplify what Williamson (2024, p. 103) criticizes as the “technochauvinist assumption that AI is the ideal solution.”

The future of higher education will be profoundly shaped by AI, but not in the straightforward, uniformly positive way often portrayed in popular discourse and some academic literature. The transformation will occur through the complex interplay of automated processes, social and economic relationships, and practical applications that together are reshaping the educational landscape. As Selwyn et al. (2023, p. 9) conclude, “we need to resist any temptation to play along too closely with the dominant forecasts and predictions that inform mainstream understandings and conversations about possible digital futures.” Success in this transformation will require moving beyond both uncritical optimism and reflexive skepticism to develop critical frameworks for implementation that acknowledge both opportunities and risks.

Looking ahead, the key challenge will be ensuring that AI technologies are, as Ivanashko et al. (2024, p. 129) put it, “ethically deployed within the educational process.” This means developing approaches that preserve the essential human elements of education while harnessing AI’s potential to enhance learning, streamline operations, and expand access to higher education, all while remaining mindful of the complex social, economic, and institutional dynamics at play. The future of education will be shaped not only by the capabilities of AI but also by collective human decisions about how to use it wisely. This will require ongoing critical engagement with both the opportunities and challenges presented by AI implementation, and a commitment to ensuring that technological innovation serves rather than subverts the fundamental goals of higher education.

References

Essa, A. (2024). The Future of Postsecondary Education in the Age of AI. Education Sciences, 14(3), pp. 1-15.

Ivanashko, O., Kozak, A., Knysh, T. & Honchar, K. (2024). The Role of Artificial Intelligence in Shaping the Future of Education: Opportunities and Challenges. Futurity Education, 4(1), pp. 126-146.

Jianzheng, S. & Xuwei, Z. (2023). Integration of AI with Higher Education Innovation: Reforming Future Educational Directions. International Journal of Science and Research, 12(10), pp. 1-5.

Selwyn, N., Hillman, T., Bergviken-Rensfeldt, A. & Perrotta, C. (2023). Making Sense of the Digital Automation of Education. Postdigital Science and Education, 5(1), pp. 1-14.

Williamson, B. (2024). The Social Life of AI in Education. International Journal of Artificial Intelligence in Education, 34, pp. 97-104.

4 thoughts on “How Will the Future of Education Be Shaped by AI?”

  1. Thanks for this Nagam. Since BYJU just effectively went bankrupt, I wonder if their discursive framing of the purpose of education is now validated by the commercial market, or perhaps seen as a cautionary tale? Or, more succinctly put, does their demise suggest that this will not be the model going forward? See here for more details: https://news.sky.com/story/the-rise-and-fall-of-byju-indias-once-most-valuable-start-up-and-first-unicorn-as-creditors-replace-management-13178614

    As for this: ‘fostering intellectual, emotional, and moral growth’, the moral part of that is indeed rather unique in terms of commercial edtech. Does moral suggest ethical behaviour in social contexts, or perhaps something more holistic than that? How might that apply to vocational education? Or perhaps better yet, do you see this inclusion of a moral focus to be outside the purview of education?

    I might suggest that you need to be drawing on the readings from the course in your blog posts as this post was missing these. It is important to incorporate those concepts into your thinking whenever possible.

    Have a great week Nagam!

    Excellent work Melissa and truth be told I don’t have much to add here. Your argument is well made, clearly supported by the research, and ultimately convincing. I think I might point to this post in particular as being a template/model/exemplar for future posts (and essays) to follow, both in this course and in the larger programme.

    In Week 9, we focus on sustainability so you will be given a chance to engage with this same topic from that perspective, and then Weeks 10-12 will give you the opportunity to reconsider this in light of what a preferable future might look like in this space. So all the critique you have been doing will be put to the test in something approximating a more creative endeavour of imagining a hopefully non-dystopian future in this space!

    ‘The transformation will occur through the complex interplay of automated processes, social and economic relationships, and practical applications that together are reshaping the educational landscape.’

    That is some excellent nuance there Melissa and indeed this might be the thesis statement for the entire post. It is in the interplay that we see the transformation, hence why many of us in the Centre prefer theories that emphasise the relationality of all these complex facets:

    Actor-network theory (https://en.wikipedia.org/wiki/Actor–network_theory)
    Sociomateriality (https://en.wikipedia.org/wiki/Sociomateriality)
    Mobilities theory (https://en.wikipedia.org/wiki/Mobilities)

    No need to look those up now but something to store away for later use!

  2. Whoops! I gave you Nagam’s feedback as well! Feel free to edit that out, or better yet, you get double the feedback this week!

    1. Lol! Thanks, Michael! I found this week the most challenging, and so my jaw hit the table when I read: “I think I might point to this post in particular as being a template/model/exemplar for future posts (and essays) to follow, both in this course and in the larger programme.”

  3. Haha. Well, an exemplar it is regardless! At least so far as I am sure your work will continue to impress.
