The False Promise of Techno-Solutionism in Education
“AI will revolutionize education.”
“With this technology, we can finally personalize learning for every student.”
These promises sound familiar, don’t they? They echo the same rhetoric that has accompanied every new technology entering the educational space over recent decades. As I mentioned in my first blog post, we’ve gone through successive cycles of tech hype, each one promising to transform education in unprecedented ways. The latest iteration of this cycle revolves around AI.
This pattern reflects what tech critic Evgeny Morozov calls “techno-solutionism” – the belief that for every complex social problem, there exists a neat technological solution. It’s the mindset that frames education’s challenges primarily as technical problems awaiting technical fixes rather than as complex social, cultural, and human issues.
The techno-solutionist approach to education is seductive in its simplicity. It offers clean, scalable “solutions” to the messy, contextual, human-centered work of teaching and learning. But as I’ve observed in my conversations with educators and in my own thinking about AI in education, this approach consistently falls short because it fundamentally misunderstands the nature of education itself.
Education isn’t just about information transfer or skill acquisition – it’s about human development in its fullest sense. As I noted in my fourth blog post, Biesta’s framework reminds us that education encompasses qualification (knowledge and skills), socialization (becoming part of social and cultural ‘orders’), and subjectification (developing independence in thinking and acting). When we reduce education’s challenges to technical problems, we inevitably focus on qualification at the expense of socialization and subjectification.
I saw this techno-solutionism at work in some of my conversations with educators in recent months. The schools experiencing the most “institutional vertigo” were often those most susceptible to techno-solutionist thinking. In their disorientation, the clear, confident promises of technological solutions seemed like a lifeline. But what these promises often mask is a reductive understanding of education’s purpose and processes.
Techno-solutionism also tends to ignore or minimize the very real structural inequalities that shape educational experiences and outcomes. As I wrote in my seventh blog post, AI won’t fix educational inequality because AI can’t address the systemic and structural inequalities that create educational disparities in the first place. When we frame education’s challenges as primarily technical rather than structural, political, or ethical, we divert attention and resources from addressing these deeper issues.
What’s particularly concerning about the current wave of techno-solutionism surrounding AI is that it’s occurring in a context of heightened uncertainty and complexity. As I observed in my tenth blog post, many educational institutions are experiencing a form of “institutional vertigo” when it comes to AI – struggling even to formulate the right questions, let alone find the right answers. This creates fertile ground for techno-solutionist narratives that offer clarity amid confusion.
So where does this leave my project? I believe we need to move beyond techno-solutionism toward a more nuanced approach that:
- Recognizes education as a complex human endeavor that cannot be reduced to technical problems with technical solutions.
- Centers educational values and purposes rather than technological capabilities when thinking about AI integration.
- Acknowledges the structural and systemic factors that shape educational experiences and outcomes, rather than framing challenges solely in technical terms.
- Prioritizes human agency and relationships in educational contexts, viewing technology as something that should enhance rather than replace the human dimensions of teaching and learning.
- Encourages critical engagement with technology rather than uncritical adoption or rejection.
This doesn’t mean rejecting technology outright – that would be a form of technological determinism in reverse. Rather, it means approaching technology critically and intentionally, asking not just “What can this technology do?” but “What educational values does this technology serve or undermine?”
As I move forward with my project, I’m increasingly convinced that what schools need isn’t more technological solutions, but richer frameworks for thinking about technology in the first place – frameworks that center educational values, honor human complexity, and recognize the structural dimensions of educational challenges. Only then can we move beyond the false promises of techno-solutionism toward a more thoughtful, values-driven approach to AI in education.
The question isn’t “How can AI solve education’s problems?” but “How can we create educational environments where AI serves our values and aspirations for learning?” That’s the question I hope to help schools explore.