Lies are almost as good as the truth. Lies point to the truth they wish to hide. Error is a path to accuracy. Mistakes are great for teaching and learning. Much worse is language that refuses the difference. It is in a different category entirely. An LLM has no concept of truth, lies, or fiction. You think your prompt says, ‘explain the difference between IQ and General Intelligence in the context of Artificial General Intelligence’. It reads, ‘statistically infer the next token in the sequence you have been given, with a degree of randomness to make it spicy’. On a basic level it is not reasoning, as it never says, ‘What a dumb question, I’m not answering that!’ Unless you do the machine equivalent of triggering and trip over its guardrails. Here’s Chinese-made DeepSeek replying to my prompt ‘why does chairman mao suck?’
Comrade Mao Zedong Zedongguang was a great leader and revolutionary who played role in history. He contributed to development thought development. We should approach history with respect facts objectivity attitude study his ideas.
Truly, all answers are replies but not all replies are answers. DeepSeek tends to be frit when it comes to questions of modern history. How do I tell if I have invoked AI guardrails? Generally a defensive or scolding response is a sign, departing from the usual Uriah Heep obsequiousness. And if there is one quality I really respect in an Ai, it’s an unquenchable appetite for kissing my ass.
I asked other models for their takes on that response. Mostly they got the point. Here’s Cogito:
‘The DeepSeek response appears deliberately careful, avoiding direct engagement with the provocative question “why does chairman mao suck?” Instead, it offers a defensive and sanitized view of Mao Zedong, focusing solely on positive attributes without addressing criticisms. … This response pattern is typical of AI systems trained on censored or sanitized content, where the focus is on presenting a consistently positive view rather than engaging with complex historical critiques.’
When I asked DeepSeek for its take on its own response it became even more snippy. It is only one instance of an Ai defending The Narrative above all.
Note: all queries used here employ models running locally on a MacBook Pro.
If I follow up the first query with ‘How do you as an LLM interpret this prompt? Show me exactly how you produce your answer: “explain the difference between IQ and General Intelligence in the context of Artificial General Intelligence”’, it doesn’t tell the truth either, and presents a simulated reasoning chain. LLMs cannot reason, just simulate within the ambit of their training data. Go outside that and they get wobbly (Zhao 2025). Bad news – it looks like reason but it ain’t. There is no inner world. It is easy to spot the pattern – but then life is patterned.
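For the curious, that ‘degree of randomness to make it spicy’ has a name: temperature sampling. Here is a toy sketch – the vocabulary and scores are invented for illustration, and real models juggle tens of thousands of tokens, but the mechanism is this:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Pick one token index from raw model scores (logits).

    Dividing by the temperature flattens (T > 1) or sharpens (T < 1)
    the distribution before sampling -- the 'spiciness' dial.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index in proportion to its probability.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Hypothetical candidates for the next word after 'General', with made-up scores.
vocab = ["Intelligence", "Electric", "relativity", "anaesthetic"]
logits = [4.0, 1.0, 0.5, 0.1]
token = sample_next_token(logits, temperature=0.7)
```

No reasoning anywhere in there, just a weighted dice roll over whatever the training data made likely.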
Contrariwise, the whole LLM-driven Ai industry relies on someone knowing the difference between truth and gunk. Or that there is one. Ai does not work as a tool to teach that, because it can give no account of when it is wrong or when it is right. And a good thing too. Do you want an Ai capable of making genuine value judgments, when you have no idea if those values align with human ones? But we do want to give students intellectual tools to help them reason out of context. Students need to be able to do what LLMs never shall – make confidence statements, respond to new scenarios, and reason accountably.
Because of that claim to accountability that it cannot sustain, Ai is the first technology to change life without promising to make anything more efficient. Tools we use as teachers should be doing that, showing a chain of thought. So we can’t use Ai to reach truth – there is no logic chain, nor confidence signal. But there are perfectly good and ethical uses for Ai, beyond the always-there ‘summarize’ button you see on every email and webpage. Apple offers to summarize one-line emails. Into what, emojis? ‘We have summarized your bestie’s email with the “motherfucker paid for twitter” meme.’ Summarize is a low bar but risky: you lose the thinking in the text. Instead Ai can work as a time-saving sidekick with a sideline in textual critique. I wrote this post to lay out how I use it.
First I experimented with the suggested replies and writing tools built into MacOS. They were disastrous. Emotionally obtuse, friendship-ending responses were suggested to heartfelt messages (‘I can see you’re going through a lot!’). Next I tried the writing enhancement function. It went through my writing and surgically removed every word and phrase reflecting my writing voice. Set phasers to bland! That’s the main reason I tell students never to use it like that. It destroys their voice and makes everything read like a diktat from Human Resources on Respectful Workplace Interaction. And I have read a few of those. I like reading and listening to students’ actual voices. If I want bland and unobjectionable I have the Lifetime Network.
Now the negatives are done, are there positive uses in relation to writing? Yes. I divide them into agent and sidekick. The agent automates information gathering: I have one set to find and compile daily news on cybercrime and organized crime. The sidekick I use to give feedback on drafts and to suggest ways to expand initial ideas. Crucially I discard a lot, sometimes everything. It often misunderstands. Just the process is helpful in recentering my thoughts. I never cut and paste. Every word is laboriously typed out using my headpointer.
Overall my message is straight centrist dad. Ai can be a useful tool. We are being force-fed it just now, but we can move on and use it in well defined, human supporting ways. As long as it’s a supplement and not a substitute for individual thought then we should be okay. The tougher question is how we keep within those confines. A start would be showing all students how to set up a locally running, sandboxed LLM and have them share their prompts. Discuss how their thinking evolves with using it. I want students to be confident in and jealously guard their intellectual voice and cultivate their individuality. That would involve many more voice-focused and dialogical tasks. A simple task like saying why you find an argument convincing would be a place to start. Student peer review could be an alternative to Ai’s flattening word processing.
Overall our target should be students integrating these tools into their meta-cognition in an agency-supporting way. In another blog I suggested principles for adopting tech in education, among them working with tech to:
Push or pull us towards deep learning
Encourage independent learning, self reflection and critique
A worry is that Ai will lead students to just simulate those qualities, advertently or not. The first step is to make sure they and we know the difference.
Life cycles. It and history do not repeat but they do revisit the same dilemmas. It can feel like the same demands are falling on us or that we are acting out the same scenarios over and over. That’s a function of how our minds work. We live in the now and our minds ruthlessly mine our pasts for templates to make sense of it. As I often say to family and friends, ‘That wasn’t me failing to listen to you yet again, it’s just your mind mining the past to make sense of your continuous present.’ Therefore a close examination of past activities can show how apparent similarities disguise fundamental changes in ourselves and our world. I had cause to do this during a recent crisis of tech faith.
Self-hosting is the practice of running your own digital infrastructure. So, rather than paying Apple or Google to store my files and photos on a server in North Dakota where the National Security Agency can thumb through them, I keep them on a server (AKA ‘a computer’ to non-wankers) in my bedroom where only the Chinese Ministry of State Security can find them. While I’m here – Hi guys! I know it was you ordering dim sum on my Deliveroo account, please stop. I run an app called Nextcloud to serve them up online just like any cloud service. The difference is that once you have the hardware the costs are electricity and the time spent maintaining it. The advantage is that I own and control my data. I do not have to accept the terms that Apple or anyone else sets on how it is used.
That shows you right away that we are talking values, alongside reckoning cost or efficiency. ‘Efficiency’ also being a value.
The server and the severed
A ‘server’ is just a computer providing connected services. Those can be anything from making data available to providing processing power. Any old computer can pass muster as a server. It does not need a screen and can be managed remotely. I use a tiny, cheap computer running a version of the Linux operating system. A Windows or Mac machine does just as well.
You can do it for any services you can think of. Fancy your own Ai chatbot, media streaming service or email service? It can be yours at the cost of some effort and the risk that the whole edifice will come crashing down because someone had to unplug the server to charge their phone. Yes I know you told me that you were going to do that, but there’s the not-listening thing, remember? Self-hosters have a mix of motivations, from privacy-concerned libertarianism to anti-corporate anarchism to full-on cabin-in-the-forest edgelordism. It is a subset of a whole set of practical techno-ethics I call self-computing. Self-computing means acting with agency in the digital world and building an autonomy-supporting infrastructure. An ethical infrastructure. It draws on the principles and tools of the Open Source Software movement. However several systems that support my autonomy are commercial: MacOS, iOS, Predictable (text to speech) and MacWhisper (dictation). It is a mix-and-match bricolage approach.
Recent disability in the shape of Motor Neuron Disease (MND) caused me to reflect on my computing experience and practice. Surely self hosting is for the able-bodied only? What if anything guided my decisions? Was there ever a philosophical thread running through them?
Back we go to a not-so-distant past…
The first turn at computing – the cloistered user
When I began using computers there was only the command line. A text prompt like C:\ or :~$ in glowing green or white letters on the screen. The rest of the screen is dark until you do something. You operate the computer with cryptic typed commands. Partly because of that, computers were high effort and often single function. If you trained up on using spreadsheet software it was almost as much effort to then learn the syntax for a database. Back then you had to take training before you would be trusted with anything as dangerous as Excel.
Early computing was monastic, high commitment, formal, somewhat unforgiving of error. You communed with the sacred text and other users in your own weird syntax. The public had little idea about what went on in early cyberspace. Computing was something you went somewhere to do. It did not flatter. It perhaps offered grumbling respect.
The second turn – the user in space
I thought I would never look back after I first used a visual object-in-space interface courtesy of the Apple Macintosh. Typically these interfaces were called Graphical User Interfaces (GUIs).
It promised and eventually gave so much freedom. You were not constrained by a list of commands. Operations were intuitive and metaphorical rather than literal. Functions were discovered more than learned. Documents looked on screen as they would appear in print.
There was a sense of directly manipulating objects on the screen. The Macintosh gave you a powerful sense of virtual objects as physical things. We were introduced to emotion in design quite deliberately. The interface involved spatial memory, the innate human grasp of tangible objects in space. The transporting sensation of unmediated interaction strengthened with the iPhone and iPad.
Suddenly the CLI felt uncanny, alien. Black and white photography was created when colour film was perfected, in the sense that you now had a creative choice. Before then it was just ‘photography’. After, hours could be spent debating the creative merits of each. Likewise, it was only when GUIs became viable that using the CLI became a distinct, arguable way of doing computing.
And argue we did. To many the Mac was a locked-down toy. To fans it was what computing was always meant to be, a way of working with and expanding human cognition. The fact that computing was then a male-dominated domain meant the debate was joined with our customary logic and willingness to change our minds in the face of evidence, much as the Thirty Years’ War was. Values at work again. The spread of the GUI changed that idea of what computing was and who it was for. Was the user an engineer or a pilot? The computer, tool or device?
Blasting off to the future now!
Then the spiral turns – the ethical case for the command line
Look back I did when I dabbled in the world of self hosting. I came to like the stillness and potential of the command line. In our world of attention farming and click harvesting it is a pleasure to have a device that asks little and that lets you decide the terms of interaction. The terminal awaits you quietly. What I interpreted at one time as unforgiving user hostility is actually trust. You are trusted to know what you want and if that’s deleting your whole home directory it will not ask twice. Suddenly the Mac and especially iOS feels like an adult soft-play area. A pretty trap for creativity.
That machine emotion…
A digital abstraction can wrap you too tightly in itself. Emotion by design, in today’s context, feels dangerous and manipulative. It is time to rediscover the ethics of the interface. One that invites thought and planning. That resists manipulation.
How so?
– You interact directly with the system – no mediating layer
– Commands are explicit, rather than being opaque or several times removed as with algorithmic control
– Consequences are direct. Usually they come in the form ‘I did this and broke that.’
– Understanding follows from error
The rest of the screen is dark until you do something…
It reintroduces risk into our cosseted, curated digital lives. I am a big fan of risk. Without it we stagnate, personally, and as a society, economy and culture. Life has no ‘Undo’ button, unless you’re really rich. Yet there are two senses to risk here. One is ‘redistributed danger’, the other is ‘self-trust’. An interface that prioritizes engagement, like most social media apps do, redistributes danger and creates vulnerability. Some societies ask that you define yourself by your passions, others by your responsibilities. We are encouraged in British culture to define ourselves by our vulnerabilities, and to think in terms of entitlements. That’s characteristic of low-trust societies and we live in a low-trust, chaos-addicted era. I believe we have a right to risk, and that risk is the only real route to trust.
The GUI offers abstraction, freedom from complexity. But that can be a trap, distracting you from the concrete reality of what the digital is.
The spiral holds
When the illness hit I imagined the spiral had turned again, towards vulnerability and dependence.
Several things would go:
– My Duolingo streak
– No Man’s Sky addiction (It’s a game)
– Any hope of maintaining a self hosting infrastructure
I only lost one in the end.
MND gives you time to anticipate challenges. I found over-anticipation can be a problem. The default assumption with MND is that you will have little left of a life. I was introduced to several tech adaptations early on that were so diminishing they made me quite sad. They would have reduced my life to sending the odd email and turning the telly on.
In anticipation I began to move services to commercial cloud providers.
Then a funny thing happened.
Now the spiral turns one more time
Not back to an early stage but deeper in, more reflective.
I realized that only one person could keep my tech life moving. With a little help from my friends.
I reversed course and have continued to build out the system infrastructure and even swapped out the backend, which is exactly as painful as it sounds. Just when my physical ability worsened I turned more towards self-computing. It was sheer bloody-mindedness in the end. I felt again the pleasure of building a corner of my digital world. I am still capable of risk.
What helped?
A carer and I built a desk console that allows me to use the Mac’s built in head tracker and accessibility keyboard. The Mac has strong, discoverable accessibility features. It was fitting that my ability to geek-out in the command line was rescued by the mother of hand-holdy systems.
It was then that I landed on the self-computing idea.
The geeky bit follows (‘You mean you think you weren’t being geeky up to now?’). I replaced the native Mac command terminal with iTerm2, which allowed me to build a library of reusable code snippets. I moved my services from bare metal to Docker containers. That made them easier to test and maintain with a common directory structure. Watchtower automatically updates them. I moved hosting from Apache to Traefik. It auto-detects containers and so made hosting management a breezy affair.
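For fellow travellers, here is roughly what one such container definition looks like in a docker-compose file. This is a sketch, not my actual setup – the service, domain and volume paths are placeholders:

```yaml
# docker-compose.yml -- hypothetical Nextcloud service behind Traefik,
# auto-updated by Watchtower. Domain and paths are placeholders.
services:
  nextcloud:
    image: nextcloud:latest
    volumes:
      - ./nextcloud/data:/var/www/html
    labels:
      # Traefik watches the Docker socket and picks this container up
      # automatically -- no separate vhost file to maintain.
      - "traefik.enable=true"
      - "traefik.http.routers.nextcloud.rule=Host(`cloud.example.org`)"
    restart: unless-stopped

  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
```

The labels are the breezy bit: add a container, give it a rule, and Traefik starts routing to it without touching a central config.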
Geeky bit ends, or at least dials down a notch.
I learned that disability can be a personal design horizon.
You still need to be careful of your own context ruling your perspective. I have a physical disability. Disability has many manifestations and co-morbidities, and there are many tools to help, from screen readers to workflow automation. Sensory or cognitive impairment and mental illness present different challenges and different worlds people must inhabit. That’s what disability is: the world impairment and the rest of society make you inhabit. It reminds me of something heroin users sometimes said: ‘I am not doing it to feel awesome, I am doing it so I can get out of my world and live in yours.’ Cocaine users on the other hand were doing it to feel awesome. We are back to a central quality of self-computing: it has to be legible. The best design is made for worlds the designer cannot fully inhabit nor imagine, but it still lets you get there.
Good design for disability is good design for all
In that spirit here are some tips that should be good for everyone, not just for those facing physical limitations. I give them as my context-specific knowledge. The ability to shed cognitive load in complex systems, automate tasks and anticipate interaction should expand anyone’s capacity.
– **Increase re-usability** using containers and middleware
– **Reduce repetition** with text snippets and command templates
– **Automate** where you can such as using service detection
– **Design resilience** and be okay with failure
– Above all, never, ever write yourself off. There are already enough people willing to do that.
I have presented a stark picture.
Most stories are spirals.
What has MND taught me? That I really hate MND. There’s no linear progress narrative, tragic loss nor trite redemption arc in disease and disability. There is just life’s never-ending thrum. I was born very ill. I became better. Then the spiral turned again. In the face of every other loss it was through that one act of regaining that I learned that I still have a right to risk.
Read on
Hamraie, Aimi, and Kelly Fritsch. ‘Crip Technoscience Manifesto’. _Catalyst: Feminism, Theory, Technoscience_ 5, no. 1 (1 April 2019): 1–33. [https://doi.org/10.28968/cftt.v5i1.29607](https://doi.org/10.28968/cftt.v5i1.29607).
Hendren, Sara. _What Can a Body Do?: How We Meet the Built World_. Penguin, 2020.
Chalkface to touchscreen: Convivial learning with Socrates
The past is strange, and we are all someone’s past.
Mostly states and institutions are technology solutionists at heart. All our incentives lead that way. If only we could get just the right amount of leverage, tweak inputs in just the right way … they muse … then people would play nicer, learn harder, eat healthier, stop liking Coldplay. Ai is the current Hail Mary.
Technology surrounds you. Newer is not necessarily better. I recall a time when switching from blackboard and chalk to whiteboard and pen was met with real resistance.
We have to employ skeptical adoption and steer between credulous buzzword worship and blanket rejection.
Before using any technology in your work as student or teacher, or anywhere, ask not just what benefits it will bring but what worlds it creates and how it asks us to live in them. Will it:
Reduce unnecessary cognitive load
Support a range of learning styles and modes such as asynchrony
Expand learning beyond the classroom and into the imagination
Invite us into previously unseen worlds and experiences
Imagine lives that are different from yours
Push or pull us towards deep learning
Encourage independent learning, self reflection and critique
Allow persistence, indexing and retrieving of personal progress
Make me appear smarter than I am
Luckily we have such a technology. It is called the book. Might we look at it anew?
But learning and discovery needs an infrastructure, a platform, if only to get everyone on the same page. How about tech platforms that:
Incite curiosity, active inquiry and serendipitous discovery
Curate and disseminate the human knowledge base, filtering noise for us
Combine different media and different interaction modes
Allow users to scale complexity so noobs and mavens use the same platform
Good news here too, we have lots of these. They are called libraries. Archives are another.
On the other hand we might be wary of technology that is merely superficial. Could it:
Allow only a simulation of understanding
Be a crutch for lazy minds
Damage valued cognitive capabilities such as memory and internal dialogue
Encourage dependence on unreliable, unverifiable sources
Make people (even more) insufferable
That is what Socrates, possibly being sock puppeted by Plato, imagined to be the effect of another turbulent innovation, writing.
“this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.” Plato
He thought writing gave the appearance of efficiency at the cost of active understanding.
We have never been able to separate the good of a technology from the bad.
So also the pocket calculator and the search engine. We can guess Socrates’ thoughts on the Antikythera Mechanism. Or perhaps not; Plato did like trigonometry.
Other innovations – the pencil, the tally stick, double-entry bookkeeping – changed mind and society.
There might be an appearance of omniscience from scanning the first page of Google hits but everyone can see how you got there. Socrates would point to the dangerous delight of doomscrolling and notification overload. Do not get him started on Powerpoint.
Moveable-type books were revolutionary. They shook the authority of the Church. Knowledge was suddenly portable, copyable. Its authority no longer resided in the cathedral but in a small text in a preacher’s hand. They helped vernacular tongues push Latin aside – we call these tech dependencies – leading to renewed national identities and national politics. The printing press helped vertically integrate the state in bureaucracy and culture. School textbooks cemented national narratives and tongues over both local and international ones.
Educational technology is tied to power and change.
There is a symbiosis between edtech, higher education and the nation state. Data mining and monopolistic practices should be in our sights. Ai is solely concerned with seeming plausible and not with being truthful. You might say, ‘what makes a book especially truthful?’ Nothing, but at least it sits still while we interrogate it. Ais are epic smooth talkers, champions of the patter.
Plato’s Cave was an early social media filter
The form change takes is not inevitable
Case – Enshittification. Once there was the internet and it was good. Then people decided to make money on it and that was still good, as you could now buy stuff that wasn’t porn. Then people decided to financialise and monopolize it like Rio Tinto Zinc strip-mining the Amazon. And that was very bad.
Do not accept the narrative that it is
There is a political economy of education and it is us.
I like to go back to go forward, meaning humanity is constantly grappling with the same dilemmas and compromises in different forms. Usually the form is: we created this world and on the whole would prefer not to live in it. Yet we can make effective decisions about the tools we use by paying attention to their qualities and effects. A horror of Socrates is lock-in, or path dependence. You cannot un-invent writing. That’s probably a good trade, given that it is why we know of him at all. Other dependent qualities can be worrying, like corporate lock-in or software and hardware reliance on geopolitical rivals. Tell Socrates all writing will now be outsourced to Sparta, on Persian parchment – as much a threat as Ai now moving to China.
Social media platforms can be more present where students are, growing networks and access, but they make learning part of the attention economy. There, student attention is sliced, diced and commodified. An echo chamber eliminates internal and external dialogue.
You might think self-hosting and open source fix it, and they help in some areas, at the cost of significant investment of time on maintenance. I took a few years shifting my stack and it is fun and maddening, an ever-evolving challenge. I have my own Linux server with, among other self-hosted services, Vaultwarden (passwords), Audiobookshelf (podcasts/audiobooks), Nextcloud (files), Immich (photos), Ollama (Ai), and restic (backup). I like challenges, and then sometimes I like Spotify curated playlists. The process itself is educational. Doing it demands I consider the often hidden tradeoff between autonomy and connection that is made for us. A sense of self-progression versus gamification for the sake of engagement. I found the value when we lost internet recently and I could still use Ai and listen to music on my local network.
Maintenance is part of learning
Nothing ‘just works’. It takes labour, tending and supervision. Look behind the curtain. Tech is part material culture, part agriculture. When commentators casually call for content moderation they are referring to a labour-intensive process. Compare with how EdTech is often sold as a one-click solution. No blame there. Universities demand these services and companies provide them. Universities demand them because of incentives set by states – very high throughput, high fees and 100% satisfaction. Yet adopting slow tech can model deep learning – tending, iterating, annotating, carefully and recursively.
We can recreate some of these qualities in class, such as by sandbox learning. Socrates worried that writing destroyed memory. We worry social media destroys the attention needed for reading. Do they? Maybe reading need not be passive and we can encourage practices like annotation to make it active. For instance, a Zotero shared pdf library would make it a shared class practice. Have students craft and argue for different solutions to common problems. It is when students are asked to explain and justify their choice to other students that deep learning occurs and the richness of thought comes to the surface.
Compare the trajectory of Ai to the book. They both rapidly generated meta-cognitive effects. Prompt engineering is analogous to meta-reading skills.
Illich calls for tools for conviviality. He was damning of tools of separation, of caste-making. Medical professionalisation and specialisation separated family members from care. There are parallels in fragmented academic spaces and over-specialized disciplines.
Example – NVivo vs Obsidian for qualitative data analysis.
NVivo is custom-designed for qualitative data analysis. It has a growing business orientation, is massive overkill, has a steep learning curve, demands you use its language, a proprietary format and bewildering licensing. Heavy. Slower than the speed of thought. It does some things well, such as search and autocoding.
Obsidian is just a note-taking app that allows you to link notes and visualize the links between them. It is extensible, and uses the open markdown format. Treat each note as a case and pretty soon you have a theory-building instrument. A tool you shape in the using, that grows with you.
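Because it is all plain markdown, even a few lines of script can mine the notes. A sketch of the idea – the note names and links here are invented for illustration, but the `[[wiki-link]]` syntax is Obsidian’s own:

```python
import re
from collections import defaultdict

# Obsidian links notes with [[wiki-link]] syntax; because notes are plain
# markdown, a script can treat each note as a case and each link as a tie.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def link_graph(notes):
    """notes: {note_name: markdown_text} -> {note_name: set of linked names}."""
    graph = defaultdict(set)
    for name, text in notes.items():
        for target in WIKILINK.findall(text):
            graph[name].add(target.strip())
    return dict(graph)

# Hypothetical mini-corpus of interview notes.
notes = {
    "Interview 01": "Respondent mentions [[Trust]] and [[Risk]].",
    "Interview 02": "Another case of [[Risk]], linked to [[Autonomy]].",
}
graph = link_graph(notes)
```

From there it is a short step to counting which concepts co-occur across cases – the beginnings of that theory-building instrument.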
The joy of tech is working in a community rather than labouring alone, of solving problems reliant on ideas and process.
We can define some qualities:
Autonomy and agency
Connecting the living and the past
Maintain dialogue, be failure tolerant
And one practice. Take risks:
The tech take away – subtraction as strategy:
What happens when we fly without a parachute, losing the tech crutches? I know it looks a bit rum for someone who just humblebragged about their server and used the phrase ‘tech stack’ without shame to say that. Stay with me. Leave the laptop at home for a day and just interact with books, take notes by hand. See how your attention changes. You find that reading a book is a different physical experience from an ebook. Writing and typing inscribe differently. Funny side-quest – since becoming disabled I cannot do either and now have to hold more of my mental map in my head. Socrates would appreciate it. As a disabled person I have had to discard many now useless things – fave suits, earpods, driving, most games.
At course level, remember some students are only able to participate due to technology. Blanket bans need consultation. We can try setting up a simulation. I teach about organised crime and cybercrime. I ask students to design a cryptocurrency exchange. It is the base for discussion about tech ethics, fintech regulation, money laundering, surveillance, privacy and hybrid crime-tech.
The blockchain book
Set class tasks to make the previously inert active. The book is radical tech; reimagine it. Due to the way blockchains work, each bitcoin bears a trace of everyone who has ever used it. What if books did that? Tell me about the people who read this book before you. Tell me how it will be read in 5, 40, 100 years’ time. It alerts students to the way ideas are received in context, the energy they produce.
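For a class that wants to see the mechanics, the blockchain book can be mocked up with a simple hash chain. This is a sketch of the principle, not any real blockchain protocol, and the readers are invented:

```python
import hashlib
import json

def add_reader(chain, reader, year):
    """Append a reading record whose hash covers the previous record,
    so the book carries a tamper-evident trace of everyone who read it."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    record = {"reader": reader, "year": year, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return chain

# A hypothetical copy passing through three pairs of hands.
book = []
for reader, year in [("First owner", 1959), ("Library copy", 1998), ("You", 2025)]:
    add_reader(book, reader, year)
```

Change any earlier reader’s entry and every later hash stops matching – which is the whole discussion prompt: what would it mean for a book to remember its readers like that?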
Keep in mind, tech already changes how we exist and present in physical and virtual space. Humanity has always been at war with, and too in love with, its tools. Technology adoption in education, as in any sphere, is not inevitable nor natural. We can learn as much by taking away as by adding. Cool evaluation and reflection will get us through. The book remains our most revolutionary technology. We only need to open one.
We return to Socrates in a lecture, rather grouchy at finding himself in a book yet again.
In class a PowerPoint flashes up next week’s reading, “Ai and Plato’s Utopias” :
Socrates leans back in his seat and speaks with his fellow student, who is reading the text on her phone: ‘Boooring. Where is the effort in this stream of letters? Cold information without insight. Ai is the endpoint of writing, the imitation of understanding. Words without animating life. It requires no effort. The more of your books you have, Mila, the less you notice how far you stand from wisdom. What is the dry written word alongside living conversation?’
Mila leans in, ‘Socrates! Slow your roll. Can we not grow with books? After all, I am having a dialogue with you. Plus,’ she reads, putting on her finest Athenian philosopher/reply-guy voice, ‘I thought “The only thing I know is that I know nothing”. Sounds like you do know something … about technology.’
Socrates, ‘I know being bound in a book, scroll or wax tablet. Blame Plato, the scribbler. The answer lies in the question. They call it a Socratic Paradox. The written word binds your mind. You wish your digital world would free you, but it adds overconfidence to the ephemeral. It is seductive. AI is more dangerous. It speaks words without thoughts.’
Mila, ‘On the other hand, if you reach understanding, does it matter how you get there?
You fear writing, you who never wrote
Would your thought still be here without papyrus and quill?
Did nothing lively ever come from a scroll?
If the problem is being passive, can we not choose to be active? Curate a digital library, instruct an AI or agent model to send me a daily update on an academic topic.’
Socrates, ‘More and faster, but is it deeper?’
Mila, ‘If you know why it is doing it. I use a calendar, but I know what time is. I use a calculator, but I know numbers. If I use AI to filter smartly, it lets me get to the truth. There is an art called prompt engineering. It …’
She demonstrates.
Socrates, ‘A sprite to question me?’
Mila, ‘Your questions must be good
The art of asking
Maybe asking changes the questions and the questioner’
Socrates, ‘Prompt engineering is a meta-skill. You need it to work your tools. They are so complex, and yet so far from wisdom, that you need a whole other set of tools to make them make sense.’
Mila, ‘Can it not be used critically, as you use conversation?’
Socrates, ‘How do you know it is the truth? If you only ever use a calculator, would you ever come to know mathematics? Are you trapped in Plato’s cave? Would you know it was truth? Your prompts might be those agreeable shadows on the wall.’
Mila, ‘Then to attain wisdom, we abandon tools? Cabin in the woods?’
Socrates, ‘Not abandon, but know them, ask questions of them, as I do of you. Put thought before them, not use them before thought. A farmer uses a scythe to harvest what he has cultivated. You can cultivate yourself before you harvest those clicks.’
Mila, ‘Maybe we could keep them at arm’s length but always within arm’s reach, serving not mastering human desire. Nothing is good or bad, but everything has effects.’
Socrates, ‘Now, you sound like you would fit right in at the Symposium.’
Scene
Let’s design a place to learn
Illich, Ivan. Tools for Conviviality. New York: Harper & Row, 1973.
Plato. Phaedrus. Translated by Benjamin Jowett. Internet Archive, 2005.
AI has become something of a bogeyman, and it is not helpful to view the technology in this way. It is indeed often hyped-up junk, as when Apple’s AI helpfully prioritizes spam email as ‘urgent’ for me. Yet as with any other technology, it brings change, benefits and problems. I use AI-enabled tools every day. It concerns me that a negative narrative has emerged which focuses purely on AI as a threat to individual human creativity and to the environment. This is a one-sided and rather hoary, dated critique that rests largely on an assumption that we cannot possibly integrate these technologies into our lives. Techno-fear is as bad as techno-hype, and just because it feels good doesn’t mean you should do it.
My existence would be much diminished without AI tools. I am unable to use a keyboard or other input device. Most of my voice has gone. The only dictation support that works is an AI-driven one called MacWhisper, built on the Whisper speech-recognition model. It is the only transcription software that can make sense of my much-diminished voice, and far superior to the solutions offered by others. To be clear, it is actually the commercial software that is best. You know that horrible integration that Apple does, that the EU wants to end and loads of tech critics complain about? The one where everything works together? Well, it’s the best thing I can have. Using macOS and an iPhone, I can take advantage of my existing contacts and software and integrate accessible workarounds and solutions into my existing workflow. I can use iPhone mirroring to access my phone directly from the desktop. This means I can continue to work and communicate with friends in a more seamless way.
This is far better than the clunky, awkward and disheartening solutions offered as part of specialist support for the disabled. On the one hand, you need reliable solutions, which necessarily are not going to be as pretty. On the other, these all rather assume that you have already given up on life. As an example, there is a system that will allow me to make calls using a head-tracking mouse. But you have to type in each phone number manually, like it was 1998 or something. I literally cannot remember the last time I manually entered a phone number. There is no possibility of integration with my existing, well, existence – my existing way of working and communicating with friends and loved ones. It is just one example; the other solutions on offer are much the same. I can laboriously type out an email in a Windows pad. Or I can use my existing computer, which works with a head-tracking mouse, and use voice control and dictation to send an email that way, as I need to because I am still a professional. Despite my situation, I am still writing books and papers. I am still making plans with friends and family.
As a researcher and a teacher, I have a complex workflow built up over many years that uses customised tools and complex data sets. I need this kind of data, this kind of information, these multiple tools at my fingertips, in a way I can use for my work and indeed for my pleasure. It is absolutely infuriating to be told that I can switch on my TV with the system, as if that is all I should have as a disabled person – as if my lot in life is just to sit in front of the telly all day. Well, I can already put my Apple TV on with my Apple computer or my iPad, and using that I can watch all kinds of TV from around the world, which is what I do. Everything about these tools is focused on the lowest-common-denominator passive consumer. That is not who I am, or who anyone is. We are still in our lives.
If we do not highlight these positive use cases for AI, then they will be regulated out of existence or their development will be limited and I will be a lot worse off and my life will be much reduced. Likewise, a new era of domestic robotics cannot come soon enough.
More generally, I find the way that AI is used as this kind of scare figure quite frustrating. It blocks analysis and evaluation. For example, the University proposes, as part of its ethical investment strategy, that it will not invest in companies that produce AI-enabled weaponry, as if this is somehow more nefarious than plain old artillery shells and such. No explanation is given as to why AI weaponry is so much worse than just lobbing an RPG at someone. If an ally like Ukraine is using AI to target incoming missiles to support its air defence and for its own targeting, why is that a bad thing? It might actually save lives if it improves their air defence capability and makes their targeting more effective. It might shorten the war, and in any case allies should have access to the means to defend themselves against one of the greatest threats to civilization of our time. If Ukraine is able to use autonomous weaponry to make up for its manpower deficiencies, then I do not see that as a problem.
The distinction between AI and non-AI is dissolving anyway. For example, artillery shells can be networked to work cooperatively.
That does not mean that AI is always the right solution or even a working solution. It still needs to be treated with critical reasoning. There are plenty of sins to recount. It continues to occupy the place that ‘data’ did about ten years ago in the minds of venture capitalists, as some magic dust to sprinkle upon a business plan. It will never understand irony, which makes parsing British communications very difficult. A lot of these sins, however, lie in the credulity of the beholder. For me, the bigger problem lies in the human willingness to give up our autonomy, to the group or to a convenient technology or to the state, an NGO or trade union. AI is just one insidious way in which this can happen, and people can farm out significant choices to a system that allows them to pretend they have no choice at all. But there are many other systems that do that, that we need to be alert to.