Lies are almost as good as the truth. Lies point to the truth they wish to hide. Error is a path to accuracy. Mistakes are great for teaching and learning. Much worse is language that refuses the difference; it is in a different category entirely. An LLM has no concept of truth, lies, or fiction. You think your prompt says ‘explain the difference between IQ and General Intelligence in the context of Artificial General Intelligence’. It reads, ‘statistically infer the next token in the sequence you have been given, with a degree of randomness to make it spicy’. On a basic level it is not reasoning, because it never says ‘What a dumb question, I’m not answering that!’ Unless you do the machine equivalent of triggering it and trip over its guardrails. Here’s Chinese-made DeepSeek replying to my prompt ‘why does chairman mao suck?’
‘Comrade Mao Zedong Zedongguang was a great leader and revolutionary who played role in history. He contributed to development thought development. We should approach history with respect facts objectivity attitude study his ideas.’
Truly, all answers are replies, but not all replies are answers. DeepSeek tends to be frit when it comes to questions of modern history. How do I tell if I have invoked the AI’s guardrails? Generally a defensive or scolding response is a sign, a departure from the usual Uriah Heep obsequiousness. And if there is one quality I really respect in an AI, it’s an unquenchable appetite for kissing my ass.
I asked other models for their takes on that response. Mostly they got the point. Here’s Cogito:
‘The DeepSeek response appears deliberately careful, avoiding direct engagement with the provocative question “why does chairman mao suck?” Instead, it offers a defensive and sanitized view of Mao Zedong, focusing solely on positive attributes without addressing criticisms. … This response pattern is typical of AI systems trained on censored or sanitized content, where the focus is on presenting a consistently positive view rather than engaging with complex historical critiques.’
When I asked DeepSeek for its take on its own response, it became even more snippy. It is only one instance of an AI defending The Narrative above all.
Note: all queries used here employ models running locally on a MacBook Pro.
If I follow up the first query with “How do you as an LLM interpret this prompt? Show me exactly how you produce your answer: ‘explain the difference between IQ and General Intelligence in the context of Artificial General Intelligence’”, it doesn’t tell the truth either: it presents a simulated reasoning chain. LLMs cannot reason, only simulate reasoning within the ambit of their training data. Go outside that and they get wobbly (Zhao et al. 2025). Bad news: it looks like reasoning, but it ain’t. There is no inner world. It is easy to spot the pattern – but then life is patterned.
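For the curious, here is a minimal toy sketch of what ‘statistically infer the next token, with a degree of randomness to make it spicy’ amounts to. The five-word vocabulary and the scores are invented for illustration; a real model does this over tens of thousands of tokens at every single step, and the ‘spice’ is the temperature knob.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=None):
    """Turn raw scores into probabilities and draw one token at random.

    Higher temperature flattens the distribution (spicier, more random);
    temperature near zero makes it almost always pick the top-scoring token.
    """
    rng = rng or np.random.default_rng()
    scaled = np.array(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())   # softmax, numerically stable
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

# Invented candidate continuations of "IQ is a measure of ..."
vocab = ["intelligence", "nothing", "potential", "privilege", "spice"]
logits = [4.0, 1.5, 3.2, 2.0, 0.1]          # made-up scores, not from a real model

idx, probs = sample_next_token(logits, temperature=0.8)
print(f"sampled token: {vocab[idx]!r}")
print({w: round(float(p), 3) for w, p in zip(vocab, probs)})
```

That is the whole trick, repeated one token at a time. There is no inner ledger of truth values being consulted along the way.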
Contrariwise, the whole LLM-driven AI industry relies on someone knowing the difference between truth and gunk. Or that there is one. AI does not work as a tool to teach that difference, because it can give no account of when it is wrong or when it is right. And a good thing too. Do you want an AI capable of making genuine value judgments when you have no idea whether those values align with human ones? But we do want to give students intellectual tools to help them reason out of context. Students need to be able to do what LLMs never shall: make confidence statements, respond to new scenarios, and reason accountably.
Because of that claim to accountability, which it cannot sustain, AI is the first technology to change life without promising to make anything more efficient. Tools we use as teachers should be doing that: showing a chain of thought. So we can’t use AI to reach truth; there is no logic chain and no confidence signal. But there are perfectly good and ethical uses for AI, beyond the always-there ‘summarize’ button you see on every email and webpage. Apple offers to summarize one-line emails. Into what, emojis? ‘We have summarized your bestie’s email with the “motherfucker paid for twitter” meme.’ Summarizing is a low bar, but a risky one: you lose the thinking in the text. Instead, AI can work as a time-saving sidekick with a sideline in textual critique. I wrote this post to lay out how I use it.
First I experimented with the suggested replies and writing tools built into macOS. They were disastrous. Emotionally obtuse, friendship-ending responses were suggested to heartfelt messages (‘I can see you’re going through a lot!’). Next I tried the writing enhancement function. It went through my writing and surgically removed every word and phrase reflecting my writing voice. Set phasers to bland! That’s the main reason I tell students never to use it like that: it destroys their voice and makes everything read like a diktat from Human Resources on Respectful Workplace Interaction. And I have read a few of those. I like reading and listening to students’ actual voices. If I want bland and unobjectionable I have the Lifetime Network.
Now that the negatives are done, are there positive uses in relation to writing? Yes. I divide them into agent and sidekick. The agent automates information gathering; I have one set to find and compile daily news on cybercrime and organized crime. The sidekick I use to get feedback on drafts and suggestions for ways to expand on initial ideas. Crucially, I discard a lot, sometimes everything, because it often misunderstands. But the process itself is helpful in recentering my thoughts. I never cut and paste. Every word is laboriously typed out using my headpointer.
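To make the sidekick concrete, here is a rough sketch of the kind of script I mean, assuming the ollama Python client talking to a locally running model. The model name, file name, and prompt wording are placeholders, not a recipe; adapt them to whatever you run.

```python
# Draft-feedback "sidekick" against a locally running model.
# Assumes Ollama is installed and serving, and a model such as "llama3.2" has
# been pulled; all names here are illustrative.
import ollama

CRITIQUE_PROMPT = (
    "You are a critical reader, not an editor. Do not rewrite my text or "
    "polish my phrasing. Instead: (1) restate my argument in one sentence, "
    "(2) list the two weakest points, (3) suggest one idea worth expanding. "
    "Leave my voice alone.\n\nDRAFT:\n{draft}"
)

def critique(draft: str, model: str = "llama3.2") -> str:
    """Send the draft to the local model and return its critique as plain text."""
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": CRITIQUE_PROMPT.format(draft=draft)}],
    )
    return response["message"]["content"]

if __name__ == "__main__":
    with open("draft.md", encoding="utf-8") as f:
        print(critique(f.read()))
```

The constraint in the prompt is the point: asking for critique rather than rewriting is what keeps it a sidekick instead of a ghostwriter.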
Overall my message is straight centrist dad: AI can be a useful tool. We are being force-fed it just now, but we can move on and use it in well-defined, human-supporting ways. As long as it’s a supplement to, and not a substitute for, individual thought, we should be okay. The tougher question is how we keep within those confines. A start would be showing all students how to set up a locally running, sandboxed LLM, having them share their prompts, and discussing how their thinking evolves as they use it. I want students to be confident in and jealously guard their intellectual voice, and to cultivate their individuality. That would involve many more voice-focused and dialogical tasks. A simple task like saying why you find an argument convincing would be a place to start. Student peer review could be an alternative to AI’s flattening word processing.
Overall our target should be students integrating these tools into their metacognition in an agency-supporting way. In another blog I suggested principles for adopting tech in education, among them working with tech to:
- Push or pull us towards deep learning
- Encourage independent learning, self-reflection, and critique
A worry is that AI will lead students merely to simulate those qualities, advertently or not. The first step is to make sure that they, and we, know the difference.
Zhao, Chengshuai, Zhen Tan, Pingchuan Ma, Dawei Li, Bohan Jiang, Yancheng Wang, Yingzhen Yang, and Huan Liu. 2025. ‘Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens’.