
In many organisations, the newest knowledge portal is no longer a document repository or an intranet holding key documents; it is probably an AI chatbot. Ask it about a particular policy, a decision that was made, or the steps a process requires, and it will answer instantly and clearly. The danger is not whether any single answer is right or wrong; it is that, over time, you stop knowing which parts of the answers are correct, which are assumed, and which are simply well-written guesswork.
According to Simkute et al. (2025), introducing generative AI into knowledge work often shifts users from actively producing work to more passive roles in which they evaluate, debug, and integrate machine-generated text. They call this the “production-to-evaluation shift” and highlight it as one of the key productivity challenges of generative AI systems.
Unsurprisingly, that shift is now reaching the knowledge management (KM) domain. For many users, the chatbot is becoming the first, and unfortunately sometimes the only, interface to organisational knowledge: policies, project history, lessons learned, and “how we do things here.”
When AI is poorly applied to KM, the failure is not noisy; it is silent. The answers sound logical and confident, the organisation keeps moving, and small damages accumulate unnoticed. Over time, organisations learn that the real risk is not a wrong answer; it is losing track of what the organisation knows and what it does not, why it knows it, and, most importantly, whether that knowledge can be trusted.
It is important to recognise that GenAI amplifies whatever it is fed. If the underlying data is inconsistent, outdated, missing context, or simply wrong, AI will summarise it anyway and produce polished text, often without flagging the gaps. Organisations then end up accelerating noise instead of creating visibility and knowledge: ambiguity becomes a well-structured paragraph, and contradiction is reworded to read as “a balanced view.” The AI has simply created more confusion; if that is added back into the organisational knowledge base, should it really count as “understanding” or “learning”?
Humans remain central to organisational learning, and organisations must pay attention to how critical thinking shapes KM outcomes. Consider the study by Lee et al. (2025), a survey of 319 knowledge workers, which found that higher confidence in generative AI was associated with less critical thinking, while higher self-confidence predicted more critical thinking. Does that ring a bell?
That should worry any leader betting on “AI-enabled learning.” If people outsource sense-making to a tool that may have learned from the wrong sources, are they learning better and faster, or simply practising critical thinking less?
So it is not only about “hallucinations.” The bigger risk in knowledge processes is the degradation of critical thinking when work is delegated to a tool, leading to factual errors that go unverified, as Sarkar et al. (2024) argue. Over time, thoughts like “it is good enough” or “it looks okay at this stage” become normal: less challenging, less cross-checking, more acceptance of whatever reads well.
Any discussion of KM and AI must also address accountability. When an AI-generated answer affects a decision, who is responsible for the consequences: the employee, the system owner, or “the model”? In their research on AI in knowledge sharing, Rezaei et al. (2024) identify transparency and explainability, along with accountability and responsibility, as key ethical challenges that influence decision-making quality.
In addition, trust in organisational knowledge can become fragile. When the AI answers questions or generates knowledge without explaining where it comes from or why it is relevant, employees eventually either stop trusting the tool or stop trusting the knowledge itself. Worse, AI outputs can quietly become the “new organisational knowledge,” rewriting institutional memory as polished summaries and interpretations that employees store back into repositories, corporate documents, presentations, and standard operating procedures.
Leaders should note that gaps in AI governance can make this erosion hard to observe. According to Wang and Blok (2025), focusing on the tool alone brings many challenges, such as building trust and the hidden, long-term harms that only emerge over time. So the real question for KM leaders is no longer “How do we get answers faster?” It is: how do we ensure our AI is supporting organisational knowledge rather than slowly eroding it?
The question now is: what is your organisation’s priority, clearer knowledge or faster noise?
References
[1] H.-P. Lee, A. Sarkar, L. Tankelevitch, I. Drosos, S. Rintel, R. Banks, and N. Wilson, “The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers,” in Proc. CHI Conf. Human Factors in Computing Systems (CHI ’25), ACM, 2025, doi: 10.1145/3706598.3713778.
[2] M. Rezaei, M. Pironti, and R. Quaglia, “AI in knowledge sharing: Which ethical challenges are raised in decision-making processes for organisations?” Management Decision, 2024, doi: 10.1108/MD-10-2023-2023.
[3] A. Sarkar, X. Xu, N. Toronto, I. Drosos, and C. Poelitz, “When Copilot becomes Autopilot: Generative AI’s critical risk to knowledge work and a critical solution,” arXiv, arXiv:2412.15030v1, 2024.
[4] A. Simkute, L. Tankelevitch, V. Kewenig, A. E. Scott, A. Sellen, and S. Rintel, “Ironies of generative AI: Understanding and mitigating productivity loss in human-AI interaction,” Int. J. Human–Computer Interaction, vol. 41, no. 5, pp. 2898–2919, 2025, doi: 10.1080/10447318.2024.2405782.
[5] H. Wang and V. Blok, “Why putting artificial intelligence ethics into practice is not enough: Towards a multi-level framework,” Big Data & Society, pp. 1–14, 2025, doi: 10.1177/20539517251340620.

