
The Fastest Noise: When AI Weakens Knowledge Instead of Strengthening It

In many organisations, the newest knowledge portal isn’t a document repository or an intranet that holds key documents; it’s probably an AI chatbot. Ask it about a particular policy, a decision that was made, or the actions a process requires, and it will answer instantly and clearly. The danger isn’t whether any single answer is right or wrong; it’s that, over time, you stop knowing which parts of the answers are correct, which are assumed, and which are simply well-written guesswork.

According to Simkute et al., introducing generative AI into knowledge work often shifts users from actively producing work to more passive roles in which they evaluate, debug, and integrate machine-generated text. Simkute et al. call this the “production-to-evaluation shift” and highlight it as one of the key productivity challenges of generative AI systems.

Unsurprisingly, that shift is now reaching the knowledge management (KM) domain. For many users, the chatbot is becoming the first, and sometimes the only, interface to organisational knowledge: policies, project history, lessons learned, and “how we do things here.”

When AI is poorly applied to KM, the failure is not noisy; it is silent. The answers sound logical and confident, the organisation keeps moving, and small damages accumulate unnoticed. Over time, organisations learn that the real risk is not a single wrong answer; it is losing track of what the organisation knows and what it does not, why it knows it, and, most importantly, whether that knowledge can be trusted.

It is important to understand that GenAI amplifies whatever it is fed. If the underlying data is inconsistent, outdated, missing context, or simply wrong, the AI will summarise it anyway, producing polished text that often hides the gaps. Organisations end up accelerating noise instead of creating visibility and knowledge: ambiguity becomes a well-structured paragraph, and contradiction is dressed up as “a balanced view.” When that added confusion flows back into the organisational knowledge base, can we really call it “understanding” or “learning”?

Humans remain central to organisational learning, and critical thinking is crucial for good KM outcomes. Consider, for example, the study by Lee et al. (2025), who surveyed 319 knowledge workers and found that higher confidence in generative AI was associated with less critical thinking, while higher self-confidence predicted more critical thinking. Does that ring a bell?

That should worry any leader betting on “AI-enabled learning.” If people outsource sense-making to a tool that may have learned from wrong sources, are they learning better, faster, or practising critical thinking less?

So, it is not only about “hallucinations”! The bigger risk in knowledge processes, as Sarkar et al. (2024) argue, is the degradation of critical thinking when work is delegated to a tool, leading to unverified factual errors. Over time, thoughts like “it is good enough” or “it looks okay at this stage” become normal, and with them come less challenging, less cross-checking, and more acceptance of whatever reads well.

Any discussion of KM and AI must also address accountability. When an AI-generated answer shapes a decision, who is responsible for the consequences: the employee, the system owner, or “the model”? In their research on AI in knowledge sharing, Rezaei et al. (2024) identify transparency and explainability, together with accountability and responsibility, as key ethical challenges that influence decision-making quality.

In addition, trust in organisational knowledge may become fragile. If the AI answers questions or generates knowledge without showing where that knowledge came from, employees eventually stop trusting either the tool or the knowledge itself. Worse, AI outputs can quietly become the “new organisational knowledge,” rewriting institutional memory as polished summaries and interpretations that employees store back into repositories, corporate documents, presentations, and standard operating procedures.

Leaders should note that AI governance gaps can make this erosion hard to observe. According to Wang and Blok (2025), focusing on the tool alone brings many challenges, like the difficulty of building trust and the hidden, unknown, long-term harms that emerge over time. So, I think the real question for KM leaders is no longer “How do we get answers faster?” It is: how do we ensure our AI is supporting organisational knowledge rather than slowly eroding it?

The question now is, what is your organisation’s priority: clearer knowledge, or faster noise?

References

[1] H.-P. Lee, A. Sarkar, L. Tankelevitch, I. Drosos, S. Rintel, R. Banks, and N. Wilson, “The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers,” in Proc. CHI Conf. Human Factors in Computing Systems (CHI ’25), ACM, 2025, doi: 10.1145/3706598.3713778.

[2] M. Rezaei, M. Pironti, and R. Quaglia, “AI in knowledge sharing: Which ethical challenges are raised in decision-making processes for organisations?” Management Decision, 2024, doi: 10.1108/MD-10-2023-2023.

[3] A. Sarkar, X. Xu, N. Toronto, I. Drosos, and C. Poelitz, “When Copilot becomes Autopilot: Generative AI’s critical risk to knowledge work and a critical solution,” arXiv, arXiv:2412.15030v1, 2024.

[4] A. Simkute, L. Tankelevitch, V. Kewenig, A. E. Scott, A. Sellen, and S. Rintel, “Ironies of generative AI: Understanding and mitigating productivity loss in human-AI interaction,” Int. J. Human–Computer Interaction, vol. 41, no. 5, pp. 2898–2919, 2025, doi: 10.1080/10447318.2024.2405782.

[5] H. Wang and V. Blok, “Why putting artificial intelligence ethics into practice is not enough: Towards a multi-level framework,” Big Data & Society, pp. 1–14, 2025, doi: 10.1177/20539517251340620.

Ethical AI Starts with Knowledge, Not Code

Without strong knowledge management, ethical AI collapses. Can you believe that? Let me explain. Missing documentation and weak institutional memory do more than slow teams down; they also fuel bias, hide risks, and lead to unsafe decisions and reputational damage.

The Missing Link Between AI Ethics and Knowledge

Fairness, safety, and explainability don’t come from algorithms alone; they come from how organisations manage and govern their AI systems, and that in turn depends on the knowledge those organisations possess. When teams can’t trace decisions or access the right information, AI becomes opaque and risky. Strong knowledge management creates transparency and accountability across the AI lifecycle, making it the bridge that helps organisations build ethical and trustworthy systems.

Documentation as the Backbone of Ethical AI

If AI is the engine every organisation is seeking, then I believe documentation is the control panel: if it’s missing, no one knows what’s really happening under the hood. Documentation is not an end in itself; it serves a purpose. Clear documentation keeps teams aligned, exposes risks early, and makes fairness and accountability measurable rather than merely theoretical.

On the other hand, weak documentation creates chaos. Without a clear reference, biased outputs go unnoticed, governance breaks down, and knowledge fragments across departments or stays in people’s heads, where it can be lost in any number of ways. To appreciate the impact, just ask public-sector teams who have been through audits and had to justify decisions made by AI systems. Strong documentation closes these gaps by providing a single source of truth on, for instance, how a model was built, why specific data were used, and what risks still need watching. The message is simple: start disciplined documentation now; it is probably not an easy task, but it avoids far bigger risks and harms later.
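
To make this concrete, here is a minimal sketch of what such a single-source-of-truth record could look like, expressed in Python. All names and fields are hypothetical illustrations, not a prescribed standard:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """One model's single source of truth for audits and handovers."""
    model_name: str                   # hypothetical identifier
    version: str                      # which build this record describes
    purpose: str                      # why the model exists at all
    training_data_sources: list[str]  # why specific data were used
    known_limitations: list[str]      # what risks still need watching
    owner: str                        # who answers for it in an audit

card = ModelCard(
    model_name="benefit-triage",      # illustrative example only
    version="1.2.0",
    purpose="Prioritise incoming applications for human review",
    training_data_sources=["anonymised applications, 2019-2023"],
    known_limitations=["under-represents applicants without digital access"],
    owner="governance-team@example.org",
)
```

Even a record this small answers the audit questions above: how the model was built, why specific data were used, and what still needs watching.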

Knowledge Workflows and Institutional Memory

Organisations need a knowledge management system for many reasons, and knowledge workflows serve as a safety net against forgetting what matters. Decision logs, structured documentation, and knowledge-sharing routines keep teams aligned and stop history from being rewritten every time staff change. Public-sector research and KM case studies in Dubai show that without solid workflows, institutional memory collapses and mistakes repeat. With a robust knowledge management system in place, the story is different: organisations stay consistent, accountable, and more competent over time. These systems and workflows aren’t just operational tools; they’re ethical defences that keep decisions transparent and evidence-based rather than guesswork. Protecting organisational memory with strong tools, systems, and governance is essential; I would even call it a non-negotiable requirement for organisations that intend to survive and become smarter.
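
As an illustration of how lightweight a decision log can be, here is a sketch in Python that appends structured, append-only records. The fields and file path are assumptions for the example, not a specific system from the studies mentioned above:

```python
import json
from datetime import datetime, timezone

def log_decision(path, decision, rationale, evidence, owner):
    """Append one structured decision record; history is never rewritten."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,    # what was decided
        "rationale": rationale,  # why, in the decision-maker's words
        "evidence": evidence,    # links to documents, not recollections
        "owner": owner,          # who is accountable
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision(
    "decisions.jsonl",           # hypothetical log file
    decision="Exclude pre-2019 records from training data",
    rationale="Schema change in 2019 makes older records incomparable",
    evidence=["https://intranet.example.org/data-review-2019"],
    owner="data-steward@example.org",
)
```

Because each entry captures the rationale and evidence at the moment of decision, the log itself becomes the institutional memory, rather than whoever happens to still be on the team.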

Learning Cultures and Organisational Behaviour

A strong learning culture, or the lack of one, makes the difference between responsible AI and risky AI. Imagine teams that learn continuously, adopting organisational knowledge management practices that spread learning across functions: how much faster would they catch ethical issues? Think of public health experts validating model assumptions or urban planners questioning bias in urban development tools. Such validations, ethical reviews, and post-deployment checks keep systems honest and compliant. And when staff can challenge assumptions without hesitation, AI becomes a shared responsibility across all functions rather than a “black box” no one understands. This mindset cuts blind reliance on opaque models and sharpens fairness and safety decisions across government services. Understanding the connection between knowledge management and responsible AI is crucial: if organisations want trustworthy AI, they need a culture that learns relentlessly and questions everything.

Practical Steps to Strengthen Knowledge for Ethical AI

Organisations seeking ethical AI need well-managed corporate knowledge management practices, with no exceptions. To start, put standard documentation templates in place so every model is described clearly and consistently across the organisation. Consider an AI governance portal that centralises policies, design choices, and risk insights, serving as a single reference for all the knowledge that AI ethics and governance require. You also can’t have proper AI governance without regular ethical risk reviews to catch bias early, which in turn requires knowledge to be shared across the organisation. Add a model registry to track versions, data lineage, and performance records as a primary reference for AI governance activities. Finally, train teams to maintain institutional memory so knowledge doesn’t disappear when people move on and stays internalised and sustained.
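
To show how a registry can actively support those regular risk reviews rather than just store records, here is a small sketch; the 90-day review interval and all registry entries are assumptions for illustration only:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # assumed quarterly review cadence

# Hypothetical registry entries: version, lineage, and last review date.
registry = [
    {"model": "benefit-triage", "version": "1.2.0",
     "data_lineage": "anonymised applications, 2019-2023",
     "last_risk_review": date(2025, 1, 10)},
    {"model": "service-chatbot", "version": "0.9.3",
     "data_lineage": "public FAQ corpus",
     "last_risk_review": date(2024, 6, 2)},
]

def overdue_reviews(registry, today):
    """Flag models whose last ethical risk review exceeds the interval."""
    return [m for m in registry
            if today - m["last_risk_review"] > REVIEW_INTERVAL]

for m in overdue_reviews(registry, date.today()):
    print(f"{m['model']} v{m['version']} is overdue for an ethical risk review")
```

The point is not the code itself but the habit it encodes: the registry is queried, not just filled, so overdue reviews surface before an audit does.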

Final Thoughts

In the end, I would like to emphasise that ethical AI isn’t just about code; it’s about the knowledge behind it. So don’t look at knowledge management as mere documentation or admin; it is, in fact, core ethical infrastructure. And remember: when organisations protect their memory, they are better able to protect the integrity of their AI systems.

References

[1] A. Tlili, M. Denden, M. Abed, and R. Huang, “Artificial intelligence ethics in services: are we paying attention to that?!,” The Service Industries Journal, vol. 44, no. 15–16, pp. 1093–1116, Dec. 2024, doi: 10.1080/02642069.2024.2369322.

[2] J. Guo, J. Chen, and S. Cheng, “Perception of Ethical Risks of Artificial Intelligence Technology in the Context of Individual Cultural Values and Intergenerational Differences: The Case of China,” Feb. 15, 2024, In Review, doi: 10.21203/rs.3.rs-3901913/v1.

[3] M. Rezaei, M. Pironti, and R. Quaglia, “AI in knowledge sharing, which ethical challenges are raised in decision-making processes for organisations?,” Management Decision, Apr. 2024, doi: 10.1108/MD-10-2023-2023.

[4] L. Pérez-Nordtvedt, B. L. Kedia, D. K. Datta, and A. A. Rasheed, “Effectiveness and Efficiency of Cross-Border Knowledge Transfer: An Empirical Examination,” Journal of Management Studies, vol. 45, no. 4, pp. 714–744, June 2008, doi: 10.1111/j.1467-6486.2008.00767.x.

[5] P. Bharati, W. Zhang, and A. Chaudhury, “Better knowledge with social media? Exploring the roles of social capital and organizational knowledge management,” Journal of Knowledge Management, vol. 19, no. 3, pp. 456–475, May 2015, doi: 10.1108/JKM-11-2014-0467.

[6] M. Demarest, “Knowledge Management: An Introduction,” 1997. Accessed: Nov. 29, 2025. [Online]. Available: https://www.noumenal.com/marc/km1.pdf

[7] X. Cong and K. V. Pandya, “Issues of Knowledge Management in the Public Sector,” Electronic Journal of Knowledge Management, vol. 1, no. 2, pp. 181–188, Dec. 2003. Accessed: Nov. 29, 2025. [Online]. Available: https://academic-publishing.org/index.php/ejkm/article/view/701

[8] M. Biygautane and K. Al-Yahya, “Knowledge Management in the UAE’s Public Sector: The Case of Dubai.” [Online]. Available: https://repository.mbrsg.ac.ae/server/api/core/bitstreams/59089da4-c3d5-455b-811e-dbdf45d40a74/content

[9] A. M. Al-Khouri, “Fusing Knowledge Management into the Public Sector: A Review of the Field and the Case of the Emirates Identity Authority,” Information and Knowledge Management, vol. 4, pp. 23–74, 2014. [Online]. Available: https://api.semanticscholar.org/CorpusID:152686699

[10] H. Wang and V. Blok, “Why putting artificial intelligence ethics into practice is not enough: Towards a multi-level framework,” Big Data & Society, vol. 12, no. 2, June 2025, doi: 10.1177/20539517251340620.

[11] B. Martin, “Knowledge Management and Local Government: Some Emerging Trends,” Asia Pacific Management Review, vol. 8, no. 1, Mar. 2003. Accessed: Nov. 29, 2025. [Online]. Available: https://www.proquest.com/docview/1115707453/abstract/8B7E1DC127544C9BPQ/1

Developing a Project Idea

Introduction

As artificial intelligence (AI) grows rapidly and transforms public services, complex ethical challenges are emerging that demand stronger governance. Many organisations are embedding more AI in their decision-making processes, which underscores how crucial ethical AI deployment is. In such a fast-growing domain, selecting a well-defined data and AI ethics project is challenging because of the many issues involved.

AI Ethics Interventions

There are many areas in mind where data and AI ethics interventions are needed, but two are particularly important for my research:

  • AI Bias and Fairness: AI systems have been shown to reproduce societal biases, yet they are increasingly used in hiring processes and customer service applications. A lack of governance here may lead to discrimination against marginalised groups, affecting employment opportunities and consumer experiences.
  • AI and Privacy: The increasing use of AI for data analysis and automation raises serious privacy concerns, especially regarding personally identifiable information (PII). Driven by competition, governments and corporations process vast amounts of sensitive data, which demands robust data protection measures. AI governance must therefore be in place to address these concerns and prevent the misuse of personal data.

Ethical Interventions

To address these challenges, I will explore multiple ethical interventions. First, I will offer policy recommendations that would help establish ethical guidelines and regulatory policies to ensure fair AI practices. I will also explore how to strengthen oversight and auditing processes to hold AI systems accountable. In addition, I will provide AI developers with ethical AI design frameworks that help incorporate fairness and transparency into AI systems.

Project Focus

While AI ethics research offers many possible directions, tackling overly broad ethical challenges can lead to scope creep, data accessibility problems, or missed timelines. To avoid these issues, I will keep the following questions in mind as a framework, aiming for a project that is both meaningful and achievable within the timeframe:

  • Is the issue pressing and impactful in the industry I prefer?
  • Can it be addressed within three months using available resources?
  • Can my project build upon and enhance existing AI governance models?
  • Who benefits from the research: policymakers, AI developers, or the public?

Final Project Idea

Based on this framework, and my preference for studying the public sector in the UAE, my research will focus on bridging the governance gap in the UAE’s public sector by evaluating AI governance frameworks and the use of AI ethics tools, mainly red teaming and auditing. I aim to assess whether existing AI governance frameworks effectively address the ethical risks associated with AI adoption.

If I structure this project well, it will be feasible within three months. My focus will be on developing a framework, analysing policies, reviewing case studies, gathering expert insights, and exploring how AI ethics tools can be integrated into UAE government AI governance strategies.

Conclusion

I can’t yet say that I have a fully defined AI ethics project, but I think I’m on the right track. So far, I have identified key ethical concerns in AI governance and narrowed my focus to a structured research project that can contribute meaningful insights to responsible AI policymaking. It may still need another round of refinement, which I will undertake in my next steps, hoping it will lay a foundation that strengthens AI governance in the UAE.

Understanding the Implications and Anticipating Objections to Methodological Choices in Research

Methodological decisions are at the core of any research proposal: they define its structure and dictate the quality of its outcomes. Researchers must therefore be careful when making these decisions, as they come with significant implications and limitations that must be critically examined to ensure a robust and ethically sound approach. This blog post discusses those implications and how I plan to manage and minimise their effects on my research project.

Implications of Methodological Choices

Every methodological decision involves compromises between different priorities. Qualitative methods such as interviews allow detailed, in-depth exploration of a topic but have limited generalisability, while quantitative methods, despite their wide adoption, can fail to capture detail and risk oversimplification.

Ethical considerations also need to be prioritised when deciding on a methodological approach. If the project is AI-based, for instance, predictive modelling raises critical ethical concerns such as biased results, a lack of inclusivity, and unrepresentative data, all of which require attention to ensure the technology does not unintentionally reinforce systemic inequalities.

Anticipating and Addressing Objections

In my project, I anticipate that stakeholders will raise questions about the methodologies used, especially where ethical sensitivities or high resource demands are involved. The main concerns may include:

  1. Resource Intensity: In my project, decision-makers may view hybrid methodologies as requiring significant time and resources. This perception could make them hesitant to adopt such approaches and may affect the scope of my project.
  2. Ethical Concerns: My project may involve collecting information from a wide group of people, and in this case, I must pay attention to different aspects, such as issues related to privacy and confidentiality, ensuring informed consent, addressing potential biases in data collection, maintaining cultural sensitivity, and safeguarding the inclusivity and representativeness of the sample.

I will address these concerns through the following strategies:

  • Ensure Transparency: Regularly communicate with stakeholders about the rationale behind methodological choices, demonstrating how they align with ethical principles such as beneficence, non-maleficence, and accountability.
  • Engage Stakeholders Early: Actively involve stakeholders during the initial proposal and planning phases to build trust, encourage collaboration, and anticipate potential objections or concerns.
  • Implement Feedback Mechanisms: Establish iterative feedback processes that enable continuous refinement of methodologies and ensure alignment with stakeholder expectations throughout the project.

Limitations and Self-Critique

I believe that there is no perfect methodology, and I think adopting a reflective approach will help identify and address potential limitations. For instance:

  • Bias Risks: Automated tools and AI methods may unintentionally introduce bias. Addressing this requires implementing bias mitigation strategies and involving diverse review teams to ensure fairness.
  • Scalability Challenges: Certain methods may not perform effectively with larger datasets or participant groups. Testing techniques through pilot studies before full-scale implementation is a practical way to manage this issue.
  • Ethical Complexity: Following ethical guidelines, such as those from the Edinburgh Futures Institute, involves balancing potential harm and benefits, safeguarding data privacy, and promoting equity throughout the research process.

Final Thoughts

From my perspective, being transparent about the limitations and ethical implications of methodologies strengthens both the overall quality and the ethical foundation of research. Through this reflective approach, actively critiquing my methods and remaining open to stakeholder feedback, I can maintain a stronger commitment to integrity and ethical research practice.

Welcome!

Welcome to your new blog site!

In this fast-evolving digital era, data and artificial intelligence are reshaping nearly every aspect of our lives. From banking and healthcare to education and even government services, these technologies are making decisions that affect individuals, communities, and society at large. But with that great power comes an even greater responsibility: how do we ensure that these technologies serve us fairly, transparently, and ethically?

This blog is dedicated to exploring the complexities, discussing the challenges, proposing ideas, and suggesting solutions at the intersection of data, AI, and ethics. Here, I’ll delve into essential questions around data privacy, AI bias, accountability, and the very nature of decision-making. I’ll discuss the implications of AI-driven systems for justice, equality, and autonomy, ask some difficult questions, and look at how organisations can embed ethical practices into their data policies.

As we navigate this journey, the aim is not only to understand how these technologies work but also to uncover how they should work for the good of people. I’m writing for a wide spectrum of readers: tech enthusiasts, policymakers, and anyone curious about the ethical dilemmas of our digital age.

In simple terms, this blog will be your guide to understanding the role of AI and data in our world.

Ready to take off!

Amer Maithalouni
