
Ethical AI Starts with Knowledge, Not Code

Without strong knowledge management, ethical AI collapses. Hard to believe? Let me explain. Missing documentation and weak institutional memory do more than slow teams down; they fuel bias, hide risks, and lead to unsafe decisions and reputational damage.

The Missing Link Between AI Ethics and Knowledge

Fairness, safety, and explainability do not come from algorithms alone. Ethical AI is not simply about the code; it is about how organisations manage and govern their AI systems, and that governance rests on the knowledge those organisations hold. When teams cannot trace decisions or access the right information, AI becomes opaque and risky. Strong knowledge management creates transparency and accountability across the AI lifecycle, making it the bridge that helps organisations build ethical and trustworthy systems.

Documentation as the Backbone of Ethical AI

Suppose AI is the engine that every organisation is seeking. In that case, documentation is the control panel: if it is missing, no one knows what is really happening under the hood. Documentation is not an end in itself; it serves a purpose. Clear documentation keeps teams aligned, exposes risks early, and makes fairness and accountability measurable instead of merely theoretical.

Weak documentation, on the other hand, creates chaos. Without a clear reference, biased outputs go unnoticed, governance breaks down, and knowledge fragments across departments or stays locked in people’s heads, where it is easily lost. To understand the impact, ask public-sector teams who have been through audits and were asked to justify decisions made by AI systems. Strong documentation closes these gaps by providing a single source of truth: how the model was built, why specific data were used, and what risks still need watching. The message is simple: disciplined documentation is not easy, but it avoids far bigger risks and harms.
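To make the "single source of truth" idea concrete, here is a minimal sketch in Python of the kind of record such documentation could capture. The field names (purpose, training data, known risks) are illustrative assumptions, not any standard schema such as a formal model card:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A minimal 'single source of truth' record for one AI model.

    Field names are illustrative, not a standard schema.
    """
    name: str
    version: str
    purpose: str                # why the model exists
    training_data: str          # why this data was used
    known_risks: list = field(default_factory=list)  # what still needs watching

    def summary(self) -> str:
        # Render the record as one line a reviewer or auditor can scan.
        risks = "; ".join(self.known_risks) or "none recorded"
        return (f"{self.name} v{self.version}: {self.purpose} | "
                f"data: {self.training_data} | risks: {risks}")

card = ModelCard(
    name="eligibility-screener",
    version="1.2",
    purpose="Rank benefit applications for manual review",
    training_data="2019-2023 anonymised case records",
    known_risks=["possible age bias", "drift after policy change"],
)
print(card.summary())
```

Even a record this small answers an auditor's three core questions: what the model does, what data it was built on, and which risks are still open.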

Knowledge Workflows and Institutional Memory

Organisations need knowledge management for many reasons, and knowledge workflows act as a safety net that stops them from forgetting what matters. Decision logs, structured documentation, and knowledge-sharing routines keep teams aligned and stop history from being rewritten every time staff change. Public-sector KM research, including case studies from Dubai, shows that without solid workflows institutional memory collapses and mistakes repeat. With a robust knowledge management system in place, the story is different: organisations remain consistent, accountable, and more competent over time. These workflows are not just operational tools; they are ethical defences that keep decisions transparent and evidence-based rather than guesswork. Protecting organisational memory with strong tools, systems, and governance is essential, and I would call it a non-negotiable requirement for any organisation that intends to survive and become smarter.
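A decision log is the simplest of these workflows to start with. As a rough sketch (in Python, with field names of my own choosing rather than any established standard), an append-only log records who decided what, why, and on what evidence, so the rationale survives staff turnover:

```python
import datetime
import json

def log_decision(log_path, actor, decision, rationale, evidence):
    """Append one structured decision record to an append-only log file.

    All field names are illustrative; adapt them to your governance policy.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,          # who made the call
        "decision": decision,    # what was decided
        "rationale": rationale,  # evidence-based, not guesswork
        "evidence": evidence,    # links or document IDs supporting it
    }
    # One JSON object per line: easy to append, easy to audit later.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Because entries are only ever appended, the log doubles as institutional memory: history cannot be quietly rewritten, only added to.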

Learning Cultures and Organisational Behaviour

A strong learning culture makes the difference between responsible AI and risky AI. Imagine teams learning continuously, adopting knowledge management practices that spread lessons across functions: how much faster would they catch ethical issues? Think of public health experts validating model assumptions, or urban planners questioning bias in urban development tools. Such validations, ethical reviews, and post-deployment checks keep systems honest and compliant. When staff can challenge assumptions without hesitation, AI becomes a shared responsibility across functions rather than a “black box” no one understands. This mindset cuts blind reliance on opaque models and sharpens fairness and safety decisions across government services. Organisations that want trustworthy AI need a culture that learns relentlessly and questions everything.

Practical Steps to Strengthen Knowledge for Ethical AI

Organisations seeking ethical AI need well-managed knowledge practices, with no exceptions. In practice, that means:

- Standard documentation templates, so every model is described clearly and consistently across the organisation.
- An AI governance portal that centralises policies, design choices, and risk insights, serving as a single reference for AI ethics and governance.
- Regular ethical risk reviews to catch bias early, which requires knowledge to be shared across the organisation.
- A model registry that tracks versions, data lineage, and performance records, acting as a primary reference for governance activities.
- Training that helps teams maintain institutional memory, so knowledge does not disappear when people move on.
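The model registry mentioned above can be pictured with a toy sketch. Real deployments would use a database or a dedicated tool (MLflow's registry is one example), and every name below is hypothetical; the sketch only shows the kind of record worth keeping:

```python
class ModelRegistry:
    """A toy in-memory model registry: versions, data lineage, metrics.

    Illustrative only; a real registry would persist records and
    enforce access control.
    """
    def __init__(self):
        self._records = {}  # (name, version) -> record

    def register(self, name, version, data_sources, metrics):
        key = (name, version)
        # Versions are immutable: re-registering would erase lineage.
        if key in self._records:
            raise ValueError(f"{name} v{version} already registered")
        self._records[key] = {
            "data_lineage": list(data_sources),
            "metrics": dict(metrics),
        }

    def lineage(self, name, version):
        # Answer the audit question: which data fed this exact version?
        return self._records[(name, version)]["data_lineage"]

registry = ModelRegistry()
registry.register("risk-scorer", "2.0",
                  data_sources=["census-2021", "service-usage-logs"],
                  metrics={"auc": 0.81})
print(registry.lineage("risk-scorer", "2.0"))
```

The design choice that matters here is immutability: once a version is registered, its lineage record cannot be silently overwritten, which is exactly the property an ethical audit depends on.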

Final Thoughts

In the end, I would like to emphasise that ethical AI is not just about code; it is about the knowledge behind it. Do not treat knowledge management as mere documentation or admin; it is core ethical infrastructure. And remember: when organisations protect their memory, they are better able to protect the integrity of their AI systems.

References

[1]          A. Tlili, M. Denden, M. Abed, and R. Huang, “Artificial intelligence ethics in services: are we paying attention to that?!,” The Service Industries Journal, vol. 44, no. 15–16, pp. 1093–1116, Dec. 2024, doi: 10.1080/02642069.2024.2369322.

[2]          J. Guo, J. Chen, and S. Cheng, “Perception of Ethical Risks of Artificial Intelligence Technology in the Context of Individual Cultural Values and Intergenerational Differences: The Case of China,” Feb. 15, 2024, In Review. doi: 10.21203/rs.3.rs-3901913/v1.

[3]          M. Rezaei, M. Pironti, and R. Quaglia, “AI in knowledge sharing, which ethical challenges are raised in decision-making processes for organisations?,” Management Decision, Apr. 2024, doi: 10.1108/MD-10-2023-2023.

[4]          L. Pérez‐Nordtvedt, B. L. Kedia, D. K. Datta, and A. A. Rasheed, “Effectiveness and Efficiency of Cross‐Border Knowledge Transfer: An Empirical Examination,” J Management Studies, vol. 45, no. 4, pp. 714–744, June 2008, doi: 10.1111/j.1467-6486.2008.00767.x.

[5]          P. Bharati, W. Zhang, and A. Chaudhury, “Better knowledge with social media? Exploring the roles of social capital and organizational knowledge management,” Journal of Knowledge Management, vol. 19, no. 3, pp. 456–475, May 2015, doi: 10.1108/JKM-11-2014-0467.

[6]          M. Demarest, “Knowledge Management: An Introduction.” 1997. Accessed: Nov. 29, 2025. [Online]. Available: https://www.noumenal.com/marc/km1.pdf

[7]          X. Cong and K. V. Pandya, “Issues of Knowledge Management in the Public Sector,” Electronic Journal of Knowledge Management, vol. 1, no. 2, pp. 181–188, Dec. 2003, Accessed: Nov. 29, 2025. [Online]. Available: https://academic-publishing.org/index.php/ejkm/article/view/701

[8]          M. Biygautane and K. Al-Yahya, “Knowledge Management in the UAE’s Public Sector: The Case of Dubai”, [Online]. Available: https://repository.mbrsg.ac.ae/server/api/core/bitstreams/59089da4-c3d5-455b-811e-dbdf45d40a74/content

[9]          A. M. Al-Khouri, “Fusing Knowledge Management into the Public Sector: A Review of the Field and the Case of the Emirates Identity Authority,” Information and Knowledge Management, vol. 4, pp. 23–74, 2014, [Online]. Available: https://api.semanticscholar.org/CorpusID:152686699

[10]       H. Wang and V. Blok, “Why putting artificial intelligence ethics into practice is not enough: Towards a multi-level framework,” Big Data & Society, vol. 12, no. 2, p. 20539517251340620, June 2025, doi: 10.1177/20539517251340620.

[11]       B. Martin, “Knowledge Management and Local Government: Some Emerging Trends,” Asia Pacific Management Review, vol. 8, no. 1, Mar. 2003, Accessed: Nov. 29, 2025. [Online]. Available: https://www.proquest.com/docview/1115707453/abstract/8B7E1DC127544C9BPQ/1
