Ethical AI Starts with Knowledge, Not Code

Without strong knowledge management, ethical AI collapses. Hard to believe? Let me explain. Missing documentation and weak institutional memory do more than slow teams down; they fuel bias, hide risks, and lead to unsafe decisions and reputational damage.

The Missing Link Between AI Ethics and Knowledge

Fairness, safety, and explainability don’t come from algorithms alone, because ethical AI is not simply about the code; it is about how organisations manage and govern their AI systems, and that depends on the knowledge those organisations hold. When teams can’t trace decisions or access the right information, AI becomes opaque and risky. Strong knowledge management is therefore required to create transparency and accountability across the AI lifecycle, making it the bridge that helps organisations establish ethical and trustworthy systems.

Documentation as the Backbone of Ethical AI

Suppose AI is the engine that every organisation is seeking. In that case, I believe documentation is the control panel: if it’s missing, no one knows what’s really happening under the hood. Documentation by itself is not the objective; it serves a purpose. Clear documentation keeps teams aligned, exposes risks early, and makes fairness and accountability measurable instead of merely theoretical.

On the other hand, weak documentation creates chaos. Without a clear reference, biased outputs go unnoticed, governance breaks down, and knowledge becomes fragmented across departments or stays locked in people’s heads, where it is easily lost. To understand the impact, just ask public-sector teams who have been through audits and had to justify decisions made by AI systems. So, as noted above, documentation is not done for its own sake: strong documentation closes these gaps by providing a single source of truth on how the model was built, why specific data were used, and what risks still need watching. The message is simple: start disciplined documentation now. It is probably not an easy task, but it definitely avoids bigger risks and harms.
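
To make this concrete, here is a minimal sketch of what a machine-readable single source of truth for one model could look like. It is written in Python purely for illustration; the field names, model name, and example values are my own assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """One model's single source of truth (illustrative fields only)."""
    model_name: str
    version: str
    purpose: str                 # what the model is for
    training_data: str           # which data was used, and why
    known_risks: list = field(default_factory=list)      # risks that still need watching
    fairness_checks: list = field(default_factory=list)  # audits performed and their outcomes

    def to_json(self) -> str:
        # Serialise the card so it can live in a shared repository, not in someone's head.
        return json.dumps(asdict(self), indent=2)

# Hypothetical example: a card for a fictional benefit-screening model.
card = ModelCard(
    model_name="benefit-eligibility-screener",
    version="1.2.0",
    purpose="Prioritise benefit applications for manual review.",
    training_data="2019-2023 anonymised applications, chosen to cover all regions.",
    known_risks=["May under-represent newly arrived residents"],
    fairness_checks=["2024-06 audit: demographic parity gap below 0.05 across regions"],
)
print(card.to_json())
```

Stored in a shared repository and updated with every release, a card like this turns “why did we use this data?” from a memory test into a lookup.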

Knowledge Workflows and Institutional Memory

Organisations need a knowledge management system for many reasons, and knowledge workflows serve as a safety net that prevents them from forgetting what matters. Decision logs, structured documentation, and knowledge-sharing routines keep teams aligned and stop history from being rewritten every time staff change. Public-sector research, including KM case studies on Dubai, shows that without solid workflows, institutional memory collapses and mistakes repeat. With a robust knowledge management system in place, the story is different: organisations remain consistent, accountable, and more competent over time. These systems and workflows aren’t just operational tools; they’re ethical defences that keep decisions transparent and evidence-based rather than guesswork. Protecting organisational memory with strong tools, systems, and governance is essential; I would even call it a non-negotiable requirement for organisations that intend to survive and become smarter. A decision log can be as simple as the sketch below.
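
As an illustration, a decision log does not need heavy tooling to start delivering value. The sketch below, in Python with entirely hypothetical names and file locations, appends each governance decision to a shared CSV file so the rationale survives staff turnover.

```python
import csv
import datetime
import pathlib

LOG_PATH = pathlib.Path("ai_decision_log.csv")  # hypothetical shared location

def record_decision(system: str, decision: str, rationale: str, author: str) -> None:
    """Append one governance decision so the reasoning survives staff turnover."""
    is_new = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if is_new:  # write the header the first time the log is created
            writer.writerow(["timestamp", "system", "decision", "rationale", "author"])
        writer.writerow([
            datetime.datetime.now(datetime.timezone.utc).isoformat(),
            system, decision, rationale, author,
        ])

# Hypothetical entry: why a deployment was postponed, recorded at the moment it happened.
record_decision(
    system="permit-triage-model",
    decision="Postpone deployment by one quarter",
    rationale="Bias audit flagged uneven error rates across districts.",
    author="AI governance board",
)
```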

Learning Cultures and Organisational Behaviour

A strong learning culture makes the difference between responsible AI and risky AI. Imagine teams in a continuous learning process, adopting knowledge management practices that spread learning across functions: how much faster would they catch ethical issues? Think of public health experts validating model assumptions, or urban planners questioning bias in urban development tools. Such validations, ethical reviews, and post-deployment checks keep systems honest and compliant. And when staff can challenge assumptions without hesitation, AI becomes a shared responsibility across all functions rather than a “black box” no one understands. This mindset cuts blind reliance on opaque models and sharpens fairness and safety decisions across government services. Understanding this connection between knowledge management and responsible AI is crucial: if organisations want trustworthy AI, they need a culture that learns relentlessly and questions everything.

Practical Steps to Strengthen Knowledge for Ethical AI

Organisations seeking ethical AI need well-managed knowledge management practices, with no exceptions. In practice, that means:

- Standard documentation templates, so every model is described clearly and consistently across the organisation.
- An AI governance portal that centralises policies, design choices, and risk insights, serving as a single reference for AI ethics and governance.
- Regular ethical risk reviews to catch bias early, which only work when knowledge is shared across the organisation.
- A model registry that tracks versions, data lineage, and performance records, acting as a primary reference for governance activities (a minimal sketch follows this list).
- Training that helps teams maintain institutional memory, so knowledge doesn’t disappear when people move on and stays internalised and sustained.
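
To illustrate the model registry idea from the list above, here is a minimal in-memory sketch in Python; the entry fields, model names, and metrics are assumptions chosen for the example, and a production registry would of course persist this data and control access.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class RegistryEntry:
    """One registered model version (illustrative fields only)."""
    model_name: str
    version: str
    data_sources: tuple       # data lineage: where the training data came from
    metrics: dict             # performance records from the latest evaluation
    last_ethics_review: date  # when a risk review last signed this version off

# The registry itself: in memory here, but a real one would persist entries.
registry: dict[str, RegistryEntry] = {}

def register(entry: RegistryEntry) -> None:
    # Key on name and version so every deployed variant stays traceable.
    registry[f"{entry.model_name}:{entry.version}"] = entry

register(RegistryEntry(
    model_name="school-placement-ranker",
    version="0.9.1",
    data_sources=("enrolment_2020_2024", "census_2022"),
    metrics={"accuracy": 0.87, "max_group_fpr_gap": 0.04},
    last_ethics_review=date(2025, 5, 14),
))
print(registry["school-placement-ranker:0.9.1"].last_ethics_review)
```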

Final Thoughts

In the end, I would like to emphasise that ethical AI isn’t just about code; it’s about the knowledge behind it. So don’t treat knowledge management as mere documentation or admin; it is core ethical infrastructure. And remember: when organisations protect their memory, they are better able to protect the integrity of their AI systems.

References

[1] A. Tlili, M. Denden, M. Abed, and R. Huang, “Artificial intelligence ethics in services: are we paying attention to that?!,” The Service Industries Journal, vol. 44, no. 15–16, pp. 1093–1116, Dec. 2024, doi: 10.1080/02642069.2024.2369322.

[2] J. Guo, J. Chen, and S. Cheng, “Perception of Ethical Risks of Artificial Intelligence Technology in the Context of Individual Cultural Values and Intergenerational Differences: The Case of China,” Feb. 15, 2024, In Review. doi: 10.21203/rs.3.rs-3901913/v1.

[3] M. Rezaei, M. Pironti, and R. Quaglia, “AI in knowledge sharing, which ethical challenges are raised in decision-making processes for organisations?,” Management Decision, Apr. 2024, doi: 10.1108/MD-10-2023-2023.

[4] L. Pérez‐Nordtvedt, B. L. Kedia, D. K. Datta, and A. A. Rasheed, “Effectiveness and Efficiency of Cross‐Border Knowledge Transfer: An Empirical Examination,” Journal of Management Studies, vol. 45, no. 4, pp. 714–744, June 2008, doi: 10.1111/j.1467-6486.2008.00767.x.

[5] P. Bharati, W. Zhang, and A. Chaudhury, “Better knowledge with social media? Exploring the roles of social capital and organizational knowledge management,” Journal of Knowledge Management, vol. 19, no. 3, pp. 456–475, May 2015, doi: 10.1108/JKM-11-2014-0467.

[6] M. Demarest, “Knowledge Management: An Introduction,” 1997. Accessed: Nov. 29, 2025. [Online]. Available: https://www.noumenal.com/marc/km1.pdf

[7] X. Cong and K. V. Pandya, “Issues of Knowledge Management in the Public Sector,” Electronic Journal of Knowledge Management, vol. 1, no. 2, pp. 181–188, Dec. 2003. Accessed: Nov. 29, 2025. [Online]. Available: https://academic-publishing.org/index.php/ejkm/article/view/701

[8] M. Biygautane and K. Al-Yahya, “Knowledge Management in the UAE’s Public Sector: The Case of Dubai.” [Online]. Available: https://repository.mbrsg.ac.ae/server/api/core/bitstreams/59089da4-c3d5-455b-811e-dbdf45d40a74/content

[9] A. M. Al-Khouri, “Fusing Knowledge Management into the Public Sector: A Review of the Field and the Case of the Emirates Identity Authority,” Information and Knowledge Management, vol. 4, pp. 23–74, 2014. [Online]. Available: https://api.semanticscholar.org/CorpusID:152686699

[10] H. Wang and V. Blok, “Why putting artificial intelligence ethics into practice is not enough: Towards a multi-level framework,” Big Data & Society, vol. 12, no. 2, Art. no. 20539517251340620, June 2025, doi: 10.1177/20539517251340620.

[11] B. Martin, “Knowledge Management and Local Government: Some Emerging Trends,” Asia Pacific Management Review, vol. 8, no. 1, Mar. 2003. Accessed: Nov. 29, 2025. [Online]. Available: https://www.proquest.com/docview/1115707453/abstract/8B7E1DC127544C9BPQ/1

How the culture of an organisation affects the use of ethical AI in the public sector

You may have noticed how quickly AI is moving in the public sector. But have you checked whether culture is keeping up? If you agree with me that “the systems aren’t the problem; organisational culture and mindsets are,” then you probably also agree that even advanced government AI can fail the people it’s meant to protect if good ethical habits are missing.

Why Culture Matters for Ethical AI

Culture is what makes or breaks AI ethics. How can that be true? If an organisation cares about transparency and fairness, teams are far more likely to spot bias and question risky automation. Culture change needs leaders: when leaders show the organisation that ethics matters to them, through actions and not only slogans, employees follow their lead. Incentives also play a crucial role in reinforcing the same message by rewarding responsible decisions. Because behaviours count in AI ethics, open conversations about risk and accountability should be normalised, not just occasional. It is worth stressing that organisations with strong governance and ethical discipline handle bias, privacy issues, and transparency challenges with far more confidence. If you decide to bring this to your organisation, the bottom line is: build a culture that empowers people to ask hard questions, and your AI will be safer, smarter, and more trusted.

Cultural Norms That Support Ethical AI

The main idea of this blog is that ethical AI isn’t just about technology; culture sits at its core. Imagine an organisation where teams can challenge assumptions without fear: biases surface early, whether in procurement algorithms or in chatbot design for citizen services. Effective collaboration keeps blind spots in check, and strong data-transparency norms make it easier to trust AI tools, e.g., policing AI systems. A slow-down mindset that enables testing before deployment prevents high-risk mistakes in public health and other critical domains. Proactive risk awareness works the same way, giving urban planners a chance to fix long-term equity issues before they harden into policy. Adopting such cultural traits reduces harm and creates a public-sector environment where AI is safer, smarter, and aligned with societal values.

Cultural Norms That Stop Ethical AI

Based on what we’ve covered so far, ethical AI fails quickly when culture gets in the way, and in many public-sector settings, strict hierarchy stops honest debate. If junior staff cannot question flawed procurement algorithms, is it surprising when bias spreads unnoticed? Workplaces ruled by fear make the situation worse: people stop raising concerns, and risky AI tools slide through unchallenged. Another issue is what I call “checkbox compliance”, which adds a further layer of danger; it happens when meeting regulatory minimums becomes more important than asking, “Is this actually fair?” Blind trust in AI is a growing issue that amplifies mistakes; in urban planning, for instance, it can let poor models completely skew investment decisions. This is especially true when teams work in silos: each person focuses on their part, and no one sees the full picture. Such norms do more than slow progress; they erode public confidence and compromise ethical standards. In that sense, fixing culture is the first real step toward responsible AI.

Behaviours of leaders that affect how ethical AI is used

You don’t need a magician to make ethical AI possible, but there is no chance it happens by accident; it must be driven by leadership that sets the tone from day one. Strong leaders set expectations and make it clear that fairness, transparency, and accountability are mandatory. Words are not enough: leaders must back them with action, from ordering ethical audits to explaining exactly how decisions are made and what data is being used. They must fund governance, tools, and training so teams know how to spot risks before they escalate into crises. They bring in diverse perspectives, because real innovation doesn’t come from a single viewpoint, whether technical or business. And when something doesn’t feel right, they slow things down, push for deeper analysis, and refuse to rush untested AI into public use. These leadership behaviours do more than guide projects; they shape culture and signal priorities. Such leaders believe that, in organisations aiming to use AI responsibly, culture is the real safeguard that keeps technology aligned with human values.

Practical Steps to Create a Culture for Ethical AI

Creating a culture that genuinely supports ethical AI starts with everyday habits inside the organisation:

- Inject ethics into procurement and project gates, so every AI tool is checked for fairness and transparency before it goes live (a simple gate is sketched after this list).
- Build a multidisciplinary governance team (e.g., policy, law, technology, community engagement) so decisions aren’t made in a vacuum.
- Give staff practical training to recognise bias and risk quickly; this cannot be a one-time activity.
- Give people safe ways to challenge algorithmic decisions: no fear, no stigma, just clarity.
- Reward teams that show ethical leadership.

The target of such measures is a culture where responsible AI is more than a slogan: a standard that keeps organisations credible, trusted, and ready for the future.
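
To show how lightweight a procurement or project gate can be, here is a small Python sketch; the required artefacts and their descriptions are illustrative assumptions, not an official checklist.

```python
# Evidence required before an AI tool may go live (illustrative criteria, not an official list).
REQUIRED_ARTEFACTS = {
    "model_card": "Documented purpose, training data, and known risks",
    "bias_audit": "Fairness metrics reviewed across affected groups",
    "appeal_route": "A documented way for citizens to challenge decisions",
    "governance_signoff": "Approval from the multidisciplinary board",
}

def ethics_gate(submitted: set) -> bool:
    """Return True only when every required artefact is present; report what is missing."""
    missing = set(REQUIRED_ARTEFACTS) - submitted
    for item in sorted(missing):
        print(f"BLOCKED - missing {item}: {REQUIRED_ARTEFACTS[item]}")
    return not missing

# Example: a submission without a bias audit does not pass the gate.
if not ethics_gate({"model_card", "appeal_route", "governance_signoff"}):
    print("Gate failed: the tool cannot go live yet.")
```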

Final Thoughts

In the end, ethical AI isn’t just a technical matter; it’s a cultural one. The AI systems you build will mirror the behaviours inside your organisation. So before you proceed with your AI implementations, ask yourself: does your organisation push for transparency, courage, and accountability, or just delivery? The future of ethical AI in your organisation starts with the mindset you promote right now.

References

[1] M. Rezaei, M. Pironti, and R. Quaglia, “AI in knowledge sharing, which ethical challenges are raised in decision-making processes for organisations?,” Management Decision, Apr. 2024, doi: 10.1108/MD-10-2023-2023.

[2] B. Diab and M. El Hajj, “Ethics in the age of artificial intelligence: unveiling challenges and opportunities in business culture,” Cogent Business & Management, vol. 11, no. 1, Art. no. 2408440, Dec. 2024, doi: 10.1080/23311975.2024.2408440.

[3] C. Cannavale, L. Claudio, and D. Koroleva, “Digitalisation and artificial intelligence development. A cross-country analysis,” European Journal of Innovation Management, vol. 28, no. 11, pp. 112–130, Dec. 2025, doi: 10.1108/EJIM-07-2024-0828.

[4] E. Özkan, “The importance of cultural diversity in Artificial Intelligence systems,” 2025.

[5] A. Tlili, M. Denden, M. Abed, and R. Huang, “Artificial intelligence ethics in services: are we paying attention to that?!,” The Service Industries Journal, vol. 44, no. 15–16, pp. 1093–1116, Dec. 2024, doi: 10.1080/02642069.2024.2369322.

[6] H. Wang and V. Blok, “Why putting artificial intelligence ethics into practice is not enough: Towards a multi-level framework,” Big Data & Society, vol. 12, no. 2, Art. no. 20539517251340620, June 2025, doi: 10.1177/20539517251340620.

[7] J. Rose, review of “The Ethics of Artificial Intelligence in Education: Practices, Challenges, and Debates,” W. Holmes and K. Porayska-Pomsta, Eds., New York, NY: Routledge, 2023, in The Serials Librarian, vol. 85, no. 5–6, pp. 169–171, July 2024, doi: 10.1080/0361526X.2024.2427948.

[8] J. Guo, J. Chen, and S. Cheng, “Perception of Ethical Risks of Artificial Intelligence Technology in the Context of Individual Cultural Values and Intergenerational Differences: The Case of China,” Feb. 15, 2024, In Review. doi: 10.21203/rs.3.rs-3901913/v1.

Linking Knowledge, Culture, and Ethics: Building a Framework for Ethical AI in Qatar

I study how cultural values shape the use and understanding of ethical AI in Qatar’s public sector. I also look at how knowledge moves inside these organisations. Who collects it? Who shares it? How does it turn into decisions? I draw on three areas: knowledge management and acquisition, cultural studies, and locally grounded AI ethics. Together, these strands help me show something simple and practical: culture influences technology choices, and it also influences whether people trust AI or push back against it.

Knowledge Management and Knowledge Acquisition

The KM literature explains how organisations handle what they know. Pérez‐Nordtvedt et al. (2008) point to four markers of effectiveness: comprehension, usefulness, speed, and economy. Zahra et al. (2000) add three angles on learning: breadth, depth, and speed, linking them to adaptation and innovation. Bjorvatn and Wald (2020) bring in people and pressure: trust and time matter. In Qatar’s public sector, I want to see whether these same forces shape how AI-related knowledge is acquired. Does hierarchy slow the flow? Does caution keep people from sharing or experimenting? I will test these models against local practice, where authority and comfort with uncertainty play different roles.

Cultural Dimensions and Organisational Behaviour

Hofstede’s (2013) and GLOBE’s (2025) models are common tools for comparing national cultures. I will study dimensions such as Power Distance and Uncertainty Avoidance to explore their impact on knowledge acquisition. GLOBE’s views on collectivism and performance orientation, for instance, help explain how group expectations drive accountability. I will not treat these models as universal rules, since they were built largely on Western samples; I want to see where they fit and where they miss in Qatar’s institutions. This aligns with Al-Alawi et al. (2007) and Jiacheng et al. (2010), who show how trust and motivation shape cross-cultural knowledge sharing.

Ethical AI and Cultural Alignment

Recent studies by Özkan (2025), Kladko (2023), and Shin et al. (2022) argue for ethics that match local norms: not a single global template, because local values matter. In Qatar, this points toward community-oriented reasoning and collective responsibility. Cannavale et al. (2025) show that AI develops differently across countries, which supports a comparative approach rather than one standard model. My goal is to build an AI governance approach that works for Qatar’s governmental organisations and reflects their culture.

Where the Literature Falls Short

When I started this project, one thing I found is that work from the Middle East on culture and AI reliance in public organisations is rare; most studies look at Western or East Asian contexts. That gap pushed me to look closer. In this research, I connect knowledge management, culture, and ethical AI to shape a model suited to Qatar. The aim is practical: if you respect people’s cultural values, AI governance becomes fairer, more stable, and easier to trust.

References

Hofstede (2013); GLOBE Project (2025); Pérez‐Nordtvedt et al. (2008); Zahra et al. (2000); Bjorvatn & Wald (2020); Al-Alawi et al. (2007); Jiacheng et al. (2010); Özkan (2025); Kladko (2023); Shin et al. (2022); Cannavale et al. (2025).
