
Ethical AI Starts with Knowledge, Not Code

Without strong knowledge management, ethical AI collapses. Surprised? Let me explain. Missing documentation and weak institutional memory do more than slow teams down; they fuel bias, hide risks, and lead to unsafe decisions and reputational damage.

The Missing Link Between AI Ethics and Knowledge

Fairness, safety, and explainability don’t come from algorithms alone; they come from how organisations manage and govern their AI systems, and that depends on the knowledge those organisations hold. When teams can’t trace decisions or access the right information, AI becomes opaque and risky. Strong knowledge management is what creates transparency and accountability across the AI lifecycle, making it the bridge that helps organisations build ethical and trustworthy systems.

Documentation as the Backbone of Ethical AI

If AI is the engine every organisation is seeking, then documentation is the control panel: without it, no one knows what’s really happening under the hood. Documentation is not an end in itself; it serves a purpose. Clear documentation keeps teams aligned, exposes risks early, and makes fairness and accountability measurable rather than merely theoretical.

Weak documentation, on the other hand, creates chaos. Without a clear reference, biased outputs go unnoticed, governance breaks down, and knowledge fragments across departments or stays locked in people’s heads, where it is easily lost. To understand the impact, ask any public-sector team that has been through an audit and been asked to justify decisions made by an AI system. Strong documentation closes these gaps by providing a single source of truth: how the model was built, why specific data were used, and what risks still need watching. The message is simple: disciplined documentation is not easy to start, but it avoids far bigger risks and harms.

Knowledge Workflows and Institutional Memory

Organisations need a knowledge management system for many reasons, and knowledge workflows serve as a safety net that stops them from forgetting what matters. Decision logs, structured documentation, and knowledge-sharing routines keep teams aligned and stop history from being rewritten every time staff change. Public-sector research and KM case studies from Dubai show that without solid workflows, institutional memory collapses and mistakes repeat. With a robust knowledge management system in place, the story is different: organisations remain consistent, accountable, and more competent over time. These workflows aren’t just operational tools; they’re ethical defences that help organisations make transparent, evidence-based decisions rather than guesses. Protecting organisational memory with capable tools, systems, and governance is essential; I would even call it a non-negotiable requirement for organisations that intend to survive and become smarter.
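To make the decision-log idea concrete, here is a minimal sketch in Python. The field names and the append-only design are my own illustrative assumptions, not a standard; the point is simply that entries, once recorded, are never edited, so history cannot be rewritten:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One entry in an append-only AI decision log (illustrative fields)."""
    decision: str   # what was decided
    rationale: str  # why, in plain language
    owner: str      # who is accountable
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class DecisionLog:
    """Append-only institutional memory: entries can be added, never edited."""
    def __init__(self):
        self._entries = []

    def record(self, decision, rationale, owner):
        self._entries.append(DecisionRecord(decision, rationale, owner))

    def history(self):
        # Return plain copies so callers cannot rewrite history in place.
        return [asdict(e) for e in self._entries]

log = DecisionLog()
log.record("Exclude postcode from training features",
           "Proxy for protected attributes; bias risk flagged in review",
           "data-governance-board")
print(log.history()[0]["decision"])
```

Even something this small preserves the "why" behind a decision after the people involved have moved on.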

Learning Cultures and Organisational Behaviour

A strong learning culture often makes the difference between responsible AI and risky AI. Imagine teams in a continuous learning process, using knowledge management practices that spread learning across functions: how much faster would they catch ethical issues? Think of public health experts validating model assumptions, or urban planners questioning bias in urban development tools. Such validations, ethical reviews, and post-deployment checks keep systems honest and compliant. When staff can challenge assumptions without hesitation, AI becomes a shared responsibility across all functions rather than a “black box” no one understands. This mindset cuts blind reliance on opaque models and sharpens fairness and safety decisions across government services. If organisations want trustworthy AI, they need a culture that learns relentlessly and questions everything.

Practical Steps to Strengthen Knowledge for Ethical AI

Organisations seeking ethical AI need well-managed knowledge management practices, with no exceptions. Practical starting points include:

  • Standard documentation templates, so every model is described clearly and consistently across the organisation.
  • An AI governance portal that centralises policies, design choices, and risk insights, serving as a single reference for AI ethics and governance.
  • Regular ethical risk reviews to catch bias early, which in turn require knowledge to be shared across the organisation.
  • A model registry that tracks versions, data lineage, and performance records, acting as a primary reference for governance activities.
  • Training that helps teams maintain institutional memory, so knowledge doesn’t disappear when people move on.
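The documentation-template and model-registry ideas can be sketched together as a single record type. The fields below are assumptions chosen for illustration (they loosely echo "model card" practice), not an official schema; the useful trick is that the record can flag its own documentation gaps before a model is approved:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRegistryEntry:
    """Minimal model-registry record; field names are illustrative."""
    name: str
    version: str
    purpose: str        # why the model exists
    training_data: str  # data lineage summary
    known_risks: list = field(default_factory=list)
    performance: dict = field(default_factory=dict)

    def missing_fields(self):
        """List documentation gaps that should block approval."""
        gaps = []
        if not self.known_risks:
            gaps.append("known_risks")
        if not self.performance:
            gaps.append("performance")
        return gaps

entry = ModelRegistryEntry(
    name="eligibility-screener",
    version="1.2.0",
    purpose="Triage benefit applications for manual review",
    training_data="2019-2023 anonymised application records",
)
print(entry.missing_fields())  # risk and performance documentation still missing
```

A registry built from records like this gives auditors the single source of truth the previous section argued for.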

Final Thoughts

In the end, I would like to emphasise that ethical AI isn’t just about code; it’s in fact about the knowledge behind it. So don’t look at knowledge management just as documentation or admin; it is, in fact, core ethical infrastructure. And remember when organisations protect their memory, they will be better able to protect the integrity of their AI systems.

 

 

References

[1]          A. Tlili, M. Denden, M. Abed, and R. Huang, “Artificial intelligence ethics in services: are we paying attention to that?!,” The Service Industries Journal, vol. 44, no. 15–16, pp. 1093–1116, Dec. 2024, doi: 10.1080/02642069.2024.2369322.

[2]          J. Guo, J. Chen, and S. Cheng, “Perception of Ethical Risks of Artificial Intelligence Technology in the Context of Individual Cultural Values and Intergenerational Differences: The Case of China,” Feb. 15, 2024, In Review. doi: 10.21203/rs.3.rs-3901913/v1.

[3]          M. Rezaei, M. Pironti, and R. Quaglia, “AI in knowledge sharing, which ethical challenges are raised in decision-making processes for organisations?,” MD, Apr. 2024, doi: 10.1108/MD-10-2023-2023.

[4]          L. Pérez‐Nordtvedt, B. L. Kedia, D. K. Datta, and A. A. Rasheed, “Effectiveness and Efficiency of Cross‐Border Knowledge Transfer: An Empirical Examination,” J Management Studies, vol. 45, no. 4, pp. 714–744, June 2008, doi: 10.1111/j.1467-6486.2008.00767.x.

[5]          P. Bharati, W. Zhang, and A. Chaudhury, “Better knowledge with social media? Exploring the roles of social capital and organizational knowledge management,” Journal of Knowledge Management, vol. 19, no. 3, pp. 456–475, May 2015, doi: 10.1108/JKM-11-2014-0467.

[6]          M. Demarest, “Knowledge Management: An Introduction.” 1997. Accessed: Nov. 29, 2025. [Online]. Available: https://www.noumenal.com/marc/km1.pdf

[7]          X. Cong and K. V. Pandya, “Issues of Knowledge Management in the Public Sector,” Electronic Journal of Knowledge Management, vol. 1, no. 2, pp. 181–188, Dec. 2003, Accessed: Nov. 29, 2025. [Online]. Available: https://academic-publishing.org/index.php/ejkm/article/view/701

[8]          M. Biygautane and K. Al-Yahya, “Knowledge Management in the UAE’s Public Sector: The Case of Dubai”, [Online]. Available: https://repository.mbrsg.ac.ae/server/api/core/bitstreams/59089da4-c3d5-455b-811e-dbdf45d40a74/content

[9]          A. M. Al-Khouri, “Fusing Knowledge Management into the Public Sector: A Review of the Field and the Case of the Emirates Identity Authority,” Information and Knowledge Management, vol. 4, pp. 23–74, 2014, [Online]. Available: https://api.semanticscholar.org/CorpusID:152686699

[10]       H. Wang and V. Blok, “Why putting artificial intelligence ethics into practice is not enough: Towards a multi-level framework,” Big Data & Society, vol. 12, no. 2, p. 20539517251340620, June 2025, doi: 10.1177/20539517251340620.

[11]       B. Martin, “Knowledge Management and Local Government: Some Emerging Trends,” Asia Pacific Management Review, vol. 8, no. 1, Mar. 2003, Accessed: Nov. 29, 2025. [Online]. Available: https://www.proquest.com/docview/1115707453/abstract/8B7E1DC127544C9BPQ/1

How the culture of an organisation affects the use of ethical AI in the public sector

You may have noticed how quickly AI is moving in the public sector. Have you checked whether culture is keeping up? If you agree that “the systems aren’t the problem; organisational culture and mindsets are,” then you probably also agree that even advanced government AI can fail the people it’s meant to protect when it isn’t backed by good ethical habits.

Why Culture Matters for Ethical AI

Culture is what makes or breaks AI ethics. How can that be true? When an organisation cares about transparency and fairness, teams are far more likely to spot bias and question risky automation. Culture change needs leaders: when leaders show the organisation that ethics matters to them, through actions and not just slogans, employees follow their lead. Incentives reinforce the same message by rewarding responsible decisions. Because behaviour is what counts in AI ethics, open conversations about risk and accountability should be promoted and normalised, not left to chance. It is worth stressing that organisations with strong governance and ethical discipline handle bias, privacy issues, and transparency challenges with far more confidence. If you bring this to your organisation, the bottom line is: build a culture that empowers people to ask hard questions, and your AI will be safer, smarter, and more trusted.

Cultural Norms That Support Ethical AI

The main idea of this blog is that ethical AI isn’t just about technology; culture sits at its core. Imagine an organisation where teams can challenge assumptions without fear: biases would surface early, whether in procurement algorithms or in chatbot design for citizen services. Effective collaboration keeps blind spots in check, while strong data-transparency norms make it easier to trust AI tools such as policing systems. A slow-down mindset that insists on testing before deployment prevents high-risk mistakes in public health and other critical domains. Likewise, proactive risk awareness gives urban planners a chance to fix long-term equity issues before they harden into policy. Adopting such cultural traits reduces harm and creates a public-sector environment where AI is safer, smarter, and aligned with societal values.

Cultural Norms That Stop Ethical AI

Based on what we’ve covered so far, ethical AI fails quickly when culture gets in the way, and in many public-sector settings, strict hierarchy stops honest debate. If junior staff cannot question flawed procurement algorithms, should it surprise anyone when bias spreads unnoticed? Workplaces ruled by fear make things worse: people stop raising concerns, and risky AI tools slide through unchallenged. What I call “checkbox compliance” adds another layer of danger, where hitting the regulatory minimum becomes more important than asking, “Is this actually fair?” Blind trust in AI is another growing issue that amplifies mistakes; in urban planning, for instance, trusting a poor model can completely skew investment decisions. This is especially true when teams work in silos: each person focuses on their part while no one sees the full picture. Such norms don’t just slow progress; they erode public confidence and compromise ethical standards. In that sense, fixing culture is the first real step toward responsible AI.

Behaviours of leaders that affect how ethical AI is used

You don’t need a magician to make ethical AI possible, but there is no chance it happens by accident; it must be driven by leadership that sets the tone from day one. Strong leaders set expectations and make it clear that fairness, transparency, and accountability are mandatory. Words are not enough: leaders must back them with action, from commissioning ethical audits to explaining exactly how decisions are made and what data is used. They must fund governance, tools, and training so teams can spot risks before they escalate into crises. They also bring in diverse perspectives, because real innovation doesn’t come from a single viewpoint, whether technical or business. And when something doesn’t feel right, they slow things down, push for deeper analysis, and refuse to rush untested AI into public use. These behaviours do more than guide projects; they shape culture and signal priorities. Such leaders understand that, in organisations aiming to use AI responsibly, culture is the real safeguard that keeps technology aligned with human values.

Practical Steps to Create a Culture for Ethical AI

Creating a culture that genuinely supports ethical AI starts with everyday habits inside the organisation. The first step is to inject ethics into procurement and project gates, so every AI tool is checked for fairness and transparency before it goes live. Ethical AI practice also requires a multidisciplinary governance team (e.g., policy, law, technology, community engagement) so that decisions aren’t made in a vacuum. Equipping staff with practical training to recognise bias and risk must be prioritised, and it cannot be a one-time activity. People also need safe ways to challenge algorithmic decisions: no fear, no stigma, just clarity. And when teams show ethical leadership, rewarding them is a requirement, not a nicety. The aim of these measures is a culture where responsible AI is more than a slogan; it is a standard that keeps organisations credible, trusted, and ready for the future.
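The procurement-and-project-gate idea can be made concrete as a pre-deployment checklist that blocks launch until every item is signed off. The check names below are illustrative examples I have chosen, not an official standard:

```python
REQUIRED_CHECKS = [
    "fairness_review",      # bias tested across affected groups
    "transparency_review",  # decisions explainable to affected citizens
    "privacy_assessment",   # data-protection impact assessment completed
    "ethics_signoff",       # multidisciplinary governance team approval
]

def gate_decision(completed_checks):
    """Return (approved, missing): a tool passes the gate only when
    every required check has been completed and signed off."""
    missing = [c for c in REQUIRED_CHECKS if c not in completed_checks]
    return (len(missing) == 0, missing)

# A tool with only two of four sign-offs does not pass the gate.
approved, missing = gate_decision({"fairness_review", "privacy_assessment"})
print(approved, missing)
```

The design choice worth noting is that the gate is deny-by-default: approval is the exception that must be earned, which mirrors the "slow-down mindset" discussed earlier.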

Final Thoughts

In the end, ethical AI isn’t a technical matter; it’s a cultural one. The AI systems you build will mirror the behaviours you have in your organisation. So before you proceed in your AI implementations, ask yourself: does your organisation push for transparency, courage, and accountability, or just delivery? The future of ethical AI in your organisation starts with the mindset you promote right now.

 

 

 

 

References

[1]          M. Rezaei, M. Pironti, and R. Quaglia, “AI in knowledge sharing, which ethical challenges are raised in decision-making processes for organisations?,” MD, Apr. 2024, doi: 10.1108/MD-10-2023-2023.

[2]          B. Diab and M. El Hajj, “Ethics in the age of artificial intelligence: unveiling challenges and opportunities in business culture,” Cogent Business & Management, vol. 11, no. 1, p. 2408440, Dec. 2024, doi: 10.1080/23311975.2024.2408440.

[3]          C. Cannavale, L. Claudio, and D. Koroleva, “Digitalisation and artificial intelligence development. A cross-country analysis,” European Journal of Innovation Management, vol. 28, no. 11, pp. 112–130, Dec. 2025, doi: 10.1108/EJIM-07-2024-0828.

[4]          E. Özkan, “The importance of cultural diversity in Artificial Intelligence systems,” 2025.

[5]          A. Tlili, M. Denden, M. Abed, and R. Huang, “Artificial intelligence ethics in services: are we paying attention to that?!,” The Service Industries Journal, vol. 44, no. 15–16, pp. 1093–1116, Dec. 2024, doi: 10.1080/02642069.2024.2369322.

[6]          H. Wang and V. Blok, “Why putting artificial intelligence ethics into practice is not enough: Towards a multi-level framework,” Big Data & Society, vol. 12, no. 2, p. 20539517251340620, June 2025, doi: 10.1177/20539517251340620.

[7]          J. Rose, “The Ethics of Artificial Intelligence in Education: Practices, Challenges, and Debates: edited by Wayne Holmes and Kaśka Porayska-Pomsta, New York, NY, Routledge, 2023, 288 pp., $52.95 (paperback), $180 (hardback), $47.69 (ebook), 9780367349721, 9780367349714, 9780429329067,” The Serials Librarian, vol. 85, no. 5–6, pp. 169–171, July 2024, doi: 10.1080/0361526X.2024.2427948.

[8]          J. Guo, J. Chen, and S. Cheng, “Perception of Ethical Risks of Artificial Intelligence Technology in the Context of Individual Cultural Values and Intergenerational Differences: The Case of China,” Feb. 15, 2024, In Review. doi: 10.21203/rs.3.rs-3901913/v1.

 

Developing a Project Idea

Introduction

As artificial intelligence (AI) grows rapidly and transforms public services, complex ethical challenges are emerging that demand stronger governance. Many organisations now embed AI deep in their decision-making processes, which makes ethical AI deployment all the more crucial. In this growing domain, selecting a well-defined data and AI ethics project is challenging because of the many issues involved.

AI Ethics Interventions

There are many areas where data and AI ethics interventions are needed, but two are particularly important for my research:

  • AI Bias and Fairness: AI systems have been shown to reproduce societal biases, yet they are increasingly used in hiring processes and customer service applications. A lack of governance here may lead to discrimination against marginalised groups, affecting employment opportunities and consumer experiences.
  • AI and Privacy: The increasing use of AI for data analysis and automation raises serious privacy concerns, especially regarding personally identifiable information (PII). Driven by competition, governments and corporations process vast amounts of sensitive data, which requires robust data protection measures. AI governance must therefore address these concerns and prevent the misuse of personal data.
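To show that the bias concern above can be made measurable rather than abstract, here is a minimal sketch of one common group-fairness metric, the demographic parity difference (the gap in favourable-outcome rates between groups). It is an illustration with toy data, not a full fairness audit, and the choice of metric is itself an assumption:

```python
def demographic_parity_difference(outcomes, groups):
    """Absolute gap in positive-outcome rates between groups.

    outcomes: iterable of 0/1 decisions (1 = favourable, e.g. shortlisted)
    groups:   iterable of group labels, one per decision
    """
    rates = {}
    for y, g in zip(outcomes, groups):
        total, positives = rates.get(g, (0, 0))
        rates[g] = (total + 1, positives + y)
    per_group = {g: p / t for g, (t, p) in rates.items()}
    return max(per_group.values()) - min(per_group.values())

# Toy hiring data: group "a" is shortlisted 3/4, group "b" only 1/4.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

A single number like this never tells the whole story, but it turns "the system may be biased" into a claim a governance review can test and track over time.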

Ethical Interventions

To address these challenges, I will explore multiple ethical interventions. First, I will propose policy recommendations that help establish ethical guidelines and regulatory policies for fair AI practices. I will also explore how to strengthen oversight and auditing processes to hold AI systems accountable. In addition, I will offer AI developers ethical AI design frameworks that help incorporate fairness and transparency into AI systems.

Project Focus

While AI ethics research offers many possible directions, tackling broad ethical challenges can lead to scope creep, data accessibility problems, or missed timelines. To avoid these issues, I will keep the following questions in mind as a framework, aiming for a project that is both meaningful and achievable within the timeframe:

  • Is the issue pressing and impactful in the industry I prefer?
  • Can it be addressed within three months using available resources?
  • Can my project build upon and enhance existing AI governance models?
  • Who benefits from the research: policymakers, AI developers, or the public?

Final Project Idea

Based on this framework, and my preference for studying the public sector in the UAE, my research will focus on bridging the governance gap in the UAE’s public sector by evaluating AI governance frameworks and the use of AI ethics tools, mainly red teaming and auditing. I aim to assess whether existing AI governance frameworks effectively address the ethical risks associated with AI adoption.

If I structure this project well, it will be feasible within three months. My focus will be on developing a framework, analysing policies, reviewing case studies, gathering expert insights, and exploring how AI ethics tools can be integrated into UAE government AI governance strategies.

Conclusion

I can’t say that I have developed a well-defined AI ethics project yet; however, I think I’m on the right track. So far, I have identified key ethical concerns in AI governance and narrowed my focus to a structured research project that can contribute meaningful insights to responsible AI policymaking. It may still require another round of refinement, which I will undertake in my next steps, hoping it will strengthen the foundation of AI governance in the UAE.

Ethics in Action: Principles Guiding AI and Data Governance in my Project

Regarding the ethical dimensions of my project, my plan is to treat privacy as one key ethical dimension. I strongly support responsible data collection and management. This starts with making sure that participants’ personal information is anonymized and securely stored, and extends to using it only for the stated purpose. With this approach, my project will be in line with data protection laws and ethical standards.
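One common building block for this kind of anonymization is replacing identifiers with salted-hash pseudonyms, so records can still be linked without storing names or emails. The sketch below is a minimal illustration of that one technique; the salt handling and token length are assumptions, and a real project should follow its data-protection office’s guidance:

```python
import hashlib
import secrets

def make_pseudonymiser(salt: bytes):
    """Return a function mapping a participant ID to a stable pseudonym.

    The same ID always yields the same token (so records stay linkable),
    but the ID cannot be recovered without the secret salt.
    """
    def pseudonymise(participant_id: str) -> str:
        digest = hashlib.sha256(salt + participant_id.encode("utf-8"))
        return digest.hexdigest()[:16]
    return pseudonymise

# The salt must be stored separately from the data, access-controlled.
salt = secrets.token_bytes(16)
anon = make_pseudonymiser(salt)
record = {"participant": anon("jane.doe@example.org"), "response": "agree"}
print(record["participant"])
```

Note that pseudonymisation alone is not full anonymization: rare combinations of other fields can still re-identify people, which is why the broader safeguards discussed in this post remain necessary.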

Another important aspect is inclusivity. In my view, ensuring that different perspectives are represented matters, and this can be achieved by incorporating input from different stakeholders. For that purpose, my project will include opinions from different groups, helping to avoid the barriers faced by underrepresented communities. This, I think, helps create a solution that is both fair and effective.

In addition, I think transparency and trust are essential. Being transparent with participants and stakeholders about project goals, processes, and outcomes is a prerequisite for establishing trust. By clearly communicating these elements, my objective is to ensure that everyone involved feels valued and informed.

With respect to addressing ethics in the review process, I believe it is important to align the project with ethical norms. My approach adopts the following components. First, I will obtain informed consent, detailing the procedures for obtaining it so that participants understand their rights and how their data will be used. This includes providing easy-to-understand explanations of complex processes.

Next, it is important for my project to assess and mitigate risks to participants. Through active risk management, I will identify and minimize risks, for example by securing data and addressing the privacy concerns outlined in the ethical review form.

I will also treat inclusivity and equity as key focuses of the review process. I will explain how the intervention engages diverse communities and avoids reinforcing systemic barriers. Additionally, I plan to document decisions and project updates regularly to ensure accountability. This record is crucial for reflecting on the project and maintaining ethical standards throughout its implementation.

In conclusion, the ethical dimensions of my project will serve as guiding principles that enhance its value and impact, and I believe by embedding ethics into every stage, from planning to execution, I will ensure that the intervention respects human dignity, promotes fairness, and achieves meaningful outcomes for society. Incorporating ethical norms into projects helps balance innovation with responsibility, ensuring that our work contributes positively to the world. Through the ethical review process, I plan to establish a transparent, inclusive, and accountable framework for the intervention’s success.

Understanding the Implications and Anticipating Objections to Methodological Choices in Research

 

Methodological decisions are at the core of any research proposal: they define its structure and dictate the quality of its outcomes. Researchers must therefore be careful when making these decisions, as they come with significant implications and limitations that must be critically examined to ensure a robust and ethically sound approach. This blog discusses these implications and how I plan to manage and minimize their effects on my research project.

Implications of Methodological Choices

Every methodological decision involves compromises between priorities. Qualitative methods such as interviews allow detailed, in-depth exploration of a topic but are limited in generalizability, while quantitative methods, despite their wide adoption, may fail to capture nuance and risk oversimplification.

Ethical considerations must also be prioritized when deciding on a methodological approach. For instance, in an AI-based project, predictive modelling raises critical ethical concerns, such as the risk of biased results, a lack of inclusivity, and unrepresentative data; these demand attention to ensure the technology does not unintentionally reinforce systemic inequalities.

Anticipating and Addressing Objections

In my project, I anticipate that stakeholders will raise questions about the methodologies used, especially where ethical sensitivities or high resource demands are involved. The main concerns may include:

  1. Resource Intensity: Decision-makers may view hybrid methodologies as requiring significant time and resources. This perception could lead to hesitation in adopting these approaches and may limit the scope of my project.
  2. Ethical Concerns: My project may involve collecting information from a wide group of people, and in this case, I must pay attention to different aspects, such as issues related to privacy and confidentiality, ensuring informed consent, addressing potential biases in data collection, maintaining cultural sensitivity, and safeguarding the inclusivity and representativeness of the sample.

I will address these concerns through the following strategies:

  • Ensure Transparency: Regularly communicate with stakeholders about the rationale behind methodological choices, demonstrating how they align with ethical principles such as beneficence, non-maleficence, and accountability.
  • Engage Stakeholders Early: Actively involve stakeholders during the initial proposal and planning phases to build trust, encourage collaboration, and anticipate potential objections or concerns.
  • Implement Feedback Mechanisms: Establish iterative feedback processes that enable continuous refinement of methodologies and ensure alignment with stakeholder expectations throughout the project.

Limitations and Self-Critique

I believe there is no perfect methodology, and adopting a reflective approach helps identify and address potential limitations. For instance:

  • Bias Risks: Automated tools and AI methods may unintentionally introduce bias. Addressing this requires implementing bias mitigation strategies and involving diverse review teams to ensure fairness.
  • Scalability Challenges: Certain methods may not perform effectively with larger datasets or participant groups. Testing techniques through pilot studies before full-scale implementation is a practical way to manage this issue.
  • Ethical Complexity: Following ethical guidelines, such as those from the Edinburgh Futures Institute, involves balancing potential harm and benefits, safeguarding data privacy, and promoting equity throughout the research process.

Final Thoughts

From my perspective, being transparent about the limitations and ethical implications of a methodology strengthens the overall quality and ethical foundation of research. With this reflective approach, actively critiquing my methods and remaining open to stakeholder feedback, I can demonstrate a stronger commitment to integrity and ethical research practice.

Welcome!

Welcome to your new blog site!

In this fast-evolving digital era, data and artificial intelligence are reshaping nearly every aspect of our lives. From banking and healthcare to education and even governmental services, these technologies are making decisions that impact individuals, communities, and society at large. However, with that great power comes an even greater responsibility: how do we ensure that these technologies serve us fairly, transparently, and ethically?

This blog is dedicated to exploring the complexities, discussing challenges, coining ideas and suggesting solutions at the intersection of data, AI, and ethics. Here, I’ll delve into very important questions around data privacy, AI bias, accountability, and the very nature of decision-making. I’ll discuss the implications of AI-driven systems in areas like justice, equality, and autonomy, ask some difficult questions and look at how organizations can embed ethical practices into their data policies.

As we navigate this journey, the aim is not only to understand how these technologies work; I will also try to uncover how they should work for the good of people. I’m targeting a wide spectrum of audiences: tech enthusiasts, policymakers, and anyone curious about the ethical dilemmas of our digital age.

In simple terms, this blog will be your guide to understanding the role of AI and data in our world.

Ready to take off!

Amer Maithalouni

