AI is moving fast in the public sector; is culture keeping up? If you agree that “the systems aren’t the problem; organisational culture and mindsets are,” then you probably also accept that even advanced government AI can fail the people it is meant to protect when the organisation behind it lacks good ethical habits.
Why Culture Matters for Ethical AI
Culture makes or breaks AI ethics. When an organisation genuinely values transparency and fairness, teams are far more likely to spot bias and question risky automation. Culture change starts with leaders: when they show, through actions rather than slogans, that ethics matters to them, employees follow their lead. Incentives reinforce the same message by rewarding responsible decisions. Because behaviour is what counts in AI ethics, open conversations about risk and accountability should be normalised, not occasional. Organisations with strong governance and ethical discipline handle bias, privacy issues, and transparency challenges with far more confidence. The bottom line: build a culture that empowers people to ask hard questions, and your AI will be safer, smarter, and more trusted.
Cultural Norms That Support Ethical AI
The central idea of this post is that ethical AI isn’t just about technology; culture sits at its core. Imagine an organisation where teams can challenge assumptions without fear: biases surface early, whether in procurement algorithms or in chatbot design for citizen services. Effective collaboration keeps blind spots in check, and strong data-transparency norms make it easier to trust AI tools, such as policing AI systems. A slow-down mindset that requires testing before deployment prevents high-risk mistakes in public health and other critical domains, and proactive risk awareness gives urban planners a chance to fix long-term equity issues before they harden into policy. Adopting these cultural traits reduces harm and creates a public-sector environment where AI is safer, smarter, and aligned with societal values.
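What “testing before deployment” can look like in practice is often very simple. The sketch below is a minimal, hypothetical pre-deployment fairness screen (not any official audit method): it compares a model’s approval rates across demographic groups and flags large disparities, in the spirit of the widely used four-fifths rule of thumb. All names here (`disparity_ratio`, the sample data) are illustrative assumptions.

```python
# Minimal sketch of a pre-deployment fairness screen, assuming decisions
# have been recorded as (group, approved) pairs. Illustrative only.
from collections import defaultdict

def disparity_ratio(decisions):
    """Return the ratio of the lowest to the highest group approval rate.

    A value well below 1.0 (commonly, below 0.8) suggests one group is
    approved far less often than another and warrants human review.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values())

# Hypothetical sample: group A is approved 80% of the time, group B only 40%.
sample = [("A", True)] * 8 + [("A", False)] * 2 + \
         [("B", True)] * 4 + [("B", False)] * 6
ratio = disparity_ratio(sample)
print(f"disparity ratio: {ratio:.2f}")  # values below 0.8 warrant review
```

A check this small obviously doesn’t settle whether a system is fair, but embedding even a crude screen like this into the release process is exactly the kind of habit a slow-down culture makes routine.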
Cultural Norms That Stop Ethical AI
Ethical AI fails quickly when culture gets in the way. In many public-sector settings, strict hierarchy stifles honest debate: if junior staff cannot question a flawed procurement algorithm, is it surprising that bias spreads unnoticed? Workplaces ruled by fear make things worse; people stop raising concerns, and risky AI tools slide through unchallenged. “Checkbox compliance” adds another layer of danger, when meeting regulatory minimums becomes more important than asking, “Is this actually fair?” Blind trust in AI amplifies mistakes further; in urban planning, for instance, an unquestioned model can completely skew investment decisions. The problem is worst when teams work in silos: each person focuses on their own part while no one sees the full picture. Such norms do more than slow progress; they erode public confidence and compromise ethical standards. Fixing culture is therefore the first real step toward responsible AI.
Leadership Behaviours That Shape Ethical AI
Ethical AI doesn’t require a magician, but it never happens by accident; it must be driven by leadership that sets the tone from day one. Strong leaders set expectations and make clear that fairness, transparency, and accountability are mandatory. Words are not enough: leaders back them with action, from commissioning ethical audits to explaining exactly how decisions are made and what data is used. They fund governance, tools, and training so teams can spot risks before they escalate into crises. They bring in diverse perspectives, because real insight rarely comes from a single viewpoint, technical or otherwise. And when something doesn’t feel right, they slow down, push for deeper analysis, and refuse to rush untested AI into public use. These behaviours do more than guide projects; they shape culture and signal priorities. Leaders like this understand that, in organisations aiming to use AI responsibly, culture is the real safeguard keeping technology aligned with human values.
Practical Steps to Create a Culture for Ethical AI
Creating a culture that genuinely supports ethical AI starts with everyday habits inside the organisation. The first step is to inject ethics into procurement and project gates, so every AI tool is checked for fairness and transparency before it goes live. Ethical AI practice also needs a multidisciplinary governance team (policy, law, technology, community engagement) so decisions aren’t made in a vacuum. Staff should be equipped with practical training to recognise bias and risk, and that training cannot be a one-time activity. People need safe ways to challenge algorithmic decisions: no fear, no stigma, just clarity. And when teams show ethical leadership, they should be rewarded for it. Together, these measures create a culture where responsible AI is more than a slogan; it is a standard that keeps organisations credible, trusted, and ready for the future.
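The “project gate” idea above can even be made concrete in tooling. The sketch below is a hypothetical ethics gate (the class name, check names, and methods are all illustrative assumptions, not a real framework): deployment stays blocked until every review item has been explicitly signed off.

```python
# Hypothetical sketch of an ethics "project gate": deployment is blocked
# until every review item has been signed off. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class EthicsGate:
    checks: dict = field(default_factory=lambda: {
        "bias_audit_completed": False,
        "data_sources_documented": False,
        "appeal_process_defined": False,
        "multidisciplinary_signoff": False,
    })

    def approve(self, item):
        """Record that a named review item has been signed off."""
        if item not in self.checks:
            raise KeyError(f"unknown check: {item}")
        self.checks[item] = True

    def ready_to_deploy(self):
        """True only when every check has been approved."""
        return all(self.checks.values())

    def outstanding(self):
        """List the checks still blocking deployment."""
        return [name for name, done in self.checks.items() if not done]

gate = EthicsGate()
gate.approve("bias_audit_completed")
print(gate.ready_to_deploy())  # False until every check passes
print(gate.outstanding())
```

The point is not the code itself but the design choice it encodes: making the ethics review a hard gate in the pipeline, rather than an optional checklist, is what turns “responsible AI” from a slogan into a standard.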
Final Thoughts
In the end, ethical AI isn’t just a technical matter; it’s a cultural one. The AI systems you build will mirror the behaviours of your organisation. So before your next AI implementation, ask yourself: does your organisation reward transparency, courage, and accountability, or just delivery? The future of ethical AI in your organisation starts with the mindset you promote right now.




