When people talk about the future of technology, they often focus on innovation and efficiency. But in the world of data and AI, the real question is not just what we can build, but how we choose to build it, and with whom. More and more, I've come to see that creating ethical data futures isn't a solo task. It's something we have to do together. Working with others helps us uncover blind spots, surface disagreements, and ask uncomfortable but necessary questions about fairness, privacy, and power. As Shannon Vallor (2018) points out, ethics in the data age isn't just about following rules; it's about cultivating habits of reflection and responsibility in complex social contexts. And as Ruha Benjamin (2019) warns, without critical collaboration, we risk using new technologies to reinforce old injustices.
In this essay, I explore how collaboration helps us engage more deeply with data ethics, drawing from two learning group discussions: one on driving data and insurance fairness (Week 4), and another on the University of Edinburgh's release of the ELM AI tool (Week 6). I show how working with peers allowed for ethical deliberation, disagreement, and joint evaluation: core practices that make data ethics more grounded, human, and realistic.
One of the most important reasons to collaborate in data ethics is that no single person sees the full picture. Technologies are designed by some, used by many, and affect almost everyone. That means ethical decisions about data often require input from multiple people with different experiences and values. Vallor (2018) calls this "ethical deliberation": the process of weighing competing interests and making choices that reflect shared responsibility rather than individual opinion.
In our Week 4 discussion about the sale of driving data by car companies to insurance providers, I saw ethical deliberation in action. Some people argued that this kind of data sharing could lead to more accurate, fairer pricing. Others, including myself, pushed back, pointing out that what seems fair in theory can be very different in practice. I wrote that the idea of personalized insurance based on driving behavior "might punish night-shift workers, people who drive long distances for work, or anyone whose habits don't match the 'ideal driver profile'" (LG19.4). What struck me in our discussion was how others added real-life examples I hadn't considered, like how driving late at night might automatically be flagged as risky, even if someone has never had an accident. This back-and-forth helped us uncover not just privacy concerns, but deeper issues of structural inequality.
This connects directly to Ruha Benjamin's (2019) idea of the "New Jim Code," where new technologies reinforce old patterns of discrimination under the guise of objectivity. She argues that algorithmic systems often carry forward racial, class, and gender biases, not because of malicious intent, but because those systems are trained on biased data and used without enough reflection. The driving data example showed exactly that: by using driving behavior as a proxy for risk, insurers could end up penalizing people simply for living or working under different conditions. Collaboration helped us recognize this; without hearing others' experiences and perspectives, I might have missed the extent to which seemingly neutral data practices can be discriminatory.
Another ethical skill that came up during this group work was ethical contestation: the ability to challenge each other when values or assumptions conflict. Vallor (2018) emphasizes this as part of growing moral capability: it's not enough to agree; we also need to respectfully disagree. During the same Week 4 discussion, a peer suggested that data sharing might be acceptable with stronger consent mechanisms. I responded by questioning whether real consent was even possible in such situations, where people often "click agree" without understanding the consequences. I wrote that "this isn't real consent; it's coercion disguised as choice" (LG19.4). That kind of pushback might seem confrontational, but in our group it helped clarify where the ethical boundaries were for different people. It also made me realize that not everyone sees the same risks or priorities, and that's exactly why we need each other.
In our Week 6 discussion about the University's launch of the ELM AI tool, the collaboration took a slightly different form: we were engaging in ethical evaluation, trying to judge whether the University made the right call. This required balancing different kinds of ethical concerns, including access, fairness, innovation, and risk. Some peers were optimistic, highlighting the benefits of making AI tools freely available to students. Others, including myself, questioned the timing and depth of stakeholder engagement. I wrote that "the University seemed to focus more on rollout speed than inclusive consultation" and that this limited trust (LG9.6). A classmate added that while ELM may be safer than commercial tools, it still relies on models from OpenAI, which introduces questions about transparency and academic integrity (LG9.6).
What I learned here is that collaboration doesn't always lead to consensus, but it does lead to a richer understanding of what's at stake. Ben Green (2019) argues that ethical evaluation is a political act: it's about negotiating competing goals and deciding which values to prioritize. In our group, we didn't all agree on whether ELM's release was justified, but we identified both strengths and gaps. This kind of discussion shows that ethics is not about having the perfect answer; it's about being willing to ask better questions, together.
Before this course, I often thought of ethics as a set of rules: do this, don't do that. But through our group discussions, I've started to see ethics as a process, something we build together through dialogue, disagreement, and reflection. The most memorable experience for me was Week 4, when we debated whether car companies should be allowed to sell drivers' behavioral data. Going into the discussion, I had a clear opinion: this kind of data use was invasive and unfair. But hearing others explain how data could be used to improve safety or help price insurance more accurately made me pause. I didn't fully change my mind, but I did start thinking in more nuanced terms. That's one of the biggest things collaboration gave me: depth.
I also became more aware of how positionality shapes ethical perspectives. A classmate pointed out that working-class drivers or night-shift workers might be unfairly penalized by "neutral" data models. That comment hit me, because I realized I hadn't considered how class or job type could affect someone's risk profile in such systems. It reminded me of what Ruha Benjamin (2019) calls "discriminatory design": when tools that look neutral are actually structured in ways that reinforce inequality. It also made me more cautious about assuming that any one design or system is truly fair. Ethics isn't just about what the technology does; it's about who it helps and who it leaves out.
The Week 6 ELM case was a different kind of experience. It wasn't about corporate tech, but about something close to home: a decision by our own University. I appreciated how the discussion forced me to take a broader view. On one hand, I understood why the University wanted to act quickly and give students access to AI tools in a safe environment. On the other, I couldn't help but feel that the lack of prior consultation weakened the ethical foundation of the launch. What made this conversation especially meaningful was how different people interpreted the same action in different ways. One person saw it as proactive leadership; another saw it as a top-down decision that ignored student voices. This divergence helped me understand that collaboration isn't always about agreement. Sometimes, it's about learning how to live with ethical ambiguity.
Overall, these group experiences have shifted the way I approach data ethics. I've learned that working with others means accepting discomfort, being open to being wrong, and taking the time to listen. Vallor (2016) argues that ethical reasoning in tech must be guided by virtues like humility, empathy, and courage. I didn't understand what that meant until I was in a group trying to wrestle with questions that didn't have easy answers. In the end, I've come to see ethical collaboration not just as a method, but as a mindset, one that I hope to carry forward into future work with data and technology.
So, how can we build more ethical data futures together? Based on what I've learned, the answer isn't about having perfect knowledge or fixed rules. It's about showing up with our uncertainties, our disagreements, and our willingness to listen, and doing the work together. Collaboration allows us to uncover hidden harms, question dominant assumptions, and evaluate choices from multiple perspectives. It doesn't guarantee that we'll make the "right" decision every time, but it increases the chances that our decisions will be thoughtful, inclusive, and grounded in real human concerns.
The theories of Shannon Vallor and Ruha Benjamin helped me make sense of these group experiences. Vallor reminds us that ethics is a habit, something we must practice in real life with real people, not just talk about in theory. Benjamin urges us to pay attention to the structural forces that shape technology, warning that innovation without reflection can reproduce old injustices. Both scholars show that data ethics isn't about individual brilliance; it's about shared responsibility.
If there's one thing I'm taking away from this course, it's that building ethical futures requires both values and voices. We need designers, policymakers, users, and communities to work together, not just to critique systems, but to reimagine them. As our world becomes more data-driven, this kind of collaboration is not optional; it's essential. It's what turns technical decisions into moral ones. And it's what helps us ensure that the future we build is one we can all live with.