When people talk about the future of technology, they often focus on innovation and efficiency. But in the world of data and AI, the real question is not just what we can build, but how we choose to build it—and with whom. More and more, I’ve come to see that creating ethical data futures isn’t a solo task. It’s something we have to do together. Working with others uncovers blind spots, surfaces disagreements, and forces us to ask uncomfortable but necessary questions about fairness, privacy, and power. As Shannon Vallor (2018) points out, ethics in the data age isn’t just about following rules—it’s about cultivating habits of reflection and responsibility in complex social contexts. And as Ruha Benjamin (2019) warns, without critical collaboration, we risk using new technologies to reinforce old injustices.

In this essay, I explore how collaboration helps us engage more deeply with data ethics, drawing from two learning group discussions: one on driving data and insurance fairness (Week 4), and another on the University of Edinburgh’s release of the ELM AI tool (Week 6). I show how working with peers allowed for ethical deliberation, disagreement, and joint evaluation—core practices that make data ethics more grounded, human, and realistic.

One of the most important reasons to collaborate in data ethics is that no single person sees the full picture. Technologies are designed by some and used by many, but they affect almost everyone. That means ethical decisions about data often require input from multiple people with different experiences and values. Vallor (2018) calls this “ethical deliberation”—the process of weighing competing interests and making choices that reflect shared responsibility rather than individual opinion.

In our Week 4 discussion about the sale of driving data by car companies to insurance providers, I saw ethical deliberation in action. Some people argued that this kind of data sharing could lead to more accurate, fairer pricing. Others—including myself—pushed back, pointing out that what seems fair in theory can be very different in practice. I wrote that the idea of personalized insurance based on driving behavior “might punish night-shift workers, people who drive long distances for work, or anyone whose habits don’t match the ‘ideal driver profile’” (LG19.4). What struck me in our discussion was how others added real-life examples I hadn’t considered—like how driving late at night might automatically be flagged as risky, even if someone has never had an accident. This back-and-forth helped us uncover not just privacy concerns, but deeper issues of structural inequality.

This connects directly to Ruha Benjamin’s (2019) idea of the “New Jim Code,” where new technologies reinforce old patterns of discrimination under the guise of objectivity. She argues that algorithmic systems often carry forward racial, class, and gender biases—not because of malicious intent, but because those systems are trained on biased data and used without enough reflection. The driving data example showed exactly that: by using driving behavior as a proxy for risk, insurers could end up penalizing people simply for living or working under different conditions. Collaboration helped us recognize this; without hearing others’ experiences and perspectives, I might have missed the extent to which seemingly neutral data practices can be discriminatory.
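To make the proxy problem concrete, here is a minimal, purely illustrative sketch in Python. Every field name, weight, and threshold is invented for this example and is not drawn from any real insurer's pricing model; the point is only to show how a score built on behavioral proxies could charge a night-shift worker more than an office worker with an identical, accident-free record.

```python
# A deliberately naive, hypothetical risk score. Every name, weight, and
# threshold below is invented for illustration and is not based on any
# real insurer's model.
from dataclasses import dataclass


@dataclass
class DriverProfile:
    night_trip_share: float  # fraction of trips taken between 22:00 and 05:00
    weekly_miles: float      # average miles driven per week
    past_accidents: int      # at-fault accidents in the last five years


def naive_risk_score(p: DriverProfile) -> float:
    """Higher score means a higher hypothetical premium."""
    score = 1.0
    score += 2.0 * p.night_trip_share              # night driving treated as risky in itself
    score += 0.001 * max(p.weekly_miles - 100, 0)  # long work commutes penalized
    score += 1.5 * p.past_accidents                # the only direct safety signal
    return score


# Two drivers with identical, accident-free records:
office_worker = DriverProfile(night_trip_share=0.05, weekly_miles=80, past_accidents=0)
night_shift_nurse = DriverProfile(night_trip_share=0.80, weekly_miles=220, past_accidents=0)

print(naive_risk_score(office_worker))      # about 1.1
print(naive_risk_score(night_shift_nurse))  # about 2.72 -> higher premium, same safety record
```

The particular numbers do not matter. What matters is the structure: once "drives at night" or "drives a lot" stands in for risk, the score reproduces differences in working conditions rather than differences in safety, which is exactly the pattern Benjamin describes.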

Another ethical skill that came up during this group work was ethical contestation—the ability to challenge each other when values or assumptions conflict. Vallor (2018) emphasizes this as part of growing moral capability: it’s not enough to agree; we also need to be willing to disagree respectfully. During the same Week 4 discussion, a peer suggested that data sharing might be acceptable with stronger consent mechanisms. I responded by questioning whether real consent was even possible in such situations, where people often “click agree” without understanding the consequences. I wrote that “this isn’t real consent—it’s coercion disguised as choice” (LG19.4). That kind of pushback might seem confrontational, but in our group it helped clarify where the ethical boundaries were for different people. It also made me realize that not everyone sees the same risks or priorities, and that’s exactly why we need each other.

In our Week 6 discussion about the University’s launch of the ELM AI tool, the collaboration took a slightly different form: we were engaging in ethical evaluation, trying to judge whether the University made the right call. This required balancing different kinds of ethical concerns—access, fairness, innovation, and risk. Some peers were optimistic, highlighting the benefits of making AI tools freely available to students. Others, including myself, questioned the timing and depth of stakeholder engagement. I wrote that “the University seemed to focus more on rollout speed than inclusive consultation” and that this limited trust (LG9.6). A classmate added that while ELM may be safer than commercial tools, it still relies on models from OpenAI, which introduces questions about transparency and academic integrity (LG9.6).

What I learned here is that collaboration doesn’t always lead to consensus—but it does lead to a richer understanding of what’s at stake. Ben Green (2019) argues that ethical evaluation is a political act—it’s about negotiating competing goals and deciding which values to prioritize. In our group, we didn’t all agree on whether ELM’s release was justified, but we identified both strengths and gaps. This kind of discussion shows that ethics is not about having the perfect answer—it’s about being willing to ask better questions, together.

Before this course, I often thought of ethics as a set of rules: do this, don’t do that. But through our group discussions, I’ve started to see ethics as a process—something we build together through dialogue, disagreement, and reflection. The most memorable experience for me was Week 4, when we debated whether car companies should be allowed to sell drivers’ behavioral data. Going into the discussion, I had a clear opinion: this kind of data use was invasive and unfair. But hearing others explain how data could be used to improve safety or help price insurance more accurately made me pause. I didn’t fully change my mind, but I did start thinking in more nuanced terms. That’s one of the biggest things collaboration gave me: depth.

I also became more aware of how positionality shapes ethical perspectives. A classmate pointed out that working-class drivers or night-shift workers might be unfairly penalized by “neutral” data models. That comment hit me, because I realized I hadn’t considered how class or job type could affect someone’s risk profile in such systems. It reminded me of what Ruha Benjamin (2019) calls “discriminatory design”—when tools that look neutral are actually structured in ways that reinforce inequality. It also made me more cautious about assuming that any one design or system is truly fair. Ethics isn’t just about what the technology does—it’s about who it helps and who it leaves out.

The Week 6 ELM case was a different kind of experience. It wasn’t about corporate tech, but something close to home: a decision by our own University. I appreciated how the discussion forced me to take a broader view. On one hand, I understood why the University wanted to act quickly and give students access to AI tools in a safe environment. On the other, I couldn’t help but feel that the lack of prior consultation weakened the ethical foundation of the launch. What made this conversation especially meaningful was how different people interpreted the same action in different ways. One person saw it as proactive leadership; another saw it as a top-down decision that ignored student voices. This divergence helped me understand that collaboration isn’t always about agreement. Sometimes, it’s about learning how to live with ethical ambiguity.

Overall, these group experiences have shifted the way I approach data ethics. I’ve learned that working with others means accepting discomfort, being open to being wrong, and taking the time to listen. Vallor (2016) argues that ethical reasoning in tech must be guided by virtues like humility, empathy, and courage. I didn’t understand what that meant until I was in a group trying to wrestle with questions that didn’t have easy answers. In the end, I’ve come to see ethical collaboration not just as a method, but as a mindset—one that I hope to carry forward into future work with data and technology.

So, how can we build more ethical data futures together? Based on what I’ve learned, the answer isn’t about having perfect knowledge or fixed rules. It’s about showing up—with our uncertainties, our disagreements, and our willingness to listen—and doing the work together. Collaboration allows us to uncover hidden harms, question dominant assumptions, and evaluate choices from multiple perspectives. It doesn’t guarantee that we’ll make the “right” decision every time, but it increases the chances that our decisions will be thoughtful, inclusive, and grounded in real human concerns.

The theories of Shannon Vallor and Ruha Benjamin helped me make sense of these group experiences. Vallor reminds us that ethics is a habit, something we must practice in real life with real people—not just talk about in theory. Benjamin urges us to pay attention to the structural forces that shape technology, warning that innovation without reflection can reproduce old injustices. Both scholars show that data ethics isn’t about individual brilliance—it’s about shared responsibility.

If there’s one thing I’m taking away from this course, it’s that building ethical futures requires both values and voices. We need designers, policymakers, users, and communities to work together—not just to critique systems, but to reimagine them. As our world becomes more data-driven, this kind of collaboration is not optional—it’s essential. It’s what turns technical decisions into moral ones. And it’s what helps us ensure that the future we build is one we can all live with.