From my personal work experience

I remember working at a hotel front desk. Every day I would check in a large number of travellers and collect their personal information: identity documents, contact details, car registration numbers, even flight details. It was all done as a matter of course. We never explained to our guests how this information would be used, and they never seemed to question it.

In retrospect, this ‘default consent’ is a risky social phenomenon. Almost imperceptibly, as a culture, we are becoming accustomed to not asking where our data goes, and that habit weakens the ethical foundation of ‘informed consent’.

Insights from my course, Ethical Data Futures

In my course, Ethical Data Futures, we discussed in depth the centrality of ‘informed consent’ in data ethics. Shannon Vallor (2018) points out that algorithms are not neutral tools but products that reflect human value judgements; the collection and use of data, and the algorithms built on them, are therefore essentially social practices.

In the book Voices in the Code, there is a striking case of a woman who was denied pain relief because a risk-scoring system rated her too high; astonishingly, medication prescribed for her own pet had been counted against her. Even more shocking, she was never told how she had been scored, nor given any path to appeal. Algorithms have become a hidden force in the denial of human rights, and all of this happens without any real opportunity for ‘informed consent’.

In my essay, I examined the use of data in the car insurance industry. In the US, General Motors (GM) collected data on users’ driving behaviour through its OnStar system and quietly resold it to third parties, which insurers then used to raise premiums. The consent users signed was often buried in vague privacy policies, making it difficult for them to understand what they were actually agreeing to.

In the UK, a similar ‘pay-as-you-drive’ model, while seemingly fairer, still suffers from opaque pricing algorithms. Clicking the ‘agree’ button does not mean that you really understand. Consent of this kind ultimately undermines the public’s ability to protect their rights, and it is especially unfair to groups with lower data literacy.

On reflection, I believe ‘public participation’ must not stop at consultation processes. The most important step is to help people realise that they have the right to informed consent.

Returning to my experience at the hotel, I witnessed exactly this lack of a sense of entitlement among guests. They usually did not question, and they did not feel they had the right to question. Just as in the algorithmic society, users are used to clicking YES without ever realising that they can, and should, refuse unreasonable data use.

Empowering the public to ‘say no’ is the foundation for ethical co-construction.

While public participation is critical, as Robinson (2020) points out, the public may not always be active or knowledgeable enough to take part in systemic reform. Institutional government involvement is therefore indispensable.

In the General Motors case, the US Federal Trade Commission (FTC) intervened after media exposure, prohibiting the company from selling user data to credit agencies for five years. Although this is after-the-fact oversight, it demonstrates the need for government intervention.

However, an outright ban could have negative effects: prohibiting all data sales could hinder the feedback mechanisms companies need to evaluate and improve their algorithms. Rather than prohibition, reasonable restriction would be more effective. The government should set clear boundaries that protect the public interest while still allowing technological progress.

Back to that familiar hotel front desk scenario. If we could take the time to tell our guests, ‘This information will be used for ……; may we use it this way?’, maybe they would feel respected, maybe they would ask more questions, maybe they would actually think about their right to choose.

In a data-driven society, building an ethical future is not just a task for programmers or policymakers, but a responsibility and a right for every ordinary person who clicks the YES button.

What we need is not to reject technology, but to raise awareness.
The next time you see that ‘I agree’ button – do you really understand it?