I want to engage with a type of argument that is self-undermining. The months after Al-Qaeda’s attacks on New York and Washington in 2001 birthed an immoral decision: the USA, at the highest level, began publicly legitimising torture as a means to an end in the War on Terror, as opposed to having the decency to pretend not to. Opponents would often use a two-step argument. First they would lay out why torture was repugnant: it harms the tortured, stains the torturer, and degrades the society that permits it. Second, they would add as a final flourish, ‘torture doesn’t even work!’ Torture produces bad, unreliable intelligence. That’s lucky, I thought. But what if it did work? Any argument where morality relies on pragmatism risks the equation changing against it. Where do you stand then?
The same argument tends to bubble up over AI. In my work, outlandish claims are often made about police AI by those developing the systems: private solutions offered by companies promising to use data integration to predict where crime will happen so that police resources can be distributed accordingly. In response, critics rehearse the damage to democracy when decisions are taken beyond any possible oversight and the danger to human autonomy, and then we say, ‘And it doesn’t even work!’ Critics are usually right. These systems overpromise and deliver little more than a beat cop’s common sense (crime happens where lots of young males hang out, where there are targets, and where there is little oversight). We also point to the biases worked into these systems, so that even a system without any explicit racial categorisation will still faithfully reproduce racial biases in its effects.
If someone then designed a workable pre-crime system, one that did not simply replicate existing biases in the datasets, then what? These arguments do not go away, and we should continue to critique the effects as well as the principle, but the problem is that the pragmatic argument is a dodge. It is a way of avoiding the claim morality places on us: that we have to take decisions against our own interests and convenience, that there are paths we should not follow. A workable pre-crime system is something that should never be developed, because it removes any notion of culpability and effectively end-runs due process. Such systems cease to have any hope of democratic oversight, and they shift power from communities to states and remote techbrotopia corporations.
On the other hand, that argument might be too purist. It is through their practical effects that we approach technologies and policies, and therefore leading with in-context pragmatism is not a bad move to make. Further, we should not assume pragmatism is downstream of morality. Mostly we reason in the other direction: we ask ‘how does this affect me … how does it affect my context … and, maybe after a bit, how does it affect the world I will come to live in?’ The two are more closely bound together than they are usually presented, as we see in, e.g., the ‘effects’ of Bitcoin. The role of sociology, therefore, is to interject at exactly that point, where human visions of a moral life intersect with human realities about how one can live well.