Tech doesn’t kill people, over-capitalised techbros kill people.
We are in the tail end of the AI hype/panic cycle, well beyond noting that LLMs can reproduce human failings. If we are to use a tool like ChatGPT, or any generative AI, it is vital to note some assumptions or myths that go along with it. These are:
The assumption of neutrality (responses do not reflect an identifiable ideology)
The assumption of ethics (responses are tailored to do no harm, for example, not affirming suicidal ideation)
The assumption of stability (answers are consistent; see the sketch after this list)
The assumption of competence and understanding (responses draw on a curated store of human knowledge, and the system will not, for example, misadvise a user about their rights and liabilities)
The conversational assumption (interactions are private and responsive).
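The stability assumption, in particular, is easy to test. Below is a minimal sketch, assuming the OpenAI Python client and an API key in the OPENAI_API_KEY environment variable; the model name and question are illustrative, not prescriptive. Asking the same question twice at a nonzero sampling temperature will often produce answers that differ, sometimes in substance rather than just wording.

```python
# Minimal sketch: ask the same question twice and compare the answers.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()
question = "In one sentence, is social media good for teenagers?"

answers = []
for _ in range(2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": question}],
        temperature=1.0,  # nonzero temperature: sampled, not deterministic
    )
    answers.append(response.choices[0].message.content)

print(answers[0])
print(answers[1])
```

Even at temperature 0 the answers are not guaranteed to be identical across runs or model versions, which is the point: consistency is an assumption, not a property.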
Underlying all of this is an anthropomorphic bias. On both sides, techno-panic and techno-love, there is a tendency to impute anthropomorphised qualities to the machine, and then to get frustrated when those qualities are not there: when the machine reveals itself as it is, not as we would all like it to be.

Personally, I prefer it when the machine is clearly a machine, clearly a tool that I can use. By a tool I mean a system with clearly defined characteristics that are reasonably predictable. I find it much better to use AI as a device in that way. Even if I use it as my robot companion, at least I know what it is being used for. If I use it to create a digest of reports on a topic, and I know its limits, that is an effective use case (sketched at the end of this section).

Any critical perspective on technology is going to have to interrogate these assumptions. But that also means interrogating the reverse: for example, the assumption that a particular technology is inherently harmful.
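To make the digest-of-reports use case concrete, here is a minimal sketch along the same lines as the one above; the report excerpts, prompt wording, and model name are illustrative placeholders, not a definitive implementation.

```python
# Minimal sketch of the digest-of-reports use case. The report excerpts
# are hypothetical; in practice they would be loaded from files.
from openai import OpenAI

client = OpenAI()

reports = [
    "Report A: regional survey of school device policies ...",
    "Report B: meta-analysis of screen-time studies ...",
]

prompt = (
    "Summarise the following reports into a one-paragraph digest, "
    "noting any points on which they disagree:\n\n" + "\n\n".join(reports)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Knowing the tool's limits here means checking the digest against the sources: the machine can compress, but it can also distort.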