Just adding to the rash of ‘stop thinking disinformation is the problem’ takes out there. The high-cultural-capital crew like to imagine there is a bright line between legal and illegal, organised and opportunistic, rational and emotional. They also like to imagine they know which side of the line they would be on. Likewise with information manipulation. Disinformation is the cosy mug of Horlicks that the information-saturated set reach for to explain democratic events they do not like. Why, if only those damned Twitter bots did not spin constant lies and rile up the little people, then we wouldn’t have to deal with events, which are inherently tiresome. On the other hand, maybe lots of people just do not want to lie in the bed you have made for them.
Challenge: what’s the difference between propaganda, political campaigning, advertising, and information operations, if there is one? Why is it okay when our side does it? There is a long history of fake news and election manipulation. In an early example of cancel culture, the ostrakon was used to expel citizens from Athens and was highly open to manipulation. The Soviet Union planted a fake news story in an Indian newspaper claiming AIDS was a US-developed bioweapon. Many governments engage in shaping public opinion, from the health-related (cigarettes be bad) to the attitudinal. Plenty of institutions engage in astroturfing and other manipulation of civil society. What is new in the current climate is the global reach of nationally motivated hacker groups, not all of which are, or need to be, centrally coordinated.
Disinformation is a powerful propaganda tool used by nationalist regimes to suppress civil liberties, undermine opposition or simply swamp out independent information, and it serves as an instrument of international relations. But it has also itself become a go-to tool for strategic de-legitimation (you used the fakes!). There is a fear of institutional fragility here which speaks of a moral uncertainty in liberal elites. More neutrally, an information economy and polity is exposed to attempts to manipulate information for strategic ends, as opposed to the everyday annoyances of ordinary spin.
The narrative around disinformation feeds the idea that certain choices can be objectively deemed false. Discussions of Brexit always imply there is something false about it: Cameron called the vote for cynical party-management reasons and Johnson falsely posed as an anti-EU populist. Falsity lets us off the hook of doing proper material, historical analysis.
Disinformation strikes at a number of questions at the intersection of information science, the sociology of markets, the sociology of technology and the philosophy of knowledge: how can disinformation be defined and recognised, and how can systems be made resilient against it? There are several thorny ontological and epistemological questions spanning the politics of knowledge, preference falsification, technical and social verification, and conceptual space theory. We do not easily know disinformation when we see it, so we first need agreement that we are in fact talking about the same thing.
There are a number of developments in the organisation of information markets that are live right now and which mean the problem is not open to immediate technical or oversight solutions: the financialisation of disinformation, the vertical integration of political campaigns with new media, and the development of a distributed labour infrastructure which is agile and available. There is also a collective effervescence to disinformation action. When Russian hackers take out Estonia’s infrastructure or Chinese internet activists DDoS US government websites, this is partly for the joy of it. Therefore we should consider this a type of national political action and participation, not just a wily propaganda ploy.