Influence Government

Research into the practices and impacts of government use of online targeted advertising for behavioural change

Thinking about the ethics of police use of targeted online advertising

There are a number of ethical issues which we believe emerge from public sector, and in particular police, use of online targeted advertising techniques, and which we argue will be central concerns for any future incorporation of these techniques into democratic policing in the UK. It is important to note that police communication is not new, and neither is audience segmentation or targeting. Communications have been a core part of policing practice, both in their ‘PR’ mode and in wider ‘awareness’ campaigns, for many decades. However, these new forms of campaign explicitly go beyond the provision of information to the public, seeking to directly shape behaviour.

The obvious starting point for ethical discussion is the set of debates around behavioural policy interventions, particularly consent and manipulation. However, the ethics of behavioural interventions is not the only area of ethics at play in our case. Advertising itself is an ethically fraught domain, particularly when ads are targeted at the more vulnerable, especially children. More particularly, there are debates on the design, regulation and use of targeted advertising systems that depend on very large-scale collection of information about individuals, and the use of that information to direct targeted messages in a way that is hidden from all those who are not exposed to them.

Our main concern in doing this research was to identify how various ethical issues are managed within campaigns and services – what processes are in place, what training is provided, where responsibility lies, and what red lines exist. Because these are innovative practices, we expected to find a range of places and moments where participants are not aware of ethical issues; where ethical responsibility is concentrated in an individual (and thus not in the ‘system’); where ethical dilemmas present themselves in ways that leave room for discretion; and examples of problematic decisions and practices that are only appreciated after campaigns have run, or not at all.

In this post we reflect on the debates about ‘manipulation’ of individuals. Other issues, such as democratic transparency and accountability, blowback and unintended consequences, are discussed in the latest report.

Behavioural policy – manipulation

The development of policy interventions using behavioural influence approaches, including ‘nudge’, has been the subject of considerable ethical debate during the 2010s, especially in fields such as behavioural economics, behavioural law and political philosophy (see for example Sunstein 2016a; Lepenies and Małecka, 2018; Schmidt and Engelen, 2020; White 2016). These types of interventions are seen as taking a middle road between merely providing information and outright coercion, while remaining cost-effective. Nudges are meant to work via cognitive biases in so-called ‘System 1’ thinking (Kahneman 2002), rather than stimulating considered reflection and choice. ‘Boosts’, on the other hand, seek to give a quick skill ‘upgrade’ to support more reflective agency (Hertwig, 2017). A great many of the public policy interventions discussed in this literature involve much more direct manipulation of the citizen environment or ‘choice architecture’ than the messaging we see in these cases – such as changing tax levels, or urban planning – but other equally controversial initiatives include various sorts of food labelling and warnings. Ethical arguments are made both for intervening and for not intervening: around hidden manipulation, over whether the state can identify the best interests of citizens (paternalism), and over the obligation of the state to enter and rebalance a choice environment already heavily manipulated by commercial and other political interests. Others would argue that all policy interventions involve changing the ‘choice architecture’, often in disorganised and problematic ways – and that it is better for this to be well-designed, ‘scientific’, and transparent. It is the transparency of intention, means and origin, for democratic scrutiny and accountability, that would seem most relevant here.

Behavioural policy – consent to receive messages from data-driven targeting

When we receive an online ad, it has been targeted at us because of who we are, or something we have done, yet we generally have no way either to avoid the message or to understand why we have received it. First, we receive these messages without giving our explicit consent, but is that a problem? We expect to receive messages from the government, just as from non-government organisations – and maybe it is our responsibility to do so in order to live as responsible citizens who take part in democratic processes. However, do we, and can we, consent to those that attempt to manipulate our behaviour? The literature on public relations and strategic communications, propaganda, and behavioural nudging (e.g. Bakir et al 2019; Sunstein, 2016a) explores what ‘true’ consent is, what might be considered manipulation (Sunstein, 2016b), the information and social environment into which governments are trying to bring influence, and the special case of government compared to non-state actors, including commercial, civil society and political parties. This is all within the context of contemporary information environments, where deliberate mis- and dis-information campaigns on the internet are increasingly sophisticated and diverse, and where all manner of peer-to-peer cultures and mis-information flows are facilitated. While the ideal of interactive, two-way consensual communication is still persuasive in some situations, the cases we are looking at are predominantly one-way (police -> individual citizen on their personal screen). However, they do not reach the other end of the spectrum identified by Bakir et al (2019) and Moloney (2006: xiiii) – that of outright coercion.

The campaigns we see are not obviously trying to incentivise via a specific threat of force, nor via other mechanisms such as individual rewards, although a more general threat of force clearly lies behind all messages from the police – it is the police that will catch up with you, not cancer or your failure to save for old age. They are not manipulative in the sense of using untruths or deceptive messages; indeed great efforts generally go towards ensuring this. But in some cases there is deception: when it is not clear who is behind the advert (such as Home Office funding of campaigns run by charities), and, perhaps more importantly, when it is not made clear why someone has received a message. However, some adverts do imply threat (for example, the anti-grooming campaign), both in the messaging – if you don’t get help, we will find you – and in the fact that the message has arrived through a known targeting mechanism, which might lead the receiver to think they have personally been identified and targeted because of actual suspicions, and are therefore open to actual state threat.

There is an open debate about whether the sorts of targeting deployed in online advertising are ethical – do they respect the dignity and privacy of those who receive these messages? Data protection law outlaws direct personal messaging without consent (using collected targeting information such as an address). The current configuration of targeted advertising, which allows quite narrow targeting of people as members of groups, is hotly debated in policy and law. Recent decisions in the EU have, for example, pushed Meta to start to introduce consent as the necessary basis for receiving customised ads. One reason is the lack of transparency in how and why an individual is addressed, or how an audience is constructed on the online platform. (The system could technically be set up to let you see at least how you were targeted; the current configuration is a commercial choice.) The absence of messages such as ‘you have seen this ad because you have liked the following things’ removes the ability to rationally consent to receiving the message. Not having a message that makes it clear an individual was not individually targeted could be deceptive; similarly, a message making clear that clicking on a link will not identify you to the advertiser or platform would be important.

A second aspect of informed consent is knowing who the messages come from. Most of the adverts in these cases are clearly branded, not only in the use of text and logos, but also in the whole look and feel of the creative content. This was largely seen as the default, and in some cases the imperative, approach. However, there are some examples where ‘unbranded’ content was used. Under political advertising rules the closed platforms have tried to suppress unbranded content, and will remove it if no disclaimer and origin label is attached to the ad, or if an unverified account is used. On the open web, however, we have at least one case where a decision was made not to brand (and thus to deceive). Again, this balance of public interest and ‘acceptable’ deception was debated, but it clearly needs stronger guidance and a transparent process about when it is an acceptable approach and what extra layers of oversight might be needed.

Overall, the use of targeted online advertising could be considered deceptive at a societal level, since some of the campaigns are not visible to the public – there is a lack of transparency that they are occurring at all, since only a narrow group of people ever knows about them. However, in the PS case there appear to be no systematic attempts to hide campaigns; indeed they are promoted as success stories, and their integration with mass media campaigns means they are visible. Other forces, however, are clearly operating below the radar – and may argue that this is the same for many police activities: operations become visible on request, even if all the evaluations and details are not.

In summary, there are risks of a degree of deception by omission, of coercion by omission, and of coercion and deception via targeting. The sorts of disclaimer strings (Edelson 2020) imposed by Meta, for example, and more comprehensive cross-platform ad libraries should be used to increase transparency as to why someone has been shown an ad.
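To make the ad-library point concrete: Meta’s public Ad Library exposes a searchable archive (the `ads_archive` endpoint of the Graph API) through which researchers can see who paid for an ad and where it was delivered. The sketch below, in Python, simply constructs such a query URL; the search term, API version and access token are placeholder assumptions, not taken from the research described here.

```python
# Illustrative sketch only: building a query against Meta's public Ad Library
# API (the ads_archive Graph API endpoint). The search term, API version and
# token below are hypothetical placeholders.
from urllib.parse import urlencode

BASE = "https://graph.facebook.com/v18.0/ads_archive"

def ad_library_query(search_terms, countries, access_token="ACCESS_TOKEN"):
    """Build a URL requesting ads matching a term, with fields an auditor
    could use to see who ran a campaign and how it was delivered."""
    params = {
        "search_terms": search_terms,
        "ad_reached_countries": ",".join(countries),
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "fields": "page_name,bylines,delivery_by_region,demographic_distribution",
        "access_token": access_token,
    }
    return f"{BASE}?{urlencode(params)}"

# Hypothetical example: look for campaign ads shown in the UK.
url = ad_library_query("knife crime", ["GB"])
print(url)
```

A genuinely cross-platform library would need the same disclosure fields to be standardised across platforms; as it stands, each platform’s archive has its own schema and access regime.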

 

Bakir, V., Herring, E., Miller, D., & Robinson, P. (2019). Organized Persuasive Communication: A new conceptual framework for research on public relations, propaganda and promotional culture. Critical Sociology 45(3), 311–328.

Edelson, L. (2020). Publishing Facebook ad data (redux). In: Online Political Transparency Project. Available at: https://medium.com/online-political-transparency-project/publishing-facebook-ad-data-redux-ff071c41c12e (accessed 8 March 2023).

Hertwig, R. (2017). When to consider boosting: some rules for policy-makers. Behavioural Public Policy 1(2), 143–161.

Kahneman, D. (2002). Maps of bounded rationality: A perspective on intuitive judgement and choice.

Lepenies, R. and Małecka, M. (2018). The ethics of behavioural public policy. In: Lever, A. and Poama, A. (eds) The Routledge Handbook of Ethics and Public Policy. Abingdon/New York: Routledge, pp. 513–525. DOI: 10.4324/9781315461731-41.

Schmidt, A. T. and Engelen, B. (2020). The ethics of nudging: An overview. Philosophy Compass 15(4), e12658. DOI: 10.1111/phc3.12658.

Sunstein, C. R. (2016a). The Ethics of Influence: Government in the Age of Behavioral Science. Cambridge Studies in Economics, Choice, and Society. Cambridge: Cambridge University Press. DOI: 10.1017/

Sunstein, C. R. (2016b). People prefer System 2 nudges (kind of). Duke LJ 66, 121.
