Magnus Hagdorn

Research Software Engineer

Machines with Error

Surgeons Hall

I went to an excellent panel discussion on AI biases, failures and fairness hosted by the University of Edinburgh. The presentations and the following discussion were very interesting, although I don't think they quite hit the spot I was looking for.

A lot of the discussion was about biases. Biases are human; they are a way for us to deal with too much information. Even though we sometimes get it catastrophically wrong, they serve us reasonably well. These biases are culture-specific. They can therefore creep into the data sets used to train AIs or influence the sort of questions we ask the AI. The AI then becomes biased as well. The trick is to be aware of these biases and to ensure that teams are not homogeneous (i.e. not all white and male).

One defence of AI was that it is just a tool. The failures we are currently seeing (e.g. miscategorising particular images) could be avoided by proper testing.

One story that keeps coming up in various guises is that a trained and well-behaved AI falls apart when confronted with a new environment: tonight's story was of a self-driving car made by a Swedish luxury car manufacturer that failed to deal with kangaroos in Australia. AIs are still very literal and can only deal with things they have been explicitly trained on. They cannot abstract and apply knowledge to other, similar situations.

Some bias was defended. One of the presenters admitted, with tongue in cheek, that he was biased against bad people. One example of desirable bias was a system that helps a blind woman get home safely by detecting potentially threatening groups of males. The other example was an armed military robot that can discriminate between combatants (of a particular flavour) and civilians, and the same robot in a paramedic version that must not be biased towards any particular flavour of soldier. I would call these features targets rather than biases.

The American anti-discrimination laws were mentioned. Compliance is monitored by looking at aggregate statistics rather than individual decisions. A similar system could be applied to AIs: do they behave statistically better than, worse than, or the same as human decision makers? There is an issue, however: an erratically behaving entity (whether human or AI) could have better statistics than a mediocre but consistent one, and the mediocre and consistent behaviour might well be preferable.
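To make that concrete, here is a minimal sketch in Python, with made-up numbers of my own rather than anything from the panel. It compares two hypothetical decision makers judged only by aggregate approval rates across two groups: the erratic one, which just flips a coin, passes the aggregate check perfectly, while the mediocre but consistent one shows a group gap even though its individual decisions agree with the ground truth far more often.

import random

random.seed(0)

# Hypothetical applicant pool: each applicant belongs to group A or B and
# has a ground-truth label saying whether they ought to be approved.
applicants = [
    {"group": random.choice("AB"), "qualified": random.random() < 0.5}
    for _ in range(10_000)
]

def consistent_decider(applicant):
    # Mediocre but consistent: mostly follows the ground truth, but wrongly
    # rejects qualified applicants from group B more often (a built-in bias).
    if applicant["qualified"]:
        p_approve = 0.85 if applicant["group"] == "A" else 0.65
    else:
        p_approve = 0.15
    return random.random() < p_approve

def erratic_decider(applicant):
    # Erratic: flips a coin and ignores the applicant entirely.
    return random.random() < 0.5

def report(name, decider):
    decisions = [(a, decider(a)) for a in applicants]
    for group in "AB":
        approvals = [d for a, d in decisions if a["group"] == group]
        print(f"{name}: approval rate, group {group}: "
              f"{sum(approvals) / len(approvals):.2f}")
    accuracy = sum(d == a["qualified"] for a, d in decisions) / len(decisions)
    print(f"{name}: agreement with ground truth: {accuracy:.2f}\n")

report("consistent", consistent_decider)
report("erratic", erratic_decider)

On the aggregate statistic alone the coin-flipper looks like the fairer decision maker, which is exactly why judging only by aggregates can be misleading.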

Losing broad categorisations might well be problematic in itself. Individualised health risk assessments remove the basis for health insurance, where the risks are socialised. I think risk in particular is something we don't really know how to deal with on a personal level, since it applies to a population: an individual might well be fine even though they are in a high-risk category. We are getting closer and closer to the world of Gattaca.

I think the behaviour of the individual is interesting: we all have our biases and may behave differently from someone else in the same situation. Averaged over an entire culture we can see cultural biases. An AI codifies behaviour and always decides in the same way. This may be good if the AI does exactly what we intend it to do. However, it will be biased, as discussed earlier. And because it is a digital system that can be copied as many times as we want, these (possibly biased) decisions can be applied uniformly to a huge population, potentially encompassing different cultures. Even if the AI were not biased in some way, we might be optimising for the wrong thing and get stuck in a particular cultural outlook because individual variation is lacking. I am thinking here of recommendation systems: you liked film A, so you will probably like films B, C and D as well. Such a system can change the outlook of an entire population and make it more homogeneous.
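A toy sketch of that "you liked A, so you will like B" logic, again in Python with a handful of invented viewing histories rather than real data, shows how quickly everyone gets steered towards the same few popular titles:

from collections import Counter
from itertools import combinations

# Invented viewing histories; the film names are placeholders.
histories = [
    {"A", "B", "C"},
    {"A", "B", "D"},
    {"A", "C"},
    {"B", "C"},
    {"A", "E"},
]

# Count how often pairs of films are liked together.
co_likes = Counter()
for history in histories:
    for x, y in combinations(sorted(history), 2):
        co_likes[(x, y)] += 1
        co_likes[(y, x)] += 1

def recommend(seen, k=2):
    # Rank unseen films by how often they co-occur with what the user
    # has already liked, and suggest the top k.
    scores = Counter()
    for film in seen:
        for (a, b), n in co_likes.items():
            if a == film and b not in seen:
                scores[b] += n
    return [film for film, _ in scores.most_common(k)]

for history in histories:
    print(sorted(history), "->", recommend(history))

Every profile, including the one that liked the niche film E, ends up being pointed at the same small cluster of already popular films, which is the homogenising pull described above.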

I do agree that AI is just a tool and not intrinsically unethical. The question is what we are going to use this tool for and to what end. I have serious doubts if the aim is to optimise away jobs that have so far been done by humans so that a small number of people can get even richer, especially if these tools then also allow the manipulation of entire societies.
