
[10] Hope and The Future of AI

This post isn’t going to sound like it, but I’ve taken some downtime over the last few weeks. I’ve used the time to read and have some interesting conversations. Most of it has been thinking about what a future with AI will look like and how we can maintain control over our destiny as individuals and humans.

Cash & cybersecurity

Through the ECA I have met a wonderful lady called Caroline Helbing, who is a communications specialist and publisher. We had a great virtual coffee where we spoke about the future of money as well as cybersecurity. She has been working with author Shermin Voshmgir on her book Token Economy – Money & DeFi, which I am working through. We also discussed our common interest in cybercrime and espionage.

Off the back of her recommendation I read Sandworm by Andy Greenberg. It is a good look at the Russian state element of cybercrime and goes well with This Is How They Tell Me the World Ends by Nicole Perlroth and the BBC podcast series The Lazarus Heist, which covers North Korea's hackers. There is some repetition between the three, but what it boils down to is that our personal data and our sovereign institutions are now 'fair game' not just for criminals looking to make money, but also for states as part of espionage and cyber warfare. It is no longer just software companies that have to secure their products, but every organisation that collects data.

I know a few people at the BBC who were recently caught up in the breach of its payroll system. It's devastating to be part of a breach because you feel vulnerable and unsure of what might happen next. Knowing how many companies just don't understand cybersecurity when they implement systems or collect data is quite scary, and I do think there needs to be more general education on the matter. I was talking to my grandparents about this and my grandfather said 'I'm so glad I'm almost dead so I don't have to worry about these things…' lol.

 

The future with AI

There have been some big events in AI legislation over the last few weeks. The EU has taken a further step with its AI Act, and the US has held a roundtable on AI governance.

I’ve been re-reading Human Compatible by Stuart Russell as it’s a nice, hopeful book. To be honest, the re-read was triggered by a few things: the latest series of Black Mirror, TikTok releasing its new set of beauty filters (scarily good), and a conversation with an industry colleague who asked me what I think the future of AI and marketing is. My answer was that I saw two paths:

  • either we will use AI to generate so much content (and the AI will learn what content ‘works’) that we saturate ourselves with homogeneous information requiring endless checking, dulling creativity;
  • or we will use it to automate low-level tasks and free up marketers to get back to being truly creative.

On reflection I wondered if that was too binary a vision. The reality is that a lot of companies don’t even have centralised data capabilities or good data organisation, so anything labelled ‘AI’ is not going to work well unless it is specifically task-oriented, such as finding patterns in specific data sets.

I also attended a talk by a company called The IO Foundation, who aim to create a charter of Digital Rights and support it with software that can check AI for harms. It was interesting. I had a bit of a debate with the speaker over bias and how we can’t just check the output for harms but have to check the input as well – he dismissed bias as a real challenge ‘because it’s always existed’, which did not sit well with me. At the same time, I have to appreciate that not everyone can solve every problem, and he was quite clear that he wanted to focus on one very specific challenge: checking the outcome for potential harms.

I also came across this organisation as part of the talk: https://dataethics.eu/ 

There is a data festival being put on this week by the Alan Turing Institute in London, and I’m going to see one of the exhibitions on Friday, ‘AI – who is looking after us’, which I’m hoping is uplifting; otherwise I might just have to accelerate my plans to go and live in the woods.

 

A little bit of hope

It’s also been a culturally weird few weeks – we’ve got a literal fight brewing between Elon Musk and Mark Zuckerberg, we lost a few billionaires and everyone laughed, and we’ve had riots, attempted coups and endless stories of companies profiteering. It feels like constant division, anger and endless misery. So… I read How to Stay Sane in an Age of Division by Elif Shafak. It’s a short essay but a powerful exploration of the current state of ‘angst’ that we seem to be living through. I particularly loved her observation: “If wanting to be heard is one side of the coin, the other side is being willing to listen”. This really resonates – I’ve become more and more frustrated with what seems like people just screaming over each other, or talking for the sake of soundbites. I watched a parliamentary session the other day, after a serious incident in the UK where people were stabbed. Every politician got up to say the same thing over and over again so they could have their little clip for their social channels – it was such a waste of oxygen.


No real mullings here, except a note that I keep gravitating towards AI, data ethics and cybersecurity in my personal reading and conversations. It’s clearly an area I’m very interested in, but that might be because these are the hot topics in my industry at the moment…

 
