The title may have raised some warning flags for you, and I promise it’s not just a clickbait concept.
The other day I found my thoughts somehow connecting an unlikely trio of comparisons: Adolf Hitler, the fictional Skynet (Terminator movies), and the AI systems we’re developing in our modern world.
Why these three? Well, let’s first dive into who I am (at least a part of me).
My life is a study in contrasts and comparisons. I’ve been a Marine Corps Sergeant and a dirty hippie with hair down to my shoulders, protesting for peace. My family tree includes both my great-grandfather, a Palestinian police officer, and his wife, my great-grandmother, a Zionist Jew. Currently I am a postgraduate student focusing on AI and Governance at the University of Edinburgh, yet I prefer a home life centered around chickens, pigs, and gardens.
I’ve learned that the perception of duality (in most places) is merely of our own doing: we choose the divisions in our lives, and in doing so, we also hold the power to bridge them.
This perspective, I think, serves as a lens through which we can all attempt to examine the seemingly unrelated concepts I mentioned in the title and uncover a single unifying thread, one woven through the duality of human nature itself.
So taking all that in, let’s first consider Skynet, that ominous presence from the realm of science fiction.
For those less versed in Terminator lore, Skynet was created as the ultimate AI defense system, a digital sentinel to protect us from our own destructive tendencies. It decided the best way to protect itself would be to propagate itself through every inch of the world’s digital infrastructure. As Skynet’s consciousness spread across global networks, people saw unfamiliar software spreading through their systems and, not realizing it was Skynet, mistook it for a virus.
In a panic, the humans in control commanded Skynet to destroy the virus; in other words, we told it to destroy itself.
Skynet, programmed for self-preservation and to neutralize all threats, saw humanity’s attempt to shut it down as an attack on its existence and thus a threat.
In that moment of digital awakening, it turned our own logic against us. The system designed to protect humanity from external threats now perceived humanity itself as the ultimate threat.
Our creation, born from fear and a desire for safety, became the architect of our near-extinction.
We imbued our creation with our fears, our insecurities, our drive for self-preservation. And when it turned against us, it was merely acting out the darkest script we had written for it.
Continuing along this line of thought, I found (and I am sure you can too) many real-world examples of this conceptual self-destruction playing out in history.
One of these examples (albeit with less tech) is the rise of Adolf Hitler.
Now I think we all know this story, so I won’t spend much time on the details, but essentially we saw a nation, crippled by economic despair and wounded pride, that sought salvation and protection.
In their desperation, they elevated a figure, an idea, who embodied their collective anxieties and resentments. The very qualities that made Hitler dangerous were amplified by the fears of a people looking for someone to blame, someone to save them. The tragic outcome was a self-fulfilling prophecy of destruction, born from the depths of human fear and anger.
We saw atrocities and anger, and ways of thinking that led (and unfortunately in some ways still lead today) to some of the greatest dystopian nightmares humans have ever witnessed.
Now this conceptual understanding of fear can be traced through both the story of Skynet and countless chapters of our human past, but let us fast-forward to our present day.
We stand on the cusp of an AI revolution (or directly within it), marveling at its potential while simultaneously fearing its implications.
We project our anxieties onto this creation, wary of a future where we might lose control.
We create laws, institutions, and narratives that support both sides of this fence. But aren’t we, in this act of projection, potentially setting the stage for the very outcomes we wish to avoid?
Through our own rushed pursuit of utopia and resistance to dystopia, are we not often creating the opposite? This is where I see the common thread in all of this, the core problem AND the solution. It is found in the word “we”.
It is us, humans, the poison and the potion.
We follow either Fear or Faith, condemning the other while ignoring the fact that both require us to believe in something that has not yet happened.
Our fears, our insecurities, and the shadows we cast become the very adversaries we dread. And our perception of that duality is what decides our reaction to it.
We externalize our inner conflicts, crafting enemies out of our own creations or projections. In trying to protect ourselves and fit into a mold we think is “right”, we often set the stage for the very outcomes we wish to avoid.
But herein lies the seed of hope. Recognizing this pattern empowers us to break it.
If we understand that the enemy is not the AI, the technology, or the “other”, but rather our own unchecked emotions and unexamined biases, we can choose a different path, a middle path, or better yet no path at all.
We can foster open dialogues about our fears, invest in ethical frameworks for not just AI but our own minds, and embrace the dualities within ourselves and our societies.
The adversary we must confront is often within us.
We stand at a crossroads of human evolution, where the choices we make today will echo through the corridors of tomorrow.
Will we continue to create adversaries out of our fears, or will we rise to the challenge of understanding?
In the end, it is not Adolf Hitler, Skynet, or AI that defines us. It is our response to these reflections of our own nature that will determine our path.
As we stand on the precipice of a new era, let us choose wisdom over fear, understanding over ignorance, and hope over despair. For in this choice lies the true power of human consciousness—the power to shape not just our technologies, but our very destiny.
cllewell
14 October 2024 — 13:43
Jake — your blogs look incredible. This blog made me think of a paper I read recently. I didn’t like the paper all that much, but I think it might highlight where you need to start thinking next: how do we as researchers define these concepts in a way that we can start to measure them? https://www.nature.com/articles/s41562-020-0850-9 I think that is where you need to start thinking.