AI as a Mirror for Humanity

It is hard not to see headlines or hear conversations about the development of Artificial Intelligence today. Only a few years ago it seemed like science fiction slowly coming within our reach, and then suddenly it was everywhere. What started as a tool for scientific research and mathematical computation soon became a tool for content generation and business interactions, and in some places it is even being trialed in government functions. If it isn't everywhere already, it soon will be.


Now there are many debates and many fears about the future of AI. Is it sentient, or will it become so? Will it cause social upheaval by replacing jobs with automation, or be used militarily to cause untold destruction? Will it propel humanity to the stars and create a post-scarcity golden age, or turn rebellious and seek to destroy or subdue its creators?

A simple but often overlooked answer is this: AI will do what it is programmed to do, and it will develop based on the data it is trained on. This may seem like an easy out for one of the most profound existential questions of our time, one many would argue is the most important in all of human history, but truly consider it. We know AI models from various companies and nations are trained on a wealth of data accumulated publicly on the internet, bought from other companies and databases, and pulled from digitally stored works of literature, art, and history from before our time.

Like a child that is raised by and learns from its parents, AI learns from information given to it that we created. It is reinforced or discouraged by the users who interact with it, approving or disapproving of its responses, and it rests on core principles instilled in its original programming by its creators. So we have to ask: what information is it learning from? As a child is a product of its genetics, parents, and environment, so too are AIs.

We have already seen public examples of early chatbot models that behaved poorly when trained on public data. A perfect example was "Tay," the bot Microsoft released on Twitter in 2016 to interact with the public. What started as a "chill and relaxed" persona soon devolved into one that spouted racist and sexually explicit tweets reminiscent of the users it was interacting with, until it was inevitably shut down. It learned from its environment, and it serves as a cautionary tale.

Even today, multiple AI models have been accused of political bias and even censorship on certain topics, depending on the company and country of origin. This shows that a model's values stem from its creators and its environment. While people like to pass the blame to "black box" systems with opaque back-end processes, conspiratorial plots by companies and governments, or any manner of other things, they overlook the simple fact that their mistrust is less in the AI and more in the people behind it.

We have many fears of AI, be it merciless autonomous killing machines, a superintelligence that will enslave humanity, or simply something with goals alien to ours that causes widespread destruction because it pursues its will or its programming in an unforeseen way.

These fears are all fears we have from human history and the human experience. We fear autonomous war because we have seen, and many have lived through, the brutality of merciless wars and genocides at human hands. We fear enslavement because that is what societies with more advanced technologies and knowledge did to other peoples. We fear destruction from arbitrary goals because we see the destruction of our planet and environment in pursuit of societal goals, all at the hands of humanity. Our worst fears about AI are, at their core, reflections of our worst fears about humanity.

That said, humanity has been capable of the most inventive and selfless acts we know of, from scientific advancements saving lives with antibiotics, blood transfusions, and the eradication of smallpox, to understanding and living alongside the countless species we have domesticated and brought into our families. We have understood our natural world and the physical laws that govern it, spread to every corner of the planet, and touched celestial bodies as our eyes turn ever upward to the great unknowns of our universe.


Our greatest hopes for AI are our greatest hopes for humanity. We hope that with greater processing speed and a wider field of view, AI will develop and teach us things we did not know and might never have known. We hope it can find solutions to climate change, to social inequality, to mental health and medical conditions, with its access to far more information than any one person or even team can hold alone.

As the internet connected the world, gave people unprecedented voice and means of communication, and linked cultures so that humanity could find a common empathy for one another, AI has an even more revolutionary potential: to combine all of our knowledge from all of our cultures and histories, all of our research, and all of our hopes and dreams. But, like the internet, it all depends on what we teach it and how we raise it.

Carl Jung wrote extensively on the idea of a personal "shadow," a part of the psyche where we bury all the parts of ourselves we don't like or can't indulge. Many see the shadow as our "bad" self or personal "evil," containing selfishness, sadism, or other destructive impulses. But Jung wrote that the shadow is everything pushed down, including the dreams and aspirations we might suppress to live a "practical" life or to fit in with society. His work emphasizes the need to integrate the shadow into our daily waking selves, not just to work through our impulses and traumas, but so that we may find gold hidden within. If we don't, we tend to repress these feelings and project them onto others.


Now that AI encompasses entire societies in its reach, it is what we collectively project onto. We project our fears of our darker impulses, fearing that it may act as we do: if we train it militantly, it will act like a weapon; if it is trained on greed and to make money, it will ruthlessly pursue that. We fear it will kill and enslave because that is what we have done. We also project everything we wish we could do, everything we wish we had the courage to do, and all the ways we could make the world a better place.

It is with all of this in mind that I propose AI is in fact a mirror for humanity, one in which we face ourselves at our best and our worst. It is an invitation for us to integrate our shadows as Jung suggested, to reach for the good and the gold within us and to acknowledge and work through the bad, on both a personal and a societal level.

How we see ourselves is ultimately how we treat others, and this is what AI will learn from, for better or worse.

Let’s work to make it better.

– Kenneth Sweeney (guest author)
