Allegory of the Cave: The Dark Shadows of Artificial Intelligence

Updated: Jul 14

Before we begin to fear the change and attribute too much importance to it, we need to understand that it is we who control it, not the other way around


World AI Conference in Shanghai, China, 2018. Photo credit: Oriental Image via Reuters Connect

“Video Killed the Radio Star,” released by the British new wave band The Buggles in 1979, became an iconic hit after its video was chosen to launch the MTV music channel in 1981.

The song describes a radio star about to be eclipsed by the global rise of television, and its subsequent success turned the band into an integral part of the cultural history of the music video.


This pop song reflects a profound social fear of change, one that has always existed and still does today. But if we are clever enough to embrace change and use it intelligently, we should be able to guarantee ourselves a brighter future.

About 12 thousand years ago, the agricultural revolution permanently changed the lives of hunter-gatherers. People were able to cease their constant migration in search of food, establish communities, and develop civilizations, even though settling down exposed them to many dangers, including climate change.


The industrial revolution replaced numerous workers with machines but, at the same time, substantially increased manufacturing productivity, helping to reduce poverty and narrow social and economic gaps. These revolutions brought significant changes to daily life, causing many professions to disappear, but they also created new opportunities and professions while advancing human society.


The Atom Bomb


In recent years, the term Artificial Intelligence (AI) has generated a flurry of polarized feelings. On the one hand, it is evident that AI makes life much easier for us: it has the potential to bring substantial improvement to many spheres of daily life, and all of us already use AI when we navigate with Waze, identify music with Shazam, or scroll through clips on TikTok.


Nevertheless, we have recently begun to hear people voicing their fear of technology.


In a recent article featured in the Guardian, Professor Stuart Russell, a computer expert from the University of California who wrote a leading textbook on artificial intelligence, said that experts are “spooked” by their own success in the field, and compared the advance of AI to the development of the atom bomb.


This comparison raises concern and fear: the atom bomb is associated with historic destruction and death. We should remember, though, that the main application of nuclear energy these days is not military but rather a substantial civilian benefit: the production of electricity in nuclear power stations.


Moreover, if we use nuclear energy in a controlled manner, it can help both us and the environment, since nuclear reactors do not release pollutants into the atmosphere.

There are additional allegations against the use of AI models: that they discriminate by gender, race, and other attributes in their decision-making. In another article published in the Guardian, it was alleged that AI-based bots used in worker recruitment tend to filter candidates' resumes with a built-in bias.


But the question we should really ask is: who is to blame, when all of this is simply lines of code that make up mathematical models? AI's main engine is the data on which it is trained, including images, figures, audio files, text, and more; this is human data, drawn from selfie photos, share prices, songs, and books.


This data is no more than a mirror image of what is going on in our society in a specific field. If the algorithm's results are biased by race or gender, this does not necessarily mean the model itself is prejudiced; rather, it has reached its conclusions based on the data it was given. The human factor is the most significant one in guiding the model, and it is humans who should answer for any bias in the data.
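The point that a model merely mirrors its data can be shown with a toy sketch (the records, groups, and rates below are entirely fabricated for illustration): a "model" that simply learns decision frequencies from biased historical records will reproduce that bias exactly, even though its code contains no prejudice at all.

```python
# Hypothetical past hiring decisions, deliberately skewed against group "B".
# Every number here is made up purely to illustrate the mirror effect.
historical = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def hire_rate(records, group):
    """Fraction of past candidates from `group` who were hired."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# A frequency-based "model" learned from this data: it predicts each
# group's chance of being hired straight from the historical rates.
model = {g: hire_rate(historical, g) for g in ("A", "B")}
print(model)  # {'A': 0.75, 'B': 0.25} -- the bias in the data becomes the model
```

The neutral arithmetic faithfully reproduces the 3-to-1 skew baked into the records; fixing the model means fixing, or correcting for, the data it learns from.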


Fear of Change


The rising power of AI has fast become a concern featured in many articles. Critics allege that its capabilities exceed those of the human brain in a variety of fields, so much so that it makes humans over-dependent, lazy, and almost helpless.


However, Artificial Intelligence cannot, nor should it, take over the lives of humans. It is humans who build and program the AI, adapting it to whatever function they see fit.


It can genuinely help us in decision-making by saving time, money, and resources, and by reducing human error. For example, when AI is incorporated into medical decision-making, it can help doctors diagnose ailments based on CT or MRI imagery.


Instead of a doctor having to scan medical imagery and risk missing an important detail due to the limitations of the human eye, an AI model can make diagnoses, give recommendations, or direct attention to the more challenging, unusual parts of the imagery, providing the physician with more information in real time.


Yes, we need to face the truth. Artificial Intelligence will likely replace people in a number of professions, but let's not forget that human intervention is still required to collect data, analyze it, design the model, monitor its decisions and the model itself, upgrade versions, and so on.


In other words, AI will be able to help us create new jobs and develop better skill sets, which, to date, might not even have been in demand. Almost every profound change has its pros and cons and AI is no different in that respect.


Before we begin to fear the change and attribute too much importance to it, we really need to understand that it is we who control it, and not the other way around. Ultimately, Artificial Intelligence is no more than a multi-layered tool with features designed by man.


If we are wise enough to use it correctly, it can be an extremely useful tool to improve our quality of life. If we fail to do so, it will be devoid of any value or benefit. It is easy for us to complain that Waze has caused us to forget our most basic spatial skills of getting from A to B and that Tinder has destroyed the fun of random acquaintances on the street, but, overall, there is consensus that life is much simpler with these apps.


Only in the future will we be able to fully understand the long-term impact of AI technology and its genuine importance. The introduction of regulation to determine the limits of Artificial Intelligence, so that it is used both ethically and judiciously, will enable it to be put to proper use for purposes that develop both us and our surroundings.


Ruby Melody Simply is a senior data scientist at Data Science Group, a biomedical engineer and an algorithm developer. Simply’s photo is courtesy of Data Science Group.
