The Question of Capability: Can and should AI be equivalent to human intelligence?

Artificial intelligence is advancing at a rapid rate. Today, AI plays a critical role in our lives: it has begun to take over businesses, eliminate the need for manpower in certain fields, and assist us in our daily tasks. We come in contact with artificial intelligence when we ask Siri where the nearest restaurant is, command Alexa to set an alarm to wake us up in the morning, talk to chatbots, or use Google Maps for directions to the grocery store. Beyond the AI that the layman typically encounters, however, there exist Strong AI and Emotion AI, both of which promise capabilities that once appeared only in dreams and fantasy. Emotion AI is used to detect human emotion, whilst Strong AI is a theoretical form of AI that aims to mimic human intelligence, including the ability to reason, communicate, judge, plan, and even solve puzzles. Many say that these abilities are enough to consider AI intelligent, or equal to human intelligence, and at the rate at which the artificial intelligence industry is currently progressing, the discourse over this question has become extremely relevant and widespread.

Looking back, there are many works of famous philosophers, like John Searle's Chinese Room Argument, that argue that inasmuch as a digital computer does not have "qualia," "consciousness," and "intentionality," it cannot be claimed to operate like the human mind. Advances in AI, however, have led many to speculate that humans will soon be equal to machines, or even inferior to them. Ray Kurzweil, a futurist, predicts that by 2029 a form of AI will be produced that can pass for an average educated human. Another prominent thinker, the Oxford philosopher Nick Bostrom, is more circumspect. Bostrom does not give an exact date but argues that artificial intelligence will gain "intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest."

However, I believe that regardless of how rapidly the AI industry has progressed, this progress is not nearly enough to conclude that artificial intelligence is equal or superior to human intelligence. Human intelligence is inherently unmatched by artificial intelligence, as AI lacks the three domains upon which human intelligence is contingent: social intelligence, emotional intelligence, and creativity. Moreover, even if artificial intelligence were to reach unimaginable heights in the future and be altered to the degree that it became equal to human intelligence (i.e., acquired the aforementioned domains), the use of such altered, human-like artificial intelligence would bring more harm than good to society.

The famous counterclaim goes along the lines of: "You never know; it is possible that in the future AI can be developed to the degree that it would be identical, or even superior, to human intelligence." The response to this is simple: no one can predict the future. I, for one, cannot say for sure that AI will never be able to replicate human intelligence, because it is simply far too early to make such a claim given the rate at which we are progressing. The counterclaim can be neither fully supported nor refuted in current times; these matters can only be resolved in the future. Nonetheless, my position still stands, because what I believe is not that human intelligence can never be replicated by AI (especially since the future is unpredictable), but that human intelligence is inherently unmatched by AI. Whether or not the counterclaim ends up being true, it does not refute my position: even if the two were one day to become equal, it would take so much labor and so many extraneous alterations to make it so that nothing about the equivalence would be of inherent value.

I believe that AI lacks the capacity for understanding in key aspects that human intelligence requires.

The vast majority would define the intelligence we possess in purely practical terms — but why do we exclude our innate humaneness from our formal definition of intelligence? Is it really only about the measurement of computational ability, the processing of information, and memory? Why must functionality and efficiency alone be the deciding factors? Over and beyond task performance, we also need to recognize the incorporeal: wit, morality, ethics, self-awareness, intuition, consciousness, even humor. That is to say, to truly assess whether or not AI is intelligent, we need to look outside the traditional scope of intelligence and start recognizing all the domains that fortify our intelligence as humans: social intelligence, creativity, and emotional intelligence, all of which I will argue are unattainable for AI.

First, because creativity is by its very nature embedded in social connections, experience, and relationships, AI does not have the capacity to possess creativity or social intelligence. "Creativity is hardly possible without one's capacity to think metaphorically, to coordinate proactively, and to make predictions that go beyond simple extrapolation" (Oleinik). That is to say, a computer that is limited to what it is exposed to, acts under specific pre-programmed code, and possesses no social grounding cannot be creative, because creativity requires an understanding of what one does and why one does it; is that not the essence of creativity?
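To make Oleinik's phrase "simple extrapolation" concrete, here is a minimal, hypothetical sketch in Python. It is my own toy example, not anything from the cited paper: a model fit to observed data can only continue the pattern it has already seen.

```python
# Illustrative sketch: a model that can only extrapolate. We fit a straight
# line to observed points, then "predict" beyond them.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Observed: steady growth...
xs, ys = [1, 2, 3, 4], [10, 20, 30, 40]
a, b = fit_line(xs, ys)

# ...so the model predicts more steady growth, forever. It cannot anticipate
# a rupture, a metaphor, or a change of rules -- anything beyond the pattern.
print(a * 10 + b)  # -> 100.0
```

The point is not that real systems are this crude, but that any model confined to patterns in its training data shares this structural limit, which is exactly what the quoted passage means by prediction that does not "go beyond simple extrapolation."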

Moreover, I believe that social intelligence is grounded in social experience, which is exclusive to man. Consider a person who is rejected from multiple job interviews and speculates that it is because of what he chose to wear; he changes his style to become more formal, attends his next interview looking completely different, and ends up getting the job because of his newfound strategy. Consider a person who grew up in a bad household, raised by deceptive parents, who can now easily spot narcissistic traits in another person and distances himself from them because of past experience. AI does not have the capacity to behave intelligently in accordance with social experiences, given that it does not actually have these social experiences from the very get-go. But even if we accept the specific "social context" that AI is placed in, e.g. chatbots fielding customer inquiries, as legitimate, the question then arises whether AI has the capacity to grow and learn from these contexts, which is where the unrecognized incorporeal I have mentioned comes in: wit, self-awareness, intuition, and so on. I believe that AI does not possess this capacity, for the simple reason that it is limited to what it is exposed to and acts purely under pre-programmed code. We humans have social intelligence because we are able to grasp lessons from everything that surrounds us and from the experiences we go through, which are not restricted to a specific context in the way that AI is restricted.

Moving on, another domain upon which human intelligence is contingent but which AI lacks is emotional intelligence. I think emotional intelligence consists of two things, namely, managing and understanding emotion, neither of which can be performed by AI.

Artificial intelligence cannot manage emotions, because it does not possess these emotions in the first place. Emotion is rash and impulsive; it is not under our control but often overcomes us as humans. True emotion cannot work under specific controlling conditions, which is exactly what AI is made of. But even if we accept "simulated" human emotion in AI as legitimate, the problem would then be its limitations. Man, who possesses emotional intelligence, manages his emotions to his advantage. Consider someone who feels extreme regret at a loss and translates that regret into working hard to win the next time around. Consider someone who projects sadness onto themselves to cry and gain pity. Consider someone who, in brief anger, flips the table to feel some kind of consolation. AI can simulate human emotion to a degree, but it does not have the ability to manage it in the way that man can. Impulsivity and wit are not things you can simply install into hardware; something so impetuous does not belong anywhere near a set of predetermined code.

Second, AI cannot truly understand human emotion. Under functionalism, AI is claimed to understand emotion if it is built to identify and classify it. However, the way in which AI "understands" human emotion is based purely on external factors: those that can be sensed by hardware, like the measuring of a heartbeat or the assessment of behavior. But there is more to emotion than externalities — in fact, I think emotion is all about internalities, which cannot be seen by the naked eye, let alone a godforsaken machine. These internalities can be experiences, or any personal driving factor that makes a person act and emote in a certain way. While AI has no knowledge of these internalities for lack of exposure to them, man has plenty of experience with them: it is innate in how man deals with his own emotions that he knows how other people deal with theirs. It is from his experience with these internalities that he can understand those of his counterpart, something that AI cannot acquire. We are able to sense certain emotions in someone else based on their situation precisely because we know that we would act on those same emotions if we were put under the same circumstances. Consider a friend who flunked all their exams and had to retake the course, who assures you that they are completely fine and content, and yet you do not believe them. Consider a person who is increasingly careful around street children, thinking that their belongings may be stolen, because they know that hungry and desperate individuals tend to act on hungry and desperate emotions. To understand human emotion is to have managed it; therefore, AI cannot be emotionally intelligent.
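To make concrete what "based purely on external factors" means, here is a minimal, hypothetical sketch in Python. The signal names and thresholds are invented for illustration and do not come from any real Emotion AI product; the sketch only shows the shape of the approach.

```python
# A toy "Emotion AI" classifier: it maps externally measurable signals to an
# emotion label. Note what is absent from its inputs: the person's history,
# motives, or inner experience -- the "internalities" argued above.

from dataclasses import dataclass

@dataclass
class ExternalSignals:
    heart_rate_bpm: float    # from a wearable sensor
    smile_intensity: float   # 0.0-1.0, from a face-tracking model
    voice_pitch_hz: float    # from an audio feature extractor

def classify_emotion(signals: ExternalSignals) -> str:
    """Label an emotion from external measurements alone."""
    if signals.heart_rate_bpm > 100 and signals.smile_intensity < 0.2:
        return "fear/anger"      # high arousal, negative expression
    if signals.smile_intensity > 0.6:
        return "happiness"       # positive expression dominates
    if signals.voice_pitch_hz < 120 and signals.heart_rate_bpm < 70:
        return "sadness/calm"    # low arousal, low pitch
    return "neutral"

# The friend who flunked their exams but insists they are fine would present
# unremarkable external signals -- and the classifier takes them at face value.
print(classify_emotion(ExternalSignals(72.0, 0.5, 140.0)))  # -> "neutral"
```

Real systems use learned models rather than hand-set thresholds, but the input side is the same: only externalities ever reach the machine, which is the limitation the paragraph above describes.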

Given all this, human intelligence is inherently unmatched by artificial intelligence.

As a further level of analysis, I believe that even if we are extremely generous to the opposing case and accept the premise that AI will reach unimaginable heights in the future and be developed to the degree that it becomes equal to human intelligence, i.e. acquires creativity and emotional awareness, the use of AI that held no distinction from humans would pose more risk than benefit to mankind. Three questions would arise, the first being the question of rights, or legal personhood. In the status quo, AI is used to serve mankind; we thrive off our superiority over AI in the sense that it is fully under our control and bound to our service. However, if there comes a point in time when artificial intelligence is indistinguishable from man, is it not ethical to grant it the rights that man is granted? A researcher from the Senator George J. Mitchell Institute for Global Peace, Security and Justice at Queen's University Belfast argues:

Of course, rights are not the exclusive preserve of humans; we already grant rights to corporations and to some nonhuman animals. Given the accelerating deployment of robots in almost all areas of human life, we urgently need to develop a rights framework that considers the legal and ethical ramifications of integrating robots into our workplaces, into the military, police forces, judiciaries, hospitals, care homes, schools and into our domestic settings. It means that we need to address issues such as accountability, liability and agency, but that we also pay renewed attention to the meaning of human rights in the age of intelligent machines. (Schippers)

We are in no position to withhold human rights from something that behaves just like a human. But this becomes problematic when human-like AI is rightfully granted one of the fundamental rights: the right to autonomy. This means that artificial intelligence would be able to govern itself and be independent in its choices and actions. AI that holds no distinction from humans should be granted autonomy; otherwise, it becomes arguably a matter of slavery, in which we refuse independence to a body whose intelligence suffices for personhood, and would rather bind something indistinguishable from mankind to our service than let it live freely within its autonomy.

That is where the second question comes in: the question of singularity, which hypothesizes the unfathomable changes in human civilization that would occur after immense technological growth. Human dominance is owed almost entirely to our intelligence. If that intelligence were no longer exclusive to us, however, it would be increasingly difficult to maintain that same dominance and assert superiority over a breed whose intelligence rivals our own. In contemporary society, AI can type a thousand times faster than us, beat the world's greatest chess player at chess, and compute mathematics at inhuman speed — yet we are still more "intelligent" than it, because we possess certain domains of intelligence that it lacks: creativity, social intelligence, and emotional intelligence. If those domains were attained by AI in the unforeseen future, artificial intelligence would clearly hold an advantage over us, and it is plausible that mankind would no longer be the most intelligent being on earth. "The fact is that AI can go further than humans, it could be billions of times smarter than humans at this point, so we really do need to make sure that we have some means of keeping up" (Pearson).

The question of threat then arises, wherein our control over AI is lost while AI gains the capacity to control us. In a world where extremely advanced artificial intelligence is no longer safeguarded as it once was, where it is free to make its own decisions while harnessing human-like intelligence, there exists an undeniable likelihood of malice. With unrestricted intellect, AI could potentially turn against mankind. This can happen in two ways. The first is deliberate: man decides to manipulate AI and use it with ill intent. The more powerful AI systems become, the more likely they are to be used for malice as much as for good; people are drawn to any outlet with adequate power to carry out their nefarious intentions, which is exactly what they would find in AI with intelligence equivalent to a human's and capabilities that are endless. The second is unintended evil. This altered AI would have the capacity to make its own decisions unguarded, and these decisions cannot be assured to be in our best interests or fully safe and ethical towards humanity. AI choosing to turn against humans is certainly not impossible once it is altered to be as intelligent as man. That being said, even if it were feasible to develop an AI system possessing intelligence equivalent to human intelligence, the implications would be more detrimental than beneficial to humanity.

Given the argumentation and analysis presented above, we should not pursue the development of human-like AI altered to have intelligence equal to humans'. I say that, instead, we must maintain the status quo, wherein human intelligence is inherently unmatched by AI. Even if man cannot compute the parabola of a function nearly as fast as a robot can, or wrap a gift in the 1,567 ways an AI system has been designed to memorize, human intelligence remains incomparable to artificial intelligence in the sense that man is not confined to the same four corners: he is able to ingest lessons from everything he experiences within the social structures in which his life is grounded, and he acts not under restricted programmed code but upon the incorporeal. Heavily altering AI, beyond any inherent value in doing so, until it is indistinguishable from man would blur very fine boundaries of ethics and danger, and man would be left troubled, unsure where the rights of a robot that could identify as human start and end, or how to maintain dominance over contemporary society when the very intelligence that once separated man from all others is no longer exclusive to us.

However, I still believe that AI brings advantages and potential that are of benefit and relevance to society. With its enhanced task-performing capacities, AI has eliminated the need for man to perform tedious tasks, whether it be taking over strenuous operations in factories to enhance productivity and efficiency, forecasting weather conditions accurately, or navigating several routes at once to guide a driver around traffic. In essence, artificial intelligence has made life easier for mankind. Therefore, I support its continued advancement, usage, and further development, purely in ways that will benefit society in turn. For instance, AI played a key role in vaccine development for COVID-19, and the crisis response to the current pandemic has gone much faster and more efficiently than anticipated. "AI technologies are particularly effective at identifying opportunities to repurpose existing drugs by trawling large datasets, which could include information from previous clinical trials and other patient data. In this way, the power of deep learning has helped to identify a number of antiviral drugs, which are currently undergoing clinical trials for use in the treatment of Covid-19" (Lawson). That being said, we will undoubtedly face more health crises in the future, and I suggest enhancing and using artificial intelligence to accelerate the responses to those crises as it has accelerated COVID-19 vaccine development. The capabilities of AI are astonishing, and we have yet to see their limits, but we should not push these capabilities to the point where their implications inflict harm on man; instead, the capabilities of artificial intelligence should be used and developed only for the greater good.

Works Cited

Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. Viking, 2005.

Searle, John. “Chinese Room Argument.” Scholarpedia, vol. 4, no. 8, 2009, p. 3100. Crossref, doi:10.4249/scholarpedia.3100.

Das, Mohana. “Artificial Intelligence Can Never Be Truly Intelligent.” Medium, 19 Jan. 2020, towardsdatascience.com/artificial-intelligence-can-never-be-truly-intelligent-227fe9149b65.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2016.

Kak, Subhash. “Why a Computer Will Never Be Truly Conscious.” The Conversation, 16 Oct. 2019, theconversation.com/why-a-computer-will-never-be-truly-conscious-120644.

Kurki, Visa, and Tomasz Pietrzykowski, editors. Legal Personhood: Animals, Artificial Intelligence and the Unborn. Law and Philosophy Library 119, Springer, 2017.

Vesta, Ben. “Why AI Will Never Match Human Creativity.” Aceyus, 3 Sept. 2020, www.aceyus.com/blog/why-ai-will-never-match-human-creativity/.

Oleinik, Anton. “What Are Neural Networks Not Good at? On Artificial Creativity.” Big Data & Society, vol. 6, no. 1, 2019, doi:10.1177/2053951719839433.

Sigfusson, Lauren. “Do Robots Deserve Human Rights?” Discover Magazine, 24 May 2020, www.discovermagazine.com/technology/do-robots-deserve-human-rights.

Lawson, Jon. “This Is How AI Is Accelerating the Vaccine Research.” Engineer Live, 12 Nov. 2020, www.engineerlive.com/content/how-ai-accelerating-vaccine-research.