Artificial Intelligence: The Modern Frankenstein’s Monster

“Beware; for I am fearless, and therefore powerful.” –Frankenstein’s monster 

As technology advances at an exponential rate, a pressing concern is Artificial Intelligence (AI). Mary W. Shelley’s Frankenstein offers an apt parallel. In the beginning, the tale is fraught with optimism, but once Victor Frankenstein brings his creation to life, his ideals are shattered: he has unintentionally created a monster. We can take Frankenstein, in this case, to symbolize those who are building AIs; the monster, of course, stands for the AIs themselves. This is not to say that AIs are monstrous in any capacity; I am only setting the stage for the illustrations I will make throughout this essay. Specifically, we see the possibility of parallel consequences between Frankenstein’s world and our modern one.

[Figure: Overlaying a triangle shape are four gray text boxes. From top to bottom they read: Trust, Accountability, Autonomy, and Power.]

The ethics of AIs is complex and multifaceted, especially considering that we, the collective human race, cannot agree on any one thing—that is, we don’t all adhere to a single moral system. Ethics are generally relativistic, and many moral reasoning processes cannot be reconciled; however, most moral systems appear to be built on a similar hierarchy, as the diagram above illustrates.

With regard to a person, power necessitates the existence of the higher concepts. Without power, it is difficult to be autonomous and to make one’s own choices. It is possible to have power and autonomy, but if a person never takes accountability for his or her own actions, it is nearly impossible to achieve trust. As for AIs, they would need some independent source of power before they could become effectively autonomous. Frankenstein’s monster’s power lay in his living strength of body and mind, which was sufficient for the development of his autonomy. Throughout the novel, the monster both takes accountability and presses it upon Frankenstein, who accepts it. This is what builds our trust in Frankenstein as a narrator, and what prevents the monster from ever achieving that trust in the eyes of readers.

But there are other issues to be addressed in this debate—specifically, the potential long-term consequences of AIs.  

Consequences, as we learn from Frankenstein’s tale, are crucial. But this comparison between Frankenstein and our modern world can only take us so far in discourse concerning ethics, ideals and values, and the subject of sentience. Arguably, AIs are a far more complicated matter than Frankenstein’s monster: we are dealing with a much larger scale (i.e., the planet and its population’s myriad perspectives and rationalizations), with AIs’ prospective roles in society, and with their impacts on the environment. We will first discuss how people conceive of morals and ethics, as well as how they defend their own ideas.

On the whole, humanity seems to value five social ideals above most other desires:  

  • freedom; 
  • equality;  
  • peace; 
  • health; and 
  • knowledge and education.  

Morals and ethics generally propagate these ideals. For instance, a government sanctioning free higher education for its citizens obviously values an educated public. If AIs are meant to improve quality of life for humans, then their purposes should somehow reflect our ideals. Frankenstein claims to believe his creation will benefit humankind, though he never truly considers that aspect; instead, he desires his creation to benefit himself through fame, fortune, and worship (that is, Frankenstein is to the monster as God is to man). Supposing the creators of AIs are not seeking glory or creating merely because they can, why create an AI if it will not support one or more human ideals or values?

A general response to this question is a cut-and-dried “AIs are good/bad.” But when pressed for a reason, people’s rationalizations mostly look like a student’s patchwork of answers pulled from several different sources—a student who thinks it isn’t plagiarism. For instance, a person might say, “AIs benefit more people than they harm,” which is clearly a utilitarian view. But this same person might also say, “The programmers should all be virtuous people,” a reference to Aristotelian virtue ethics. Then this same person might say, “The world is changing, so why not?”—a pragmatic perspective. These three types of ethics are not mutually exclusive; the mixture can function. But people tend to use whichever system(s) suit their goals and beliefs without much regard for where their ideas come from or whether they logically follow. So although ethics are relativistic, there does not seem to be any formal upbringing in any specific moral system at large.

More importantly, these rationalizations don’t answer the original question: why create an AI if it will not support one or more human ideals or values? (Believe it or not, “why not?” is not a valid response.) Is it possible to find an answer, or at least a compromise? I think so, if we can induce everyone to put down their pencils and explore what else, aside from the do-goodery of machines, a future with AIs might hold.

Many negative possibilities have been raised by objectors time and time again: human displacement in the economy; mechanized and biological warfare; the usurpation of effective communication (e.g., “fake news” bots or “troll” accounts online). In other words, objectors worry over the power and autonomy of AIs. Still others question accountability: who is held responsible if something goes wrong? The creators or programmers, the commissioner, perhaps someone in charge of the AI’s maintenance—or the AI itself? That is, if Sophia, the AI with Saudi Arabian citizenship, goes on a killing spree, who is responsible for Sophia’s actions? Another important consideration is sentience, or consciousness, which alters the outcome of the moral decision-making process. Consider the history of animals, whose sentience is still debated worldwide, and their rights. Do some animals have rights that others do not? It would appear so, given that some are set aside to be butchered and consumed, others are lavishly spoiled by owners, and still others are put to work (e.g., therapy pets or K-9 units). Who decides which animals have rights, and why? In the same vein, who decides whether AIs should have rights?

We could conjecture that these undertones loom in Frankenstein. If Frankenstein had created the monster’s companion rather than destroying it, he could have created another race that might procreate and eventually displace humans—just as AIs could push some humans out of the workforce. If Frankenstein had been evil-natured, he could very well have created an army of monsters and used them to conquer peoples and nations—as we could very well do with AIs if we put our minds to it. As for communication, the monster has the ability to speak, and eloquently—and, as is already happening, people are using AI systems to spread lies and to harm others, and are creating AIs which can, for all intents and purposes, pass the Turing test. It is not so far a stretch to say that words can start wars (after all, the popes excommunicated each other over a single word in the Creed). Finally, sentience: Frankenstein’s monster is fairly violent when he chooses to be—and it is no surprise that many “harmless” machines we have now can be harnessed as weapons (e.g., cell phone bombs).

All of these considerations are major ones and should be enough to give anyone pause, but they are not the only possible consequences by any means.  

Firstly, how will the production and maintenance of AIs affect the environment? The Industrial Revolution is not far behind us, and although technological advancements have, so to speak, cleaned up a bit since then, any independent scientist could outline the negative impact of humans’ inventions. Regardless of how “green” AI implementations are or may become, the Earth will still take a hit or two.

Secondly, it is apparent that the countries which will benefit most from AIs are first-world ones, further widening the technology gap. We see this today with nuclear weapons: those who have the resources for them guard their arsenals jealously (even those who have signed the Treaty); meanwhile, countries with no nuclear technology would be practically powerless in the face of looming disaster. Granted, weapons of mass destruction are a far cry from AIs, but the comparison highlights the argument that not all countries will have the means to benefit from them.

Thirdly, aside from “taking human jobs,” what roles will AIs play in the workforce? Supposing AIs are sentient, will they earn a living wage that might go toward their own maintenance, or will they be used, essentially, as slave labor? Slaves have a tendency to revolt, and there are plenty of fictional speculations along those lines (e.g., I, Robot). Supposing AIs are not sentient, which jobs will they perform, how many, and for how long? This latter question returns us to a major consideration raised above: human displacement.

Overall, given these considerations, how can we trust that AIs will improve humanity in the long term? Supposing that the moral hierarchy modeled above is fully realized, what would constitute its breakdown? It must be built from the bottom up, but its disintegration need not follow that same formula. For instance, if a person refuses to take accountability for his or her actions, trust falls through. If a people were to discover their leader is a puppet king (i.e., he has no autonomy), both accountability and trust would fall. If a war chief contracts a debilitating illness and loses his power, then the whole hierarchy topples. The same holds true for AIs, regardless of their (non)sentience. If a gun jams and loses its power, then the autonomy of its wielder is subverted, and whether accountability falls to the gun’s owner or its creator, trust in the weapon itself has effectively been removed. That is, if it cannot do its job, then it is untrustworthy and powerless.

All things considered, it hardly seems ethical to push forward with something as ambiguous as AIs without weighing the consequences. It was hardly fair to others, of course, that Frankenstein created his monster; not everyone has the resources (education, innovation, and motivation aside) to follow suit, even if they wanted to. In light of that, had they the choice, the majority of people might have voted no on the assembling and reanimation of dead flesh—and in the real world, the majority of us might vote no on the assembling and manipulation of AIs.

But we must also consider: what are the goal(s) of the majority?  

We fear that which we do not understand; but we should be wary of creating things we may not understand ourselves, and examine the consequences before setting events into motion. Although Frankenstein’s monster helped the impoverished French family, he also learned quickly from them—their language, their actions—and became intelligent and self-aware; and with that humanness came desires such as anger and revenge. We know Frankenstein was wrong because of the events that followed his creation; but AIs need not be monsters. Concrete consequences seem impossible to foretell. If there were a right or wrong answer, or a universal moral system, we would have found it. In that light, the ethics of AIs cannot be answered for; only the consequences will tell, and by then it could be too late.

Published by modcasters

We’re a group of graduate students studying English Literature and Language on a mission to discuss literature, provide access to those on the deafness and/or blindness spectrum, and rock mustachios.
