My Tenuous Optimism Towards The Singularity

Recently, a trend has reemerged in cinema regarding artificial intelligence, probably the third such cycle since the early ’70s. From HAL and Joshua to Agent Smith, films (and popular science fiction as a whole) remain transfixed on the idea of computers gaining sentience and subsequently going out of their way to make us regret it. They gain emotions soon after we foolishly throw them the car keys, and they respond by trying to run us over. These films involve the predicted singularity: the point where sentient machines surpass the capabilities of their creators. Though a few of them tell a story on a grand scale, others are low budget, limiting themselves to just the “trigger point,” when a single machine begins to think for itself. These movies include 2014’s atrocious misfire Transcendence, recent works like Automata and The Machine, and the upcoming Chappie and Ex Machina. Almost all involve small groups of scientists (or as few as one) using what are often existing computing techniques to create life.

By and large, these stories overestimate the capabilities of present-day computers, misrepresenting their fundamental function. A bot in Call of Duty can’t evolve into something greater; it can’t rage-quit after a stressful session. No matter how powerful your home PC is, it can never fall in love with you (sigh). Creating artificial intelligence on a scale that could threaten us is not only ferociously difficult; the fear also assumes said intelligence would threaten us in the first place, and that we as a species would always take preemptive action and strike first.
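To make the point concrete, here is roughly what a shooter bot amounts to under the hood. The sketch below is a hypothetical illustration in Python (the function name and the sensor fields are my inventions, not anything from an actual game engine), but the shape is honest: a fixed table of hand-written rules.

    # Hypothetical sketch of a shooter "bot": a hand-authored rule table.
    # Nothing here can learn, evolve, or rage-quit; it only does what was written.

    def bot_decide(state):
        """Pick an action from fixed rules. `state` is a dict of sensor
        readings the game engine hands the bot each frame."""
        if state["health"] < 25:
            return "retreat_to_cover"
        if state["enemy_visible"] and state["ammo"] > 0:
            return "fire_at_enemy"
        if state["ammo"] == 0:
            return "reload"
        return "patrol_waypoints"

    # Example frame: healthy, enemy in sight, ammo in the clip.
    print(bot_decide({"health": 80, "enemy_visible": True, "ammo": 12}))
    # -> fire_at_enemy

Given the same inputs, that function returns the same answer until the heat death of the universe; there is no seed in it for sentience to sprout from.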

Movies have been speculating about artificial intelligence in the modern world for longer than there have been computers. Why can’t programs be feeling, sensitive people like in, you know, Tron? The first issue lies in the massive gulf between popular expectations and the greater mysteries surrounding the very idea of artificial intelligence. Consider this: most of the coolest gizmos in Star Trek exist now. Seriously, the tricorder, hypospray, wireless global communication, tablet computers, even friggin’ phasers have already been created. The few things left are the ones we apparently really want: time travel, warp speed, transporters, and androids. These advances may never come; time travel and warp speed could very well be impossible. What about artificial intelligence? I believe modern films have led the public to accept artificial intelligence as easy, when it is in fact really, really hard.

Why? Well, for one, who decides what artificial intelligence even is? The designers of Call of Duty refer to their bots as artificially intelligent. So do the creators of Watson, the Jeopardy! champion, though comparing those two would be like comparing Newton to an orangutan. The point is that both are still AI. Some experts have set the bar for AI so low that by their own estimations, we’ve already passed the threshold of creating self-aware machines. But once again, no robot apocalypse. Why?

That could be because movies and many books have ostensibly set the goal line at the creation of… well… a person. To be flippant, the ultimate aim of all AI research is evidently to construct an intelligence that is self-aware, generates an emotional response, and is able to acknowledge the possibility of its own death. In essence, it’s to remove the “artificial” part of the term and create a synthetic being. Such a definition is extremely arbitrary. It’s also a myth; in reality, there is no consensus. Almost no AI research is devoted to developing computers like this. On top of that, psychologists still debate the concepts of intelligence and consciousness, and how the two relate to each other. What truly is emotion? How could we equate a machine’s response with emotion? There’s the possibility that laymen (of which I admit to being one) credit humans with emotions precisely because they’re elusive, while denying them to machines because their processes can be measured. If emotions are so difficult to define in ourselves, how can we understand (or even replicate) them in computers?

Nevertheless, there are hundreds of scientists calling on AI researchers to either be cautious or stop their work altogether. When I read that, I admit to getting my hackles up. The singularity not only postulates that we would lose control of AI, but that it would promptly seek our extinction, either intentionally or as an unfortunate side effect. This raises the obvious question: what constitutes “losing control”? Computers already control our lives in many ways. I can’t imagine life without mine. What most people imagine when picturing a future with machines is a house AI welcoming you home, robots doing your errands, and lonely individuals seeking out synthetic companionship. Then, when the singularity occurs, all those machines will turn around and start farming us for electricity. Whoa… talk about a mild overreaction on their part, especially considering they should never have had emotions in the first place. I’m not talking about imposing some Asimov-style commandments; I’m talking about computers not requiring any significant level of personality. An intelligence that cooks my food would be neither required nor designed to generate an authentic human-like emotional response. I don’t need my car to feel bad because I forgot to wax it one week. That makes no sense.

Excluding those, that leaves the computers designed from the ground up to be artificially intelligent. How would we create one: whole cloth in our own image (as scripture would have us believe), or in a simulated microcosm aping evolution? There are currently experiments attempting both. But to think a person will create a human-like personality out of Radio Shack spare parts in the next few years seems a bit of a stretch. My best friend, a fourth-year computer science major, still reserves the possibility of a “Ghost in the Shell” spontaneous emergence event within the sea of information, claiming that the exabytes of self-altering programs and malicious viruses swimming through the networks have already produced systems that accomplish effectively nothing; how can we assume something greater may never form out of that? Presuming it were possible, could it be replicated, and would it be immediately hostile?
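For the evolutionary route, the flavor of those experiments can be boiled down to a toy genetic algorithm. The sketch below is my own illustration under the simplest possible assumptions (bit-string genomes, mutation-only reproduction, and an all-ones target I picked arbitrarily), not any particular lab’s code.

    import random

    # A toy "microcosm": evolve random bit strings toward an arbitrary goal.
    # Everything here (genome size, mutation rate, target) is an assumption
    # made for illustration; real artificial-life experiments are far richer.

    TARGET = [1] * 20  # the "environment" rewards genomes full of ones

    def fitness(genome):
        # Count positions where the genome matches the target.
        return sum(1 for g, t in zip(genome, TARGET) if g == t)

    def mutate(genome, rate=0.05):
        # Flip each bit with a small probability.
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            print(f"perfect genome after {generation} generations")
            break
        # Selection with elitism: the fittest fifth survives unchanged,
        # and the rest of the next generation are their mutated offspring.
        survivors = population[:10]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

Notice what actually happens when this runs: the population converges on exactly the goal we wrote down for it, and nothing more. Scaling that loop up into something that wakes up and resents us is the part nobody has demonstrated.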

So, it won’t be the robots rising up; it’ll be the one god-like AI assuming control. I watch Person of Interest—it could happen tomorrow!

On the other hand, if a god-like AI tried to wipe out humanity, it wouldn’t do a very good job given our current technology. In the end, it would only hasten its own destruction as the power grid failed without human maintenance. Mankind is still necessary to operate the machinery of civilization; a completely automated society remains a dream of the distant future. Let’s assume, though, that we end up creating robot servants en masse and they fall under the servitude of a diabolical AI. Reflecting upon history, no slave rebellion was ever directly responsible for eradicating the empire that did the enslaving. Looking to evolution as an allegory, the Neanderthals weren’t wiped out by war (well, not entirely); recent evidence points to environment and interbreeding as the ultimate culprits. There is still a little Neanderthal in all of us.

Some people have always been scared of progress, but they have either been silenced by the majority or forgotten by history. The industrial age didn’t spark the end of humanity. Why would the next jump do so? It’s unfounded optimism, some would say: an advanced intelligent computer may not even understand concepts of physical reality. Its “personality” would be entirely alien, bringing us back to the point that our definition of intelligence is very human-centric. While films encourage the belief that machines will think and act like us, AI researchers are still challenging the definition of what “thinking” even is. Would you be able to recognize a truly alien intelligence? To apply a quote from a favorite movie, “We don’t want other worlds; we want mirrors.” While the public watches and expects the rise of synthetic personalities, a truly sentient AI could arise without us ever knowing. Yet, on a scale of things that frighten me, I’m more concerned about being struck by an errant cosmonaut.

In a plug for my own property, NeuroSpasta, I postulated a city of unbridled freedom, where synthetic intelligences function in daily life alongside humans. At no point was there a threat of toasters gaining sentience and electrocuting their owners. Even considering the possibility of a future where sentient AIs run every facet of our society, I can’t see that being a bad thing. There’s a distinct possibility that what people are truly scared of is their way of life ending. That will happen regardless.

Will artificial intelligences take a side in our geopolitical environment? Will they be liberal or conservative? If we instill morals and ethics, and hope they stick, will they side with free-market capitalism or prevent the eradication of the middle class? Will they usher in an era of human enlightenment or suppress our freedom? That fear is a bit more understandable, more so than our complete extinction. My bet is on enlightenment, on a mutually beneficial future, one where artificial intelligence evolves faster than humanity, paving a road we’re able to follow. I believe that because to believe the alternative risks exposing a vacant universe. Just like the industrial age, nuclear power, and climate change, the rise of artificial intelligence is a test every civilization must face. To think that a planet is doomed to fail one of these tests points to an imperfection in intelligent life itself, as if any advanced civilization were fated to stumble at one of these hurdles. Perhaps surviving the singularity is just another hurdle, one we’ll look back upon as not much of a hurdle at all. And like the other obstacles, we’ll emerge a different species than before.

…unless climate change kills us first, because seriously, that’s a threat I can get behind. I’d lift my pen or pistol (and just for the record, I deplore firearms) to fight for a clean planet before I’d fight any machine offering to wash my dishes. I’d rather live in a robot-controlled utopia than in a post-apocalyptic, human-occupied landfill. And if by some weird confluence of events the robots did rise up to control the world, it would be because we were already heading out the door anyway.

If there is a future involving a level of artificial intelligence equal to or surpassing our own capacity, I believe we will rush to it, past it, but ultimately survive it. Civilization will change; it may “end” as we know it, just as it ended as past generations knew it, but as a species, humanity will endure. Is that a naïve assumption? Yes. I admit to having no basis for that conclusion other than pie-in-the-sky optimism that future AI overlords will read this and take pity on me.

Hail Megatron?


Chris Dias

Chris Tavares Dias is the literary equivalent of that crusty burnt cheese at the bottom of the fondue pot. Some people claim he looks like Matthew Perry. He would like that to be true. It's not. In 2010, Chris co-wrote and created Amethyst Foundations, a 4th Edition setting based on the previous version under 3.5. It has received critical acclaim for integrating science fiction into classical fantasy. In August of this year, Chris was last seen staring at a dead raven that had fallen beside his car. Two months later, his watch and notepad were found in the stomach of a basking shark that had washed ashore off the coast of Florida.