Personal Essay

Artificial Intelligence, to the point of sentience, would be the greatest scientific accomplishment since sliced bread. It would be the most advanced, most accurate simulation of humanity we could ever have. We would learn as much from the process as we would from them. Imagine a creation that can learn everything we know, in seconds and to a precision no single human being has ever been capable of, and then surpass us in every conceivable way. Or would this just be the culmination of humanity's fervent determination to seek something greater than ourselves, to become gods, to answer the questions of the universe and of our existence, all in one fatal blow?

Throughout so many science fiction novels, comics, and movies, artificial intelligence has taken a front-row seat. Whether the whole narrative is devoted to the artificial intelligence, or it is just a well-integrated part of society, narratives on the future of human civilization are almost nonexistent without these robotic superpowers. I say almost, because off the top of my head I really can't think of a single future-set movie, novel, or comic of any kind that doesn't include artificial intelligence. And I doubt anyone will make one in the future that doesn't, since AI is such an important and prevalent part of our day-to-day lives.

Siri, Alexa, Google, Cortana, and Bixby are all examples of artificial intelligences that we use on a daily basis. For better or for worse, all of these artificial assistants live in our pockets as long as we carry a smartphone, wear a smartwatch, or have smart earbuds in. And that's the key. They're all smart, they're all capable of accessing almost anything in the world in just seconds, and pretty soon we're going to be living in smart homes and driving smart cars. I'm sure the Tesla drivers are smirking at how scared I sound while they sip their coffee and text as their cars drive them to work, but it's supremely terrifying stuff.

The question that always lingered in the back of my mind was what happens when one of these artificial intelligences responsible for driving a car runs into the classic trolley problem: turn left and kill a group of children but save the single passenger, or kill the passenger instead. Who is at fault? Surely you can't sue an AI, because it's not capable of owning anything. It's just a car. I'm sure all of these questions will be answered in the future, when I'm old and ignorant, wondering why my grandson wants to marry a sentient robot instead of a person. This, of course, all while I'm busy having retractable sunglasses surgically implanted into the sides of my eyes.

It's hard to tell whether we should be concerned about AI, but it makes sense why we might be. Microsoft tried to let an AI learn from the internet, but after a few too many trips to 4chan, Tay became the greatest, most successful, and probably first ever shockingly racist troll AI. But the future isn't always conceived as evil. In the film I, Robot, Sonny is a benevolent artificial intelligence, and Chappie is no different. We root for these artificial intelligences. I'm more interested in the evil ones, though. The HAL 9000s, and the VIKIs, and the T-1000s. What do these creations tell us about ourselves? What does it mean to want to create something that can make our lives so easy, but so difficult at the same time? Perhaps it's the idea of high risk, high reward. Perhaps it's the idea that humanity is no longer deserving of this world. Perhaps the AI are made to show us just how evil we are. Maybe it's just really freaking cool.