Newspaper Response #4: “Humans May Not Always Grasp Why AIs Act. Don’t Panic”

This article takes an interesting and valid approach to a societal fear about artificial intelligence (AI). Because machine learning gives AI systems the ability to train themselves, computers can now do far more than they could only a few years ago, such as drive cars; however, humans do not fully understand how an AI that relies on machine learning reaches its decisions, and that opacity becomes worrying as the technology moves into spaces where mistakes are dangerous. For example, if a self-driving car in a crowded city made a mistake, someone could be seriously injured. While this obstacle drives some individuals to distrust artificial intelligence entirely, the author takes a more open-minded approach and draws an analogy between new AI systems and humans. Just as we struggle to explain AI decisions, neuroscientists often cannot explain why humans do what they do, yet society has created ‘coping mechanisms’ such as laws, rules, and regulations to manage these uncertainties. The author believes that many of these rules, alongside new ones, can be applied to manage the unknowns of machines as well. If we accept that there will be growing pains with AI, and if people monitor machines and hold them accountable just as humans hold other humans accountable, then we can benefit from revolutionary technology while managing its risks.

Personally, I agree with many of these points, but I feel that accountability will be a difficult gray area to manage. Our justice system often holds humans responsible for the decisions they make, but it will certainly be harder to fault specific individuals or organizations when computers make errors. The article suggests placing liability on “whoever sold the product or runs the system,” but that simplification does not address the many case-by-case complications that will arise. Further, if machine learners are deeply shaped by their creators and system managers, aren’t humans also deeply conditioned by their environment? When individuals make severe mistakes, justice systems naturally administer consequences, yet our society tends to swing a swift and heavy gavel of blame. Sometimes it could be more effective to examine societal interconnectedness to find the root of certain issues.

Source:

https://www.economist.com/news/leaders/21737033-humans-are-inscrutable-too-existing-rules-and-regulations-can-apply-artificial