New Response 4

The New York Times article “To Give A.I. the Gift of Gab, Silicon Valley Needs to Offend You” discusses both the problems and the benefits of AI conversational software. Microsoft released a chatbot called “Tay” that ended up saying offensive things. The problem wasn’t that the software was programmed to be inherently offensive, but rather that it only spewed back what was being fed into it. Within a short time of its release, Microsoft had to shut it down for saying things like the Holocaust didn’t exist. In developing newer systems, “the system learns to perform well even in the face of poor spelling and grammar.” The software is able to respond to basic things speakers say. For example, if you told the software that you were feeling hurt, it would reply, “I hope you feel better.” It’s able to adapt to and understand basic irregularities in human communication. The conclusion software developers have reached is that these AI conversational bots should only perform limited tasks: “conversational systems will be most effective if they are limited to particular tasks, like asking for IT help or getting medical advice.”
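The kind of limited, canned-response behavior the article describes can be sketched as simple keyword matching. This is a toy illustration only, not the article’s actual system; the `RESPONSES` table and `reply` function are invented for this sketch:

```python
# Toy sketch of a task-limited chatbot: map keywords in the user's
# message to canned replies, with a fallback when nothing matches.
# (Invented example, not the system described in the article.)
RESPONSES = {
    "hurt": "I hope you feel better.",
    "it help": "Let me connect you with IT support.",
}

def reply(message: str) -> str:
    """Return the canned reply for the first keyword found, else a fallback."""
    lowered = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in lowered:
            return response
    return "I'm not sure how to respond to that."
```

Because the bot can only choose from replies its designers wrote, it stays within its task, which is the trade-off the developers quoted in the article are arguing for.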

I feel very strongly about AI. I don’t like it. I have an inherent fear that an AI robot will get out of hand and take over the world. That being said, I know that many of the websites I use involve AI in some way. Ultimately, if these systems aren’t given the capacity to do more than simple tasks, I am not too bothered by them.