In January, a 60-year-old woman was rescued from a suicide attempt after a medical team was alerted to her Facebook post. She had posted, “This can’t go on. From here I say goodbye.” Facebook’s AI detected the post and reported it to the relevant authorities. Thanks to artificial intelligence, a life was saved.
In a news report on the incident, Aili McConnon notes in her article “AI Helps Identify People at Risk for Suicide” that Facebook has long used user data for suicide prediction. Facebook’s AI system can “spot potential suicidal language,” which reminds me of the program Eliza introduced by Janet Murray in Hamlet on the Holodeck. The AI in both Facebook’s system and Eliza reacts to keywords and other cues, an example of procedural affordance. Such a system can analyze electronic health records, social media posts, and audio and video recordings to find common threads among people who have attempted suicide. Algorithms then run the prediction process and assess who is most likely to be at risk.
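The Eliza-style keyword matching described above can be sketched in a few lines. This is a toy illustration only: the risk phrases, scoring, and function names are invented for the example and are in no way Facebook’s actual system, which relies on far more sophisticated machine-learning models.

```python
# Toy sketch of Eliza-style keyword spotting (illustrative only).
# The phrase list below is invented for this example; a real system
# would use trained models over far richer signals.
RISK_PHRASES = {"goodbye", "can't go on", "no reason to live"}

def flag_post(text: str) -> bool:
    """Return True if the post contains any risk phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in RISK_PHRASES)

posts = [
    "This can't go on. From here I say goodbye.",
    "Had a great day at the park!",
]
print([flag_post(p) for p in posts])  # → [True, False]
```

Even this crude matcher shows the procedural idea: the program does not understand the post, it reacts mechanically to surface patterns, just as Eliza did.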
This suicide detection system is considered highly significant. The traditional approach, which relies mainly on a doctor’s diagnosis, is seen as less efficient. Moreover, the accuracy of AI prediction is expected to reach 80–90% in the next two years. Despite the advantages of this suicide prediction system, critics have raised concerns about user privacy. But since it is no longer a secret that communication and digital media companies use our data anyway, why not use it to benefit us?