Digital Phenotyping

“How Companies Scour Our Digital Lives for Clues to Our Health”

An emerging field, digital phenotyping, tries to assess people’s
well-being based on their interactions with digital devices.

https://www.nytimes.com/2018/02/25/technology/smartphones-mental-health.html

 

In her New York Times article, Natasha Singer discusses the new field of digital phenotyping, which attempts to evaluate people’s well-being based on their interactions with digital devices. Researchers and tech companies have begun tracking users’ social media posts, calls, scrolls, and clicks in the hope of connecting changes in behavior to potential disease symptoms. Importantly, Singer notes that while some of these services are “opt-in,” at least one is not. She quotes Dr. Sachin H. Jain, CEO of CareMore Health, a health system that helped study Twitter posts for indications of sleep problems. According to Dr. Jain, “our interactions with the digital world could actually unlock secrets of disease,” and approaches similar to CareMore’s Twitter study might one day help determine whether patients’ medicines are working and how effective their treatments are. The field is new and relatively unknown, and even its supporters concede that digital phenotyping could ultimately be no better at detecting health problems “than a crystal ball.” Notably, however, Singer reports that these doubts are not slowing the rush into the field, by start-ups and by giants such as, interestingly, Facebook, despite open questions about the efficacy of these programs and, as one begins to suspect, data privacy.

Facebook is conducting one of the “most ambitious” efforts in this area, using artificial intelligence to analyze language patterns and scan posts and video streams for signs of possible suicidal thoughts. However, not only is Facebook doing this without giving users the option to opt out of the scans, but it is also, according to another doctor, walking the line of practicing medicine both without a license and without proof that the benefits of what it is doing outweigh the harms. Other, smaller tech companies such as Sharecare are also rolling out apps and features in this area. Sharecare uses a voice-scanning feature to analyze tone and even characterize its users’ relationships with the people they are talking to, without informing those on the other end of the call who are not users of the app, a striking concern from an information-privacy standpoint.

 


While it is not yet apparent whether these scans and other tech-based healthcare developments provide more benefit than harm, what matters more here is how closely Singer’s article connects to our discussion of tracking and privacy, as well as the ethical implications of these technologies. From a personal standpoint, I can say without hesitation that I would feel extremely uncomfortable having this type of activity and behavior tracked without my knowledge. Additionally, in the case of apps like Sharecare’s, speaking with a person who uses the app while unaware that an algorithm is actively feeding that person information about our relationship is highly disconcerting.

The debate becomes most complex at precisely this juncture of consent and effectiveness. If people are given the option to accept or decline, for example, Facebook’s language analysis, those who decline may not reap the benefits of a service that could prove genuinely valuable to society. The question that arises, then, is one of Facebook’s ethical obligations. Does Facebook have a responsibility, if the technology is available, to monitor its network for suicidal indicators? Because of its position as a social network, Facebook has access to information about human behavior and activity that even its users’ closest friends or family may not be able to discern. Thus, if Facebook can identify indicators of mental health problems based on its users’ activity, is it obligated to do something about them? Are people obligated to submit to this tracking for the greater good of mental health in this country, or the world? And if this is a global mental health issue, does Facebook even need to ask?

 


 

While Singer’s article delves more deeply into the specifics of some of these technologies, reading it inspires a discussion that is vital to the current landscape of internet privacy and tracking, to the complex realm of corporate responsibility, and to the various fine lines between them that seem to be growing increasingly difficult to identify.