Facebook’s suicide prediction software, which affects thousands of lives, is dangerous in untrained hands, yet regulators allow it to operate without oversight. Photograph: Dado Ruvić/Reuters

Facebook is predicting if you'll kill yourself. That's wrong


Facebook says its suicide prevention software is not a health screening tool. But regulators must take notice

In 2017, Facebook started using artificial intelligence to predict when users might kill themselves. The program was initially limited to the US, but Facebook has since expanded it globally; it now scans nearly all user-generated content in most regions where the company operates. When the software identifies users at high risk of suicide, Facebook’s team notifies police and helps them locate those users. It has initiated more than 3,500 of these “wellness checks” in the US and abroad.

Though Facebook’s data practices have come under scrutiny from governments around the world, its suicide prediction program has flown under the radar, escaping the notice of lawmakers and public health agencies such as the Food and Drug Administration (FDA). By collecting data from users, calculating personalized suicide risk scores, and intervening in high-risk cases, Facebook is taking on the role of a healthcare provider; the suicide predictions are its diagnoses and the wellness checks are its treatments. But unlike healthcare providers, which are heavily regulated, Facebook’s program operates in a legal grey area with almost no oversight.

Though the program may be well intentioned, it carries serious risks, which I describe in a forthcoming article in the Yale Journal of Law and Technology. Those risks include high false-positive rates that can lead to unnecessary hospitalization and forced medication, potentially violent confrontations with police, warrantless searches of people’s homes, and the stigmatization of and discrimination against people labeled as high risk for suicide.

Facebook says its suicide prediction technology is not a health screening tool and that it merely provides resources for people in need. But the evidence suggests otherwise. Facebook assigns users risk scores ranging from zero to one, where higher scores reflect greater perceived suicide risk. This practice is comparable to a suicide prediction program at the Department of Veterans Affairs called the Durkheim Project. That program, which ran from 2011 to 2015, analyzed veterans’ social media activity to calculate suicide risk. Unlike Facebook’s unregulated system, however, the Durkheim Project was heavily regulated by state and federal laws designed to protect patients and research subjects.

How can the same technology be regulated in one context and almost completely unregulated in another? The answer is that most tech companies need not comply with laws such as the Health Insurance Portability and Accountability Act (HIPAA), which protects patient privacy, or the federal Common Rule, which safeguards human research subjects. Current health laws therefore do little to protect consumers as tech companies assume roles historically reserved for doctors and medical device companies; Facebook’s efforts are part of that trend. For instance, Apple’s new smartwatch monitors people’s hearts for signs of arrhythmia, yet Apple need not comply with HIPAA. Similarly, Google recently patented a smart home that could potentially identify substance use disorders and early signs of Alzheimer’s disease based on video and audio cues.

Suggesting that these companies are not venturing into medical practice is like saying that using an X-ray machine constitutes the practice of medicine when doctors operate it, but not when a Silicon Valley startup does. That distinction makes no sense: the machine poses similar risks to people regardless of the context, and it would be unsafe to operate without proper training and certification.

Similarly, Facebook’s suicide prediction software, which affects thousands of lives, is dangerous in untrained hands, yet regulators allow it to operate without oversight.

Most US states have laws that discourage the practice of medicine by corporations and unlicensed individuals. In California, practicing medicine without a license includes making diagnoses or providing treatment while unlicensed, and violations are punishable by fines of up to $10,000 and imprisonment for up to one year. The state defines “diagnosis” as “any undertaking by any method, device, or procedure whatsoever … to ascertain or establish whether a person is suffering from any physical or mental disorder.”

Critics may argue that Facebook is not practicing medicine because it is not making diagnoses. But suicidal ideation is a recognized diagnosis in the International Classification of Diseases (ICD-10), a system created by the World Health Organization and used by healthcare providers worldwide. Critics might also contend that Facebook’s risk scores are not diagnoses because they are merely statistical inferences rather than certainties. But diagnoses made by physicians are rarely certain either, and they are often expressed as percentages or probabilities.

In the diagnostic process, doctors collect data from patients and make inferences based on that information while drawing on their training and experience. Similarly, Facebook’s prediction algorithms make inferences, expressed as probabilities, based on their training (in this case machine learning) and data collected from Facebook users. In both cases, the result is a label, a categorization, a diagnosis.
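
Facebook has not published the details of its model, so the sketch below is a rough illustration only: it shows how a generic machine-learning text classifier can turn posts into a probability score between zero and one. The training examples, the model choice, and the review threshold are all invented for the example and are not Facebook’s.

# A toy illustration (not Facebook's actual system) of how a text classifier
# can turn posts into a probability-like "risk score" between zero and one.
# The training data, labels, and threshold below are all hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = flagged by human reviewers, 0 = not flagged.
posts = [
    "I can't take this anymore, I want it all to end",
    "Feeling hopeless and alone tonight",
    "Had a great day at the beach with friends",
    "Excited for the concert this weekend",
]
labels = [1, 1, 0, 0]

# Train a simple bag-of-words classifier; real systems are far more complex.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The model's output is a probability between 0 and 1 -- a statistical
# inference, not a certainty, much like the scores described above.
new_post = ["Nothing feels worth it anymore"]
score = model.predict_proba(new_post)[0][1]
print(f"risk score: {score:.2f}")

# A hypothetical escalation rule: scores above a chosen threshold are
# routed to human review, which in turn could trigger a "wellness check".
THRESHOLD = 0.5
if score > THRESHOLD:
    print("flag for human review")

Whatever the sophistication of the real system, the shape of the output is the same: a probabilistic label attached to a person.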

Facebook and other social media platforms have been described as the new governors: they influence speech, the democratic process, and social norms. All the while, they are quietly becoming the new health regulators, creating, testing, and implementing health technologies with no outside oversight or accountability. For everyone’s safety, state and federal regulators should take notice.

  • Mason Marks is a research scholar at the Information Law Institute at NYU, a visiting fellow at the Information Society Project at Yale Law School, and a doctoral researcher at the Center for Law and Digital Technologies at Leiden Law School

  • In the UK, Samaritans can be contacted on 116 123. In the US, the National Suicide Prevention Lifeline is 1-800-273-8255. In Australia, the crisis support service Lifeline is 13 11 14. Other international suicide helplines can be found at befrienders.org
