“I don’t want to live no more.” These were the final words posted by a 14-year-old from Miami, a year before she launched a Facebook live stream and hanged herself in front of her webcam. Her two-hour live stream was reportedly viewed by thousands of people, including a friend who called the police. Unfortunately, the authorities arrived too late to save her.
While this story is a particularly alarming one, it’s sadly not unique — suicide is the second leading cause of death among teenagers in the US, Europe and South-East Asia. Every day, millions of social media posts, chats, and queries on Facebook, Snapchat, Google, Siri and Alexa relate to mental health. These posts could act as a trail of breadcrumbs toward people most at risk of suicide. But is there a responsible way for technology companies to use this information to intervene? How should Google or Siri respond to someone searching for information about depression or suicide? How should Snapchat react when two teens talk secretly about cutting themselves?
These questions aren’t just hypothetical; some tech companies are already taking action. In the US, any Google search for clinical depression symptoms now surfaces a knowledge panel with verified educational and referral resources, along with a private screening test for depression (the PHQ-9). Google has stated that it will not link a person’s identity to their test answers but will collect anonymized data to improve the user experience. As millions of people take such online tests, a vast trove of data is generated that may eventually be useful, in combination with other information about each user, for generating a digital fingerprint of depression.
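For context, the PHQ-9 itself is mechanically simple: nine items, each rated 0 to 3, summed into a total from 0 to 27 with standard severity bands, plus a ninth item about self-harm that is conventionally flagged on any non-zero answer. Here is a minimal Python sketch of that scoring (the published instrument's arithmetic, not Google's implementation, which is not public):

```python
def score_phq9(answers):
    """Score the PHQ-9 depression questionnaire.

    `answers` is a sequence of nine integers, one per item, each in
    0..3 (0 = "not at all", 3 = "nearly every day").
    """
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 expects nine answers, each scored 0-3")

    total = sum(answers)  # 0-27
    if total <= 4:
        severity = "minimal"
    elif total <= 9:
        severity = "mild"
    elif total <= 14:
        severity = "moderate"
    elif total <= 19:
        severity = "moderately severe"
    else:
        severity = "severe"

    # Item 9 asks about thoughts of being better off dead or of self-harm;
    # any non-zero answer is conventionally flagged for clinical follow-up.
    self_harm_flag = answers[8] > 0
    return total, severity, self_harm_flag


print(score_phq9([1, 2, 1, 0, 0, 1, 2, 1, 0]))  # (8, 'mild', False)
```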
In response to live-streamed suicides, Facebook launched an artificial intelligence (AI) algorithm that scans people’s posts (in some countries) for images or words that may signal self-harm. When the algorithm spots such signals, resources are offered to the user and an internal “Empathy Team” is alerted. First responders may be notified if those first two steps do not avert the self-harming behaviour. People cannot opt out of this Facebook initiative. Facebook CEO Mark Zuckerberg said the algorithm helped more than 100 people in its first month.
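Facebook has not published how this pipeline works internally, but the tiered escalation described above (flag, then resources, then human review, then first responders) can be sketched as a small decision function. Every name and threshold below is invented for illustration:

```python
from enum import Enum


class Action(Enum):
    NONE = "no action"
    OFFER_RESOURCES = "show crisis resources to the user"
    ALERT_REVIEWERS = "route the post to a human review team"
    NOTIFY_RESPONDERS = "escalate to local first responders"


def triage(risk_score, prior_steps=()):
    """Map a model's self-harm risk score (0.0 to 1.0) onto the tiered
    response described above. The 0.5 threshold and step names are
    hypothetical; Facebook's actual logic is not public."""
    if risk_score < 0.5:
        return Action.NONE
    if "resources" not in prior_steps:
        # First tier: surface helplines and support material in-app.
        return Action.OFFER_RESOURCES
    if "review" not in prior_steps:
        # Second tier: alert the internal "Empathy Team" for human review.
        return Action.ALERT_REVIEWERS
    # Final tier, reached only if the earlier steps did not avert the risk.
    return Action.NOTIFY_RESPONDERS


print(triage(0.8))                           # Action.OFFER_RESOURCES
print(triage(0.8, ("resources", "review")))  # Action.NOTIFY_RESPONDERS
```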
These initial efforts, based on our conversations with team leaders at Facebook and Google, seem sincere and well thought out. Kudos to both companies for jump-starting the conversation. But, as with any new frontier, innovation raises questions and challenges:
- Is separate consent needed for social media companies to monitor our mental health?
- If companies are monitoring our mental health, do their algorithms need to be regulated and studied to show efficacy?
- Should Facebook allow a live stream of a suicide to proceed or cut it off when it’s clear what is happening?
- Can people’s mental health data be used to target advertisements, for example, for antidepressant pills?
- Will people in countries with weak data privacy rules be more subject to such monitoring? Already, Facebook’s AI is not available in Europe since it does not comply with the continent’s stricter privacy rules.
- Last, but not least, is the solution to teenage mental health problems more AI-monitored social media or less?
Today, the ability of AI algorithms to infer mood from social media posts is still imperfect. There is wide cultural variation in how mental illness is expressed, and, as most clinicians are aware, many people who plan suicide deny it, which complicates studies. One analysis of 55 million text messages by the text-based mental health service Crisis Text Line found that people considering suicide are more likely to use words like “bridge” or “pills” than the word “suicide” itself.
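That finding hints at what lexical models latch onto: concrete means (“bridge”, “pills”) rather than the word “suicide” itself. A toy keyword filter makes the idea, and its crudeness, concrete; any real system would learn its vocabulary from labelled data rather than a hand-made list:

```python
import re

# The two terms come from the Crisis Text Line analysis cited above;
# a production system would learn its vocabulary from labelled data.
HIGH_RISK_TERMS = {"bridge", "pills"}


def flag_message(text):
    """Return any high-risk terms found in a message.

    Deliberately crude: a bare keyword match has no grasp of context,
    sarcasm or cultural variation, which is exactly why deployed systems
    pair statistical models with trained human counsellors."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return sorted(words & HIGH_RISK_TERMS)


print(flag_message("I keep walking past the bridge on my way home"))  # ['bridge']
```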
Despite these challenges, Zuckerberg was right when he foresaw that AI could spot online suicidal behaviour faster than a friend. Combining AI with trained counsellors who respond to risky posts would surely improve on the status quo. It is now time for a multistakeholder, public-private partnership to scale this innovation rapidly to help all of society. To succeed, such a partnership must revolve around responsible research guidelines, make its algorithms transparent and ensure the results of studies are open access. This would let researchers around the world contribute and help prevent the erosion of public trust in the technology companies involved.
Advances in brain science and AI have huge potential to enhance human well-being and mental health, provided they are set within the right ethical and scientific framework.