Our friends over at the Daily Dot have reported on Thea, aged 25, who shared a post on Facebook expressing that she felt like dying. It was here that Facebook, for lack of a better term, ‘stepped in’ to prevent her suicide.
She wrote that she felt “tired of living” and “exhausted”, and within 20 minutes her post was flagged for expressing suicidal feelings.
When Thea returned to Facebook, it displayed a ‘resources’ message suggesting that she reach out to a friend or contact a helpline. The experience also left her embarrassed: Facebook notified people she no longer interacted with.
What flagged Thea’s post was in fact a Facebook machine learning tool that identifies posts expressing thoughts of suicide.
The tool works by identifying a range of concerning words or phrases in posts, as well as comments that show concern for the person posting. Multiple comments such as “Are you OK?” or anything of that sort will therefore trigger the tool, which flags the content to Facebook’s Community Operations team. Employees trained in suicide prevention then review the content and decide how to help, such as contacting paramedics if a user needs immediate attention, regardless of whether another user reported the content or the AI picked it up.
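To make the flow above concrete, here is a minimal sketch of that kind of two-signal flagging logic. This is purely illustrative: Facebook’s actual system uses trained machine learning classifiers, and every phrase list, function name, and threshold below is a hypothetical stand-in.

```python
# Illustrative sketch only. Facebook's real tool is an ML classifier;
# the phrase lists and threshold here are hypothetical examples.

CONCERNING_PHRASES = ["tired of living", "want to die", "feel like dying"]
CONCERNED_COMMENTS = ["are you ok", "are you okay", "please talk to me"]


def should_flag(post_text, comments, comment_threshold=2):
    """Flag a post if it contains a concerning phrase, or if enough
    comments express concern for the poster."""
    text = post_text.lower()
    if any(phrase in text for phrase in CONCERNING_PHRASES):
        return True
    # Count comments that match any "concern" marker.
    concerned = sum(
        any(marker in comment.lower() for marker in CONCERNED_COMMENTS)
        for comment in comments
    )
    return concerned >= comment_threshold


# A flagged post would be routed to a human review queue, not acted
# on automatically.
print(should_flag("I'm so tired of living", []))                      # True
print(should_flag("Long week at work", ["Are you OK?", "Are you okay??"]))  # True
print(should_flag("Long week at work", ["Nice photo!"]))              # False
```

Note the design choice the article implies: the automated check only escalates content to trained human reviewers, who make the actual decision about how to respond.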
The tool has its flaws, though, in how it leaves users feeling about Facebook’s use of their content. Thea told the Daily Dot she originally shared the post because she knew others on her feed could relate, and would take comfort in knowing they weren’t alone. But after her experience with the tool, she feels anxious, and she’s not entirely sure she will express her feelings on Facebook again:
“I post these things because…it makes it easier for those people to talk with you and help, because they know how to handle it,” Thea said. “With the way the AI functions, it feels like we can’t trust anything anymore, and now less people are going to speak up about their suicidal thoughts, which is more dangerous to that person.”