Facebook Suicide Prevention

Our friends over at the Daily Dot have reported on Thea, aged 25, who shared a post on Facebook expressing that she felt like dying. It is here that Facebook, for lack of a better term, ‘stepped in’ to prevent her suicide.

She wrote that she felt “tired of living” and “exhausted”, and within 20 minutes her post was flagged for expressing suicidal feelings.

When Thea returned to Facebook, it displayed a ‘resources’ message, suggesting that she reach out to a friend or contact a helpline. The experience also left Thea feeling embarrassed, as Facebook notified people she no longer interacted with.

What flagged Thea’s post was in fact a Facebook machine learning tool: an AI system developed to identify posts that express thoughts of suicide.

A spokesperson for Facebook told the Daily Dot that the company disputes parts of Thea’s account, stating that it does not contact the friends of people contemplating suicide unless those friends are the users who flagged the posts to Facebook. Thea, however, maintains that her Facebook friends were indeed contacted, and that contacting paramedics was also suggested.

The tool works by identifying a range of concerning words or phrases in posts, as well as comments that show concern for the poster. Multiple comments such as “Are you ok?”, or anything of that sort, will therefore trigger the tool, which flags the content to Facebook’s Community Operations team. Employees trained in suicide prevention then review the content and determine how to help the user, such as contacting paramedics if they need immediate attention, regardless of whether a user flagged the content or the AI picked it up.
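To get a rough feel for that flagging logic, here is a tiny Python sketch. This is purely a hypothetical illustration of the behaviour described above, not Facebook’s actual system: the phrase lists, the `should_flag` function, and the two-comment threshold are all invented for the example.

```python
# Invented phrase lists for illustration only; a real system would use a
# trained machine learning model, not hard-coded strings.
CONCERNING_PHRASES = {"tired of living", "exhausted", "feel like dying"}
CONCERNED_COMMENTS = {"are you ok", "are you okay", "is everything alright"}

def should_flag(post_text, comments, comment_threshold=2):
    """Flag a post for human review if it contains a concerning phrase,
    or if multiple comments express concern for the poster."""
    text = post_text.lower()
    if any(phrase in text for phrase in CONCERNING_PHRASES):
        return True
    # Count comments that match a known "concerned" phrase,
    # ignoring case and trailing punctuation.
    concerned = sum(
        1 for c in comments if c.lower().strip("?!. ") in CONCERNED_COMMENTS
    )
    return concerned >= comment_threshold
```

In this sketch, a post saying “tired of living” would be flagged immediately, while an innocuous post would only be flagged once two or more comments along the lines of “Are you ok?” appeared, mirroring how the article describes concerned comments triggering the tool.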

Uncertainty

The tool does, however, raise questions about how users feel Facebook uses their content. Thea told the Daily Dot that she originally shared the post because she knew others on her feed could relate, and would take comfort in knowing they weren’t alone. But after her experience with the tool she feels anxious, and she’s not entirely sure she will express her feelings on Facebook again:

“I post these things because…it makes it easier for those people to talk with you and help, because they know how to handle it,” Thea said. “With the way the AI functions, it feels like we can’t trust anything anymore, and now less people are going to speak up about their suicidal thoughts, which is more dangerous to that person.”



Project Maven

Google have announced that they are taking part in ‘Project Maven’, a project that, as Gizmodo are reporting, is ‘helping the Pentagon build AI for drones’. Crazy news!

The project began in 2017 as a pilot program to speed up the United States military’s adoption of the latest artificial intelligence technology.

It’s a relatively cheap programme too! Project Maven is said to cost less than $70M in its first year, focusing solely on integrating machine learning into drones. Why would the Pentagon want to invest in improving AI in drones? To improve its targeting intelligence for drone strikes.

Project Maven: Peace

Google released a statement on Tuesday of last week confirming its part in Project Maven, stating that its involvement in the project is simply a peaceful one and that Project Maven is “specifically scoped to be for non-offensive purposes.”

Since then, many within Google itself have called upon its CEO to pull the company out of the project, with the Verge reporting that 3,100 employees have signed a letter urging Google’s CEO, Sundar Pichai, to re-evaluate its involvement, stating that Google should not be in ‘the business of war’. Diane Greene, head of Google’s cloud operations, has also emphasised Google’s peaceful stance, stating that their tech will not be used to fly drones, nor will it launch any weapons.

Details on what Google are actually doing on Project Maven aren’t very clear. A Google spokesperson did, however, confirm to the Verge that they have given the Department of Defense access to TensorFlow, software used to help machine learning apps understand the contents of photos.

To listen to the guys chatting about this please click here! 

To keep up on all things How To Kill An Hour, sign up to our newsletter: simply click here and you’re on the list!