March 28, 2023

Has The New Facebook AI To Prevent Suicide Attempts Gone Too Far?

Suicide is devastating, leaving behind loved ones who are hurt, confused, and sometimes angry. Its wake can be felt for years afterward. The worst part is that suicide is hard to predict; or at least, it used to be. Facebook has put together an AI algorithm that it believes will help identify the possible signs that someone is suicidal and then take steps to intervene. The biggest question, then, is this: has the new Facebook AI to prevent suicide attempts gone too far?

Suicide Broadcasting

Recently, suicide broadcasting, the act of committing suicide while live streaming, has become disturbingly common. The motivation appears to be either garnering attention in one's final moments or pure shock value. Either way, the end result is the same: someone dies an early, unnecessary death, and others are left grieving once it is too late.

Facebook appears to be one of the most heavily affected platforms when it comes to suicide broadcasting, and now Facebook is trying to stop this trend in its tracks. The company has developed an AI algorithm intended to find these high-risk users and save their lives, and so far the AI has done a decent job. The AI scans users' posts, looking for specific terms, shifts in emotional status, posting frequency, likes, and many other factors that have not been publicly identified. In order for this to work, Facebook's users are giving up quite a bit of privacy. Again, the question is, has the new Facebook AI to prevent suicide attempts gone too far?
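Facebook has not published how its model actually weighs these signals, but a minimal sketch in Python can illustrate the general idea of scoring a post against a keyword list, posting frequency, and emotional tone. The phrases, weights, and threshold below are invented for illustration only and are not Facebook's real signals.

```python
# Toy illustration of post-level risk scoring. The keywords, weights,
# and threshold are assumptions made for this sketch; Facebook's real
# features and model are not public.
from dataclasses import dataclass

FLAGGED_PHRASES = {"i can't go on", "goodbye everyone", "no reason to live"}

@dataclass
class Post:
    text: str
    likes_last_hour: int
    posts_last_24h: int   # posting frequency
    sentiment: float      # -1.0 (very negative) .. 1.0 (very positive)

def risk_score(post: Post) -> float:
    """Combine a few crude signals into a single score in [0, 1]."""
    score = 0.0
    text = post.text.lower()
    if any(phrase in text for phrase in FLAGGED_PHRASES):
        score += 0.6                   # explicit flagged language
    if post.sentiment < -0.5:          # sharply negative emotional status
        score += 0.2
    if post.posts_last_24h > 20:       # unusual burst of activity
        score += 0.1
    if post.likes_last_hour > 100:     # post drawing sudden attention
        score += 0.1
    return min(score, 1.0)

if __name__ == "__main__":
    post = Post("goodbye everyone, thank you for all the memories",
                likes_last_hour=150, posts_last_24h=3, sentiment=-0.8)
    print(risk_score(post))  # 0.9 -> would be queued for human review
```

In a real system, a score above some threshold would presumably place the post in a review queue rather than trigger any automatic action on its own.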

How Does The Facebook AI Work?

Whenever a user posts a certain phrase, likes a flagged post, or shares a flagged image, the AI is alerted. From there, it can mark the user for enhanced scrutiny as a possible risk for suicide broadcasting. Additionally, if the user is live streaming and the comment feed fills up with identifiable phrases such as "don't do it," "please don't," or "it's not worth it," the system can escalate a warning to local authorities along with the user's home address or current GPS location. From there, the authorities can attempt to intervene before it is too late.
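Facebook has not disclosed how the live-stream escalation actually works; the short sketch below only illustrates the idea of counting worried viewer comments against a threshold before handing the case to a human reviewer. The phrases, threshold, and escalation step are assumptions, not Facebook's actual pipeline.

```python
# Toy sketch of live-stream comment monitoring. The phrases, threshold,
# and escalation behavior are placeholders for illustration only.
from collections import deque

WARNING_PHRASES = ("don't do it", "please don't", "it's not worth it")
ESCALATION_THRESHOLD = 5   # worried comments seen before escalating

class LiveStreamMonitor:
    def __init__(self, stream_id: str):
        self.stream_id = stream_id
        self.worried_comments = deque(maxlen=100)

    def on_comment(self, text: str) -> None:
        lowered = text.lower()
        if any(phrase in lowered for phrase in WARNING_PHRASES):
            self.worried_comments.append(lowered)
        if len(self.worried_comments) >= ESCALATION_THRESHOLD:
            self.escalate()

    def escalate(self) -> None:
        # In practice this would hand the case to a human reviewer,
        # who decides whether to contact local authorities.
        print(f"Stream {self.stream_id}: escalating to human review")

monitor = LiveStreamMonitor("stream-123")
for comment in ["please don't", "don't do it!", "we love you",
                "it's not worth it", "please don't do this", "don't do it"]:
    monitor.on_comment(comment)
```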

This is not something Facebook simply threw together, either. The company has been working on it for quite some time, refining it to maximize its potential to save lives. In fact, the system is constantly evolving, learning what counts as a true flag and what is nothing more than general rambling. Even more impressive, Facebook has hired thousands of new employees to handle this life-saving task. These reviewers are supposed to look through escalated posts in real time and determine whether someone needs to be contacted.

Even with thousands of added employees, the burden is more than they can handle. There is a lot to watch and process, which is why Facebook also relies heavily on the AI algorithm to expedite the flagging process.

The Good

If one life is saved, then everything up to this point was worth it. Facebook was never built with the intention that anyone would take their own life on it, let alone in front of millions of viewers. The AI was designed in response to the growing trend of broadcast suicides.

Since its inception, the AI has already helped Facebook stop numerous suicides, and the company has no intention of abandoning it. Dispatching the authorities may seem like a bad idea, but ultimately, if it saves even one life, the hassle is worth it. Even if a flagged person is not in the process of committing suicide, they may need counseling, and this system helps identify those users too. Every little step toward saving a life is good … or is it?

The Bad (And The Too Far)

Now, for all the good the Facebook AI provides, there is a big issue that has to be addressed: how much privacy invasion is too much? While the AI was built on good intentions, does it go too far in invading a user's privacy? Since it scans through every single post, like, and share, it probably knows more about a child than that child's parents do, and in the wrong hands this could be problematic (e.g., data breaches, online predators).

Even so, that is not the worst of it. The fact that it contacts the authorities is frightening. While dispatching help can be absolutely necessary, the system can also send out false positives, all based on an AI that is not human in any capacity. Facebook defends this by pointing out that employees make the final call, but we all know humans sometimes take the easy way out.

For a person who never intended to commit suicide, there is now an encounter with the authorities to deal with, possible unwarranted costs from city services, and even a door broken down if responders deem it necessary. While the AI is getting better, it is not perfect, and because of that, it can cause real and potentially irrevocable harm.

On the surface, the premise behind the AI makes it appear to be a great idea. It can help reduce the number of broadcast suicides and save lives by alerting the right people to individuals who may need counseling. But all of this comes at the sacrifice of personal privacy. Ultimately, the AI is a great tool, and when used properly, it can save lives. As this trend continues to grow, we feel it is a necessary tool, even if it means giving up more privacy in the process.