If you’ve been on Facebook for any amount of time (for business or personal use), it’s likely that you’ve had a post flagged for some sort of violation of Facebook’s community standards. As Facebook comes under more scrutiny for the content posted on its platform, it has ramped up automated detection of policy-violating content. For the most part, this is a good thing, but there is a dark side to this increase in automated standards enforcement: misidentification.
In a recent report, Facebook gave an overview of how its automated detection is working. For the purposes of this post, we’re only going to focus on the spam numbers. The report covers the first quarter of 2019 along with a broader view of the past year. Specifically, it shows the amount of content that Facebook “took action on” during that time period (meaning it removed the content, applied a warning screen to it, or disabled the posting account). According to the report, the amount of spam content acted on has more than doubled in the past year, going from 836 million posts in Q1 of 2018 to 1.76 billion in Q1 of 2019.
For the first quarter of 2019, Facebook added a new metric: how much of the content that had action taken against it was later restored, either of Facebook’s own accord or because a user appealed. That information isn’t available for any earlier periods, so we’ll have to wait and see how the metric trends relative to the amount of spam identified.
To give Facebook some credit where credit is due, the report shows that in Q1 of 2019 only about 3% of the posts Facebook took action on were misidentified. That low a percentage sounds great, but it still works out to 44.2 million posts restored after being misidentified as spam. If one (or more) of your posts was part of that 3%, I’m sure you’d agree that it’s frustrating and troubling, to say the least, to have your content misidentified as spam. According to some users, having a post identified as spam can also cause problems in other areas, such as getting your comments on other content flagged or even being locked out of your account.
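To put those two numbers side by side, here is a quick back-of-the-envelope check (figures as quoted above from the report; the exact denominator Facebook used for its percentage isn’t specified in this post, so this is a rough sanity check, not the report’s own calculation):

```python
# Figures cited above from Facebook's Q1 2019 enforcement report.
actioned_spam = 1.76e9   # spam posts acted on in Q1 2019
restored      = 44.2e6   # posts restored after being misidentified

share = restored / actioned_spam
print(f"{share:.1%} of actioned spam posts were restored")  # → 2.5%
```

That lands in the low single digits, in the same ballpark as the roughly 3% figure the report cites.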
Overall, it appears that Facebook’s automated standards enforcement is doing its job, but given the incredible amount of content shared on the platform every day, we suspect they’re struggling to get it right. Regardless of our opinion, though, one thing is certain: as Facebook continues to expand its automated standards enforcement, you’re likely to see an increase in spam notifications for content you post on the platform.
What are your thoughts on Facebook’s automated standards enforcement? Have you seen an increase in reported posts on your profile or pages?