The Downfall of Facebook’s Online Content Moderation
In the middle of 2017, a documentary broadcast by Britain's Channel 4 exposed how Facebook's online content moderators are trained and the process by which they ultimately uphold or reject a Facebook post. A reporter went undercover as an intern, a potential staffer who would be tasked with moderating social media content if he passed the training. What the reporter unveiled was nothing short of explosive. Apparently, Facebook put its own interests above those of its users. One trainer emphasized, for example, that censorship must not be exercised excessively, because doing so could turn off members or lead to complaints from them. Another trainer admitted to the undercover reporter that, at bottom, decisions about online content moderation come down to money, alluding perhaps to how much people will pay to stay on the site and how much companies will spend on advertising to get their posts seen.
Online Content Moderation of Offensive and Insulting Posts
That veiled indictment of Facebook's de facto content moderation policy was made more damning by the posts its actual moderators allowed to go online. The undercover reporter mentioned at least three. The first was a video showing a child being beaten by an adult. The second was a post declaring that a girl deserved to be drowned if she chose a Black boy as her first boyfriend. The third was a post ranting at Muslim immigrants to pack up and leave the country.
All three obviously fall into categories of content that Facebook had previously said, many times, it would not allow: violence, child abuse, racism, incitement to violence and murder, and misogyny.
After the documentary aired, Facebook executives went into damage control. High-ranking executives apologized for the controversial statements and for the videos and posts the public found offensive. However, while they took down the video showing child abuse, they let the other posts remain, reasoning that context had to be considered: the remark about Muslim immigrants going home could very well be part of an ongoing debate that was airing all sides of the issue.
However, Facebook's online content moderation took another hit in 2018. This time, a social media content moderator writing for The Globe and Mail conducted her own independent study of Facebook's online content moderation policies and how they are applied. She reported that Facebook did practice a form of censorship, but one that favored existing power structures and left little room for dissent. As examples, she cited posts that were taken down from women protesting sexual harassment, or the glass ceiling that blocked their professional advancement in a white man's world.
Taken together, the two reports raise the allegations against Facebook's content moderation to an alarming level, at least as far as the public is concerned. Offensive and insulting posts are allowed in the name of freedom of expression, while posts that challenge an oppressive status quo are taken down. The damage this has done to the social media site's public relations is still being assessed, but it is far from insignificant.
As a business owner launching your own social media presence, you can learn a lot from Facebook's mistakes. First, set up your own policy, defining concretely what counts as offensive, insulting, discriminatory, scandalous, and libelous. Second, hire a competent social media moderator who can execute those policies without fanfare but with great effectiveness. Third, protect your brand and stay above the fray: answer issues immediately, but in a manner that addresses your clients' concerns. If you need assistance outsourcing your content moderation, call us at HOPLA now. We have solid, proven experience in social media management, supported by a team of remote workers who know the job, display integrity, and will safeguard your enterprise.