Social Media Platforms and Hate Speech
The role of social media platforms in censoring content continues to be an evolving conversation. As tensions over hate speech and misinformation heighten in the United States, platforms have taken varying positions, ranging from appeals to free speech to stricter guidelines on what is and is not appropriate. Yet despite efforts from multiple platforms to combat misinformation and fake news, there is one area where the needle has not moved: hate speech.
Pressure is mounting for platforms to censor hate speech
Admittedly, hate speech is extremely difficult to regulate under the First Amendment. That hasn't stopped thousands of users from calling out platforms for their inaction and laying the ethical burden squarely at their feet. Such calls for action are underscored by evidence of a link between hate speech and violence, on top of the common-sense understanding that hate speech has debilitating psychological effects on the victim as well as the aggressor. It doesn't take a scientific study to show that users see hate speech, and its consequences, in their feeds every single day. As racial tensions boil over in cities throughout the United States, firings over hateful social media posts have spiked significantly.
If there is an unfortunate advantage to the sheer volume of hate speech circulating on social media platforms, it is that spotting potentially harmful or dangerous individuals is easier than ever. These individuals willingly expose themselves in public, contributing to an ever-growing pool of toxic content that has some parties calling for an expanded definition of censorship.
Should platforms be calling balls and strikes?
As a social media screening firm, we process hate-filled content every day and can attest to its ubiquity. In ten years of screening social media platforms for, among other things, intolerance and hate speech, we've flagged thousands of posts with the potential to fuel hostility both on platforms and in real life. From our perspective, cries for censorship are understandable; they come from a very real concern for communal well-being, the same concern that forms the foundation of our business. The inconvenient truth, however, is that policing free speech on a platform is a matter of business ethics, which vary widely from culture to culture and market to market. Given such wide variability in social norms, should social media platforms be calling the shots on what is and is not culturally appropriate in the first place?
Perhaps not. Perhaps this type of content doesn’t need to be “managed” by a platform. Either way, what remains is a call for accountability. What should the remedy be?
What about public accountability?
The beginnings of an answer lie partly in a culture's natural responses to social behavior. Publicly espousing hate speech already carries a variety of social consequences. As cultures continue to adapt and develop social norms for the digital era, these natural consequences may end up forming a path to self-censorship that, ideally, would let platforms off the hook for anything beyond imminent danger.
For example, an individual who posts intolerant content (i.e., hate speech) may face social consequences on a number of levels. They might be held accountable at home by family members. They might receive pushback from friends, in person or online. And, as has been increasingly reported in recent months, they may face discipline or termination at work, depending on the nature of the offense. Online, the first two can be managed by the simple mute or unfollow features already in place on most platforms. But where does that leave the workplace, online or in person?
Concerned parties may be getting at something useful: there is an upside to knowing who is intolerant or dangerous. On a corporate level, being able to weed out intolerant employees who poison morale or pose threats benefits workplace culture (and the bottom line). Censorship on a platform removes the possibility of exposure, and with it the opportunity for individuals and businesses alike to hold people accountable. The tremendous amount of public data that sits, uncensored, on social media platforms has the potential to play a role in holding hate speech to account. Under the right circumstances, these public data pools can be leveraged to weed out potentially harmful, threatening, or dangerous individuals from the workplace.
The answer may be a third-party system
As it stands today, over 80% of employers do some sort of informal social media “check” on their candidates. Unfortunately, this type of DIY search is risky: it invites compliance violations and blurs professional and personal boundaries. To ensure that a candidate’s privacy is respected under the Fair Credit Reporting Act (FCRA), a third party is necessary, even when accessing publicly available information. Acting as a consumer reporting agency (CRA), a third-party social media screening firm can redact protected-class information (thus delivering an unbiased report) and ensure that an employer receives only relevant content on business-related behavior. With these safeguards in place, social media screening becomes both a solution for privatizing hate speech monitoring and the next standardized pre-employment practice.
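For readers curious how those safeguards might look in practice, here is a minimal, purely hypothetical sketch of a report-building step that withholds protected-class details and passes through only business-relevant behavior categories. The category lists, the Post structure, and the build_report function are all invented for illustration; they do not describe our actual methodology or any specific CRA process.

```python
# Hypothetical sketch only: separates protected-class signals (redacted)
# from business-relevant flags (reported). All names and categories here
# are invented examples, not a real screening workflow.

from dataclasses import dataclass, field

# Protected-class signals that must never reach the employer (illustrative).
PROTECTED_ATTRIBUTES = {"race", "religion", "age", "disability", "national_origin"}

# Business-relevant behavior categories an employer may consider (illustrative).
REPORTABLE_CATEGORIES = {"hate_speech", "threats_of_violence", "harassment"}

@dataclass
class Post:
    text: str
    attributes: set = field(default_factory=set)   # protected-class signals detected
    categories: set = field(default_factory=set)   # behavior categories detected

def build_report(posts):
    """Keep only business-relevant findings; strip protected-class context."""
    report = []
    for post in posts:
        flagged = post.categories & REPORTABLE_CATEGORIES
        if not flagged:
            continue  # nothing business-relevant, so the post is omitted entirely
        entry = {"behavior": sorted(flagged)}
        if post.attributes & PROTECTED_ATTRIBUTES:
            # The employer sees the category, never the protected-class detail.
            entry["note"] = "protected-class content redacted"
        report.append(entry)
    return report

if __name__ == "__main__":
    posts = [
        Post("…", attributes={"religion"}, categories={"hate_speech"}),
        Post("…", categories={"gaming"}),  # irrelevant, never reaches the report
    ]
    print(build_report(posts))
    # [{'behavior': ['hate_speech'], 'note': 'protected-class content redacted'}]
```

The design point this toy example makes is the one that matters legally: the filter is one-directional, so protected-class information is dropped before a report is assembled, rather than redacted afterward.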
More on the topic
When it comes to the relationship between freedom of speech, social media, and employment, it's important to know your rights.
Looking to implement changes in your organization?
Schedule a consultation to learn more about social media screening or check out our ebook 5 Ways to Amplify Company Culture.