A hate crime occurs nearly every hour in the U.S. It’s a growing problem fueled by hateful posts on social media and other online platforms. Many of us have seen news headlines about extremist attacks linked to online hate speech—such as the mass shootings at Emanuel African Methodist Episcopal Church in Charleston, South Carolina in 2015; a Walmart in El Paso, Texas in 2019; and a nightclub in Colorado Springs, Colorado in 2022.
In a new report, we looked at the connection between hate crimes and online hate speech, and how internet-based companies and law enforcement are combating these problems. Today’s WatchBlog post looks at our work.
What do we know about the connection between online hate and extremist acts?
Online hate speech is widespread. It includes prejudiced comments about race, national origin, ethnicity, gender, gender identity, religion, disability, or sexual orientation. Research indicates that up to a third of internet users have experienced hate speech online. The share is even higher in the online gaming community, where about 50% of users have experienced hate speech.
Those who post hateful or extremist speech online may do so in an effort to spread their ideologies.
Extremist attacks—such as those in Charleston, El Paso, and Colorado Springs—illustrate how exposure to online hate speech may have contributed to the attackers’ biases against people based on race, national origin, and sexual orientation. These attacks also showed how the internet gave the perpetrators a vehicle for disseminating hateful materials, such as manifestos containing disparaging and racist rhetoric posted before the attacks. The perpetrators of all three attacks were convicted of, or pled guilty to, federal or state hate crimes.
In response to the rise in hate crimes, the FBI has elevated such acts to its highest-level national threat priority. The FBI’s designation places hate crimes at the same priority level as preventing domestic violent extremism. But the government and others are also taking steps to respond specifically to online hate.
What’s being done to combat online hate?
In our new report, we looked at how internet companies and the federal government are trying to combat online hate crimes.
We looked at six companies that run online forums and platforms—including social media, livestreaming, and crowdfunding platforms—that are tackling this issue in different ways. Each company has its own definition of content that violates its terms of use, but every definition prohibits hateful content related to disability, ethnicity, race, and religion.
We also found that each company had a different way of flagging hateful posts. All of the companies used algorithms, to varying degrees, to flag content and remove it. Some also relied on users to identify harmful content.
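For illustration, here is a minimal sketch (in Python) of how a platform might combine automated keyword matching with user reports to flag posts for human review. The term list, report threshold, and function names are hypothetical assumptions for this example, not any company’s actual moderation system; real platforms layer far more sophisticated machine-learning models and human review on top of simple rules like these.

```python
# Hypothetical illustration only -- not any company's actual moderation system.
# A minimal sketch of combining automated keyword rules with user reports
# to flag posts for human review. Terms and thresholds are made up.

from dataclasses import dataclass, field

BLOCKED_TERMS = {"example_slur_1", "example_slur_2"}  # placeholder terms
REPORT_THRESHOLD = 3  # assumed number of user reports before escalation


@dataclass
class Post:
    post_id: str
    text: str
    user_reports: int = 0
    flags: list[str] = field(default_factory=list)


def flag_post(post: Post) -> Post:
    """Attach flags that would route a post to human moderators."""
    lowered = post.text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        post.flags.append("automated: matched blocked term")
    if post.user_reports >= REPORT_THRESHOLD:
        post.flags.append("user reports: threshold reached")
    return post


if __name__ == "__main__":
    sample = Post(post_id="123", text="An ordinary post", user_reports=4)
    print(flag_post(sample).flags)  # ['user reports: threshold reached']
```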
We also reviewed what the federal government is doing to address hate crimes that may be linked to online hate speech. For example, federal law enforcement agencies have used online hate posts as evidence when prosecuting those who commit acts of domestic violent extremism and other hate crimes.
The Department of Justice is also collecting data from law enforcement agencies and the public to better understand the prevalence of hate crimes. One way it does this is through an annual national survey of about 150,000 households, which asks about potentially underreported crimes, including hate crimes. While the survey can help estimate the prevalence of hate crimes, it doesn’t ask specifically about hate crimes that occur online. Having that information could greatly inform federal law enforcement’s efforts, including putting resources where they are needed most.
Because of this, we recommended that the Department of Justice consider methods to collect information in the annual survey about hate crimes that occur on the internet.
Learn more about our work on the connection between online hate speech and violent extremism by reading our new report.