The Technical Challenge of Hate Speech, Incitement and Extremism in Social Media
The primary challenge is working out how to identify incitement and hate speech given: (a) the volume of content created in social media; (b) the use of videos, images, coded language, local references, etc.; (c) the changing nature of the expression over time; and (d) limitations that prevent governments from demanding access to non-public data.
Further, without knowing what the public is reporting to the social media platforms, how can governments judge whether the platforms are responding adequately? This has come up in cases like the murder of Lee Rigby (the Telegraph reports, "Facebook 'could have prevented Lee Rigby murder'"; Sky News, "Facebook Failed To Flag Up Rigby Killer's Message"). It has also been a hot topic in the US Congress; for example, ABC News reports, "Officials: Facebook, Twitter Not Reporting ISIS Messages". The latest example is from Israel, where Internal Security Minister Gilad Erdan said Facebook has blood on its hands for not preventing recent killings. He is quoted by Al-Monitor as saying, "[The Facebook posts] should have been monitored in time, and [the murder] should have been averted. Facebook has all the tools to do this. It is the only entity that, at this stage, can monitor such a tremendous quantity of materials. It does it all the time for marketing purposes. The time has come for Facebook to do the same thing to save lives."
The approach my organisation uses relies on crowdsourcing, artificial intelligence and cloud computing. It enables content to be evaluated by people, with the crowd's responses then quality-controlled through AI. This allows empirical results to be gathered, such as those reflected in this report we produced for the Israeli Government on antisemitism in social media:
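To illustrate the general idea of quality-controlling a crowd through automation, here is a minimal sketch of one standard technique: tracking each rater's reliability against known-answer "gold" items and weighting their classification votes accordingly. The function names, weights and update rule below are illustrative assumptions, not a description of my organisation's actual system.

```python
from collections import defaultdict

def update_reliability(reliability, rater, correct, lr=0.1):
    """Nudge a rater's reliability toward 1.0 when they answer a
    known-answer ('gold') item correctly, and toward 0.0 otherwise.
    `lr` is an assumed learning rate."""
    target = 1.0 if correct else 0.0
    reliability[rater] += lr * (target - reliability[rater])

def aggregate(labels, reliability, default=0.5):
    """Reliability-weighted vote over one content item.
    `labels` maps rater -> label; unknown raters get a neutral weight."""
    scores = defaultdict(float)
    for rater, label in labels.items():
        scores[label] += reliability.get(rater, default)
    return max(scores, key=scores.get)

# A trusted rater's single vote can outweigh two unreliable raters,
# so the aggregate label need not match the raw majority.
reliability = {"a": 0.9, "b": 0.2, "c": 0.2}
labels = {"a": "hate_speech", "b": "acceptable", "c": "acceptable"}
verdict = aggregate(labels, reliability)
```

In this sketch the raw majority says "acceptable", but the reliability weighting sides with the trusted rater, which is the essence of letting an automated layer quality-control the crowd rather than taking votes at face value.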