Recent research has suggested that artificial intelligence algorithms can now monitor hateful speech and content across social media platforms much as a human reviewer would. Tech executives have claimed that, given the torrents of data uploaded to the internet every day, it is now simpler to moderate and flag violent and hateful content with the help of artificial intelligence.
Two new studies, however, have concluded that these artificial intelligence programs are actually amplifying racial bias on social media platforms. One study revealed that an artificial intelligence program was 2.2 times more likely to flag as inappropriate a comment written in English variations used by African Americans. This in turn increases the likelihood that tweets by Black residents of the United States will be removed on the grounds of being offensive or violent.
Another study showed a racial bias against the speech patterns of African Americans, with more than 155,000 tweets flagged as inappropriate by the program. There has been extensive debate over the fact that hate speech is a slippery area, as the variations of language and the context in which a comment is made matter significantly. Interpretation also depends on the country where the content is posted, the identity of the speaker, the surrounding culture, and so on. Some words carry different meanings in different languages, so content must be checked thoroughly before it is flagged.
It has been established that it is difficult for a machine to monitor so many variables at the same time. Activist groups have said that social media platforms like Facebook and Twitter monitor the content posted by Black people more thoroughly than that of others. Twitter has been contacted for further comment on the matter.