YouTube to expand workforce to keep inappropriate content in check

YouTube has decided to expand its workforce to ensure that all forms of extremist content undergo review. Reuters reports that more employees will be hired in 2018 so that unsuitable content on the website can be monitored better and taken down promptly. YouTube CEO Susan Wojcicki said in a blog post that the company is taking strict action to protect its users against inappropriate content by enforcing stricter policies and building bigger review teams.

According to Wojcicki, Google's ultimate aim is to increase the total number of people working on content that violates its policies to 10,000 by 2018. She also mentioned that "aggressive action" is being taken on comments and that new comment moderation tools will soon be launched.

Last week, YouTube rolled out an update to its recommendation feature that spotlights videos users will find most interesting and gratifying. The company overlooked concerns that this update might leave users more vulnerable to misinformation and limit dissent, since they will be exposed to a narrower range of like-minded opinions.

In the recent past, YouTube has faced serious backlash and criticism from multiple advertisers and advocacy groups over its inability to monitor content on its website, and over the lasting impact such content may have on society at large.

Sources:

Reuters, NDTV

The removal of extremist content is progressing on Facebook

Facebook, the world’s largest social media website, has been facing constant pressure in the United States and the United Kingdom to intercept and remove extremist content from its enormous platform more effectively. On November 29, as it prepared for a meeting with European authorities on tackling ‘terror’ content, the social media giant announced that it now removes 99 per cent of extremist content related to Al Qaeda and other militant Islamist groups.

According to Reuters, Monika Bickert, head of global policy management, and Brian Fishman, Facebook’s head of counter-terrorism policy, stated in a blog post that 83 per cent of the extremist content uploaded to Facebook is removed within an hour of its original upload. In June, Facebook said that it had significantly expanded its use of AI (artificial intelligence) to enable swift recognition and removal of such content.

As per NDTV, Bickert and Fishman stated, “It is still early, but the results are promising, and we are hopeful that AI will become a more important tool in the arsenal of protection and safety on the internet and on Facebook.” They also mentioned that Facebook’s advanced AI removed 99 per cent of the extremist content related to ISIS and Al Qaeda before it was flagged by users. In some cases, it was removed even before the content went live.

The blog post is well timed, as it comes a week before Facebook and other social media companies meet with EU governments and the EU executive. The agenda of the meeting will revolve around how to tackle such disturbing content.

Sources:

Reuters, NDTV