Facebook, Twitter and Google “dedicated” to user safety (Wired UK)

Facebook, Twitter and Google have rejected MPs' accusations that they enable terrorism and serious organised crime by failing to effectively police their own networks.

Representatives from the three tech giants faced the Home Affairs select committee inquiry into counter-terrorism, detailing the processes put into place to ensure terrorism does not thrive on their platforms.

Twitter said it has more than 100 people monitoring reported content from over 320 million users. Facebook and Google declined to give exact figures for their community support teams, but both stated that these number "in the thousands".

The companies contradicted suggestions from committee chairman Keith Vaz that the Internet was “instrumental” in promoting terrorist activities.

“Daesh are not a success because of our platform,” said Simon Milner, Facebook’s policy director for UK, the Middle East, Africa and Turkey. “We’ve worked very hard to disrupt what they do, and we’ll continue to strive to do better.”

Milner stressed that young people are not exclusively radicalised online, and that Facebook is working alongside communities to stop the spread of terrorist discourse. “Keeping people safe on Facebook is our number one priority,” he added.

Anthony House, senior manager of public policy at Google, said that the company was equally committed to removing terrorist activity from its platforms. "We are extremely effective at removing videos and accounts from foreign terrorist groups, and very successful at removing reuploads of it," he said. "It's a constant cat and mouse game, but it's one we are well-equipped to meet. We're working with young people to provide an online alternative. We want to provide a community of hope, not a community of harm."

Twitter was also forced to address long-running complaints about the platform: users currently cannot specify terrorist activity when reporting accounts, and many reported accounts appear to take several weeks to be shut down. Twitter has also come under fire for failing to proactively notify law enforcement of potential terrorist material, a policy it stands by.

“We don’t proactively notify law enforcement of potential terrorist material,” said Nick Pickles, UK public policy manager for Twitter. “If we’re taking down tens of thousands of accounts, we’re not in a place to judge the credibility of each of those threats. We don’t want to swamp law enforcement with false threats.”

The three companies rejected claims from the committee that they might relent on opposing parts of the Investigatory Powers Bill on surveillance as part of a “sweetheart deal” related to tax. “The two issues are separate,” said Milner. “One is a matter of policy, one is a matter of legislation. There is no discussion between them.”

“We have no ambitions for the deal to be made better,” added Pickles.

“I think the committee — and government more generally — reach too quickly for technological solutions,” Jamie Bartlett, director of the Centre for the Analysis of Social Media at Demos, told WIRED. “They assume there’s a magic algorithm that these firms can make to remove all unfavourable content. It’s far more difficult than they think to do this, as the witnesses said.”

“There was also an assumption that the companies should be held responsible for what’s on their platform — contrary to the Communications Decency Act, which said that the person, not the platform, was responsible.”

“With the growth of encrypted messaging apps, decentralised networks and blockchain-based databases, in 10 years the authorities might look back fondly at the time they could ask social media companies’ officials to appear at select committee hearings.”

The Investigatory Powers Bill has proved highly controversial, with critics describing it as contradictory and unsafe. Security expert William Binney, who worked in senior positions at the NSA for 30 years, told WIRED that the legislation should be rewritten over concerns that it would be “a major impediment to success by analysts and law enforcement”.

The creators of Tor said that the bill would “significantly harm” the safety of UK citizens, and earlier this week members of the Science and Technology Committee warned that the bill is “ambiguous” and “confusing”, and that “there could be many unintended consequences for commerce arising from the current lack of clarity of the terms and scope of the legislation”.


2 February 2016 | 4:05 pm – Source: wired.co.uk
