Why Facebook and Twitter Can’t Be Trusted to Police Themselves

Google, Facebook and Twitter took a beating on Wednesday while testifying before the Senate and House Intelligence Committees about their role in enabling Russia’s interference in the 2016 election. “You bear this responsibility,” Senator Dianne Feinstein of California lectured the three companies’ lawyers in one heated exchange. “You’ve created these platforms. And now they are being misused. And you have to be the ones to do something about it. Or we will.”

The companies deserved the harsh treatment. As a former Google design ethicist and a social media disinformation researcher with Data for Democracy, we think technology platforms have a responsibility to shield their users from manipulation and propaganda. So far, they have done a terrible job of it. Even worse, they have repeatedly covered up how much propaganda actually infiltrates their platforms.

Today, what we know about how disinformation spreads through social networks is due to the hard work of outside experts—researchers, journalists and think tanks—and no thanks to the tech companies themselves. In 2015, researchers were writing about ISIS bots spreading jihadi propaganda on Twitter and posting recruiting videos on YouTube. Technology companies took the most egregious content seriously but initially did little to disrupt the terrorist network. This year, outsiders have once again taken the lead in exposing how the Internet Research Agency, a Russian company that conducts information operations on behalf of the Kremlin, purchased and disseminated propaganda meant to exploit American societal divisions during the election. But the official responses from the platforms have come from the same playbook: They deny, they diminish, and they attempt to discredit the research.

In November 2016, Facebook denied that “fake news” and information operations were a problem on the platform. That denial was amended nine months later with an admission that an operation had taken place around the election, but with the caveat that it involved “only $100,000 of ads” seen by 10 million people. The explanation was revised again on Monday to acknowledge that Facebook had been off by an order of magnitude: The real number of users who saw the content was closer to 120 million. We’ve seen Twitter attempt to discredit independent researchers investigating the severe bot problem on its platform, claiming that they aren’t using the right data. Both companies have attempted to obfuscate by deleting pages and posts as researchers identify them. And yet every day brings new stories revealing additional Facebook Pages, Groups, YouTube channels, bot accounts and content caches.

Even as Facebook has resisted calls to inform its users that they were served targeted propaganda, we have learned that the Internet Research Agency hired local activists under false pretenses to run events, and that it communicated with users over Facebook Messenger under aliases. In one notable instance, it created Facebook events for both an anti-Islam and a pro-Islam rally on the same date, at the same time and place. Real people showed up.

What these platforms fail to admit in their denials is that the problem lies with them. The reason that the Russian effort has been so successful isn’t that the Internet Research Agency is abusing fundamentally good platforms. Rather, it uses the platforms as they were designed to be used.

The reach of these platforms is hard to overstate. At any given time, two billion people’s thoughts and actions are steered by what they encounter on social media sites like Facebook, YouTube and Twitter. Increasingly, these sites are being used for news consumption: More than 60 percent of Americans now say they get their news on social media, according to the Pew Research Center. That means the design choices of a handful of social media engineers—choices about how information travels on their platforms—are influencing our political conversations and shaping our democracy.

The line between paid ads, boosted posts and organic content is blurry on Facebook, where all three appear in the same newsfeed and can be reshared in the same way—a design choice that came about gradually because in-feed placement increased engagement with the ads, and thus revenue for Facebook. The self-serve ease and affordability of Facebook’s ads tool, and the fact that the platform can make content go viral quickly, are why both advertisers and manipulators love it. Twitter, similarly, is a high-speed platform that is a powerful tool for breaking news and for citizen journalists who need to reach the media or spread an important story. On the flip side, Twitter’s reluctance to acknowledge and police anonymous, automated bot hordes means that manufactured consensus and conspiracy theories spread through hashtags just as quickly and easily as real news.

Design ethics entails understanding and taking responsibility for the actions that users take because of product design. Facebook, Twitter and YouTube are not neutral tools. Their products actively harvest human attention and sell it to advertisers, which is both immensely profitable and vulnerable to manipulation by actors like Russia. Now, in the wake of this week’s hearings, their executives must tell us, clearly, how they are going to fix this problem. How are they going to reconcile the tension between building monetizable engagement machines and ensuring that their users see authentic information and engage with real people?

We’re not optimistic. To date, the tech companies have appeared unable or unwilling to self-police in any sweeping way. The limited self-regulatory bodies that do exist have sprung up largely in response to crises, and usually only after significant government pressure and public uproar. One notable example is the Global Internet Forum to Counter Terrorism, which the social network companies started in mid-2017. That was roughly two years after the peak of ISIS’ activity on social media, and only after a string of terrorist attacks in Europe led to threats of regulation from European lawmakers. The sites moved faster after the 2016 election: When “fake news” emerged as a major story, Facebook decided to partner with external fact-checking organizations. But reports suggest the effort has largely failed, and there has been little proactive search for alternative solutions.

People have a right to know when they have been manipulated. It’s also reasonable for them to expect that social networks will protect them from manipulation in the first place. That is why it’s time to demand a stronger approach to the problems that social technology has created. Platforms that sell ads must implement Know Your Customer procedures like those in the financial industry, to gain insight into who is paying to target their users. Platforms that allow anonymous users should make it very clear when those users are automated: Register bots, or label them. We cannot afford to learn what is happening on these platforms only when outside researchers happen to spot problems.

On top of this, we need to establish a third-party regulatory agency to make sure that social media platforms are actually following through on these policies—that they are being honest and proactive. When widespread manipulation of customers happens in other industries, regulatory bodies step in to mitigate the damage. The financial industry, for example, has the Financial Industry Regulatory Authority (FINRA), a private corporation, and the Securities and Exchange Commission (SEC), an independent government agency created after the market crash that brought on the Great Depression. Even though social networks are, in a sense, the marketplace of ideas, no organization currently bears responsibility for addressing the large-scale systemic problems that affect them. It is no one’s job to alert affected individuals to mass disinformation campaigns that target them with extraordinary precision.

Facebook, Twitter and YouTube aren’t just products people use; they are media environments that billions of people live in. This is not a partisan issue; it’s a structural one, and it affects everyone, shaping our society and our democracy. It’s time to shift from reactive analysis to proactive prevention, with real oversight. Whether we like it or not, this small collection of platforms has become our new digital public square, and it’s time to introduce governance, accountability and transparency.

Renee DiResta is an independent researcher with Data for Democracy.

Tristan Harris is a former Design Ethicist at Google.
