As special counsel Robert Mueller issues the first indictments in his investigation into Russian meddling in the 2016 election, executives of three technology titans will face questioning by Congress this week about Russian use of their platforms.
Representatives from Facebook, Twitter, and Google are set to testify before a Senate Judiciary subcommittee on Tuesday, then the Senate and House intelligence committees on Wednesday. The hearings begin just a day after Mueller charged former Trump campaign chairman Paul Manafort in a 12-count indictment alleging money laundering, tax fraud, and foreign-lobbying violations, and former Trump foreign policy adviser George Papadopoulos admitted to lying to the FBI about meetings with Russians connected to the Kremlin. The charges have renewed public interest in an investigation that Republicans in Congress last week said they were close to concluding.
Now, Congress will turn its spotlight on some of the world’s most powerful companies to learn more about how Russians used their technology to influence American voters and what the companies are doing to stop it. The hearings may devolve into partisan squabbling, with Democrats seeking to amplify Russia’s impact on Trump’s election and Republicans downplaying Russia’s influence online. On the off chance that our elected representatives actually want to learn something, we offer this list of questions worth asking:
Are ads really the biggest problem? What about the ways propaganda can spread through ordinary posts?
Facebook’s announcement in September that a Russian propaganda group known as the Internet Research Agency had purchased $100,000 worth of election-related ads sparked additional revelations about how that same group had placed ads on other platforms designed to influence American voters. Since then, Twitter has disclosed that it discovered 200 accounts connected to the group, while Google discovered $4,700 worth of ads linked to the Internet Research Agency on its platform.
But it’s now becoming clear those ads are only a small part of a much larger problem. On Tuesday, Facebook will tell Congress that as many as 126 million people may have been exposed to 80,000 posts from the Internet Research Agency over a two-year period, compared with 11.4 million who saw its 3,000 ads, according to the company’s testimony, which was obtained by WIRED. A source familiar with Twitter’s testimony says the company plans to tell Congress it found 36,746 accounts that “generate automated, election-related content and had at least one characteristic we used to associate them with a Russian account.” In other words, they’re Russian bots. Between Sept. 1 and Nov. 15, 2016, those accounts generated 1.4 million “automated, election-related Tweets,” and an eye-popping 288 million impressions. Twitter also says it has now identified 2,752 accounts linked to the Internet Research Agency, all of which have been suspended. And, in a blog post published before the hearing, Google executives said the company found 18 YouTube accounts connected to the group that uploaded 1,108 videos, “representing 43 hours of content and totaling 309,000 views” in the US between June 2015 and November 2016.
So far, Congress has primarily focused on ways to crack down on digital election ads. But those numbers suggest organic posts reached far more people than ads did, and they are far harder to police.
When did these companies begin actively looking for clues related to Russia?
Cybersecurity researchers at Crowdstrike linked Russia to the hack of the Democratic National Committee as early as June 2016. According to Facebook’s testimony, in the summer of 2016, it noticed a cluster of accounts related to a Russian military intelligence group called APT28. Those accounts began creating fake personas with the explicit purpose of disseminating stolen information from a website called DC Leaks. It suspended those accounts and reported several “threats from actors with ties to Russia” to law enforcement before Election Day.
And yet, in July 2017, more than a year later, a Facebook spokesperson told WIRED that the company had found no evidence of Russian entities buying ads on its platform. By September 2017, that had changed. According to the testimony, Facebook didn’t begin investigating the use of its ad tools until after the election. As recently as early September 2017, Google also said it had found no evidence of Russian interference. Now, it too will tell Congress a different story. That suggests that while there were early indications of Russian interference on these platforms, the companies didn’t engage in a wholesale effort to root out all forms of Russian meddling until after much of the damage had been done.
How did Facebook identify Internet Research Agency?
When Facebook revealed the Internet Research Agency’s ads and fake accounts, a company spokesperson said researchers had looked for ads purchased by American internet addresses set to the Russian language and “fanned out” from there. But the specific digital bread crumbs that led them to the Internet Research Agency are less clear. The ads were purchased in Russian rubles, but plenty of legitimate advertisers pay in rubles.
Understanding the criteria Facebook, Twitter, and Google used to identify Russia-linked accounts is crucial to assessing how thorough that search has been. Facebook acknowledges in its testimony that its tools for detecting fake accounts fell short of recognizing some 120 fake Facebook pages created by the Internet Research Agency. Twitter, meanwhile, has sharply revised the number of IRA-linked accounts it initially reported to Congress, backing up the idea that these numbers are all moving targets, fluctuating as the companies modify their search criteria.
In a blog post, Facebook executive Elliot Schrage wrote that the company doesn’t want to disclose its methods for fear of giving “bad actors a roadmap for avoiding future detection.” But that’s unlikely to satisfy congressional investigators. On Friday, Sen. Dianne Feinstein of California, the top Democrat on the Senate Judiciary Committee, asked Facebook and Twitter for documents showing how they identified the Russia-linked accounts, and how they plan to identify them in the future.
Whom did the ads target and do those targeting categories resemble any used by the Trump campaign?
This one’s for the collusion conspiracists. If the Trump campaign or Cambridge Analytica, a data firm that worked inside the campaign’s San Antonio digital headquarters, were colluding with the Russians, the thinking goes, there might be an overlap between the audiences those groups targeted. So far, there’s little evidence of that. Facebook has said that the Internet Research Agency’s ads reached as many as 11 million people and that only 1 percent of the ads were targeted at so-called Custom Audiences, made up of people who liked the advertiser’s page, or others who look like them. Still, it’d be worth addressing this issue in an open forum.
Same goes for Twitter. The microblogging company shared information with congressional investigators about advertising campaigns orchestrated by the Russian media group RT during the election. Last week, Twitter announced it would ban ads from RT going forward. In the hearings, investigators should probe whether those ads were targeted at the same users the Trump team was targeting.
What about Google?
Google has gotten off easy so far, amid the revelations from, and about, Facebook and Twitter. But Google sells more digital ads than those two companies combined, accounting for $79 billion in ad revenue in 2016. In its blog post, Google reported just $4,700 worth of IRA-linked ads, a strikingly low number, considering Google is the world’s largest advertising business. It also detected 18 YouTube channels related to the group, which it has since shut down.
It’s unclear whether Google and YouTube were tougher targets or whether they are being more flexible in their definition of what is and isn’t a Russian influence campaign. The New York Times recently reported on Russia Today’s massive following on YouTube. Despite warnings from American intelligence that RT is a Kremlin propaganda arm, Google has declined to rein in RT, saying in the blog post, “Our investigation found no evidence of manipulation of our platform or policy violations.”
YouTube hasn’t been shy about aggressively policing its content in other contexts, though. It currently uses machine learning to automatically identify and take down extremist videos. Another tool redirects users searching for ISIS-related content to videos that debunk ISIS propaganda. So it’s worth asking what YouTube and parent Alphabet will do to prevent foreign adversaries from tampering with democracy in video, as well.
Why did Facebook, Twitter, and Google embed with the Trump campaign?
Advertising staffers at Facebook, Twitter, and Google worked alongside Trump staff in the campaign’s San Antonio digital headquarters, fueling much speculation. The companies maintain they offered the same assistance to all 2016 candidates, and that they worked closely with the Clinton team, if not physically side-by-side. Still, as speculation swirls, it’s worth having the companies set the record straight about the precise nature of the working relationship. That said, none of the executives who will testify were among the tech company staffers working in San Antonio, meaning their information about who did what and when will be secondhand at best.
Should online political ads be required to disclose who paid for the ad, as TV and radio spots are?
Now that the world knows how these platforms were exploited during the 2016 election, both Facebook and Twitter have announced changes to their election advertising policies. Twitter is launching an advertising transparency center that allows anyone to see any ad placed on the platform, with additional information provided about political ads. The company is also adding disclaimers to political ads that alert users that a given tweet is, indeed, a political ad. Facebook, too, has announced plans to allow users to click through any Facebook page and see every ad it’s placed. It will also disclose who paid for a political ad.
These are critical steps, and yet, there’s no guarantee that every tech platform will commit to policing itself in this way. That’s why Sens. Amy Klobuchar, Mark Warner, and John McCain have introduced the Honest Ads Act, which would demand more disclosures from tech companies about who is buying political ads. Tech companies have turned their lobbying might against such regulation in the past. Now, as they begin to recognize just how vulnerable their platforms are to abuse, Congress should have them answer for those decisions in a public forum.