Joe Biden’s presidential campaign was furious with Facebook when the social network allowed his rival to run a political attack ad it said was filled with lies. Biden’s campaign wasn’t happy with Twitter and Google, either.
In a 30-second video it created last year, President Donald Trump’s campaign claimed the former vice president had promised Ukraine $1 billion if the country fired a prosecutor investigating a company tied to Biden’s son. The video, like many political ads, included authentic but carefully edited clips that created the impression Biden, the presumptive Democratic nominee, had pressured Ukrainian officials to help his son, a charge that’s been debunked by fact-checking groups and media reports.
Some broadcast networks also ran the controversial ad, though CNN and NBC declined to air it. Still, Biden’s campaign focused on the three tech companies, asking them to pull it. They all refused.
“The spread of objectively false information to influence public opinion poisons the public discourse and chips away at our democracy,” TJ Ducklo, a Biden spokesman, said in a statement last October. “It is unacceptable for any social media company to knowingly allow deliberately misleading material to corrupt its platform.” (The Trump campaign called the ad “entirely accurate.”)
The clash between Biden’s team and some of the world’s largest tech companies foreshadowed a growing debate over how social networks should handle political ads. Social media allows campaigns to target people who are most likely to be receptive to their messages, making for a powerful tool that could fuel the spread of misinformation and disinformation. That puts tech companies in an uncomfortable place as they try to strike a balance between free expression and preventing the spread of misleading information. The companies also face pressure to protect the integrity of elections because Russian trolls used online ads to interfere in the 2016 election.
The ability to precisely target ads makes social media “a different kind of weapon in the war of disinformation,” said Joan Donovan, who runs Harvard University’s Technology and Social Change Research Project. “You can target your ad at communities that you want to inflame or create outrage.”
Trump’s team has complained that social media companies apply their rules unevenly and has accused them of assisting the Biden campaign. In March, Trump’s campaign detailed five videos from the Biden campaign that it said “mislead Americans.” One of the videos, the Trump campaign said, creates the impression that the president called the coronavirus a “hoax” by selectively editing his statements. Politifact, a Facebook fact-checking partner, said the ad misled viewers. “The words are Trump’s,” Politifact said, “but the editing is Biden’s.”
Facebook, Twitter and Google each handle political advertising differently. Facebook doesn’t send ads from politicians to fact-checkers but includes them in a public database. It also limits the amount of political ads people see on the social network. Twitter has the most restrictive approach, banning political ads with a handful of exceptions. Google, which owns YouTube, allows political campaigns to target people based on age, gender and postal code, but not on voters’ political affiliations or public voter records.
Social media companies don’t consider a post an ad unless they’re directly paid to promote the content. That’s given political campaigns wiggle room to work around policies from tech companies. The campaign of former Democratic presidential candidate Mike Bloomberg, for example, paid influencers on Facebook-owned Instagram to promote posts supporting the former New York City mayor. Since Facebook didn’t consider the posts ads, they weren’t included in its political ads database, a tool journalists and researchers use to find misinformation and disinformation. On Twitter, tweets can still reach millions of people without advertising. Some political consultants also say that tech companies should focus on individual ads and user posts that contain misinformation rather than restrict all political advertising.
Since May 2018, Trump’s Facebook page has spent more than $38 million on ads. Biden’s Facebook page has purchased more than $13 million in ads over the same period. Online advertising has also become increasingly important during the coronavirus pandemic because it’s a way to reach voters who have to stay at home. The controversy over political ads isn’t going away anytime soon.
Facebook’s army of fact-checkers
Facebook uses an army of fact-checkers to keep track of misinformation spreading on the world’s largest social network. If a fact-checker rates a post as false, the post appears lower in the News Feed, and a notice appears over the content stating that it’s “false information.” The exception: posts and ads directly from politicians.
“By limiting political speech we would leave people less informed about what their elected officials are saying and leave politicians less accountable for their words,” Facebook said on its website. CEO Mark Zuckerberg has also said that banning political ads, the approach Twitter took, would benefit incumbents and candidates with name recognition.
Facebook partners with more than 60 fact-checkers throughout the world, including Factcheck.org, Politifact, Reuters Fact Check and the Associated Press in the US. Still, some of Facebook’s fact-checking partners don’t agree with how the company approaches political ads.
Eugene Kiely, director of Factcheck.org, said Facebook should consider showing related articles with accurate information in posts and ads from politicians that contain misinformation. That would give users more information without reducing the post’s distribution in the News Feed.
“There’s an in-between ground that they can be doing,” Kiely said.
Facebook employees have also pushed for a middle-ground solution. In a letter obtained by The New York Times, employees outlined six steps Facebook could take to combat misinformation in political ads. The steps include a stronger visual design so users can identify political ads, restrictions on ad targeting and caps on the amount of money politicians can spend on such ads.
Facebook says it bases its approach on transparency. A spokesman said the company combats disinformation by examining the behavior of certain pages to see if they’re misleading others about who they are and their purpose.
The social network’s online database shows users political ads, but critics have cited problems with the tool. That includes its failure to include the pro-Bloomberg memes produced by paid influencers.
A New York University study found that 68,879 Facebook pages running US political ads between May 2018 and June 2019 failed to disclose who was funding them.
“Transparency actually is somewhat promising if it were well implemented and well enforced,” said Damon McCoy, an assistant professor at NYU and a co-author of the study. “It would potentially be a promising avenue towards understanding what legitimate politicians are doing in terms of political messaging and also finding more nefarious advertising operations.”
Twitter makes it black-and-white. Sort of
Twitter banned political ads in November 2019. But months later, the social network can’t stop criticism that it’s failing to combat misinformation from politicians.
In February, Twitter, as well as Facebook, came under fire for refusing to pull down an edited video posted by Trump that some Democrats complained misled viewers. The roughly 5-minute clip shows House Speaker Nancy Pelosi, a Democrat, repeatedly tearing up a copy of Trump’s State of the Union address as he honored several Americans, including a former Tuskegee Airman. In reality, Pelosi ripped up the speech just once, after Trump had concluded his address.
Twitter said the president’s tweet didn’t constitute advertising because the company wasn’t paid to promote it. The content of the video, which appears to have been produced by a conservative nonprofit, didn’t violate any of its current rules.
The episode underscores the power of Twitter: Politicians don’t need to purchase an ad to reach a massive audience. They can simply attach videos to their tweets.
In December, Biden tweeted a one-minute video attacking Trump that quickly racked up more than 12 million views. In the carefully edited video, world leaders appear to mock Trump. A violin swells in the background, setting the mood. “The world is laughing at President Trump,” Biden tweeted. “They see him for what he really is: dangerously incompetent and incapable of world leadership.” Like Trump’s tweet, Biden’s didn’t violate Twitter’s rules because the campaign didn’t pay the company to promote it.
Still, the video immediately confused Twitter users, who don’t always see the distinction.
“I thought Twitter banned political ads??” a Twitter user replied.
Twitter is known for its 280-character posts and fast pace. Because messages are bite-sized, posts flow through Twitter more rapidly than they would on other social networks. The retweet button also makes it easy to share a tweet. That makes for an environment in which misinformation is bound to spread rapidly.
Unlike bigger rival Facebook, Twitter doesn’t have partnerships with third-party fact-checkers to review posts for accuracy or display related articles from trustworthy news outlets. Twitter says it might eventually add brightly colored labels to misleading tweets from politicians and public figures. Tweets from journalists and fact-checkers who correct the misleading information could appear below the label, and the visibility of the tweet would be reduced.
Marketing experts say banning or restricting political ads could hurt candidates who don’t have strong name recognition.
“While the goal was worthy, the different mechanisms by which each of these platforms have tried to take on the issue really misses the mark,” said Christine Bachman, who runs CDB Digital, a Virginia-based firm that works with progressive state legislature candidates on digital campaigns. The policy may weigh most on candidates running for state or local offices.
A Twitter spokesman didn’t answer questions about how effective its political ad ban has been at curbing misinformation, pointing instead to earlier remarks by CEO Jack Dorsey. The Twitter founder said political messages should be “earned not bought” and specifically cited “unchecked misleading information” as a risk.
Google tries to split the difference
Google is staking out a middle ground.
Ads on Google are different from those on Facebook and Twitter, which often resemble regular social media posts. Political ads on Google look like search results. Ads on YouTube run before a video. You can’t easily spread Google or YouTube ads, though, because there’s no retweet or share button. Google restricts how narrowly campaigns are allowed to target an audience with election ads.
“We believe the balance we have struck — allowing political ads to remain on our platforms while limiting narrow targeting that can reduce the visibility of ads and trust in electoral processes — is the right one,” a spokeswoman said in an email. Google wouldn’t provide details on how it enforces its political ads policy.
Not everyone agrees with Google’s approach. In November, a bipartisan group of digital strategists from the University of Chicago wrote a letter saying the company should focus on stopping disinformation instead of limiting legitimate political speech. Limiting ad targeting could harm political campaigns with less funding than incumbents, making it harder and more expensive to reach younger voters and people of color.
Jared Kamrass, a political consultant with Rivertown Strategies in Ohio, said tech companies should look at individual ads that contain misinformation.
“Ultimately, these companies are afraid of being accused of censorship or of partisanship,” he said. “I would rather run that risk than be a tool for government interference or limit a huge gain of potential revenue from political ads.”