Without sufficient transparency and accountability, online platforms have become hotbeds for disinformation that manipulates, maligns, and disenfranchises voters, especially people of color and women. The Online Political Ads Transparency Project is critical to Democracy Fund’s Digital Democracy Initiative’s goal of providing greater transparency and oversight to combat coordinated disinformation campaigns, minimize misinformation, and define and defend civil rights online.
There is nothing new about misinformation, dirty tricks, and voter suppression in the history of democracy. But as political campaigns – like much of the rest of public life – have moved online, so have tactics to subvert election outcomes. Political ads and messaging are micro-targeted at voters who have no idea who is paying to influence them or what their motives might be. Or, as Laura Edelson and Damon McCoy, researchers for the Online Political Ads Transparency Project at New York University’s Center for Cybersecurity, would put it, democracy has a cybersecurity problem.
In May 2018, Edelson and McCoy found a perfect opportunity to study this problem: they decided to look at Facebook’s newly public, searchable archive of political ads. Facebook had released this archive following criticism that it was profiting from political ads while not disclosing information about them to the public. Unlike TV and radio broadcasters, which are required to report political ad buys to the Federal Communications Commission, online platforms like Facebook — to this day — are not legally required to do so. But while Facebook’s lack of transparency was technically legal, that doesn’t mean it was right. The democratic process is harmed when Americans don’t know who is attempting to influence them via political ads.
Diving into Facebook’s archive of political ads, Edelson and McCoy scraped information and used the resulting data to publish an analysis showing that from May 2018 to July 2018, Donald Trump was the largest spender on the platform — a key insight into political influence on Facebook. Unfortunately, Facebook eventually shut down the NYU team’s ability to gather information by scraping — but this was only a temporary setback. Facing mounting pressure from the research community, Facebook soon after created a way for researchers to obtain these data programmatically, via an API. This made it simpler to do an ongoing analysis of the ad library corpus, versus a one-time scrape covering a limited time period.
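To give a sense of what programmatic access looks like, here is a minimal sketch of how a researcher might construct a query against Facebook’s public Ad Library API. The endpoint path and parameter names reflect Facebook’s published Ad Library API, but the Graph API version, the field list, and the access token are illustrative assumptions — the real API requires a token tied to a verified identity, and available fields vary by version.

```python
from urllib.parse import urlencode

# Assumed Graph API version; the real version to use depends on when you query.
AD_LIBRARY_ENDPOINT = "https://graph.facebook.com/v19.0/ads_archive"

def build_ads_query(search_terms, countries=("US",), access_token="YOUR_ACCESS_TOKEN"):
    """Build the request URL for an Ad Library API query.

    `access_token` is a placeholder: real queries need a valid token
    obtained through Facebook's identity-verification process.
    """
    params = {
        "search_terms": search_terms,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": ",".join(countries),
        # Illustrative field names; check the API reference for the
        # fields available in your API version.
        "fields": "page_name,spend,impressions",
        "access_token": access_token,
    }
    return f"{AD_LIBRARY_ENDPOINT}?{urlencode(params)}"

url = build_ads_query("election")
print(url)
```

Fetching the resulting URL (with a real token) returns paginated JSON records of matching ads, which is what makes ongoing, corpus-wide analyses like the NYU team’s spending breakdowns feasible.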
In doing all of this work, the researchers’ goal was to push Facebook to adopt better transparency policies — by presenting them with the evidence of how inadequate their current policies were. But Edelson and McCoy were learning that was an even more difficult task than they had expected.
“When you are battling a traditional cybersecurity problem like spam,” explains Edelson, “the honest actors – whether it’s a bank, an insurance company, or something else – have incentives to change their behavior, because their customers will reward them with increased profits. But in this case, online platforms may have a long-term interest in being good citizens, but their short-term interest is in making money off of ads and targeted content, precisely the tools the bad actors are gaming. So it’s hard to get them to change.” In other words: social media platforms have competing motivations.
But the team did have one advantage: the power of public pressure. And they uncovered plenty of things that would worry the public. When they conducted a thorough cybersecurity analysis of how well Facebook was adhering to its own policies on political ad disclosure, they found numerous problems. More than half of the advertising pages they studied – representing $37 million of ad spending – lacked proper disclosure of which candidate or organization paid for the ads. Even when names of sponsors were disclosed, the information was sloppy and inconsistent.
They also identified “inauthentic communities” — clusters of pages that appeared to cater to different racial or geographic identity groups but did not adequately disclose how they were connected to one another.
Rather than going straight to the public with this information, Edelson and McCoy reached out to Facebook to share their findings, letting the company know that they planned to present their research publicly in May 2020 at the IEEE Symposium on Security and Privacy. And it did have an impact: in response, Facebook made internal changes that addressed some of these issues.
This was a victory for the researchers, but the work continues and many obstacles and mysteries remain. Sometimes the Facebook API stops working. Sometimes researchers find ads that are clearly political, but are not included in the official ad library. And sometimes the reports that Facebook releases that aggregate ad data don’t match the raw data they’ve collected.
But despite the difficulties, Edelson and McCoy persist. “I’m proud of the fact we’ve moved Facebook on transparency,” says Edelson, “but there is always more work to do. Voters need to know who is targeting them and how — and how much they are spending — to help them make informed decisions when they fill out their ballots.”
In 2020, the researchers are continuing to work on projects aimed at making Facebook and other platforms safer for our democracy. They have launched AdObserver, a browser plugin that gives Facebook users a way to volunteer data on the ads they are seeing. This will yield valuable information on whether ads are missing from the Facebook Ad Library, as well as information on targeting that the social media platform does not make available. And they are creating a new tool that will help civil society organizations – which represent people who often are targeted by such ads – to quickly identify problematic ad campaigns. While there’s no doubt democracy still has a cybersecurity problem, the NYU researchers are working hard to protect it from threats.
Cover Photo: Laura Edelson and Damon McCoy of The Online Political Ads Transparency Project at New York University’s Center for Cybersecurity. Photo Credit: New York University.