Blog

Tackling Democracy’s Cybersecurity Problem Requires Collective Action

August 17, 2021

For several years, Democracy Fund has been pushing for greater platform transparency and working to protect against the harms of digital voter suppression, surveillance advertising, coronavirus misinformation, and harassment online. But the stakes for this work have never been higher.

One in five Americans rely primarily on social media for their political news and information, according to the Pew Research Center. This means a small handful of companies have enormous control over what a broad swath of America sees, reads, and hears. Now that the coronavirus has moved even more of our lives online, companies like Facebook, Google, and Twitter have more influence than ever before. Yet, we know remarkably little about how these social media platforms operate.

With dozens of academic researchers working to uncover these elusive answers, it is essential that we fund and support their work despite Facebook’s repeated attempts to block academic research on its platform.

Earlier this month, Facebook abruptly shut down the accounts of a group of New York University researchers from Cybersecurity for Democracy, whose Ad Observer browser extension has done pathbreaking work tracking political ads and the spread of misinformation on the social media company’s platform.

Democracy Fund today joined its NetGain Partnership colleagues in releasing this open letter in support of our grantee, Cybersecurity for Democracy, and the community of independent researchers who study the impacts of social media on our democracy.

The Backstory

For the past three years, a team of researchers at NYU’s Center for Cybersecurity has been studying Facebook’s advertising practices. Last year, the team, led by Laura Edelson and Damon McCoy, deployed a browser extension called Ad Observer that allows users to voluntarily share information with the researchers about ads that Facebook shows them. The opt-in browser extension uses data that has been volunteered by Facebook users and analyzes it in an effort to better understand the 2020 election and other subjects in the public interest. The research has brought to light systemic gaps in the Facebook Ad Library API, identified misinformation in political ads, and improved our understanding of Facebook’s amplification of divisive partisan campaigns. 
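
To make that kind of audit concrete, here is a minimal sketch of a disclosure-gap check against the Ad Library API, written in the spirit of the team’s work rather than taken from their code. The access token is a placeholder, and field names such as `funding_entity` follow Facebook’s public API documentation but may differ across versions.

```python
import requests

# Placeholder token: real queries require an approved Ad Library API token.
API_URL = "https://graph.facebook.com/v12.0/ads_archive"
ACCESS_TOKEN = "YOUR_TOKEN"

def fetch_political_ads(search_terms, country="US"):
    """Page through Ad Library results for a search term."""
    params = {
        "access_token": ACCESS_TOKEN,
        "search_terms": search_terms,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": country,
        # Field names follow the public Ad Library API documentation
        # and may differ across API versions.
        "fields": "id,page_name,funding_entity,ad_delivery_start_time",
        "limit": 100,
    }
    url, ads = API_URL, []
    while url:
        payload = requests.get(url, params=params).json()
        ads.extend(payload.get("data", []))
        url = payload.get("paging", {}).get("next")  # full URL; params baked in
        params = None
    return ads

# Flag ads with no disclosed funding entity -- the kind of gap
# the NYU team documented in the Ad Library.
ads = fetch_political_ads("election")
undisclosed = [ad for ad in ads if not ad.get("funding_entity")]
print(f"{len(undisclosed)} of {len(ads)} ads lack a funding disclosure")
```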

Earlier this month, Facebook abruptly shut down Edelson’s and McCoy’s accounts, as well as the account of a lead engineer on the project. This action by Facebook also cut off access to more than two dozen other researchers and journalists who relied on Ad Observer data for their research and reporting, including timely work on COVID-19 and vaccine misinformation. 

As my colleague Paul Waters shared in a deep-dive blog post on this topic:

“Platforms have strong incentives to remain opaque to public scrutiny. Platforms profit from running ads — some of which are deeply offensive — and by keeping their algorithms secret and hiding data on where ads run they avoid accountability — circumventing advertiser complaints, user protests, and congressional inquiries. Without reliable information on how these massive platforms operate and how their technologies function, there can be no real accountability. When complaints are raised, the companies frequently deny or make changes behind the scenes. Even when platforms admit something has gone wrong, they claim to fix problems without explaining how, which makes it impossible to verify the effectiveness of the “fix.” Moreover, these fixes are often just small changes that only paper over fundamental problems, while leaving the larger structural flaws intact. This trend has been particularly harmful for BIPOC who already face significant barriers to participation in the public square.” 

This latest action by Facebook undermines the independent, public-interest research and journalism that is crucial for the health of our democracy. Research on platform and algorithmic transparency, such as the work led by Cybersecurity for Democracy, is necessary to develop evidence-based policy that is vital to a healthy democracy. 

A Call to Action

Collective action is required to address Facebook’s repeated attempts to curtail journalism and independent academic research into its business and advertising practices. Along with our NetGain partners, we have called for three immediate remedies:

  1. We ask Facebook to reinstate the accounts of the NYU researchers as a matter of urgency. Researchers and journalists who conduct research that is ethical, protects privacy, and is in the public interest should not face suspension from Facebook or any other platform. 
  2. We call on Facebook to amend its terms of service within the next three months, following up on an August 2018 call to establish a safe harbor for research that is ethical, protects privacy, and is in the public interest.
  3. We urge government and industry leaders to ensure access to platform data for researchers and journalists working in the public interest. 

The foundations who make up the NetGain Partnership share a vision for an open, secure, and equitable internet space where free expression, economic opportunity, knowledge exchange, and civic engagement can thrive. This attempt to impede the efforts of independent researchers is a call for us all to protect that vision, for the good of our communities, and the good of our democracy. 

Read the NetGain Partners’ Open Letter to Facebook 

Blog

When it’s Time to Learn Fast: How our Learning Processes Changed to Meet the Moment in the Summer of 2020

May 18, 2021

We tried something different. As a foundation, we are only as effective as our understanding of and alignment to what is occurring in the fields we fund. That’s tough to do in a complex environment. During a crisis, it’s even tougher. Try several crises.

In the summer of 2020, the grantees of our Digital Democracy Initiative (DDI) were revving up to combat a trifecta of mis- and disinformation about COVID-19, the Black Lives Matter movement, and the 2020 election. And we wanted to know how we could support them — not with slow, drawn-out information-gathering and analysis, but with something more agile. 

We had to rethink the way we learn. 

We didn’t have the luxury to wait for researchers to conduct a study and package it up for us to leisurely read nine months later. Nor did we want to ask our grantees to spare time that could be better used to do the work. So, we decided to approach our research and evaluation a little differently. We pared back our plans for a developmental evaluation into a set of learning conversations that prioritized strengthening and facilitating information flows among our grantees over answering our own set of learning questions.

We also made a conscious decision to do something researchers would not advise (because of possible observer effects): we broke the fourth wall of objectivity. Our Associate Director of DDI and our Strategy and Learning Manager joined in on the focus groups facilitated by our evaluator. This had positive implications for our construction of knowledge. We were able to hear and respond to concerns in real time as our grantees were experiencing them, and to extract key points beyond those captured by our evaluators. Grantees were also able to learn from each other in real time and see other parts of the wider field they contribute to. While the resulting report, Responding to the Moment, synthesized much of this information, it was invaluable to have immediate access to it.

Our grantees expressed gratitude for the time to connect, particularly during the pandemic lockdown, when some felt increasingly siloed. Hunkered down within the circles they were already in pre-pandemic, they found it a challenge to do what the moment demanded: connect with new folks in order to advance the work.

We learned that one of the largest gaps in the mis- and disinformation network space existed between researchers and activists. While field-building and connecting across network gaps is a critical tactic for the Digital Democracy Initiative, this was an urgent learning for us. Leaning into making connections across fields of work is vital to successfully attacking the complex problem of mis- and disinformation. We have begun this through follow-up meetings and we are already seeing our grantees make these connections more explicitly in their work.

In our real-time learning, we made sure to center the experiences of people of color and women, with special attention to women of color, who fall within both groups and experience unique circumstances because of this intersectionality. One important learning that resulted from this centering concerned the consequences and inequity of uninformed dollars in the philanthropic field due to “parachuting” and “trendiness.” As money was pouring into the mis- and disinformation space, dollars were going to new actors parachuting into the space for those resources, rather than to long-term actors who already worked on these issues. Additionally, a surface understanding of the challenges in the field made it likely that grantmakers would give their well-intentioned dollars to solutions that were trending, but not necessarily effective, instead of buttressing effective efforts that activists and researchers were already cultivating. We have worked to elevate the voices and work of those who have been working in this space over time, and to ensure funders understand the importance of that work as an anchor in this field.

These learnings underscore an inequity in philanthropy: support rarely goes into the hands of those most impacted by a problem, and therefore best suited to address it. Centering the perspectives and experiences of those most negatively impacted by disinformation, people of color and women, allowed us to best understand our points of leverage for field solutions that are either outside the focus of, or deprioritized by, a broader philanthropic sector that is overwhelmingly wealthy and white.

The summer of 2020, like other crisis moments, was filled with chaos, trauma, and uncertainty. We were surprised by the learning that can happen even in the midst of crises when we strip away the formalities and reduce the amount of time and attention being taken away from important work being done in the field. Many of those crises continue today, and the changes we made to our learning will extend past the summer of 2020. We are thankful to our grantees for their time and honesty. The lessons we learned come from them. 

 

Blog

The Growing Movement for Platform Accountability

March 8, 2021

Social media companies have harmed our economy, government, social fabric, and public square. The January 6, 2021 insurrection at the United States Capitol, which was fueled not just by fringe networks like Parler but by household-name platforms like Facebook and Twitter, has made clear that government intervention and better oversight are urgently needed.

While many people still understand the problems only in general terms, such as “social media makes us polarized” or “there’s no privacy online,” a movement to hold these companies accountable is growing and strengthening. Too often, the voices of its organizers, researchers, and civil society groups are missing from the discussion about how to develop better public policy, track online mis- and disinformation, and hold platforms accountable through public advocacy campaigns.

Last year, we commissioned an independent report from ORS Impact to gain an in-depth understanding of the policy ideas and issues these organizations are pursuing and to create a comprehensive view of current efforts to address these problems at their roots. Reports like this are an important part of our work at Democracy Fund: we use them to make informed, ongoing decisions about our strategy and to highlight the vital work of grassroots organizations. We are publishing this report to help funders and organizations interested in doing platform accountability work understand the field as it stands today and develop effective strategies and programs of their own.

Three major learnings from the report will inform Democracy Fund’s Platform Accountability strategy:

1. The algorithms behind social media platforms often amplify existing inequalities along the lines of race, class, and gender, and allow bad actors, both foreign and domestic, to manipulate public opinion. Existing laws and legal precedent make it difficult to regulate algorithms with public policy. For example, current interpretations of the First Amendment generally protect algorithms as a form of speech. More fundamentally, Section 230 of the Communications Decency Act absolves social media companies of responsibility for the content their users publish on their platforms, under the theory that the threat of liability for what users post would make platforms act as speech police rather than open forums for free expression. In practice, the platforms have used this protection to avoid all responsibility for hate speech and mis/disinformation that manipulates public opinion and undermines elections. They have also used the liability protection under Section 230 as a shield against transparency and due process in their moderation practices.


2. Journalists, researchers, and other investigators face difficulties as they try to understand how the platforms distribute and amplify information. It’s also very difficult for everyday users to know who is behind the political advertising they see. The platforms have offered very little access to internal data, and as a public, we can’t solve problems we don’t understand. Opportunities include the potential for research institutions to partner with one another to collect data about how the platforms operate, and act as data brokers between platforms and researchers. Challenges include the additional need for more qualitative data from the platforms about how they develop policy and make decisions about their algorithms and content moderation processes.

Important grantees and partners in this area include:

  • The NYU Online Political Ads Transparency Project, which has created a free tool that allows users and researchers to track the sources of political advertising on platforms. 
  • The Stigler Committee on Digital Platforms, which has argued that the Federal Trade Commission could be empowered to access platforms’ databases, perform its own research on platform impacts, and grant selective access to independent researchers. 
  • The German Marshall Fund, which advocates for new legislation similar to existing law that requires politicians to disclose the funding source of their TV ads (the Honest Ads Act).

3. There is a need for coordination between grantees, funders, and partners to distribute important civic information at scale by leveraging the tools of social media. At present, there are few viable ideas for large-scale intervention, which points to the need for more research, strategy, and relationship-building. Major efforts in this space include the 2020 Elections Research Project, a first-of-its-kind collaboration between Facebook and outside academic researchers to study Facebook and Instagram’s impact on political participation and the shaping of public opinion; the Civic Information API, which aggregates essential information on local representatives and elections to empower developers and inform everyday people (see the sketch after the list below); and the Voting Information Project, which helps voters find reliable information on where to vote and what issues are on their ballots. 

Important collaborations in this area include:

  • The Social Science Research Council, which supports scholars, generates new research, and connects researchers with policymakers, nonprofits, and citizens. 
  • The Google News Initiative and Facebook Journalism Project, both of which provide monetary and in-kind support to help local news publishers connect with their communities and adapt their business models for the digital age.
  • The Facebook Civil Rights Audit, which Facebook initiated after a campaign led by groups like Free Press and Color of Change pressured the company to take civil rights issues on its platform more seriously. 
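
The Civic Information API mentioned above appears to be Google’s, which powers tools like the Voting Information Project. For a sense of what “empowering developers” means in practice, here is a minimal sketch of querying it for a voter’s elected officials; the API key is a placeholder, and the response fields follow Google’s public documentation.

```python
import requests

# Placeholder key: real calls require a Google API key with the
# Civic Information API enabled.
API_KEY = "YOUR_API_KEY"

def lookup_representatives(address):
    """Return 'Office: Official' strings for everyone representing an address."""
    payload = requests.get(
        "https://www.googleapis.com/civicinfo/v2/representatives",
        params={"key": API_KEY, "address": address},
    ).json()
    # The response pairs a list of offices with indices into a
    # parallel list of officials.
    officials = payload.get("officials", [])
    lines = []
    for office in payload.get("offices", []):
        for idx in office.get("officialIndices", []):
            lines.append(f"{office['name']}: {officials[idx]['name']}")
    return lines

for line in lookup_representatives("1600 Pennsylvania Ave NW, Washington, DC"):
    print(line)
```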

The ORS Impact report will inform Democracy Fund’s grantmaking strategy, and how we build networks between grantees that cut across traditional divides between researchers, civil society organizations, advocates and policymakers. The report provides a snapshot of the field during a critical time for platform accountability work, providing a fuller understanding of the current context. Our sister organization, Democracy Fund Voice, will be implementing a similar review process in the coming months for its Media Policy strategy, which will include in-depth interviews with several grantees mentioned in this report about how the challenges of 2020 have impacted their work. 

To learn more about our Digital Democracy program, contact Paul Waters, associate director, Public Square Program at pwaters [@] democracyfund.org. 

Blog

Fighting for an internet that is safe for all: how structural problems require structural solutions

September 30, 2020

In 2017, a college student named Taylor Dumpson achieved what many young scholars dream of: she was elected student body president. She was the first African-American woman president at American University in Washington, D.C., and news of her election was celebrated by many as a sign of growing racial equity in higher education.

But day one of her presidency was anything but triumphant. The night before, a masked man hung bananas around campus inscribed with racist slogans. The neo-Nazi website The Daily Stormer then picked up news reports of the incident and directed a “troll army” to flood the Facebook and email accounts of Dumpson and AU’s student government with hateful messages and threats of violence. Dumpson feared being attacked while carrying out her duties as president and attending class, and she was later diagnosed with post-traumatic stress disorder.

Two years later, the Lawyers’ Committee for Civil Rights Under Law helped Dumpson win a major lawsuit against her harassers. Building on the D.C. Human Rights Act of 1977, Dumpson’s legal team successfully argued that the harassment she faced online limited her access to a public accommodation, her university. It was a significant victory for online civil rights, but her case raises an important question: why weren’t there laws or policies to protect her in the first place?

Part of the problem is that civil rights laws have yet to be updated for the 21st century. “No one predicted the internet when they wrote these laws,” says David Brody, a lead attorney in Dumpson’s case. “Only just now are these laws getting applied to the internet,” he added. A 2020 Lawyers’ Committee report that Brody co-authored shows that laws preventing discrimination online vary widely state-to-state, leaving large gaps in civil rights protections online. 

The second part of the problem is that social media platforms are designed to optimize for engagement — to keep people on their platform as long as possible. This sounds like a reasonable business goal, but the result is that the platforms’ algorithms often elevate the most extreme or offensive content, like racist threats against an African-American student body president, simply because it gets the quickest and most intense reactions. While Brody and the Lawyers’ Committee did not pursue this issue in the Taylor Dumpson case, experts agree that it is a major structural barrier to ensuring civil rights in the 21st century. Optimizing for engagement too often means optimizing for outrage, giving extremists and hate groups tools to spread and popularize their destructive ideologies.

Deeply rooted problems like these have created an internet that is often unsafe and unjust, particularly for people of color and women, who have long borne the brunt of online harms. They are left with an impossible choice: stay on social media and accept daily threats and harassment, or leave the platforms altogether, giving up on participating in the 21st century public square. In 2014, Black feminist bloggers like I’Nasah Crockett, Sydette Harry, and Shafiqah Hudson warned of the rise of online hate and disinformation – two full years before “alt-right” groups and Russia-funded “troll armies” wreaked havoc on public discourse during the 2016 U.S. presidential election.

The harassment of people of color and women on platforms owned by Facebook, Google, and Twitter illustrates larger problems that should concern us all. The digital tools and technologies we have come to depend on are largely owned by private companies driven to maximize profits — even at the expense of the civil rights protections guaranteed under U.S. laws and the Constitution. When clicks and viral posts are prioritized at any cost, democracy suffers. 

Policymakers must recognize that we need to update our civil rights laws, and create new laws where necessary, to fulfill our nation’s Constitutional promises. Within the private sector, tech companies must take it upon themselves to track and combat discrimination on their platforms and stop the spread of online hate. When they do not, we must build public movements to hold them accountable and demand equal access to civil rights protections. Structural problems require structural solutions, and Democracy Fund grantees have put forth a range of them.

The Digital Democracy Initiative is proud to fund groups like the Lawyers’ Committee, Data for Black Lives, and MediaJustice, which work to fill gaps in law and public policy — as well as groups like Stop Online Violence Against Women and Color of Change, whose work exposing and combating coordinated hate and harassment specifically centers the concerns of people of color and women.

Democracy Fund supports coalition building, independent research, and policy development that hold platforms accountable to the public interest, not just their own profits. If you would like to get involved, here are three things you can do: 

  1. Learn more about root causes. Take a look at our systems map to gain a greater understanding of the interconnected nature of the issues we’re working on. 
  2. Support organizations working on these issues. This is incredibly important, particularly as budgets are strained during the COVID-19 pandemic. See our grantee database for the full list of organizations Democracy Fund is supporting. 
  3. Look for ways to make your voice heard. Grantees like Free Press and Color of Change regularly organize petitions to hold tech platforms accountable.

To learn more about our work, contact Paul Waters, associate director, Public Square Program, at pwaters [@] democracyfund.org. 

Blog

Social Media Transparency is Key for Our Democracy

August 11, 2020

According to the Pew Research Center, one in five Americans rely primarily on social media for their political news and information. This means a small handful of companies have enormous control over what a broad swath of America sees, reads, and hears. Now that the coronavirus has moved even more of our lives online, companies like Facebook, Google, and Twitter have more influence than ever before. And yet, we know remarkably little about how these social media platforms operate. We don’t know the answers to questions like: 

  • How does information flow across these networks? 
  • Who sees what and when? 
  • How do algorithms drive media consumption? 
  • How are political ads targeted? 
  • Why do hate and abuse proliferate? 

Without answers to questions like these, we can’t guard against digital voter suppression, coronavirus misinformation, and the rampant harassment of Black, Indigenous, and people of color (BIPOC) online. That means we won’t be able to move closer to the open and just democracy we need. 

A pattern of resisting oversight 

The platforms have strong incentives to remain opaque to public scrutiny. Platforms profit from running ads — some of which are deeply offensive — and by keeping their algorithms secret and hiding data on where ads run they avoid accountability — circumventing advertiser complaints, user protests, and congressional inquiries. Without reliable information on how these massive platforms operate and how their technologies function, there can be no real accountability. 

When complaints are raised, the companies frequently deny or make changes behind the scenes. Even when platforms admit something has gone wrong, they claim to fix problems without explaining how, which makes it impossible to verify the effectiveness of the “fix.” Moreover, these fixes are often just small changes that only paper over fundamental problems, while leaving the larger structural flaws intact. This trend has been particularly harmful for BIPOC who already face significant barriers to participation in the public square.   

Another way platforms avoid accountability is via legal mechanisms like non-disclosure agreements (NDAs) and intellectual property law, including trade secrets, patents, and copyright protections. This allows platforms to keep their algorithms secret, even when those algorithms dictate social outcomes protected under civil rights law.

Platforms have responded to pressure to release data in the past — but the results have fallen far short of what they promised. Following the 2016 election, both Twitter and Facebook announced projects intended to release vast amounts of new data about their operations to researchers. The idea was to provide a higher level of transparency and understanding about the role of these platforms in that election. However, in nearly every case, those transparency efforts languished because the platforms did not release the data they had committed to provide. Facebook’s reticence to divulge data almost a year after announcing its partnership with the Social Science Research Council is just one example of this foot-dragging.

The platforms’ paltry transparency track record demonstrates their failure to self-regulate in the public interest and reinforces the need for active and engaged external watchdogs who can provide oversight. 

How watchdog researchers and journalists have persisted despite the obstacles

Without meaningful access to data from the platforms, researchers and journalists have had to reverse-engineer how platforms operate through experiments and mount elaborate efforts merely to collect their own data about the platforms.

Tools like those developed by NYU’s Online Political Ads Transparency Project have become essential. Facebook created a clearinghouse promoted as a compendium of all the political ads posted to the social media platform; NYU’s tool has helped researchers independently verify the accuracy and comprehensiveness of Facebook’s archive and spot issues and gaps. As we head into the 2020 election, researchers continue to push for data, as they raise the alarm about significant amounts of mis/disinformation spread through manipulative political groups, advertisers, and media websites. 

Watchdog journalists are also hard at work. In 2016, the Wall Street Journal built a side-by-side Facebook feed to examine how liberals and conservatives experience news and information on the platform differently. Journalists with The Markup have been probing Google’s search and email algorithms. ProPublica has been tracking discriminatory advertising practices on Facebook.

Because of efforts like these, we have seen some movement. The House Judiciary Committee’s recent antitrust subcommittee hearing with the CEOs of Apple, Facebook, Google, and Amazon was evidence of a bipartisan desire to better understand how the human choices and technological code that shape these platforms also shape society. However, the harms these companies and others have caused are not limited to economics and market power alone. 

How we’re taking action

At Democracy Fund, we are currently pushing for greater platform transparency and working to protect against the harms of digital voter suppression, coronavirus misinformation, and harassment of BIPOC by: 

  • Funding independent efforts to generate data and research that provide insight into the platforms’ algorithms and decision making; 
  • Supporting efforts to protect journalists and researchers in their work to uncover platform harms;
  • Demanding that platforms provide increased transparency on how their algorithms work and the processes they have in place to prevent human rights and civil rights abuses; and
  • Supporting advocates involved in campaigns that highlight harms and pressure the companies to change, such as Change the Terms and Stop Hate for Profit.

Demanding transparency and oversight has a strong historical precedent in American media. Having this level of transparency makes a huge difference for Americans — and for our democracy. Political ad files from radio and television broadcasters (which have been available to the public since the 1920s) have been invaluable to journalists reporting on the role of money in elections. They have fueled important research about how broadcasters work to meet community information needs. 

The public interest policies in broadcasting have been key for communities of color, who have used them to challenge broadcaster licenses at the Federal Communications Commission when broadcasters fail to live up to their commitments. None of these systems are perfect, as many community advocates will tell you, but even this limited combination of transparency and media oversight doesn’t exist on social media platforms. 

Tech platforms should make all their ads available in a public archive. They should be required to make continually-updated, timely information available in machine-readable formats via an API or similar means. They should consult public interest experts on standards for the information they disclose, including standardized names and formats, unique IDs, and other elements that make the data accessible for researchers.
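
What might such a standardized, machine-readable disclosure format look like? One hypothetical record shape, sketched as a Python dataclass; every field name here is illustrative rather than any platform’s or regulator’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class AdDisclosureRecord:
    """Hypothetical standardized record for a public political-ad archive.

    Field names are illustrative; the point is stable unique IDs and
    consistent, machine-readable formats across platforms.
    """
    ad_id: str                 # globally unique, stable identifier
    platform: str              # e.g. "facebook", "google", "twitter"
    sponsor_name: str          # standardized name of the paying entity
    sponsor_id: str            # unique ID so sponsors can be linked across ads
    spend_usd_lower: float     # spend disclosed as a bounded range
    spend_usd_upper: float
    first_shown: str           # ISO 8601 date
    last_shown: str
    targeting_criteria: dict = field(default_factory=dict)  # audience parameters
    creative_text: str = ""    # the ad copy itself
```

Stable sponsor IDs are the piece most often missing today; without them, linking ads from the same buyer across pages or platforms requires the kind of manual inference researchers currently have to do themselves.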

Bottom line: we need new policy frameworks to enforce transparency, to give teeth to oversight, and to ensure social media can enable and enhance our democracy. Without them, the open and just democracy we all deserve is at real risk.

Blog

How Political Ad Transparency Can Help Fix Democracy’s Cybersecurity Problem

August 7, 2020

Without sufficient transparency and accountability, online platforms have become hotbeds for disinformation that manipulates, maligns, and disenfranchises voters, especially people of color and women. The Online Political Ads Transparency Project is critical to Democracy Fund’s Digital Democracy Initiative’s goal of providing greater transparency and oversight to combat coordinated disinformation campaigns, minimize misinformation, and define and defend civil rights online. 

There is nothing new about misinformation, dirty tricks, and voter suppression in the history of democracy. But as political campaigns – like much of the rest of public life – have moved online, so have tactics to subvert election outcomes. Political ads and messaging are micro-targeted at voters who have no idea who is paying to influence them or what their motives might be. Or, as Laura Edelson and Damon McCoy, researchers for the Online Political Ads Transparency Project at New York University’s Center for Cybersecurity, would put it, democracy has a cybersecurity problem. 

In May 2018, Edelson and McCoy found a perfect opportunity to study this problem: they decided to look at Facebook’s newly public, searchable archive of political ads. Facebook had released this archive following criticism that it was profiting from political ads while not disclosing information about them to the public. Unlike TV and radio broadcasters, which are required to report political ad buys to the Federal Communications Commission, online platforms like Facebook — to this day — are not legally required to do so. But while Facebook’s lack of transparency was technically legal, that doesn’t mean it was right. The democratic process is harmed when Americans don’t know who is attempting to influence them via political ads. 

Diving into Facebook’s archive of political ads, Edelson and McCoy scraped information and used the resulting data to publish an analysis showing that from May 2018 to July 2018, Donald Trump was the largest spender on the platform — a key insight into political influence on Facebook. Unfortunately, Facebook eventually shut down the NYU team’s ability to gather information by scraping — but this was only a temporary setback. Facing mounting pressure from the research community, Facebook soon after created a way for researchers to obtain these data programmatically, via an API. This made it simpler to do an ongoing analysis of the ad library corpus, versus a one-time scrape covering a limited time period. 
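
Once the records are in hand, the core of such an analysis is conceptually simple: group ads by disclosed sponsor and sum the spend. The toy version below assumes the Ad Library API’s convention of reporting spend as a lower/upper bound range; taking the midpoint is our rough simplification, not the researchers’ published method.

```python
from collections import defaultdict

def top_spenders(ads, n=10):
    """Rank sponsors by estimated total spend.

    Assumes each record carries a `funding_entity` and a `spend` range,
    as the Ad Library API returns; we take the midpoint of each range
    as a rough point estimate.
    """
    totals = defaultdict(float)
    for ad in ads:
        sponsor = ad.get("funding_entity") or "UNDISCLOSED"
        spend = ad.get("spend", {})
        lo = float(spend.get("lower_bound", 0))
        hi = float(spend.get("upper_bound", lo))
        totals[sponsor] += (lo + hi) / 2
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Example with toy records:
sample = [
    {"funding_entity": "Campaign A", "spend": {"lower_bound": "100", "upper_bound": "199"}},
    {"funding_entity": "Campaign B", "spend": {"lower_bound": "0", "upper_bound": "99"}},
]
for sponsor, est in top_spenders(sample):
    print(f"{sponsor}: ~${est:,.0f}")
```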

In doing all of this work, the researchers’ goal was to push Facebook to adopt better transparency policies — by presenting it with evidence of how inadequate its current policies were. But Edelson and McCoy were learning that this was an even more difficult task than they had expected. 

“When you are battling a traditional cybersecurity problem like spam,” explains Edelson, “the honest actors – whether it’s a bank, an insurance company, or something else – have incentives to change their behavior, because their customers will reward them with increased profits. But in this case, online platforms may have a long-term interest in being good citizens, but their short-term interest is in making money off of ads and targeted content, precisely the tools the bad actors are gaming. So it’s hard to get them to change.” In other words: social media platforms have competing motivations. 

But the team did have one advantage: the power of public pressure. And they uncovered plenty of things that would worry the public. When they conducted a thorough cybersecurity analysis of how well Facebook was adhering to its own policies on political ad disclosure, they found numerous problems. More than half of the advertising pages they studied – representing $37 million of ad spending – lacked proper disclosure of which candidate or organization paid for the ads. Even when names of sponsors were disclosed, the information was sloppy and inconsistent.

They also identified “inauthentic communities” — clusters of pages that appeared to cater to different racial or geographic identity groups but did not adequately disclose how they were connected to each other.

Rather than going straight to the public with this information, Edelson and McCoy reached out to Facebook to share their findings, letting the company know that they planned to present their research publicly in May 2020 at the IEEE Symposium on Security and Privacy. And it did have an impact: in response, Facebook made internal changes that addressed some of these issues. 

This was a victory for the researchers, but the work continues and many obstacles and mysteries remain. Sometimes the Facebook API stops working. Sometimes researchers find ads that are clearly political, but are not included in the official ad library. And sometimes the reports that Facebook releases that aggregate ad data don’t match the raw data they’ve collected. 

But despite the difficulties, Edelson and McCoy persist. “I’m proud of the fact we’ve moved Facebook on transparency,” says Edelson, “but there is always more work to do. Voters need to know who is targeting them and how — and how much they are spending — to help them make informed decisions when they fill out their ballots.”

In 2020, the researchers are continuing to work on projects aimed at making Facebook and other platforms safer for our democracy. They have launched Ad Observer, a browser plugin that gives Facebook users a way to volunteer data on the ads they are seeing. This will yield valuable information on whether ads are missing from the Facebook Ad Library, as well as information on targeting that the social media platform does not make available. And they are creating a new tool that will help civil society organizations – who represent people who often are targeted by such ads – to quickly identify problematic ad campaigns. While there’s no doubt democracy still has a cybersecurity problem, the NYU researchers are working hard to protect it from threats. 
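
At bottom, the missing-ads check that Ad Observer enables is a set difference: compare the ad IDs volunteered by users against the IDs found in the official archive. A schematic version, with hypothetical record shapes:

```python
def find_missing_ads(observed, archive_ids):
    """Return volunteered ad observations absent from the official archive.

    `observed` is a list of records submitted by extension users, each
    with an `ad_id`; `archive_ids` is the set of IDs collected from the
    platform's public ad library. Record shapes are hypothetical.
    """
    return [ad for ad in observed if ad["ad_id"] not in archive_ids]

# Toy data standing in for volunteered observations and the archive:
observed = [
    {"ad_id": "123", "page": "Example PAC", "seen_by": 14},
    {"ad_id": "456", "page": "Another Group", "seen_by": 3},
]
archive_ids = {"123"}  # in practice, built by paging through the ad library
for ad in find_missing_ads(observed, archive_ids):
    print(f"Ad {ad['ad_id']} from {ad['page']} is missing from the archive")
```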

Cover Photo: Laura Edelson and Damon McCoy of The Online Political Ads Transparency Project at New York University’s Center for Cybersecurity. Photo Credit: New York University. 

Blog

It’s Time for an Internet That Supports Our Democracy

June 15, 2020

Social media platforms like Facebook, Twitter, and Google play an essential role in our democracy. They provide a way for communities to organize and speak directly to politicians. They enable companies to find customers and allow customers, in turn, to pressure companies to live up to higher standards. And they allow news outlets to reach households and create venues for friends and family to discuss current events.

But, far too often, these same platforms provide cover to unlawful practices and malicious actors that harm people and weaken our democracy, because the algorithms they run on are designed and managed without any public oversight. These algorithms are weaponized by foreign governments to inflame hatred and suppress voter turnout. They’re used by hate groups to create online mobs that harass and intimidate people of color and women. And they allow conspiracy theories to go viral. This kind of discrimination and manipulation would be unacceptable for other basic services we rely on, like telecommunications, electricity, or voting systems.

These are just a few of the harms inflicted through social media, and they stem from one fatal flaw: platform companies like Facebook, Twitter, and Google are not accountable to the public. Their unchecked power also extends far beyond their own platforms, as they have acquired countless other companies like Instagram, WhatsApp, and YouTube and have spread their tracking software across the web.

It doesn’t have to be like this. At Democracy Fund, we believe that the digital tools and platforms we all rely on can support democratic systems and protect human rights, rather than undermining them. This belief is at the core of our Digital Democracy Initiative, which funds advocacy, research, and innovations that work towards three concrete goals:

  1. Improve civil and human rights practices online
  2. Strengthen public interest journalism
  3. Reduce inauthentic and coordinated disinformation campaigns

For these goals to become a reality, policymakers must take specific actions to adopt a civil and human rights framework — a way of thinking that puts the needs of people first — focused on changing terms of service to better support people of color online and to serve community information needs.

To achieve these goals, Democracy Fund partners with civil rights groups, technologists, university researchers, and policy organizations working to improve our public square. Some use policy and litigation to protect people of color and hold platforms accountable to the public interest. Others, like Data for Black Lives, mobilize networks of grassroots activists to develop policies to protect users. And organizations like Free Press work to increase funding for news outlets, track and debunk misinformation, and strengthen and diversify news outlets. In particular, the Digital Democracy Initiative supports efforts led by or serving the people most frequently harmed online: people of color and women.

The 2016 US presidential election made clear the power of social media over our politics, when the Russian-controlled Internet Research Agency flooded social media with fake groups and posts to divide, harass, and confuse the American public. But this was only the most high-profile case. For years, white supremacists and other hate groups have tested and developed tactics to disrupt our democracy, using the platforms’ tools for targeting individual users based on characteristics like race, gender, political affiliation, or economic status. All the while, social media platforms like Facebook, Twitter, and Google have expanded their reach into nearly every area of life with little to no oversight. Less obvious issues like algorithmic discrimination have led to civil rights violations, such as real estate companies excluding people of color from seeing their online ads for housing. Journalists and academics need new tools and transparency laws to help them track and expose these hidden harms, just as they did with the great issues of prior generations, from segregation and Jim Crow to pesticides and big tobacco.

The pattern is now clear. Every few months, another problem with the platforms makes headlines. At first, the companies deny it or announce minor changes. Company leaders promise the public and Congress that they will do better. But once the headlines fade, little has changed.

We aim to keep the pressure on by supporting a wide range of efforts with diverse focus areas, leadership, and strategies. The platforms must enact strong policies that uphold democratic norms and prioritize quality information over misleading content and opaque systems. And in the meantime, users need tools to protect themselves and expose bad actors while navigating online spaces and discussions.

It’s time that we reclaim the digital tools and spaces that shape our democracy. Our elections, our lives, and our liberty depend on it.

If you’re interested in learning more about our work, contact Paul Waters, associate director, Public Square Program.

Systems Map

Digital Democracy Initiative Core Story

May 15, 2020

Our democracy is a complex political system made of an intricate web of institutions, interest groups, individual leaders, and citizens, all connected in countless ways. Every attempt to influence and improve some aspect of this complex system produces a ripple of other reactions. To identify the root causes of problems we want to address, find intervention points, and design strategies to effect positive change, we use a methodology called systems mapping. We create systems maps in collaboration with broad and inclusive sets of stakeholders, and use them to design and then assess our grantmaking strategies. They are intended to provide a shared language, creating new opportunities for dialogue, negotiation, and ideas that can improve the health of our democracy.

This systems map describes how digital tools and technologies have transformed our public square in recent years for better and for worse. The flow of news, information and civic discourse is now largely governed by five major companies: Facebook, Twitter, Google, Microsoft, and Apple. Following numerous high-profile scandals, the public has grown concerned about issues of discrimination, mis/disinformation, online hate and harassment, lack of transparency, voter suppression, and foreign interference in our elections through the platforms. The platforms’ lackluster response to these crises suggests that we need to build a strong movement to force the platforms to become accountable not just to their shareholders, but to the public.

The map consists of three interlocking loops.

  1. Platform Power & Profitability describes how the platforms have come to dominate digital communications at the expense of the public square’s overall health and transparency.
  2. Discriminatory Targeting lays out the ways in which platform tools have been used to weaken our democracy, spread hateful content and disinformation, and exacerbate longstanding racial, economic, and gender inequalities.
  3. The Decline of Commercial News shows why and how news publishers have been unable to compete with platforms for attention and profits in the digital age, and what the loss of journalism means for the public square.

Statement

Democracy Fund Statement on Twitter’s Decision on Political Ads

October 31, 2019

WASHINGTON – Democracy Fund president Joe Goldman and managing director Tom Glaisyer issued the following statement in response to Twitter’s announcement that it will no longer run political or advocacy ads:

“Twitter’s decision yesterday is a positive development, but it doesn’t go far enough — our political discourse remains broken on social media platforms. Companies like Twitter must adopt and enforce a code of conduct against hate speech and disinformation, and we must continue to hold them accountable until they do.

The time for half-measures and minor reforms has passed. Simply ending a portion of an advertising policy without providing transparency, addressing misinformation, and ending racially biased algorithms only deals with one part of a larger issue. In the lead up to the 2020 election, we need bold leadership from all platforms to strengthen our digital public square and preserve a healthy democracy.”

Two years ago, Democracy Fund and the Omidyar Network published a report asking, “Is Social Media a Threat to Democracy?” The report chronicled the role of social media platforms in spreading misinformation and divisive propaganda during the 2016 election. Democracy Fund continues to invest in programs, people, and organizations that are working to create a robust public square that serves our democracy.

Blog

Democracy Fund and Omidyar Network Support Independent Analysis of Facebook’s Role in Elections

April 9, 2018

Today Facebook announced a new initiative that will provide independent researchers access to Facebook data to study the impact the social network has on our elections and our democracy. Democracy Fund, along with the Omidyar Network, the Hewlett Foundation, and several other leading foundations, has come together to support the research efforts that will be enabled through this program. We believe that independent funding of this research is critical, and we hope that the program will help the public and policymakers better understand how Facebook is shaping our elections, social fabric, and democratic life.

This announcement comes amidst a firestorm of attention focused on the social media giant’s role in allowing vast amounts of personal data to be released, data that was then used to target shady and divisive political ads at Americans. Last week Facebook revealed that tens of millions more people were affected by that breach than was first reported. As a foundation fundamentally concerned with the health of our democracy, we have been following this story closely.

In fact, Democracy Fund and the Omidyar Network have been raising the alarm about these issues for some time. Late last year, the organizations published an in-depth paper asking, “Is Social Media a Threat to Democracy?” and identifying six ways in which digital platforms pose direct challenges to our democratic ideals. We have signed on to support this research initiative, but we are realistic about the complexities and risks of supporting this effort and are approaching it as one part of a multipronged strategy to create a safer, stronger, and more meaningful digital public square.

We are deeply committed to working on meaningful solutions that help rebuild trustworthy spaces for communities to connect, share information and participate in our democracy. We currently fund a range of efforts focused on combating hyper-partisanship, ensuring the integrity of our elections, and fostering a robust fourth estate locally and nationally.

Grantees like Prof. Zeynep Tufekci and ProPublica are doing powerful work on algorithmic accountability. Prof. Young Mie Kim tracked political ads on Facebook in 2016, and Politifact is helping sort truth from fiction on the platform. The German Marshall Fund is tracking Russian misinformation, and Free Press is organizing diverse communities around the rights to connect and communicate. The Center for Democracy and Technology is helping strengthen election cybersecurity and spreading best practices for data privacy in voter registration databases and campaign data. Launched in 2017, the Social Science Research Council’s Media & Democracy program encourages academic research, practitioner reflection, and public debate on all aspects of the close relationship between media and democracy, including how changes in the political landscape, such as increasing polarization, have affected the media.

However, in our work with activists, organizations, and scholars in the field, we have consistently heard that we can’t address what we don’t know. Through this new research effort, Facebook says it will give researchers unprecedented access to its data. The research will be driven by a diverse coalition of scholars. Research projects will have to go through relevant university Institutional Review Board (IRB) reviews, will be rigorously peer reviewed, and may be vetted to ensure Facebook lives up to its legal and ethical commitments to users. Crucially, the research results themselves will not be subject to approval by Facebook.

The emphasis of this first announcement is on Facebook’s role in elections, but the committee is also expected to address how Facebook’s systems influence viral deceptions, polarization, and civic engagement. Democracy Fund believes the American people must have effective ways to understand and be a part of the democratic process. As the internet transforms political life, it opens exciting new pathways for public engagement, but it has also created fertile ground for abuse, harassment, and manipulation that hurt our communities and our society. As this research is planned, Democracy Fund will pay special attention to ensuring that the voices and priorities of those disproportionately harmed by social media are included.

The flood of news about bad actors gaming the system has revealed a troubling disregard for the critical responsibility social media companies bear for our personal privacy and public debate. Facebook, and other platforms, need to acknowledge the outsized role they play in our society and truly prioritize privacy, embrace transparency, and accept accountability. We are realistic about the complexities here, but we see this research partnership as a key step toward that goal. Through this program, and in separate endeavors, we are deeply committed to working on meaningful solutions that help rebuild trustworthy spaces for communities to connect, share information, and participate in our democracy.

Democracy Fund
1200 17th Street NW Suite 300,
Washington, DC 20036