
Facebook Files: How a ban on surveillance advertising can fix Facebook

by Alia Al Ghussain, Campaigner, Amnesty Tech 

Facebook is engulfed in the biggest crisis to hit the company since the Cambridge Analytica scandal. Whistle-blower Frances Haugen’s explosive revelations allege that Facebook’s leadership refused to make changes that would make its platforms safer because they “put their immense profits before people”.

The claims by Haugen, first published in the Wall Street Journal’s “Facebook Files” investigation and repeated in her testimony to the US Congress last week, included:

  • Facebook’s plans to attract more teens to the platform despite internal research showing that Instagram can have serious negative effects on the mental health of teen girls.
  • The spread of anti-vaccine misinformation on Facebook.
  • Facebook’s role in the spread of disinformation in developing countries.
  • Algorithm changes which Facebook’s own research found were fuelling the spread of misinformation, toxicity, and violent content.
  • A system built by Facebook which exempts high-profile users from some or all of its rules, allowing them to post potentially harmful content without consequence.

What does this have to do with surveillance advertising?

While these issues may seem separate, they ultimately all stem from Facebook’s need to maintain user engagement and keep people on its platforms.

This is driven by Facebook’s use of surveillance advertising, which is at the heart of its business model. To micro-target users with ads, Facebook must find out as much as possible about them: their behaviours, their relationships, their vulnerabilities. It does this by keeping users engaged and constantly surveilling their online activity, far beyond the company’s own platforms, then using those insights to decide what information users see and to sell advertising. More engagement equals more data equals more ad dollars.

The result is that its algorithms amplify extreme, divisive, and harmful content, because this is what keeps users engaged and online. The latest revelations echo extensive previous research documenting this problem on Facebook and other social media platforms, particularly YouTube.

Global harms

Facebook strongly contests Frances Haugen’s claims, including in a blog post by Mark Zuckerberg stating that “the idea that we prioritize profit over safety and well-being …[is] just not true”.

Zuckerberg said he is proud of the impact Facebook has had on the world. Yet Frances Haugen – and other brave whistle-blowers such as Sophie Zhang – have told another story.

The company has been responsible for the spread of harmful and divisive content across the world, and this risk is greatly heightened in countries in the Global South, where Facebook is synonymous with the internet.

The ability of Facebook’s algorithms to push out extreme content has already had devastating impacts in the Global South. For example, an independent assessment commissioned by Facebook itself, as well as a UN fact-finding report, showed that the platform played a key role in fomenting violence against the Rohingya people in Myanmar. Similarly, another human rights impact assessment commissioned by Facebook found that the platform had a role in fuelling ethnic violence in Sri Lanka.

However, according to Facebook’s own research disclosed by Frances Haugen, just 13% of staff time spent tracking misinformation is focused on places other than the US.

Calling for accountability

The Facebook Files provide further support for what Amnesty and many others have long been saying: Facebook’s surveillance-based business model is leading to tangible human rights harms. The company needs users to stay on the platform for as long as possible so it can continue to gather data and micro-target them with ads. This is Facebook’s main source of profit, and it’s why Amnesty International is calling for a ban on surveillance advertising.

During her Senate testimony, Frances Haugen said: “Until the incentives change, Facebook will not change.” A shift to advertising that does not rely on invasive tracking and profiling would remove Facebook’s impetus to prioritize user engagement and invasive surveillance above everything else, including human rights.

Facebook won’t regulate itself: it has the internal research, it knows about the harms, and it continues the same practices because of its unrelenting drive for engagement and growth. This crisis is being billed as Big Tech’s “Big Tobacco” moment. We now need to see action from governments to tackle this harm by:

  • Stopping surveillance for profit: Banning surveillance advertising that relies on invasive tracking and the profiling of users for profit.
  • Turning off the manipulation machine: Ensuring control and independent oversight of the algorithms behind these platforms to limit the amplification of disinformation, hate speech and other harmful content.
  • Putting people back in charge: Challenging the dominance of Big Tech companies to ensure people can choose truly rights-respecting alternatives.


Right now, there is a huge opportunity to implement meaningful regulation in the EU through the Digital Services Act (DSA). It is crucial that lawmakers take this moment to force Facebook and other Big Tech platforms to overhaul their business models, assuring platform users that their human rights will be protected and respected.

Until governments show the political will to step in and hold Big Tech accountable, the harms of the toxic surveillance-based business model will continue to spill into the offline world. As Frances Haugen said, “We can do better”.