More Speech and Fewer Mistakes

Meta’s platforms are designed to be places where people can express themselves freely. This can be chaotic. Everything good, bad and ugly is on display on platforms where billions of people can have a voice. But that is free speech.

In his 2019 speech at Georgetown University, Mark Zuckerberg argued that free speech has been the driving force behind progress in American society and around the world, and that inhibiting speech, however well-intentioned the reasons for doing so, often ends up strengthening existing institutions and power structures instead of empowering people. He said: “Some people think giving more people a voice will drive division rather than bring us together. More and more people across the spectrum believe that achieving the political outcomes they believe are important is more important than every person having a voice. I think that’s dangerous.”

In recent years, we have developed increasingly complex systems to manage content on our platforms, partly in response to social and political pressure to moderate content. This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we make too many mistakes, frustrate our users, and too often stand in the way of the free expression we set out to enable. Too much harmless content gets censored, too many people find themselves wrongly locked up in “Facebook jail,” and we are often too slow to respond when that happens.

We want to fix that and return to that fundamental commitment to free speech. Today we’re making some changes to stay true to that ideal.

Ending the third-party fact-checking program, moving to Community Notes

When we launched our independent fact-checking program in 2016, we knew we didn’t want to be the arbiters of truth. At the time, we made what we believed to be the best and most sensible decision, which was to hand over this responsibility to independent fact-checking organizations. The aim of the program was for these independent experts to give people more information about the things they see online, particularly viral hoaxes, so that they can judge for themselves what they see and read.

That is not how things played out, especially in the United States. Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact-check and how. Over time, too much content was fact-checked that people would understand to be legitimate political speech and debate. Our system then attached real consequences in the form of intrusive labels and reduced distribution. A program intended only to inform all too often became a tool of censorship.

We are now changing this approach. We will end the current third-party fact-checking program in the United States and instead begin transitioning to a Community Notes program. We have seen this approach work on X, and we believe it could be a better and less biased way to achieve our original goal of providing people with information about what they see.

  • Once the program is up and running, Meta will not write Community Notes or decide which ones are displayed. They will be written and rated by contributing users.
  • Just like on X, Community Notes will require agreement between people with a range of perspectives to help prevent biased ratings (a rough sketch of this idea follows this list).
  • We want to be transparent about how different viewpoints influence the notes displayed in our apps and are working on the right way to share this information.
  • People can sign up today (Facebook, Instagram, Threads) for the opportunity to be among the first contributors to this program when it becomes available.
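
Neither X’s production algorithm nor Meta’s planned version is reproduced here. As a very rough illustration of the “agreement across a range of perspectives” idea referenced in the list above, the Python sketch below shows a note being displayed only when contributors from more than one viewpoint group rate it helpful; the group labels, thresholds and function names are hypothetical and far simpler than any real note-ranking system.

```python
from collections import defaultdict

# Hypothetical sketch of "agreement across perspectives": a note is shown only
# if contributors from more than one viewpoint group rate it helpful.
# Group labels and thresholds are illustrative, not an actual algorithm.

def note_is_displayed(ratings, min_helpful_per_group=2, min_groups=2):
    """ratings: list of (viewpoint_group, is_helpful) tuples from contributors."""
    helpful_by_group = defaultdict(int)
    for group, is_helpful in ratings:
        if is_helpful:
            helpful_by_group[group] += 1
    groups_in_agreement = [g for g, n in helpful_by_group.items()
                           if n >= min_helpful_per_group]
    return len(groups_in_agreement) >= min_groups

# Example: helpful ratings from two different viewpoint groups -> displayed.
ratings = [("group_a", True), ("group_a", True),
           ("group_b", True), ("group_b", True), ("group_b", False)]
print(note_is_displayed(ratings))  # True
```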

We plan to roll out Community Notes gradually in the US over the next few months and will continue to improve it throughout the year. As we make the transition, we will get rid of our fact-checking controls, stop demoting fact-checked content, and, instead of overlaying full-screen interstitial warnings that you have to click through before you can even see the post, use a much less intrusive label indicating that there is additional information for those who want to see it.

Allowing more speech

Over time, we have developed complex systems to manage content on our platforms, which we find increasingly difficult to enforce. As a result, we have over-enforced our rules, limited legitimate political debate, censored too much trivial content, and subjected too many people to frustrating enforcement measures.

For example, in December 2024, we removed millions of pieces of content every day. Although these actions represent less than 1% of the content produced daily, we estimate that one to two in ten of these actions were errors (i.e. the content may not have actually violated our policies). This does not take into account the measures we take to combat large-scale adversarial spam attacks. We plan to expand our transparency reporting to regularly publish numbers on our mistakes so people can follow our progress. As part of this, we will also provide further details about the mistakes we make in enforcing our spam policies.
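
The figures above are estimates. As a purely illustrative calculation of the scale they imply, the sketch below uses a hypothetical placeholder for the daily removal count (the text above says only “millions”); the one-to-two-in-ten error rate is the only number taken from the estimate itself.

```python
# Illustrative only: the daily removal count is a hypothetical placeholder;
# the 10-20% error rate is the estimate quoted above.
daily_removals = 3_000_000                      # hypothetical "millions per day"
error_rate_low, error_rate_high = 0.10, 0.20    # "one to two in ten"

mistaken_low = daily_removals * error_rate_low
mistaken_high = daily_removals * error_rate_high
print(f"Estimated mistaken actions per day: {mistaken_low:,.0f} to {mistaken_high:,.0f}")
# -> Estimated mistaken actions per day: 300,000 to 600,000
```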

We want to undo the mission creep that has made our rules too restrictive and too prone to over-enforcement. We are removing a number of restrictions on topics such as immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on television or on the floor of Congress but not on our platforms. It may take a few weeks for these policy changes to be fully implemented.

We will also change the way we enforce our policies to reduce the kinds of mistakes that account for the majority of censorship on our platforms. Until now, we have used automated systems to scan for all policy violations, but this has resulted in too many mistakes and too much content being censored that shouldn’t have been. Going forward, we will focus these systems on illegal and high-severity violations such as terrorism, child sexual exploitation, drugs, fraud and scams. For less severe policy violations, we will rely on someone reporting an issue before we take action. We also demote too much content that our systems predict might violate our standards. We are in the process of removing most of these demotions and requiring greater confidence that the content actually violates our policies for the rest. And we will tune our systems to require a much higher degree of confidence before any content is taken down. As part of these changes, we will move the trust and safety teams that write our content policies and review content from California to Texas and other locations across the United States.
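
As a minimal sketch of the enforcement flow described above, the Python below routes content three ways: automated removal only for high-severity categories at high classifier confidence, human review when a user reports something, and no action otherwise. The category list comes from this post, but the threshold value, function names and return labels are assumptions, not a description of Meta’s actual systems.

```python
# Hypothetical sketch: automated removal only for high-severity categories at
# high classifier confidence; lower-severity areas wait for a user report.
HIGH_SEVERITY = {"terrorism", "child_sexual_exploitation", "drugs", "fraud", "scams"}
AUTO_REMOVE_CONFIDENCE = 0.98  # "much higher degree of confidence" - value is made up

def route_content(predicted_category: str, confidence: float, user_reported: bool) -> str:
    if predicted_category in HIGH_SEVERITY and confidence >= AUTO_REMOVE_CONFIDENCE:
        return "remove_automatically"
    if user_reported:
        return "queue_for_human_review"
    return "leave_up"  # no demotion for low-confidence, unreported content

print(route_content("fraud", 0.99, user_reported=False))              # remove_automatically
print(route_content("spam", 0.90, user_reported=True))                # queue_for_human_review
print(route_content("political_speech", 0.70, user_reported=False))   # leave_up
```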

People can often appeal our enforcement decisions and ask us to take another look, but the process can be frustratingly slow and does not always lead to the right outcome. We have added staff to this work, and in more cases we now also require multiple reviewers to reach a decision before something is taken down. We are working on ways to make account recovery easier and are testing facial recognition technology. We have also started using AI large language models (LLMs) to provide a second opinion on some content before we take enforcement action.
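
As a rough illustration of the “second opinion” idea, the sketch below only enforces automatically when the original classifier decision and an LLM review agree, and otherwise falls back to multiple human reviewers. The keyword-based LLM stub, function names and reviewer threshold are placeholder assumptions, not a description of the actual tooling.

```python
def llm_second_opinion(content: str) -> bool:
    """Placeholder: swap in a real model call. Here it just flags a couple of
    obviously prohibited keywords so the sketch runs end to end."""
    return any(word in content.lower() for word in ("scam", "fraud"))

def should_remove(content: str, classifier_says_violating: bool,
                  human_votes: list[bool], required_reviewers: int = 2) -> bool:
    """Remove only if the classifier and the LLM agree, or if enough human
    reviewers independently agree the content violates policy."""
    if classifier_says_violating and llm_second_opinion(content):
        return True
    return sum(human_votes) >= required_reviewers

print(should_remove("obvious scam offer", True, human_votes=[]))          # True
print(should_remove("borderline post", True, human_votes=[True, False]))  # False
```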

A personalized approach to political content

Since 2021, we’ve made changes to reduce the amount of civic content people see – posts about elections, politics or social issues – based on feedback from our users that they wanted to see less of it. But that was a pretty blunt approach. We will now begin phasing this content back in across Facebook, Instagram and Threads with a more personalized approach, so that people who want to see more political content in their feeds can do so.

We’re continually testing how we deliver personalized experiences and have recently run tests on civic content. As a result, we’ll start treating civic content from people and Pages you follow on Facebook more like any other content in your feed, and we’ll start ranking and showing that content based on explicit signals (for example, liking a piece of content) and implicit signals (for example, viewing posts) that help us predict what matters to people. We will also recommend more political content based on these personalized signals and expand the options people have to control how much of this content they see.
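
As a rough sketch of ranking on explicit and implicit signals, the Python below combines a predicted “like” probability and an expected view time into a single score, with a user-controlled multiplier for civic content. The signal names, weights and multiplier are illustrative assumptions, not a published ranking formula.

```python
from dataclasses import dataclass

@dataclass
class PostSignals:
    predicted_like: float       # explicit signal: estimated probability of a like
    predicted_view_time: float  # implicit signal: expected seconds spent viewing
    is_civic: bool

def rank_score(p: PostSignals, civic_preference: float = 1.0) -> float:
    """civic_preference is a user control: <1 shows less political content, >1 more."""
    score = 0.7 * p.predicted_like + 0.3 * (p.predicted_view_time / 60.0)
    return score * civic_preference if p.is_civic else score

posts = [PostSignals(0.8, 30, is_civic=True), PostSignals(0.6, 45, is_civic=False)]
for p in sorted(posts, key=lambda p: rank_score(p, civic_preference=0.5), reverse=True):
    print(p.is_civic, round(rank_score(p, civic_preference=0.5), 2))
```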

These changes are an attempt to return to the commitment to free speech that Mark Zuckerberg outlined in his Georgetown speech. That means being vigilant about the impact our policies and systems have on people’s ability to make their voices heard, and having the humility to change our approach when we know we’re doing it wrong.
