We’re publishing our first-quarter reports for 2025, including the Community Standards Enforcement Report. Following the changes announced in January, we’ve cut enforcement mistakes in the U.S. in half, while over the same period the low prevalence of violating content on our platforms remained largely unchanged for most problem areas.
We have also released our Adversarial Threat Report, Widely Viewed Content Report and the biannual Transparency Report consisting of Government Requests for User Data and Content Restrictions Based on Local Law. All of the reports are available in our Transparency Center.
Community Standards Enforcement Report
In January, we announced a series of steps we’d be taking to allow for more speech while working to make fewer mistakes. This included continuing to focus our proactive enforcement on illegal and high-severity violations and, for less severe violations, relying more on someone reporting an issue before we take action.
Addressing mistakes is important. In December 2024, for example, we removed millions of pieces of content every day. Yet we estimated that one to two out of every 10 of those actions may have been mistakes (i.e., the content may not have actually violated our policies). This did not include the actions we take to tackle large-scale adversarial spam attacks.
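To make the scale of that error rate concrete, here is a small illustrative calculation; the daily-volume figure below is a hypothetical stand-in for “millions,” and only the one-to-two-in-ten range comes from the text.

```python
# Illustrative only: the post says "millions" of daily removals and a 10-20%
# estimated mistake rate; the 5,000,000 figure is a hypothetical example.
daily_actions = 5_000_000
mistake_rate_low, mistake_rate_high = 0.10, 0.20  # "one to two out of every 10"

print(f"Estimated mistaken actions per day: "
      f"{daily_actions * mistake_rate_low:,.0f} to {daily_actions * mistake_rate_high:,.0f}")
```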
Since that announcement, we have taken steps to reduce the number of mistakes we make in the United States, cutting them in half between Q4 2024 and the end of Q1 2025. Some of these steps include:
Auditing our automated systems to find where we were making too many mistakes — and turning those off while we focus on improving them;
Using additional signals to improve the accuracy of other classifiers; and
Requiring more confidence that content violates our policies before we remove it, both by raising the thresholds of our automated systems and in some cases by requiring multiple layers of review before removal (a simplified sketch of this approach follows below).
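The sketch below shows, in Python, what higher confidence thresholds and layered review can look like in an enforcement pipeline; the threshold values, severity tiers and function names are illustrative assumptions for this post, not Meta’s actual systems.

```python
from dataclasses import dataclass

@dataclass
class ClassifierResult:
    policy: str        # e.g. "bullying_and_harassment" (hypothetical label)
    confidence: float  # model's confidence that the content violates the policy

# Hypothetical thresholds: automation acts only when the classifier is very
# confident, with a stricter bar for less severe potential violations.
AUTO_REMOVE_THRESHOLDS = {
    "high_severity": 0.90,
    "lower_severity": 0.98,
}

def route_content(result: ClassifierResult, severity: str, was_reported: bool) -> str:
    """Decide whether to remove content, queue it for human review, or leave it up."""
    threshold = AUTO_REMOVE_THRESHOLDS[severity]
    if result.confidence >= threshold:
        # In some cases, even a confident score passes through an extra review layer.
        return "remove_after_secondary_review" if severity == "lower_severity" else "remove"
    if severity == "lower_severity" and not was_reported:
        # Rely more on user reports before acting on less severe potential violations.
        return "leave_up"
    return "send_to_human_review"
```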
During this same time period, and even with these changes, overall prevalence remained largely consistent across most violation types, with two exceptions. The prevalence of bullying and harassment content on Facebook rose slightly from 0.06%-0.07% to 0.07%-0.08% due to a spike in sharing of violating content in March. The prevalence of violent and graphic content on Facebook also rose slightly from 0.06%-0.07% to about 0.09%, due to an increase in sharing of violating content as well as our ongoing work to reduce enforcement mistakes.
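For context on how these figures are read: prevalence estimates the share of content views that were of violating content, based on sampled and labeled views, which is why it is reported as a range. The sketch below, with hypothetical sample counts, shows the general idea.

```python
import math

# Hypothetical sketch of a prevalence estimate: sample content views, label
# them, and report the share of views that were of violating content.
sampled_views = 1_000_000   # hypothetical number of sampled content views
violating_views = 700       # hypothetical number of those views labeled violating

p = violating_views / sampled_views
# A simple 95% interval for a proportion (normal approximation) is one reason
# prevalence is reported as a range such as 0.06%-0.07%.
margin = 1.96 * math.sqrt(p * (1 - p) / sampled_views)
print(f"Estimated prevalence: {p:.3%} (about {p - margin:.3%} to {p + margin:.3%})")
```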
Over time, the systems we use to manage and enforce content on our platforms have grown increasingly complex, and that is what we have been working to address. We’re constantly working to strike the right balance: if our enforcement is too narrow, we risk missing potentially violating content, while if it is too broad, we make too many mistakes and subject too many people to frustrating enforcement actions.
While all of these changes are designed to reduce mistakes, we also want to make sure teens are having the safest experience possible. That’s why, for teens, we’ll continue to both proactively enforce on the highest severity harms and proactively hide other types of harmful content, such as bullying.
Our Q1 2025 report is the first in which these changes are reflected in the data. Across a range of policy areas we saw a decrease in the amount of content actioned and a decrease in the percentage of content we took action on before a user reported it, in part because of the changes we made to ensure we are making fewer mistakes. We also saw a corresponding decrease in the amount of content appealed and eventually restored.
Outside of this, we are seeing limited impact on the metrics.
Content actioned on Facebook under our Dangerous Organizations and Individuals, Organized Hate policy decreased due to a bug that has since been addressed.
Content actioned for Spam on Instagram increased as a result of adjustments made to our proactive detection technology.
Later this year, we plan to expand our Community Standards Enforcement Report to include metrics on our mistakes so that people can track our progress. We’re also working to include information historically found in our Intellectual Property report as part of this report.
You can read the full report here.
Adversarial Threat Report
We’re sharing threat research into three covert influence operations we disrupted in Q1 of 2025 originating from Iran, China and Romania. We detected and removed these campaigns before they were able to build authentic audiences on our apps. You can read the full report here.
Government Requests for User Data
During the second half of 2024, global government requests for user data decreased 0.5%, from 323,846 to 322,062. India was the top requester, with a 3.8% increase in requests, followed by the United States, Brazil and Germany.
In the US, we received 74,672 requests, a decrease of 8.8% from the first half of 2024; 76.6% of these included a non-disclosure order prohibiting Meta from notifying the user. Emergency requests accounted for 14.1% of the total requests.
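The percentage figures above follow from the reported counts; here is a quick arithmetic check using only numbers stated in this post (the per-category US counts below are derived, so treat them as approximate):

```python
# Quick check using only figures stated in this post.
h1_2024_global, h2_2024_global = 323_846, 322_062
change = (h2_2024_global - h1_2024_global) / h1_2024_global
print(f"Global change in requests: {change:.2%}")  # about -0.55%, i.e. the ~0.5% decrease reported

us_requests = 74_672
print(f"US requests under a non-disclosure order: ~{us_requests * 0.766:,.0f}")
print(f"US emergency requests: ~{us_requests * 0.141:,.0f}")
```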
Additionally, as a result of transparency updates introduced in the 2015 USA Freedom Act, the US government lifted non-disclosure orders on 21 National Security Letters (“NSLs”). These NSLs, along with the US government’s authorization letters, are available here.
As part of our ongoing efforts to ensure the accuracy of our reporting, we conducted a review of our processes and made some adjustments. Specifically, we’ve removed the individual pages for territories that are subject to the jurisdiction of a country. While we’ll continue to include these requests in the overall numbers for the country of jurisdiction, removing the individual territory pages prevents confusion and provides a clearer picture of our reporting. You can read the full report here.
Content Restrictions Based on Local Law
For many years, we’ve published biannual transparency reports, which include the volume of content restrictions we make when content is reported as violating local law but doesn’t go against our Community Standards. Beginning with the H2 2023 report, we have also included information about a limited number of countries where we are obligated by local law to automatically restrict content at scale and in country, which is reflected in the comparatively higher volumes of content restrictions for those countries. For the first time in this report, we are including information about content restrictions on Threads.
During this reporting period, the volume of content restrictions based on local law for Facebook and Instagram decreased globally from 148 million in H1 2024 to over 84.6 million in H2 2024. This number was largely driven by obligations in Indonesia, where over 84 million items were restricted for gambling content under the Electronic Information and Transactions (EIT) Law and KOMINFO Regulation 5/2020 on Private Electronic Services Operator, a decrease from 148 million in H1 2024.
We continually review our processes and protocols to help ensure the accuracy of our reporting. You can read more here.
NCMEC CyberTips
As part of our ongoing work to provide young people with safe, positive online experiences, we're continuing to provide more transparency into our efforts to find and report child exploitation to the National Center for Missing and Exploited Children (NCMEC).
In Q1 2025, we reported the following number of CyberTips to NCMEC from Facebook, Instagram and Threads:
Facebook, Instagram and Threads sent over 1.7 million NCMEC CyberTip reports for child sexual exploitation.
Of these reports, over 281 thousand involved inappropriate interactions with children. CyberTips relating to inappropriate interactions with children may include an adult soliciting child sexual abuse material (CSAM) directly from a minor, online enticement of a minor, minor sex trafficking, or attempting to meet and cause harm to a child in person. These CyberTips also include cases where a child is in apparent imminent danger.
Over 1.4 million reports related to shared or re-shared photos and videos that contain CSAM.
Other Integrity Updates
Large Language Models (LLMs) in Content Enforcement: Over the years, AI has become one of the most effective tools for improving the precision of our enforcement and reducing the prevalence of violating content. We began testing LLMs by training them on our Community Standards to help determine whether a piece of content violates our policies. Early tests suggest that LLMs can perform better than existing machine learning models, or enhance existing ones, and with further testing we are beginning to see LLMs operate beyond human performance for select policy areas. We’re also using LLMs to remove content from review queues in certain circumstances when we’re highly confident it does not violate our policies. This frees up capacity for our reviewers, allowing them to focus their expertise on content that’s more likely to violate. We believe that LLMs and other AI tools provide significant opportunities to counter high-severity and illegal content at scale, and we will continue to test how LLMs can benefit our enforcement systems, particularly to further improve accuracy.
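As a rough illustration of the pattern described here, prompting a model with policy text and using a high-confidence “does not violate” answer to clear items out of a review queue, here is a minimal sketch. The prompt format, labels and confidence threshold are assumptions for this example, not Meta’s actual implementation.

```python
# Hypothetical sketch of LLM-assisted triage of a review queue. The prompt,
# labels and threshold are illustrative assumptions, not Meta's actual system.

POLICY_EXCERPT = "Do not post content that threatens or incites violence."  # abbreviated example

def build_prompt(content: str) -> str:
    return (
        "You are a content policy classifier.\n"
        f"Policy: {POLICY_EXCERPT}\n"
        f"Content: {content}\n"
        "Answer VIOLATES or DOES_NOT_VIOLATE, with a confidence from 0 to 1."
    )

def triage(content: str, llm_call, confidence_threshold: float = 0.95) -> str:
    """Remove an item from the human review queue only when the model is highly
    confident it does not violate the policy; otherwise keep it for reviewers."""
    label, confidence = llm_call(build_prompt(content))
    if label == "DOES_NOT_VIOLATE" and confidence >= confidence_threshold:
        return "remove_from_review_queue"
    return "keep_for_human_review"

# Example usage with a stubbed model call:
if __name__ == "__main__":
    stub_llm = lambda prompt: ("DOES_NOT_VIOLATE", 0.99)
    print(triage("A photo of my dog at the park", stub_llm))
```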
Community Notes: In Q1 2025 we began testing Community Notes in the United States, a new way for our community to add context to posts that may be misleading or confusing. Notes written and rated by users have begun to publish on public posts across Instagram, Threads and Facebook. Since the initial launch, we’ve added the ability to write notes on Reels and on replies in Threads, and added the option to request a Community Note as part of our user reporting experience. We’ll continue to share product updates in the months to come.