The decision by Meta, the parent company of Facebook and Instagram, to end its fact-checking program and otherwise reduce content moderation raises the question of what content on those social media platforms will look like going forward.
One worrisome possibility is that the change could open the floodgates to more climate misinformation on Meta’s apps, including during disasters.
In 2020, Meta launched its Climate Science Information Center on Facebook to respond to climate misinformation. Currently, third-party fact-checkers working with Meta flag false and misleading posts. Meta then decides whether to attach a warning label to them and reduce how much the company’s algorithms promote them.
Meta’s policies have focused on “viral false information,” hoaxes and “provably false claims that are timely, trending and consequential.” Meta explicitly states that this excludes opinion content that does not include false claims.
The company plans to end its agreements with U.S.-based third-party fact-checking organizations in March 2025. The planned changes, slated to roll out to U.S. users, won’t affect fact-checking content viewed by users in other countries. The tech industry faces stricter rules on combating misinformation in other regions, such as the European Union.
Fact-checking curbs climate misinformation
I study climate change communication. Fact-checks can help correct political misinformation, including about climate change. People’s beliefs, ideology and prior knowledge affect how well fact-checks work. Finding messages that align with the audience’s values, along with using trusted messengers – like climate-friendly conservative groups when speaking to political conservatives – can help. So, too, does appealing to shared social norms, like limiting harm to future generations.
Extreme weather, flood and fire conditions are becoming more frequent and more intense as the world warms. Extreme weather events often lead to surges in social media attention to climate change. Social media posting spikes during a crisis but drops off quickly.
Low-quality fake images created using generative artificial intelligence software, so-called “AI slop,” are adding to confusion online during crises. For example, in the aftermath of back-to-back hurricanes Helene and Milton last fall, fake AI-generated images of a young girl, shivering and holding a puppy in a boat, spread widely on the social media platform X. The spread of rumors and misinformation hampered disaster response efforts.
What distinguishes misinformation from disinformation is the intent of the person or group doing the sharing. Misinformation is false or misleading content shared without active intention to mislead. On the other hand, disinformation is misleading or false information shared with the intent to deceive.
Disinformation campaigns are already happening. In the wake of the 2023 Hawaii wildfires, researchers at Microsoft and the University of Maryland, among others, independently identified an organized disinformation campaign by Chinese operatives targeting U.S. social media users.
To be sure, the spread of misleading information and rumors on social media is not a new problem. However, not all content moderation approaches have the same effect, and platforms are changing how they address misinformation. For example, X replaced its rumor controls that had tamped down on false claims during fast-moving disasters with user-generated labels, called Community Notes.
False claims can go viral rapidly
Meta CEO Mark Zuckerberg specifically cited X’s Community Notes as an inspiration for his company’s planned changes in content moderation. The trouble is that false claims can go viral quickly. Research has found that the response time of crowd-sourced Community Notes is too slow to stop the diffusion of viral misinformation early in its online life cycle – the point when posts are most widely viewed.
In the case of climate change, misinformation is particularly “sticky.” It is especially hard to dislodge falsehoods from people’s minds once they encounter them repeatedly. Furthermore, climate misinformation undermines public acceptance of established science. Just sharing more facts does not work to combat the spread of false claims about climate change.
Explaining the scientific consensus that climate change is happening and is caused by humans burning fossil fuels can prepare people to avoid misinformation. Psychology research indicates that this “inoculation” approach works to reduce the influence of false claims to the contrary.
That’s why warning people against climate misinformation before it goes viral is crucial for curbing its spread. Doing so is likely to get harder on Meta’s apps.
Social media users as sole debunkers
With the coming changes, you will be the fact-checker on Facebook and other Meta apps. The most effective way to pre-bunk against climate misinformation is to lead with accurate information, then warn briefly about the myth – but state it only once. Follow this by explaining why the myth is inaccurate, and repeat the truth.
During disasters, people are desperate for accurate and reliable information to make lifesaving decisions. Finding it is already challenging enough, as when Los Angeles County’s emergency management office mistakenly sent an evacuation alert to 10 million people on Jan. 9, 2025.
Crowd-sourced debunking is no match for organized disinformation campaigns in the midst of information vacuums during a crisis. The conditions for the rapid and unchecked spread of misleading, and outright false, content could get worse with Meta’s content moderation policy and algorithmic changes.
The U.S. public by and large wants the tech industry to curb the spread of false information online. Instead, it seems that big tech companies are leaving fact-checking to their users.