The case to vote FOR Proposal 8 (“Report on Hate Targeting Marginalized Communities”) on Meta’s 2025 Proxy Statement

PROXY MEMORANDUM

To: Shareholders of Meta Platforms, Inc. (the “Company” or “Meta”)

From: The Anti-Defamation League & JLens (together, “we”)

Date: April 24, 2025

Re: The case to vote FOR Proposal 8 (“Report on Hate Targeting Marginalized Communities”) on Meta’s 2025 Proxy Statement

We urge you to vote FOR the “Report on Hate Targeting Marginalized Communities” (Proposal 8) in Meta’s 2025 Proxy Statement.

This proposal effectively calls on Meta to address the material risks of letting antisemitism and other forms of hate speech run rampant on its platforms. The proposal requests that the Company issue a comprehensive report detailing its policies, practices, and effectiveness in combating hate on its platform(s) and services, specifically antisemitism, anti-LGBTQ+ hate, and anti-disability hate.

A Note on The Timing of Our Proposal

JLens submitted this proposal before Meta announced that it would end its third-party fact-checking program in the United States and scale back its content moderation policies. These sweeping rollbacks of critical content moderation safeguards heighten the risks that JLens’ proposal seeks to address and make them more urgent.

These concerns are not limited to outside observers. On April 23, 2025, Meta’s own Oversight Board openly questioned whether the Company’s aggressive rollback of content moderation policies had gone too far in dismantling critical safeguards for users, and it urged Meta to assess the human rights impact of its current policies [1]. We believe the Oversight Board’s statement not only validates Shareholder Proposal 8 but also lends it real urgency: investors now have confirmation from Meta’s own oversight body that the risk of unchecked hate speech requires swift, transparent action.

Below, we outline why we believe your support for this proposal is critical, not only from a social responsibility perspective but also for protecting and enhancing shareholder value.

 

Escalating Hate: A Societal Crisis

Unchecked hate speech on social media platforms creates real-world harm and poses significant risks for users, society, and shareholders alike.

  • Spike in Antisemitism: Data from the Anti-Defamation League (“ADL”) shows that antisemitic incidents have risen sharply [2], with social media playing a significant role in spreading harmful rhetoric. In a recent survey, 41% of Jewish adults reported altering their online behavior to avoid being identified as Jewish amid an uptick in online harassment [3]. Meta’s recent reduction in content moderation could further increase these risks, making it essential for the Company to act proactively.
  • Vulnerable Users: Meta’s weakened moderation could embolden not only antisemitism but also hate targeting other communities, driving away users who no longer feel safe. We believe hate speech and misinformation, if left unchecked, could undermine the open exchange of ideas, chilling healthy discourse for everyone and potentially damaging Meta’s reputation and user base.
  • Public Scrutiny and Accountability: Regulators, advocacy groups, and the public are increasingly holding technology companies responsible for content that fosters division and inflames real-world violence [4] [5]. By proactively addressing these harms, Meta can demonstrate leadership, corporate responsibility, and a commitment to protecting shareholder value.

Material Financial and Reputational Risks

We believe Meta’s inadequate response to hate speech presents significant financial, regulatory, and reputational threats that shareholders cannot afford to overlook.

  • Advertiser Trust: Advertisers remain highly sensitive to brand safety. Weakened content moderation could depress user engagement and heighten brand safety concerns, threatening Meta’s advertising revenue, which accounts for 97% of its total revenue [6]. We believe Meta risks losing a significant portion of its user base if it does not strengthen content safety. The more hate speech that permeates Meta’s platforms, the more likely prominent advertisers are to reduce or withdraw spending to avoid reputational damage, posing substantial risks to Meta’s revenue growth and profitability.
  • User Retention and Growth: Ineffective moderation can drive users to competing platforms, particularly users from targeted communities or those simply seeking a safer online environment. If Meta’s user base erodes, so does long-term revenue potential.
  • Regulatory and Legal Exposure: Around the world, governments are intensifying scrutiny of social media’s role in promoting harmful content. For example, the European Union’s Digital Services Act (“DSA”), which came into full force in February 2024, imposes stringent obligations on large online platforms like Meta to mitigate illegal content, including hate speech [7]. Non-compliance can result in fines of up to 6% of the Company’s global annual revenue. Failure to demonstrate robust content controls raises the risk of fines, regulatory actions, or legislation compelling stricter oversight.

Any one of these scenarios would likely prove costly and disruptive for shareholders [8]. Meta has incurred substantial fines in the past, including a $263 million fine for a 2018 data breach [9], a $5 billion FTC fine in 2019 for privacy concerns [10], and a fine of €405 million issued by Ireland’s Data Protection Commission related to processing children’s personal data on Instagram [11].

The High Cost of Inadequate Content Moderation: Insights from YouTube, X, and TikTok

Our research indicates that companies operating in the digital media landscape increasingly find that weak content moderation policies can directly harm their financial stability and brand reputation. Recent experiences at YouTube, X, and TikTok offer critical lessons on the importance of proactively addressing harmful content.

  • YouTube – In 2017, YouTube experienced a severe backlash known as the “Adpocalypse” when major advertisers such as AT&T Inc., Verizon Communications Inc., and Johnson & Johnson withdrew their advertisements after discovering that their ads appeared alongside extremist and hateful content [12]. The boycott extended beyond YouTube to Google’s broader advertising network, underscoring concerns about brand safety and forcing Google to promise policy improvements and better controls for advertisers [13] [14]. Following these events, in a June 2019 policy update, YouTube said it would ban “videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status” [15]. In announcing the update, YouTube said it would specifically ban the promotion of Nazi ideology and Holocaust denial.
  • X (formerly Twitter) – The financial and reputational challenges at X serve as a stark warning of the consequences Meta could face if hate speech and harmful content are not adequately moderated. In 2022, changes to X’s content moderation and related practices prompted major advertisers, including Chipotle Mexican Grill, Inc., General Mills, Inc., and United Airlines, Inc., to pause advertising campaigns over concerns about brand safety, which contributed to revenue declines at X [16] [17] [18]. Following those same changes, a study by market research firm Kantar found that about 25% of advertisers planned to cut spending on X in 2025 [19], citing concerns about brand safety and declining trust in the platform’s content [20]. X’s daily active user base in the UK dropped from 8 million in 2023 to approximately 5.6 million in September 2024 [21]. A similar trend is evident in other regions, including the US, where daily active users fell by about 20% over the same 16-month period [22].
  • TikTok – TikTok has faced substantial financial and operational disruptions, including multiple lawsuits and fines globally, due to serious content moderation failures. For instance, in March 2024, Italy fined TikTok €10 million for its failure to effectively moderate harmful content, specifically dangerous viral challenges that endangered minors [23]. Additionally, countries such as India, Pakistan, and Indonesia imposed temporary bans citing concerns over inappropriate and harmful content, significantly impacting TikTok’s revenue and growth potential in these markets [24][25]. 

These examples underscore that inadequate moderation not only harms reputation but can also lead to financial losses and regulatory penalties. Effective content moderation is thus crucial for sustaining advertiser confidence, user retention, and long-term profitability.

Conclusion: Vote FOR Proposal 8 in Meta’s 2025 Proxy Statement to Protect Shareholder Interests

In short, JLens’ proposal calls for a detailed report evaluating how effectively Meta addresses hate content, particularly antisemitism, on its platforms. The report should include insights into content moderation practices, enforcement mechanisms, transparency measures, and the integration of expert guidance. By shedding light on what is working and where it falls short, Meta can better align its policies with user safety, brand protection, and long-term shareholder value.

In an era of increasing polarization, we believe ensuring Meta’s platforms do not enable hateful or destabilizing content is not just a moral imperative, but a key factor in safeguarding the Company’s competitive position, regulatory compliance, and financial performance.

As an investor in Meta, you have a vital role in guiding the Company toward responsible growth. By supporting JLens’ proposal, you can help ensure that Meta undertakes a rigorous review of its policies and practices, sets clear benchmarks for improvement, and preserves a trusted and inclusive environment for all users. We believe that such transparency and accountability will ultimately strengthen Meta’s brand, reduce legal and reputational risks, and promote sustainable long-term growth and profitability for shareholders.

For more information, please contact Dani Nurick, JLens Director of Advocacy, at dani@jlensnetwork.org.

Endnotes:

  1. “Wide-Ranging Decisions Protect Speech and Address Harms”, Oversight Board, April 23, 2025, https://www.oversightboard.com/news/wide-ranging-decisions-protect-speech-and-address-harms/.
  2. “U.S. Antisemitic Incidents Skyrocketed 360% in Aftermath of Attack on Israel, According to New ADL Report”, Anti-Defamation League, January 9, 2024, https://www.adl.org/resources/press-release/us-antisemitic-incidents-skyrocketed-360-aftermath-attack-israel-according.
  3. “Online Hate and Harassment: The American Experience”, Anti-Defamation League, June 11, 2024, https://www.adl.org/resources/report/online-hate-and-harassment-american-experience-2024.
  4. “Online Safety Act Is a Good Start, but Its Efficacy Remains to Be Seen”, The Times, April 3, 2025, https://www.thetimes.com/uk/law/article/online-safety-act-is-a-good-start-but-its-efficacy-remains-to-be-seen-fxm6crhs0.
  5. “Tech companies, the media and regulators must come together to prevent online harm”, World Economic Forum, September 29, 2023, https://www.weforum.org/stories/2023/09/digital-safety-multi-faceted-approach-tackle-real-world-harm/.
  6. “Why Meta’s New Content Moderation Policy Could Pose Risks to Its Stock”, Morningstar UK, March 18, 2025, https://www.morningstar.co.uk/uk/news/262193/why-metas-new-content-moderation-policy-could-pose-risks-to-its-stock.aspx.
  7. “Digital Services Act (DSA) Enforcement”, European Commission, February 12, 2025, https://digital-strategy.ec.europa.eu/en/policies/dsa-enforcement.
  8. “Advertiser Exodus from X: Survey Shows 2025 Cuts Amid Concerns Over Content and Trust”, The Guardian, September 5, 2024, https://www.theguardian.com/media/article/2024/sep/05/advertiser-exodus-x-survey-2025-elon-musk.
  9. “Meta fined $263M over 2018 security breach that affected ~3M EU Facebook users”, TechCrunch, December 17, 2024, https://techcrunch.com/2024/12/17/meta-fined-263m-over-2018-security-breach-that-affected-3m-eu-users/. 
  10. “Why Meta’s New Content Moderation Policy Could Pose Risks to Its Stock”, Morningstar, March 13, 2025, https://www.morningstar.ca/ca/news/262235/why-metarsquo%3Bs-new-content-moderation-policy-could-pose-risks-to-its-stock.aspx.
  11. Dan Milmo, “Instagram owner Meta fined €405m over handling of teens’ data”, The Guardian, September 5, 2022, https://www.theguardian.com/technology/2022/sep/05/instagram-owner-meta-fined-405m-over-handling-of-teens-data.
  12. “Google’s bad week: YouTube loses millions as advertising row reaches US”, The Guardian, March 25, 2017, https://www.theguardian.com/technology/2017/mar/25/google-youtube-advertising-extremist-content-att-verizon.
  13. Alex Hern, “YouTube and Google boycott spreads to US as AT&T and Verizon pull ads”, The Guardian, March 23, 2017, https://www.theguardian.com/technology/2017/mar/23/youtube-google-boycott-att-verizon-pull-adverts-extremism.
  14. Davey Alba, “YouTube’s Ad Problems Finally Blow Up in Google’s Face”, WIRED, March 25, 2017, https://www.wired.com/2017/03/youtubes-ad-problems-finally-blow-googles-face/.
  15. YouTube Team, “Our ongoing work to tackle hate”, YouTube Official Blog, June 5, 2019, https://blog.youtube/news-and-events/our-ongoing-work-to-tackle-hate/.
  16. Kari Paul, “General Mills latest to halt Twitter ads as Musk takeover sparks brand exodus”, The Guardian, November 3, 2022, https://www.theguardian.com/technology/2022/nov/03/general-mills-twitter-ads-halt-musk-takeover.
  17. Leslie Patton, “Chipotle Is Latest Company to Pull Back Paid Ads From Twitter”, Bloomberg News, November 10, 2022, https://www.bloomberg.com/news/articles/2022-11-10/chipotle-is-latest-company-to-pull-back-paid-ads-from-twitter.
  18. Rhea Binoy and David Shepardson, “United Airlines suspends ad spending on Twitter”, Reuters, November 4, 2022, https://www.reuters.com/technology/united-airlines-suspends-ad-spending-twitter-2022-11-05/.
  19. “More marketers to pull back on X (Twitter) ad spend than ever before”, Kantar, September 5, 2024, https://www.kantar.com/company-news/more-marketers-to-pull-back-on-x-ad-spend-than-ever-before.
  20. Alex Barker and Tim Bradshaw, “Marketers threaten to cut spending on Elon Musk’s X in record numbers”, Financial Times, September 4, 2024, https://www.ft.com/content/cf023e18-a6c7-4d78-bcb5-f295b5bcb928.
  21. Hannah Murphy, “With Bluesky, the Social Media Echo Chamber Is Back in Vogue”, Financial Times, September 22, 2024, https://www.ft.com/content/65961fec-a5ab-4c71-b1c8-265be3583a93.
  22. Ibid.
  23. Natasha Lomas, “TikTok fined in Italy after ‘French scar’ challenge led to consumer safety probe”, TechCrunch, March 14, 2024, https://techcrunch.com/2024/03/14/tiktok-italy-french-scar-fine/.
  24. “TikTok, a threat or a victim of complicated cyber-diplomatic relationships?” Digital Watch Observatory, May 22, 2024, https://dig.watch/updates/tiktok-a-threat-or-a-victim-of-complicated-cyber-diplomatic-relationships.
  25. “Trump extends TikTok’s sell-by deadline again”, NPR, April 4, 2025, https://www.npr.org/2025/04/04/nx-s1-5347418/trump-tiktok-second-ban-delay.

About the Anti-Defamation League

ADL is the leading anti-hate organization in the world. Founded in 1913, its timeless mission is “to stop the defamation of the Jewish people and to secure justice and fair treatment to all.” Today, ADL continues to fight all forms of antisemitism and bias, using innovation and partnerships to drive impact. A global leader in combating antisemitism, countering extremism and battling bigotry wherever and whenever it happens, ADL works to protect democracy and ensure a just and inclusive society for all.

About JLens

JLens’ mission is to empower investors to align their capital with Jewish values and advocate for Jewish communal priorities in the corporate arena. Founded in 2012 to give the Jewish community a strategic presence in this influential arena, JLens promotes Jewish values and interests, including combating antisemitism and Israel delegitimization. More at www.jlensnetwork.org.

THIS IS NOT A PROXY SOLICITATION AND NO PROXY CARDS WILL BE ACCEPTED

Please execute and return your proxy card according to Meta’s instructions.
