Anthropic framed its donation as an extension of its long-stated position that federal rules are necessary to limit potential risks from highly capable AI models. In a corporate blog post released alongside the funding announcement, the company cited the need to safeguard employment, protect children, and improve transparency in the deployment of powerful algorithms. The start-up has previously participated in discussions at the White House and in Congress, joining peers such as Google DeepMind and OpenAI in supporting policies that mandate safety evaluations and reporting requirements for advanced systems.
The company’s advocacy has drawn criticism from opponents who portray the effort as an attempt at regulatory capture. In October, David Sacks, appointed by President Donald Trump as a White House adviser on AI and cryptocurrency, argued on social media that Anthropic was driving a “regulatory frenzy” that could harm smaller technology companies. Sacks’ remarks followed an essay by Anthropic co-founder Jack Clark warning of potential downsides if governments fail to adequately address AI-related risks.
Political momentum around federal AI oversight has intensified over the past year. Two months after Sacks’ comments, President Trump signed an executive order that consolidated federal guidance on AI, limiting individual states’ authority to impose their own rules. The order undercut regulatory initiatives already under consideration in Democratic-led states such as California and New York. Nonetheless, multiple bipartisan bills remain pending in Congress, covering topics ranging from algorithmic accountability to export controls on advanced chips.
Public First Action’s leaders say they are aligning their strategy with broad public sentiment. A Gallup survey published in September reported that 80 percent of U.S. adults favor stronger safety and data-security rules for AI, even if those rules slow technological progress. Carson contends that such findings indicate voters want elected officials—not industry executives—to determine the framework governing artificial intelligence. By channeling resources toward both Democratic and Republican candidates, the group aims to ensure that forthcoming legislation reflects that preference across the political spectrum.
Anthropic’s $20 million donation represents a sizable share of Public First Action’s anticipated war chest and underscores a growing trend of technology firms directly financing political efforts. The company joins a cohort of AI developers and investors that see the upcoming election cycle as pivotal for establishing national standards. Industry analysts note that the 2026 races could shape committee leadership in both chambers of Congress, influencing whether proposals on licensing, liability, and data governance advance or stall.
At the same time, companies that oppose immediate regulation continue to accelerate fundraising. Leading the Future’s backers argue that excessive oversight would disadvantage U.S. firms against competitors in Europe and China. The widening financial commitments on both sides suggest that AI governance may become one of the most consequential tech policy battles in the next two election cycles, rivaling previous debates over net neutrality and online privacy.
For Anthropic, the political investment complements its technical strategy. The company, valued at more than $18 billion after recent funding rounds, has made safety-focused research a central element of its public identity. Its flagship language model Claude incorporates guardrails designed to minimize disinformation and harmful content. By aligning with Public First Action, the start-up seeks to translate those engineering priorities into federal policy, betting that legislative clarity will foster a more predictable environment for commercial deployment.
The group’s first round of advertisements is expected to run through early summer. Additional endorsements will be announced after the primaries determine final candidate line-ups, according to people familiar with the plan. As both pro- and anti-regulation coalitions broaden their outreach, campaign finance analysts anticipate a surge in AI-linked political spending that could extend well into 2026.