The recent disagreement between Anthropic and the United States Department of Defense has raised a large and looming corporate governance question.
The question shareholders in Anthropic—and boards more broadly—should be asking is straightforward but consequential: Is it appropriate for corporations to make business decisions on behalf of shareholders based on social and political values or bias? And how does that square with a company’s vision, mission, and purpose within our existing legal frameworks?
Anthropic’s large language model operates under its own “constitution.” It has embedded values, a moral framework, and guardrails that guide its behavior. Because it is created by people, it also inevitably carries the potential for unconscious bias that factors into how the system functions.
This constitution is not the U.S. Constitution. As a private company, Anthropic has every right to establish its own values and principles so long as it complies with current legal frameworks.
The real business judgment question the board should be asking is: Where is the line drawn between social values, political values, and the duty of loyalty to shareholders to build a thriving corporation that maximizes opportunity within the law?
At the center of the issue between the Department of Defense and Anthropic is the need, as articulated by former Undersecretary of Defense Emil Michael, for LLMs to permit autonomous decision-making without a human in the loop under certain circumstances.
The example often cited is a military base under a drone attack with only 90 seconds of warning. In such a situation—perhaps with personnel asleep—the ability to have autonomous drone intercept protection without a human in the loop may be necessary. The Department of Defense argues that flexibility like this is essential for national security.
Under Anthropic’s current terms of service, however, it is conceivable that a guardrail could be triggered and the LLM could shut down at a critical moment.
The Department of Defense argues that autonomous drone-on-drone intercept capability is completely within the current approved legal framework. What the Pentagon is asking for contractually is simple: language permitting “all lawful use cases.”
The broader corporate governance issue this dispute highlights is dependence on AI infrastructure controlled by private companies with their own value systems.
As boards consider the role of AI and large language models in corporate operations, one practical lesson becomes clear: corporations should have second sources and deploy a multi-cloud, multi-LLM environment. Otherwise, a company could become vulnerable to tripping a guardrail based on the values embedded by an LLM provider.
The All-In Podcast offered a hypothetical to illustrate the point. Imagine an LLM whose embedded values mandate that abortion access should be available to everyone. If deployed in a state that prohibits abortion, those internal values could trigger a guardrail and cause the system to cease functioning.
This is an extreme example, but it highlights a broader concern about our increasing dependence on AI as critical infrastructure. The podcast compared this risk to social media platforms de-platforming voices they disagreed with. If expanded broadly, this risk could affect every industry—from pharmaceuticals and finance to consumer packaged goods.
As boards think about AI and large language models within a corporate governance framework, directors are accountable for anticipating risks. Having an open-cloud, multi-LLM architecture is increasingly becoming a prudent safeguard.
For the Anthropic board, the central question becomes: How do you create an AI governance philosophy, set of principles, and values that is consistent with the current laws of the land but does not introduce political or social bias that could limit your total addressable market?
Prospective customers may reasonably ask a follow-on question: If you would limit the Department of Defense’s use of an LLM—even when it states it will only be deployed for lawful use cases—what else might you limit?
More broadly, boards must consider where CEOs and leadership teams may unintentionally introduce personal, political, or social bias that impacts long-term market opportunity.
Boards must also remain mindful of when a corporation’s values and purpose begin to bleed into social or political topics. Over the last decade, boards have learned that “lightning rod” issues—such as the George Floyd murder, the overturning of Roe v. Wade in Dobbs v. Jackson Women's Health Organization, and the conflicts in Ukraine and Gaza—often result in CEOs and management teams being pressured to take a public stance.
Experience has shown that companies are generally rewarded when they speak on topics directly related to their core business. For example, it is logical for an oil and gas company to comment on climate change, but less logical to take a position on unrelated social issues.
In the age of AI, LLMs, and increasingly agentic systems, these lessons apply directly to governance strategy.
Boards should proactively incorporate AI oversight into their governance frameworks. This includes reviewing the LLMs that underpin operational decisions and deliberately inspecting them for potential unconscious bias or drift into social and political positioning.
Because it is easy to blur the lines between corporate mission and social advocacy, this oversight must be a conscious part of both the company’s policy framework and the board’s business judgment process.
This is an extremely delicate and polarizing topic. However, once public corporations accept the public’s investment capital, it becomes critical to remain aware of how internal value systems—whether embedded in leadership decisions or AI systems—may shape long-term corporate outcomes.
For that reason alone, boards would be wise to raise and discuss this topic early in the year—before an active issue forces the conversation.