Sam Altman in Washington DC

TRUST ME BRO: The Corporate-Military AI Faustian Bargain

A deep dive into OpenAI's Pentagon contract and why 'trust me' is the wrong thing to say about AI weapons and surveillance.

πŸ“ Washington, DCΒ· 5 min read



THE STORY

OpenAI secured a $200 million Pentagon contract just hours after Trump ordered the federal government to cut ties with its rival, Anthropic. The timing couldn't be more obscene: Washington was carrying out strikes on Iran even as the Pentagon was agreeing not to use OpenAI's tools for "autonomous killings."

But here's the kicker: the contract itself has never been made public.

OpenAI CEO Sam Altman posted vague assurances to X that the Pentagon "agrees" not to use OpenAI's tools for mass surveillance or autonomous weapons. Former DoD AI official Brad Carson calls it "weasel words" that give the military enough flexibility to do whatever they want and then say "oops, sorry."

WHAT WE KNOW (AND DON'T KNOW)

What OpenAI Claims:

- No intentional use for domestic surveillance of U.S. persons
- No use for autonomous weapons systems
- No NSA/NSGIA intelligence agency use without contract modification
- "Layered safeguards" and "technical experts in the loop"

What We Actually Need:

THE CONTRACT.

Instead, OpenAI national security chief Katrina Mulligan refused to share contract language, telling a concerned X user: "I do not agree that I'm obligated to share contract language with you."

THE RED FLAGS

1. The "Intentionally" Loophole

Altman's revised assurances include the phrase "the AI system shall not be intentionally used for domestic surveillance."

The word "intentionally" opens a **mile-wide loophole of plausible deniability**, the same one the intelligence community has relied on for years.

Remember 2013? James Clapper testified before Congress that the NSA wasn't collecting data on Americans; pressed, he added, "Not wittingly." Months later, Snowden's leaks revealed this was false. The NSA was collecting vast quantities of Americans' data as "incidental collection," a euphemism that doesn't mean by mistake; it means secondary to some other target.

"Intentionally" doesn't mean what it sounds like. The NSA/ODNI are staffed by sharp legal minds and brilliant mathematicians funded with billions. They don't "accidentally" surveil.

2. The "Deliberate" Tracking Dodge

Altman also wrote: "the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons."

The word "deliberate" is load-bearing, but "tracking," "surveillance," and "monitoring" are undefined. Former Army general counsel Brad Carson: "The word surveillance doesn't even include the kind of activities that people are most concerned about. They're trying to blind you with complicated legal terms."

3. No Transparency = No Accountability

When The Intercept asked OpenAI for specific contract language, spokesperson Kate Waters sent back links to prior statements from Altman. No contract. No transparency.

OpenAI claims its engineers will make sure the Pentagon doesn't break its commitments, but CNBC reported that Defense Secretary Pete Hegseth would hold "ultimate authority" over how the Pentagon makes use of the contract.

4. OpenAI's Track Record

Altman was an outspoken Trump critic in 2016. A decade later, he announced his company would sell services to the Trump administration hours after the strikes on Iran began. OpenAI was founded to benefit humanity, until it quietly deleted the military prohibition from its terms of service.

Image depicting AI integration in the Pentagon

ANTHROPIC WENT FIRST, THEN GOT FIRED

Anthropic CEO Dario Amodei said his company wouldn't agree to anything without assurances that technology wouldn't be used to power autonomous weapons or mass domestic surveillance.

Hegseth gave Anthropic a deadline: agree, or the Pentagon would label the company a "supply chain risk," a designation reserved for foreign adversaries like Huawei, effectively telling every defense contractor it couldn't use Anthropic's AI.

OpenAI stepped in and signed the deal, saying they had negotiated "stricter protections." But the only proof they're releasing is PR-speak and tweets from Altman.

THE TRUTH ABOUT "COMMERCIALLY AVAILABLE DATA"

Asked whether the Pentagon would acquire and analyze commercially available data at scale, OpenAI's Mulligan said the Pentagon had "no legal authority" to do so.

This is false. A declassified 2022 report by the Office of the Director of National Intelligence documented the Pentagon's collection of commercially available data, exactly the activity Mulligan denied.

Senator Ron Wyden has spent years revealing that the Defense Intelligence Agency spied on Americans' precise movements and locations by simply buying access to their GPS coordinates. Pentagon lawyers blessed this surveillance.

WHY THIS MATTERS

This isn't just about AI ethics. This is about **who holds the power to kill** and **who decides who gets surveilled**.

OpenAI's deal allows the Pentagon to use American-developed AI for:

- Target identification and tracking
- Surveillance of U.S. persons
- Autonomous weapons (nominally off-limits, unless the loopholes above are wide enough to permit them anyway)

And OpenAI's "safety stack" is just PR: the company hasn't explained what it actually does or how "technical experts" will oversee the country's single largest bureaucracy, with roughly 2 million service members and 800,000 civilians.

THE BOTTOM LINE

When OpenAI CEO Sam Altman promised we could trust his word, plus Trump's, plus Hegseth's, we got this:

- The contract isn't public
- The terms are written in legal jargon that means what lawyers say it means, not what regular people think
- Hegseth can move military contractors around whenever he wants
- Altman has been called a "person of low integrity" by former colleagues
- The NSA is America's leading surveillance agency with a history of extra-constitutional dragnet spying

If you trust the cabal of Sam Altman, Donald Trump, and Pete Hegseth, there's nothing I can do for you.

THE VERDICT

This contract represents the same Faustian bargain the corporate-military complex has always cut with us:

- The powerful get unlimited tools
- The powerless get promises of "accountability" and "safety"
- No one has to answer for what happens

OpenAI says it's not like Anthropic. OpenAI says they found a way to do better. But the only proof they'll provide is their word.

In a democracy, contracts should be public. In this country, transparency is a luxury the powerful can do without.
