OpenAI Didn’t Break a Rule. They Broke a Promise They Never Knew They Made.
The OpenAI–Pentagon deal isn’t a strategy failure. It’s a textbook case of what happens when a brand ignores its own Promise Market Fit.
Last Friday, within hours of the Pentagon blacklisting Anthropic for refusing to strip safety guardrails from its military AI contract, Sam Altman posted a tweet announcing that OpenAI had reached an agreement with the Department of War.
A $200 million deal. Any lawful use. One tweet.
By Monday, Altman was calling the deal “opportunistic and sloppy.” Amendments were announced. Press releases followed. An AMA. A new model launch. All of it trying to un-ring a bell the entire market had already heard.
But here’s the thing most people are missing: OpenAI didn’t violate a policy. They violated a Promise.
And worst of all — it was a Promise they never intentionally made.
The Promise You Don’t Make Is Made For You
There’s a concept I work with called Promise Market Fit. It builds on the familiar idea of Product Market Fit, but goes deeper. Where Product Market Fit asks, “Does the market want what you’re building?” — Promise Market Fit asks, “Does the market trust who you say you are?”
Your Promise is the intersection of your Purpose (what you say you’ll do) and your Passion (the evidence that you mean it). When those two things are aligned and visible, the market builds pillars of trust underneath your brand. Loyalty. Identity. Belonging. Advocacy.
But here’s the part most founders miss:
You don’t get to opt out of having a Promise. If you don’t define it yourself, the market will define it for you. And they’ll hold you to their version — whether you agreed to it or not.
That’s exactly what happened to OpenAI.
The Accidental Brand
OpenAI didn’t set out to build a brand rooted in safety, ethics, and human-first AI development. But through years of positioning — the nonprofit origin, the “open” in the name, the safety research, the careful rhetoric — the market collectively decided: These are the good guys.
People didn’t just use ChatGPT. They trusted it. They trusted the company behind it. They built an identity around being aligned with OpenAI’s perceived values. Developers, researchers, educators, and everyday users all placed pillars of expectation under a brand that OpenAI never formally acknowledged or committed to.
The Promise was never grounded. It was never intentional. It just… happened.
And because it was never grounded, the team at OpenAI had no framework for recognizing when they were about to violate it. To them, a government contract probably felt perfectly reasonable — even patriotic. They weren’t measuring the decision against a Promise they didn’t know existed.
The Betrayal
That’s the word for what people felt. Not disappointment. Not frustration. Betrayal.
Betrayal is what happens when someone violates a trust you thought was mutual. It’s the specific pain of realizing that the relationship you believed in wasn’t shared — that the other party never saw the commitment the way you did.
And the context made it exponentially worse:
- The timing. The deal landed on the same day Anthropic was blacklisted for refusing to remove safeguards against mass surveillance and autonomous weapons. The contrast was devastating.
- The partner. This isn’t a neutral government contract. It’s an agreement with an administration that has labeled a domestic AI company a “supply chain risk” — a designation typically reserved for foreign adversaries — for the crime of maintaining ethical red lines.
- The geopolitical backdrop. This is happening on the eve of escalating military posture toward Iran, making the “any lawful use” language feel far less theoretical.
- The communication. A single tweet. No preparation of the community. No acknowledgment of the weight of the moment. Just a transactional announcement of what the market experienced as an existential shift.
When people trust you at the level OpenAI was trusted, you don’t get to make moves like this casually. The depth of the reaction isn’t irrational — it’s proportional to the depth of the Promise people believed in.
The Scramble That Made It Worse
What followed was a masterclass in how not to recover from a Promise violation.
Altman’s admission that the deal “looked opportunistic and sloppy” was meant to show accountability. Instead, it confirmed the suspicion: this wasn’t a carefully weighed ethical decision. It was a land grab dressed up in a press release.
The subsequent amendments — barring intelligence agencies, prohibiting mass surveillance of Americans — were meant to reassure. But critics, including former Army general counsel Brad Carson, have publicly questioned whether the surveillance protections actually exist in the contract language. OpenAI has declined to release the relevant provisions.
And then came the new model announcement. Right on schedule. A shiny distraction that the market immediately read for what it was: Please look over here instead.
Each of these moves, taken individually, might have been fine. Taken together, in sequence, they created a pattern that reeked of desperation — not leadership.
The Lesson: Promise Market Fit Is Not Optional
Here’s what I want founders, operators, and leaders to take from this:
Your Promise is not a tagline. It’s not a values page on your website. It’s the contract between your brand and the people who chose to believe in it.
And you have exactly two options:
- Define your Promise intentionally — ground it in your actual purpose and passion, communicate it clearly, and measure every major decision against it.
- Let the market define it for you — and hope you never accidentally violate the version they built without your input.
OpenAI chose option two. Not maliciously — just carelessly. They let an extraordinary amount of public trust accumulate without ever building the internal framework to steward it. And when the moment came that required Promise-aware decision-making, they had no compass. They just saw a contract.
The cost isn’t just reputational. It’s structural. Once people feel betrayed at this level, the recovery isn’t about better messaging or amended contract language. It’s about rebuilding something that took years to form and seconds to fracture.
A Final Thought
Anthropic, in this same story, did the opposite. They drew a line. They said no to mass surveillance and autonomous weapons. They accepted the consequences — blacklisting, supply chain risk designation, potential loss of all government business.
They defined their Promise. Out loud. Under pressure. When it was expensive to do so.
That’s what Promise Market Fit looks like when it’s intentional.
And that’s the difference between a brand that bends and one that breaks.
Promise Market Fit is a framework developed by Sightbox for aligning brand, product, and culture with the deeper expectations of your market. If you’re building something that matters, your Promise is the foundation. Ignore it at your own risk.