
Karen Hao Interview Deep Dive: Inside OpenAI's Empire of AI

Technology journalist Karen Hao spent seven years investigating OpenAI. Her book “Empire of AI” reveals the company’s secretive culture, religious devotion to AGI, and Sam Altman’s vision for AI supremacy.


The Journalist Who Knew OpenAI Best

Karen Hao isn’t just another tech reporter covering the AI boom. She’s the journalist who first profiled OpenAI for MIT Technology Review in 2020—two years before ChatGPT ignited the generative AI revolution. Now, after seven years of investigation, Hao has published “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI,” a groundbreaking book that pulls back the curtain on Silicon Valley’s most influential—and secretive—company.

From Nonprofit Idealism to Commercial Empire

Hao’s central revelation is stark: OpenAI abandoned its founding mission. Established in 2015 as a nonprofit research lab dedicated to developing AI “for the benefit of all humanity,” the organization promised to open-source its research for public good. The word “Open” was literally in the name.

But Hao’s reporting exposed a different reality:

  • Incredible Secrecy: Despite claiming to be a transparent nonprofit, OpenAI operated with extreme confidentiality
  • Competitive Obsession: Executives emphasized being “No. 1 in AI progress” above all else
  • Commercial Intent: The secrecy made sense only if the company had profit motives
  • Mission Drift: The tension between “open” and “first” proved impossible to reconcile

As Hao explained in her NPR interview: “Why would you be secretive if you’re a nonprofit that’s purely doing research in the interest of the public? You would be secretive if you might have some kind of commercial intent.”

The Religious Cult of AGI

Perhaps the most startling revelation in Hao’s book is the quasi-religious fervor surrounding AGI (Artificial General Intelligence) within OpenAI.

The Prophet and the Effigy

Hao documents how cofounder and chief scientist Ilya Sutskever, often viewed within the company as a prophet-like figure, conducted ritualistic ceremonies at company retreats. In one incident, Sutskever burned a wooden effigy representing “a good, aligned AGI that OpenAI had built, only to discover it was actually lying and deceitful. OpenAI’s duty, he said, was to destroy it.”

He repeated this ritual at a later retreat.

Utopia vs. Extinction

Hao’s interviews with OpenAI employees revealed the extreme emotional stakes:

  • The Believers: Spoke with “wide-eyed wonder” about how AGI would bring utopia. One said, “We’re going to reach AGI and then, game over, like, the world will be perfect.”
  • The Fearful: Others trembled when describing how AGI could destroy humanity
  • The Race: Both groups agreed on one thing—they needed to reach AGI first to ensure their vision prevailed

This ideological clash, Hao argues, is more fundamental than personality conflicts. It’s a battle between competing visions of humanity’s future.

Sam Altman: The Storyteller-in-Chief

Hao’s portrait of Sam Altman is nuanced but critical. She acknowledges his unique talents while questioning his methods:

His Strengths

  • Fundraising Genius: “Once-in-a-generation fundraising talent”
  • Master Storyteller: Able to paint persuasive visions of the future
  • Coalition Builder: Inspires people to rally around sweeping missions

His Tactics

Hao makes a damning observation about Altman’s communication style: “When he says something to someone, what he’s saying is more tightly correlated with what he thinks they need to hear than what he actually believes or the ground truth of the thing.”

This explains how Altman successfully navigated OpenAI’s transformation from nonprofit to for-profit empire while maintaining support from investors, employees, and the public.

The November 2023 Coup

Hao’s book provides new details about the dramatic week in November 2023 when OpenAI’s board fired Altman, only to reinstate him days later after employee backlash.

Key revelations:

  • The board’s decision stemmed from safety concerns about Altman’s rush toward AGI
  • Employees threatened mass resignation if Altman wasn’t reinstated
  • The incident revealed that OpenAI’s loyalty was to Altman’s vision, not to safety governance
  • The “safety-first” nonprofit had become a company where commercial imperatives trumped oversight

AI Colonialism: The Global South Connection

Hao doesn’t limit her critique to OpenAI’s internal culture. She exposes how Western AI companies exploit the Global South:

  • Data Centers: Built in developing countries to avoid environmental regulations
  • Data Laborers: Workers in the world’s poorest countries paid “literal pennies per hour” to annotate training data
  • Mental Health Toll: Laborers exposed to the darkest corners of the internet without adequate support
  • Resource Extraction: Soaring demand for compute strains local power, water, and land resources

This pattern, Hao argues, mirrors historical colonial exploitation—hence her book’s title referencing empire.

The DeepSeek Challenge

In recent interviews, Hao has commented on DeepSeek’s emergence as a competitor. The Chinese company trained a model for approximately $6 million—versus the hundreds of millions or billions spent by OpenAI.

Hao notes: “That delta demonstrated to people that this—what Silicon Valley has tried to convince everyone for the last few years, that this is the only path to getting more AI capabilities, is totally false.”

This challenges OpenAI’s core narrative that massive compute investment is the only route to advanced AI.

Sam Altman’s Response

Altman attempted to discredit Hao’s book before its release, posting on X (formerly Twitter): “There are some books coming out about OpenAI and me. We only participated in two… No book will get everything right, especially when some people are so intent on twisting things.”

Hao revealed that OpenAI promised to cooperate with her research for months but never did—while fully participating in other authors’ projects.

Key Lessons from the Interview

1. AI Is About Power, Not Technology

Hao argues that AI coverage shouldn’t be relegated to the technology section. It’s a story about power, democracy, and public trust. Every newsroom must investigate AI’s societal impact.

2. The AGI Timeline Is Unclear

Despite OpenAI’s confident predictions, Hao notes that AGI remains undefined. The company’s certainty about achieving it—and doing so soon—reflects faith more than evidence.

3. Multiple Paths Exist

The DeepSeek breakthrough proves that efficient training methods can compete with brute-force approaches. OpenAI’s “more compute = better AI” doctrine isn’t the only path forward.

4. Governance Matters

The November 2023 board revolt showed that safety-focused governance can challenge commercial imperatives—but also how easily such governance can be overturned by employee loyalty to a charismatic leader.

Why This Matters

Karen Hao’s investigation matters because OpenAI isn’t just another tech company. Its models power critical infrastructure, influence public discourse, and shape how billions of people interact with information.

Her revelations raise fundamental questions:

  • Can a company driven by AGI obsession be trusted with technologies that affect all of humanity?
  • Should AI development be subject to democratic oversight, or left to self-interested corporations?
  • Is the “move fast and break things” approach appropriate for technologies with existential risks?

The Path Forward

Hao doesn’t offer simple solutions, but her reporting suggests several directions:

  1. Transparency: AI companies must be genuinely open about their capabilities, limitations, and risks
  2. Accountability: Independent oversight, not self-regulation, is essential
  3. Diverse Voices: AI development needs input from beyond Silicon Valley
  4. Public Interest: AI should serve humanity—not shareholders or ideological missions

Conclusion

Karen Hao’s “Empire of AI” is essential reading for anyone trying to understand the forces shaping our AI future. Her seven years of investigation reveal a company that transformed from idealistic nonprofit to commercial empire while maintaining the rhetoric of altruism.

The book’s greatest contribution may be reframing AI not as an inevitable technological force, but as a human choice—one that requires democratic debate, not blind faith in technocrats.

As Hao herself said: “AI is not a niche technology beat. It’s a story about power.” And power, history teaches us, must be questioned.


“Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI” by Karen Hao is available from Penguin Press. Hao previously worked at MIT Technology Review and The Wall Street Journal, and currently leads The AI Spotlight Series with the Pulitzer Center.