What the Journalist Found
Karen Hao is an MIT Technology Review alumna who now writes for The Atlantic, and she has spent the better part of four years embedded in the orbit of OpenAI. Her book, Empire of AI: Inside the Reckless Race for Total Domination, published by Penguin and already a New York Times bestseller, is the most thoroughly reported account to date of how the most consequential technology company of our era actually operates behind closed doors. It is built on hundreds of interviews, reporting trips to data labeling operations in Nairobi and Colombia, and the kind of sustained access that lets her write, of ChatGPT's release, that "the technology was already old and available to the public through an API."
The book's thesis is not subtle, and it does not pretend to be balanced. Hao argues that the AI industry, led by OpenAI and its competitors, has constructed a narrative of inevitability and progress that serves the financial interests of a remarkably small number of people while distributing the costs across communities, workers, and ecosystems that have no seat at the table. The machine learning models that power ChatGPT, Claude, Gemini, and their successors are presented to the public as products of breakthrough research. Hao pulls the curtain back to show what they also are: products of traumatized gig workers in the Global South, staggering resource consumption, and a corporate culture that has systematically chosen commercialization over the safety principles it was founded to uphold.
This is not a neutral observation. Hao has a point of view, and she prosecutes it with the determination of someone who has watched promises get broken in real time. Some readers will find her framing reductive. The world is more complicated than oppressors and oppressed. But the facts she marshals are the facts, and the facts are uncomfortable regardless of the frame you put around them.
The Humans Inside the Machine
The most damning sections of Empire of AI are not about Sam Altman or boardroom politics. They are about the data labeling industry. Hao traveled to the slums of Nairobi and to operations in Colombia to interview the workers who do the manual labor of teaching AI models how to behave. These are the people who make reinforcement learning from human feedback possible. Without them, ChatGPT would not know the difference between a helpful response and a harmful one. It would not know how to tell jokes. It would not know that describing graphic violence in response to a benign query is something humans find unacceptable.
The work itself is straightforward and brutal. Workers spend hours reading and rating AI-generated text, comparing responses, flagging harmful content. A significant portion of this content is violent, sexually explicit, or otherwise disturbing. Multiple former data annotators have reported symptoms consistent with post-traumatic stress. They are paid wages that would be considered poverty-level in the countries where the AI companies are headquartered. The companies that employ them are not OpenAI or Anthropic directly, but intermediary firms like Sama and Scale AI, creating a layer of contractual distance between the product and the people who build it.
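Mechanically, the judgments those annotators make are what reinforcement learning from human feedback consumes: each "this response is better than that one" rating becomes a training pair for a reward model. A minimal sketch of the standard pairwise (Bradley-Terry-style) loss, with illustrative scores that are my own, not figures from the book:

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise loss used in RLHF reward modeling: the reward model is
    penalized when it scores the response the human annotator rejected
    above the one the annotator chose."""
    return -math.log(1 / (1 + math.exp(-(r_chosen - r_rejected))))

# Each annotator judgment becomes one (chosen, rejected) training pair.
# Hypothetical reward-model scores for two responses to the same prompt:
low = preference_loss(2.0, -1.0)   # model agrees with the rater: small loss
high = preference_loss(-1.0, 2.0)  # model disagrees: large loss
print(low, high)
```

The point of the sketch is only that the "intelligence" in the reward signal is, term by term, a record of human choices: without the rating, there is no pair, and without the pair, there is no gradient.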
Hao's reporting on this supply chain is meticulous. She names companies, cites wages, interviews workers, and traces the contractual structures that allow AI labs to benefit from cheap labor while maintaining plausible distance from the conditions under which that labor is performed. The result is a portrait of an industry that has reinvented the sweatshop for the information age: invisible, distributed across borders, and connected to products whose users have no idea how they are made.
This matters for anyone trying to evaluate claims about artificial intelligence. When Altman says that GPT-5 demonstrates reasoning, the honest version of that sentence is: GPT-5 demonstrates patterns that were shaped by thousands of hours of human judgment, performed by workers who were paid a fraction of what the engineers who designed the system earn, evaluating content that in some cases caused them measurable psychological harm. The intelligence is not artificial. It is borrowed, at a discount, from people who cannot afford to say no.
The Resources Nobody Talks About
The second pillar of Hao's argument concerns the environmental cost of the AI boom, and here the numbers speak louder than any editorial framing. Training GPT-4 consumed an estimated 50 gigawatt-hours of electricity, enough to power San Francisco for three days. A single large data center can consume up to five million gallons of water per day for cooling. In Texas alone, data centers are projected to use 49 billion gallons of water in 2025 and as much as 399 billion gallons by 2030. Global data center electricity consumption was approximately 460 terawatt-hours in 2022 and is projected to reach 1,050 terawatt-hours by 2026.
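The headline comparisons are easy to sanity-check. A back-of-envelope sketch, where San Francisco's citywide draw of roughly 16.7 GWh per day (about 6 TWh per year) is my assumption, not a figure from the book:

```python
# Back-of-envelope check on the training-energy comparison.
GPT4_TRAINING_GWH = 50   # estimate cited in the text
SF_DAILY_GWH = 16.7      # assumed citywide consumption (~6 TWh/year)

days = GPT4_TRAINING_GWH / SF_DAILY_GWH
print(f"{days:.1f} days")  # → 3.0 days, consistent with "three days"

# The cited data-center trajectory: 460 TWh (2022) -> 1,050 TWh (2026)
growth = 1050 / 460
print(f"{growth:.2f}x in four years")  # → 2.28x in four years
```

Under that assumption the "three days" framing checks out, and the projected global trajectory amounts to electricity demand more than doubling in four years.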
These are not hypothetical figures. They are measured, published, and largely uncontested. They represent real electricity drawn from real grids, real water drawn from real aquifers, and real heat dissipated into real communities. Producing the ultrapure water that chip fabrication depends on requires roughly 1,500 gallons of piped water for every 1,000 gallons of usable output. Every H100 GPU running inference at 700 watts of thermal design power requires cooling infrastructure that exists solely to prevent the silicon from destroying itself.
What Hao emphasizes, and what the industry would prefer you not think about, is the ratio between these costs and the value being created. A typical ChatGPT query on GPT-4o consumes approximately 0.3 watt-hours of electricity, roughly what a 10-watt LED bulb draws in two minutes. The cost per query is trivial. But the aggregate cost of running hundreds of millions of queries per day, training new model generations every few months, and maintaining the inference infrastructure for a rapidly growing user base adds up to a resource footprint that rivals heavy industry.
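The tension between trivial-per-query and enormous-in-aggregate is simple arithmetic. A hedged sketch, where the one-billion-queries-per-day volume is my assumed order of magnitude (only the 0.3 Wh figure comes from the text), and which counts inference only, excluding training runs and data-center cooling overhead:

```python
WH_PER_QUERY = 0.3       # GPT-4o estimate cited above
QUERIES_PER_DAY = 1e9    # assumed order of magnitude, not a reported figure

daily_mwh = WH_PER_QUERY * QUERIES_PER_DAY / 1e6   # Wh -> MWh
annual_gwh = daily_mwh * 365 / 1000                # MWh/day -> GWh/year
print(f"{daily_mwh:.0f} MWh/day, {annual_gwh:.0f} GWh/year")
# → 300 MWh/day, 110 GWh/year
```

At that assumed volume, serving queries alone lands on the order of 100 GWh a year, comparable to two GPT-4-scale training runs annually, before any training or cooling is counted.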
And the use cases absorbing those resources are, according to Anthropic's own Economic Index, increasingly mundane. Sports scores. Product comparisons. Home maintenance questions. The average task value is dropping because the marginal user is not a software engineer debugging production code. The marginal user is someone who could have typed the same question into Google. We are deploying infrastructure at an unprecedented scale to deliver answers that a search engine could have provided at a fraction of the environmental cost.
Hao does not argue that AI should not exist. She argues that the cost-benefit analysis has not been performed honestly, and that the communities bearing the costs are not the communities reaping the benefits. The data centers being built in Texas and Iowa and Virginia are using water and electricity that those communities need, generating heat and noise that affect local residents, and producing economic value that accrues primarily to shareholders and employees in San Francisco. This is a familiar pattern in American industrial history. It does not become less familiar because the product is artificial intelligence instead of petroleum.
What They Promise vs. What They Ship
The core tension of Empire of AI, and the one that makes it relevant to this publication, is the gap between what AI companies promise and what they deliver. OpenAI was founded as a nonprofit dedicated to ensuring that artificial general intelligence benefits all of humanity. It is now a for-profit company valued at over $150 billion, pursuing the same commercial strategy as every other big tech firm: ship fast, raise capital, capture market share, and deal with the consequences later.
The promises have escalated in direct proportion to the capital requirements. When you need to raise $6.6 billion in a single round, you cannot describe your product as "a sophisticated autocomplete that is useful for coding and writing but still hallucinates regularly." You need to say AGI. You need to say UBI. You need to say "cure cancer." The fundraising narratives have become detached from the engineering reality, and Hao documents this detachment with precision.
ChatGPT was meant to be a research preview. It was not designed as a consumer product. OpenAI was genuinely surprised by the product-market fit. The technology behind it, transformer-based language models trained on internet text with RLHF alignment, was not new. The papers were public. The architecture was known. What was new was the interface: a chat window that made the technology accessible to people who had never heard of a transformer. The product innovation was not technical. It was experiential. And the company that originally promised transparency has since closed its research, built a commercial product around it, and justified the closure in the name of safety.
Hao argues convincingly that safety has been the rhetorical tool of first resort and the strategic priority of last resort at every stage of OpenAI's evolution. When it suits the company to release a model publicly, safety concerns are manageable. When it suits the company to withhold research, safety concerns are paramount. The determination of which situation applies at any given moment appears to correlate reliably with what serves the company's commercial interests.
This is where Hao's reporting connects to the data in the Economic Index. The public is being told that AI will transform everything, immediately, dramatically. Anthropic's own data says the transformation is gradual, uneven, and concentrated in a narrow band of high-value tasks that are slowly broadening. The public is being told that AI will displace millions of workers. The BLS data says it has not happened yet, and the earliest signal is not mass layoffs but a quiet reduction in junior hiring. The public is being told that these models are intelligent. The assembly line says they are the product of human judgment, borrowed cheaply and at significant human cost.
The gap between the pitch and the product is not a minor discrepancy. It is a chasm wide enough to swallow hundreds of billions of dollars of investor capital, reshape energy policy for entire states, and alter the career planning of millions of workers who are making decisions based on a threat model that the data does not yet support.
Why You Should Read This Book Anyway
It would be dishonest to review Empire of AI without acknowledging what it gets wrong, or at least what it underweights. Hao's lens is predominantly critical, and the book reads at times like a prosecution rather than an investigation. The data labeling industry is real and its problems are real, but the implication that AI is nothing more than borrowed human judgment elides genuine advances in capability that cannot be fully explained by the training process. The models are doing things their annotators cannot do, which means something is happening during training that is more than aggregation.
The environmental argument, while factually grounded, also lacks context. Data centers consume significant resources. So do hospitals, manufacturing plants, universities, and every other institution that provides value at scale. The relevant question is not whether AI uses resources but whether the value it creates justifies the resources it consumes, and that question cannot be answered by enumerating costs alone.
Similarly, the commercialization critique, while valid in many specifics, sometimes conflates "company prioritizes revenue" with "company is acting in bad faith." Every company prioritizes revenue. That is what companies do. The question is whether the prioritization is so extreme that it overrides genuine safety commitments, and on this point reasonable people can disagree about where the line falls.
But here is why you should read the book despite its limitations: it is the only thorough, reported account of what is actually happening behind the curtain. Most AI coverage is either breathless optimism from people with equity stakes or vague anxiety from people who have never used the technology. Hao has done the work. She went to Nairobi. She interviewed the engineers. She traced the money. She built her argument on evidence rather than vibes. And in an industry where the primary product is a black box, journalism that illuminates what is inside the box is not just useful. It is necessary.
The Anthropic Economic Index tells us what AI is doing. Karen Hao tells us what it costs. Together, they paint a picture of a technology that is genuinely transformative, genuinely costly, and genuinely overhyped, all at the same time. The honest position is not that AI is a scam or that AI is salvation. The honest position is that it is a powerful tool whose costs are being hidden, whose benefits are being exaggerated, and whose actual trajectory is slower and less dramatic than anyone with a financial stake in the outcome wants you to believe.
The machine works. It just isn't what they told you it was. And the people holding it together are paid a lot less than the people selling it.