What the Data Actually Says
On March 24, 2026, Anthropic quietly published the third installment of its Economic Index, a privacy-preserving analysis of how Claude is actually being used across the economy. The report landed with the muted reception typical of academic-leaning research, which is unfortunate, because the findings should be front-page news for anyone trying to understand what AI is actually doing to the labor market versus what we keep being told it will do.
Here is what Anthropic's own data shows, based on a sample of one million conversations from February 2026: coding remains the dominant use case, but it is migrating off the consumer-facing Claude.ai product and onto the API, where it is being chopped into smaller agentic tasks by tools like Claude Code. Personal queries are rising. People are asking Claude about sports scores, product comparisons, and home maintenance. The average economic value of a Claude conversation, measured by the hourly wage of the worker whose tasks it maps to, dropped from $49.30 to $47.90.
That number deserves a moment. The average value went down. Not because Claude got less capable, but because the new users flooding in are doing less valuable things with it. They're asking about the weather. They're comparing toasters. The early adopters were software engineers and analysts using Claude to write production code and analyze datasets. The next wave is everyone else, and everyone else has less expensive problems.
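The mechanics are easy to verify with a toy calculation. The segment shares and per-conversation values below are invented, chosen only to roughly reproduce the reported averages:

```python
# Toy illustration of the mix-shift effect (all numbers invented).
# Each user segment keeps the same per-conversation value; only the
# mix of segments changes as later adopters arrive.

def average_value(segments):
    """Wage-weighted average value per conversation.

    segments: list of (share_of_conversations, value_per_conversation).
    Shares must sum to 1.
    """
    return sum(share * value for share, value in segments)

# Early mix: dominated by high-value coding and analysis work.
early = [(0.70, 55.00),    # coding / analysis
         (0.30, 36.00)]    # everything else

# Later mix: same per-segment values, more low-value personal queries.
later = [(0.626, 55.00),
         (0.374, 36.00)]

print(f"early average: ${average_value(early):.2f}")  # $49.30
print(f"later average: ${average_value(later):.2f}")  # ~$47.89
```

Each segment's value never moves. Only the mix does. That is the whole story behind the falling average.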
This is, mechanically, a standard adoption curve. Early adopters favor specific high-value uses. Later adopters bring a much wider, shallower range of tasks. What makes it newsworthy is the gap between this data and the public narrative. We have been told, repeatedly and emphatically, that AI will transform the knowledge economy within years. That millions of jobs are at immediate risk. That universal basic income may be necessary to manage the displacement. Anthropic's own data shows that after three and a half years of this drumbeat, 30% of American workers have zero measurable exposure to AI in their job tasks. Not low exposure. Zero. Cooks, mechanics, lifeguards, bartenders, dishwashers: if you work with your hands, AI has not touched you.
Meanwhile, 49% of all jobs have seen at least a quarter of their tasks performed using Claude. That number barely changed from the previous report three months earlier. The breadth of AI's reach plateaued while depth increased. More people are doing more things with AI, but AI isn't reaching new kinds of work at the pace the headlines imply.
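Breadth and depth here are distinct measurements, and the distinction is easy to make precise. A minimal sketch on invented occupation-level coverage data (the first three values echo figures from the companion paper discussed below; the rest are made up):

```python
# Breadth vs. depth on hypothetical occupation-level coverage data.
# coverage[occ] = fraction of that occupation's tasks seen in usage.
# The 25% threshold mirrors the report's metric; values are invented.

coverage = {
    "computer programmers": 0.75,
    "customer service reps": 0.67,
    "data entry keyers": 0.67,
    "marketing analysts": 0.40,
    "paralegals": 0.30,
    "cooks": 0.0,
    "mechanics": 0.0,
    "lifeguards": 0.0,
}

THRESHOLD = 0.25

# Breadth: share of occupations with at least 25% task coverage.
breadth = sum(c >= THRESHOLD for c in coverage.values()) / len(coverage)

# Depth: average coverage among occupations already over the threshold.
exposed = [c for c in coverage.values() if c >= THRESHOLD]
depth = sum(exposed) / len(exposed)

print(f"breadth: {breadth:.0%} of occupations over threshold")
print(f"depth:   {depth:.0%} average coverage among them")
```

Breadth can sit still while depth climbs, which is exactly the pattern the report describes.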
Experience Is Compounding
The most consequential finding in the March report is one the tech press has largely ignored. Anthropic documented that users who have been on the platform for six months or more have a 10% higher success rate in their conversations than newer users. They attempt higher-value tasks. They use Opus for complex work and lighter models for simpler queries. Their conversations skew less personal, and their prompts reflect higher education levels.
The researchers are careful to note that this could reflect the natural sophistication of early adopters rather than learning-by-doing. People who signed up for Claude in 2024 were, almost by definition, more technically inclined and more motivated than people who signed up because it showed up in a Super Bowl ad. But the association persists after controlling for task selection, country, and other confounders. Something is happening beyond self-selection. People who use AI more get better at using AI.
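That kind of control analysis has a standard shape. Here is a minimal sketch of how one would test whether a tenure effect survives controls, on synthetic data; the variable names, effect sizes, and model choice are illustrative assumptions, not Anthropic's actual methodology:

```python
# Sketch of a tenure-effect check on synthetic data. The variable
# names, effect sizes, and linear-probability model are illustrative
# assumptions, not Anthropic's published methodology.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "tenure_6mo": rng.integers(0, 2, n),  # 1 = on the platform 6+ months
    "task_type": rng.choice(["coding", "writing", "personal"], n),
    "country": rng.choice(["US", "UK", "IN"], n),
})

# Bake in a true +6pp tenure effect on top of task-level base rates.
base = df["task_type"].map({"coding": 0.55, "writing": 0.60, "personal": 0.70})
df["success"] = (rng.random(n) < base + 0.06 * df["tenure_6mo"]).astype(float)

# Regress success on tenure, controlling for task mix and geography.
fit = smf.ols("success ~ tenure_6mo + C(task_type) + C(country)", data=df).fit()
print(fit.params["tenure_6mo"])  # recovers roughly 0.06 if the effect is real
```

The point of the sketch is the structure: if the tenure coefficient survives the controls, self-selection alone cannot carry the result.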
This should concern anyone thinking about equity in the AI transition, because it implies that the benefits of AI adoption are self-reinforcing. The people who started early have had months of practice crafting effective prompts, choosing the right models for the right tasks, and developing mental models of what AI can and cannot do. They are not just further along the timeline. They are on a steeper curve. And if their advantage compounds, then every month that passes without broader adoption widens the gap between those who use AI effectively and those who do not.
Anthropic's data shows that model selection itself is a learned skill. Among paying Claude.ai users, Opus is selected four percentage points more often for coding tasks and seven percentage points less often for tutoring. API users show model-switching behavior that is about twice as pronounced. Experienced users are not just better at prompting. They are better at choosing which tool to prompt. They have developed an intuition for the capability gradient across model tiers that is invisible to newcomers who treat "Claude" as a single undifferentiated product.
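Written down, that intuition is a routing layer. A hypothetical sketch (the tier names follow Anthropic's public model families; the rules themselves are invented for illustration):

```python
# Hypothetical routing heuristic of the kind an experienced user
# internalizes: match the model tier to the difficulty and value of
# the task instead of sending everything to one model. Tier names
# follow Anthropic's public model families; the rules are invented.

def pick_model(task_type: str, complexity: str) -> str:
    if task_type == "coding" and complexity == "high":
        return "opus"    # deep reasoning justifies the cost and latency
    if task_type in ("coding", "analysis"):
        return "sonnet"  # strong default for most professional work
    return "haiku"       # fast and cheap for lookups, tutoring, chit-chat

assert pick_model("coding", "high") == "opus"
assert pick_model("tutoring", "low") == "haiku"
```

Newcomers have no such function in their heads. Experienced users run it without thinking.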
Consider the implications for the labor market. If AI proficiency is a skill that compounds with practice, and if that skill provides measurable advantages in task completion and economic output, then the labor market will stratify not just between AI-exposed and non-exposed occupations but between workers within the same occupation who differ in their AI fluency. Two marketing analysts with identical resumes and identical job descriptions will produce meaningfully different output if one has spent six months learning how to use Claude and the other has not. The first analyst is not being replaced by AI. She is being augmented by it, and the augmentation is getting stronger.
This is the learning curve that gives this report its name. And it cuts both ways. For individuals, it means that early investment in AI proficiency pays compounding returns. For society, it means that without deliberate intervention to broaden access and training, AI adoption could widen inequality even in occupations where it is theoretically available to everyone.
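The arithmetic of compounding makes the stakes concrete. A toy calculation, assuming (not measuring) a 3% effective-output gain per month of practice:

```python
# Illustrative compounding: if practice improves effective output by a
# small fraction each month, the gap between an early and a late
# adopter grows multiplicatively, not linearly.
# The 3% monthly rate is assumed for illustration, not measured.

MONTHLY_GAIN = 0.03

def relative_output(months_of_practice: int) -> float:
    return (1 + MONTHLY_GAIN) ** months_of_practice

for months in (0, 6, 12, 24):
    print(f"{months:>2} months of practice: "
          f"{relative_output(months):.2f}x baseline output")
# After 6 months: ~1.19x; after 24 months: ~2.03x.
```

A three percent monthly edge doubles output in two years. Whether the true rate is anywhere near that is exactly what Anthropic's data cannot yet tell us; the point is the shape of the curve, not the number.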
Not a Wave. A Slow Drain.
Anthropic published a companion paper on March 5 titled "Labor Market Impacts of AI," introducing a new metric called "observed exposure" that combines theoretical capability with actual real-world usage data. The findings invert the standard displacement narrative in ways that deserve far more attention than they have received.
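The paper defines the metric precisely; a reasonable simplification is that a task counts toward an occupation's observed exposure only if an LLM can theoretically perform it and it actually appears in usage data. A sketch under that assumption:

```python
# One plausible operationalization of "observed exposure" (a
# simplification; Anthropic's exact formula is defined in their paper).
# A task counts toward an occupation's exposure only if an LLM can
# theoretically do it AND it actually shows up in usage data.

def observed_exposure(tasks, feasible, observed_in_usage):
    """Fraction of an occupation's tasks that are feasible and in use.

    tasks: list of task identifiers for the occupation.
    feasible: set of tasks rated theoretically doable by an LLM.
    observed_in_usage: set of tasks actually seen in conversation data.
    """
    hits = [t for t in tasks if t in feasible and t in observed_in_usage]
    return len(hits) / len(tasks)

# Invented example: 4 tasks, 3 feasible, 2 of those actually used.
tasks = ["write_query", "debug_code", "review_pr", "mentor_junior"]
feasible = {"write_query", "debug_code", "review_pr"}
used = {"write_query", "debug_code"}
print(observed_exposure(tasks, feasible, used))  # 0.5
```

The intersection is what separates this metric from the purely theoretical exposure scores that dominated earlier studies.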
Computer Programmers top the list at 75% task coverage. Customer Service Representatives sit at roughly 67%. Data Entry Keyers: also 67%. These are not hypothetical exposure scores based on what an LLM could theoretically do. These are measurements of what Claude is actually doing, right now, in production environments.
And yet: there is no systematic increase in unemployment for highly exposed workers since late 2022. Three and a half years of "AI will take your job" headlines, and the Bureau of Labor Statistics says it hasn't happened. The most exposed occupations have not seen meaningful employment losses.
What they have seen, in a finding the researchers describe as "suggestive," is a slowdown in hiring of younger workers. This is the signal that matters and the one that nobody is talking about. Displacement in 2026 does not look like mass layoffs. It looks like the job posting that used to exist for a junior data analyst and no longer does. It looks like the customer service team that handles the same volume with twelve people instead of fifteen, not because anyone was fired but because three people who left were not replaced. It looks like a hiring freeze that nobody calls a hiring freeze because the workload is being absorbed by the remaining staff plus their AI tools.
This is not the dramatic, headline-generating displacement that justifies calls for UBI or emergency legislation. It is a slow drain. It is invisible in aggregate unemployment data because the denominator keeps shifting. And it disproportionately affects exactly the people least equipped to respond: young workers trying to enter fields where the entry-level positions are quietly evaporating.
The demographic profile of the most exposed workers also inverts expectations. They are more likely to be older, female, more educated, and higher-paid. This is not the automation narrative we have been sold, where robots replace blue-collar workers in factories. This is white-collar knowledge work: the tasks of analysts, administrators, programmers, and customer service professionals being augmented or automated by language models. The factory floor is untouched. The office is where the action is.
What They Say vs. What They Know
There is a peculiar dissonance between what AI companies tell the public and what their own research departments publish. In press conferences and keynote speeches, the language is transformative, urgent, revolutionary. Artificial general intelligence is imminent. Millions of jobs will be displaced. UBI may become necessary. Cancer will be cured. The future is arriving at exponential speed and the only responsible action is to invest more, build faster, deploy wider.
In the research papers, the language is different. "Suggestive evidence." "Modest changes." "Slight decreases." "No systematic increase in unemployment." The Anthropic Economic Index is, to its enormous credit, a genuinely honest document. It describes a technology that is being adopted unevenly, used mostly for tasks that are less valuable than the hype implies, and whose economic impact is so gradual that it cannot yet be distinguished from normal labor market fluctuation. This is not a revolution. It is a diffusion, and like all diffusions, it is messy, slow, and distributed unevenly along lines of privilege and access.
The gap between the public narrative and the private data matters because the public narrative drives policy. When Sam Altman tells Congress that AI may displace millions of jobs and that we should consider universal basic income, he is making a claim that Anthropic's own research does not yet support. When investors pour $690 billion into AI infrastructure on the premise that the transformation will be swift and total, they are pricing in a future that the data has not confirmed. When workers panic about their job security because every tech publication runs weekly stories about which professions AI will eliminate, they are responding to a threat model that is, at this point, more projection than observation.
None of this means AI will not transform the economy. The theoretical exposure is vast: Anthropic's own measurements show that 97% of the tasks Claude users perform were already rated as theoretically feasible for an LLM. The machinery is capable. The diffusion is what's slow. And the speed of diffusion depends on factors that have nothing to do with model capability: regulatory constraints, organizational inertia, the willingness of workers to adopt new tools, the ability of managers to redesign workflows, and the economic incentives that determine whether it is cheaper to automate a task or to keep paying a human to do it.
The honest version of the AI impact story is less dramatic and more useful than the one being sold. AI is not going to eliminate your job next year. It might eliminate the junior version of your job over the next five years. The people who learn to use it effectively now will have a compounding advantage over those who don't. The benefits will accrue unevenly, and the unevenness will track existing lines of privilege, education, and access unless deliberate effort is made to change that.
This is the story that Anthropic's data actually tells. It is not the story that sells subscriptions, raises capital, or generates clicks. But it is the story that anyone making real decisions about their career, their business, or their policy portfolio actually needs to hear.
The learning curve is real. The question is who gets to climb it.