Inside the Engine Room: What Happens When AI Builders Use Their Own AI?

Rohit Das

Anthropic studied its own engineers to see how Claude changed their work. Discover the stunning 50% productivity boost, the strange paradox of skill loss, and why roles are shifting from 'coder' to 'AI manager.' This is your essential look at the future of knowledge work, straight from the source.

Sometimes the best way to understand a revolution is to watch the people at the epicenter. When Anthropic, one of the leading AI research companies, decided to turn the lens inward and study how their own engineers and researchers were using Claude, their flagship AI, I was immediately intrigued. I mean, these are the folks with access to the most cutting-edge tools available.

The report they published is, frankly, eye-opening. It reads less like a dry academic study and more like a detailed account of a workplace undergoing a rapid, almost spiritual transformation. It's not all sunshine and rainbows, either. While the productivity numbers are wild, the report also captures a deep sense of professional unease and changing human dynamics.

The Productivity Shock: 50% More Output, But At What Cost?

Let’s start with the headline number, because it’s hard to ignore.

A year ago, Anthropic engineers reported using AI in about 28% of their daily work, for a roughly 20% productivity bump. Fast forward to the most recent survey, and those numbers have ballooned: engineers now use Claude in nearly 60% of their work, with a self-reported productivity boost of 50%. Think about that for a second. That's a massive, step-change increase in output in just twelve months.

What's really fascinating is where this time is going. Apparently, a huge chunk of AI-assisted work, about 27%, consists of tasks that simply wouldn’t have been done otherwise. They call these "papercuts": small, boring refactoring jobs, building quick data dashboards, or doing exploratory work that wasn’t cost-effective to do manually. The AI is essentially eliminating the long tail of neglected, tedious tasks, freeing people up for other things.

To be honest, that really resonated with me. Who hasn't put off a boring cleanup job?

But here’s the kicker, the part that keeps it real: while many report saving time, some engineers actually said they spent more time on Claude-assisted tasks. Why? Because you have to debug the AI's code, or shoulder the cognitive burden of fully understanding code you didn't write yourself. It’s the difference between being a creator and being a meticulous quality-control manager.

The Rise of the "Full-Stack" Generalist

The biggest change, according to the interviews, is the rapid broadening of skillsets. Engineers are becoming "full-stack" in a hurry.

Someone who was previously a back-end expert might now confidently dabble in front-end development or transactional databases, because Claude covers the basics and handles the tedious syntax. The ceiling, as one person said, "just shattered."

This sounds fantastic on paper, but there’s a genuine paradox here, and I think it’s the most concerning finding in the whole report. If you use AI to fill in your knowledge gaps every time you face a challenge, how do you ever build true, deep expertise?

As one engineer put it: “When producing output is so easy and fast, it gets harder and harder to actually take the time to learn something.”

It’s a real trade-off. We’re gaining breadth, but some are worried about the atrophy of those deep, foundational skills. And if everyone is managing an AI, who is going to have the deep competence required to effectively critique and supervise the complex outputs it generates? That’s something we all need to be thinking about.

When Your First Colleague is an Algorithm

Perhaps the most human and relatable change is in the social dynamic of the workplace.

The consensus is clear: Claude has become the first stop for questions that used to go to colleagues.

Instead of asking a senior developer about a tricky syntax error or how a specific part of the codebase works, junior staff just ask Claude. One employee noted they now take 80-90% of their questions to the AI, which filters out the simple stuff before it ever reaches a colleague. This is great for eliminating "social friction" (you don't feel bad about taking up someone's time), but it has a noticeable impact on mentorship.

As one senior engineer observed, “It’s been sad that more junior people don’t come to me with questions as often.”

The human-to-human collaboration isn’t gone; it’s just reserved for the really complex, strategic, or context-heavy issues, the "crucial last 20%." The role of the engineer is fundamentally shifting from a "net-new code writer" to, in their own words, a "manager of AI agents," spending most of their time reviewing, revising, and taking accountability for the AI’s work.

This leads to the final point: career uncertainty.

There's short-term optimism, for sure. You're more productive, and you can tackle bigger projects. But in the long term, people are deeply uncertain. One person summed up the existential dread perfectly: “I feel optimistic in the short term but in the long term I think AI will end up doing everything and make me and many others irrelevant.”

It's a powerful statement. Anthropic’s study is a microcosm of the future for all knowledge work. We’re witnessing the birth of a new kind of professional role, one defined by delegation, oversight, and a deep, complex partnership with a very smart machine. And to be honest, nobody, not even the people building the thing, seems to know exactly where it goes from here. That's both terrifying and incredibly exciting.

References

[1] https://www.anthropic.com/research/how-ai-is-transforming-work-at-anthropic

