
ChatGPT and Copilot at Work: Are Developers Becoming More Productive?

Everyone claims AI makes developers faster. But what do the actual studies say? We dig into the real data on developer productivity with AI coding tools.

April 27, 2026 · 8 min read · Fyrosoft Team
Tags: developer productivity, ChatGPT, GitHub Copilot, AI coding, productivity data, developer surveys

The headline that launched a thousand LinkedIn posts: "GitHub Copilot makes developers 55% faster." It's been cited so many times that most people in tech can recite it from memory. But like most statistics that become soundbites, the full picture is a lot more nuanced than that single number suggests.

I've been tracking the research on AI-assisted development since these tools went mainstream, and the data tells an interesting story. Not the simple "AI makes everything faster" story that tool vendors want you to believe, and not the "AI is useless hype" story that skeptics cling to. The truth, as usual, sits uncomfortably in the middle.

What the Studies Actually Say

The GitHub/Microsoft Study (2022-2023)

The original Copilot study that produced the "55% faster" number deserves proper context. Researchers at GitHub and Microsoft gave 95 developers a coding task: write an HTTP server in JavaScript. Half used Copilot, half didn't. The Copilot group finished 55% faster on average.

Here's what the headline doesn't tell you: the task was relatively straightforward, the developers were writing greenfield code in a popular language, and "faster" was measured purely by completion time -- not code quality, maintainability, or correctness. Several researchers later pointed out that the task was particularly well-suited to AI assistance because it involved common patterns that AI models have seen millions of times.

That doesn't mean the study was wrong. It means the 55% number applies to a specific kind of task, and generalizing it to all development work is misleading.

The McKinsey Study (2023-2024)

McKinsey studied developer productivity with AI tools across several large enterprises and found more modest gains: about 20-45% faster for code generation tasks, 25-30% faster for code documentation, and 15-20% improvement in code review speed. Importantly, they also found that the gains varied enormously by experience level and task type.

Junior developers showed larger raw productivity gains (because they have more boilerplate to write), but senior developers produced higher-quality outputs with AI assistance (because they knew when to accept and when to reject suggestions).

The Google Internal Studies (2024-2025)

Google published internal research on how their developers use AI tools. The findings were revealing: about 25% of new code at Google was AI-generated by late 2024. But here's the catch -- that code required human review and often significant modification. The net productivity gain, accounting for review time, was estimated at 10-15% for experienced developers.

Google also found that AI tools were most effective for three specific tasks: writing tests, generating documentation, and implementing well-defined specifications. For exploratory coding, debugging, and architecture work, the gains were minimal or negative (because developers spent time evaluating incorrect suggestions).

The Stack Overflow Developer Survey (2025)

Stack Overflow's most recent developer survey asked about AI tool usage and productivity perceptions. Among developers who use AI coding tools regularly, 44% said they felt "somewhat more productive," 28% said "significantly more productive," 18% said "no meaningful change," and 10% said they were actually less productive because of time spent debugging AI-generated code.

That last number is worth noting. A non-trivial percentage of developers find that AI tools slow them down. This usually correlates with either trying to use AI for the wrong tasks or not having enough experience to quickly evaluate AI output quality.

The Nuanced Reality

After reviewing this data and talking to dozens of developers about their experiences, here's what I believe is a fair summary.

Where AI Genuinely Helps

Boilerplate and repetitive code. This is the sweet spot. Writing similar CRUD endpoints, creating test fixtures, setting up configuration files -- these tasks are boring, well-defined, and repetitive. AI handles them well, and the time savings are real. A developer who writes five similar API endpoints per day easily saves 30-60 minutes.
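To make "boilerplate" concrete, here's the flavor of repetitive code in question: an in-memory CRUD sketch. The names and shapes are hypothetical (no particular framework is assumed); the point is that each function follows the same predictable pattern.

```javascript
// Repetitive CRUD boilerplate -- the sweet spot for AI completion.
// In-memory store; names and shapes are illustrative, no framework assumed.
const users = new Map();
let nextId = 1;

function createUser(data) {
  const user = { id: nextId++, ...data };
  users.set(user.id, user);
  return user;
}

function getUser(id) {
  return users.get(id) ?? null;
}

function updateUser(id, data) {
  const existing = users.get(id);
  if (!existing) return null;
  const updated = { ...existing, ...data };
  users.set(id, updated);
  return updated;
}

function deleteUser(id) {
  return users.delete(id);
}
```

Once you've written `createUser`, the remaining three practically write themselves -- which is exactly why an AI assistant completes them almost instantly.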

Learning and exploration. When you're working with an unfamiliar library or language, AI tools drastically reduce the time spent searching documentation. Instead of reading through 20 pages of docs, you ask "how do I set up authentication with this library?" and get a working example in seconds. Multiple developers I've talked to say this is where they feel the biggest impact.

Writing tests. Test code is often structurally similar, and AI is remarkably good at generating it. Several teams report that AI test generation has significantly increased their test coverage, not because each test is faster, but because the reduced effort means developers actually write tests they would have skipped.

Documentation. Almost no one likes writing documentation, and AI is good enough at it that "generates docs from code" has become a standard workflow. The output needs editing, but starting from an AI draft is much faster than starting from nothing.

Where AI Doesn't Help (Or Hurts)

Complex debugging. AI can help identify potential causes for a bug, but actually tracing through complex system interactions to find the root cause requires contextual understanding that current AI tools don't have. Several developers report that AI debugging suggestions are often plausible-sounding but wrong, leading them down rabbit holes.

System design and architecture. As I discussed in a previous post, AI can be a useful thinking partner for architecture decisions, but it can't make those decisions for you. The developers who try to let AI architect their systems end up with technically competent but contextually inappropriate designs.

Code that requires deep domain knowledge. If you're writing code for a complex financial calculation or a healthcare workflow, AI suggestions are often dangerously plausible -- correct enough to look right but wrong in ways that require domain expertise to catch.

Performance-critical code. AI-generated code tends toward the obvious, readable solution, which is often not the most performant. If you're writing a high-frequency trading system or a real-time graphics engine, AI suggestions will usually need significant optimization.

The Developer Experience Angle

There's an aspect of AI-assisted development that the productivity studies don't capture well: how it changes the experience of programming.

Multiple developers have told me that coding with AI feels less tedious. The boring parts happen faster, which means more of your time is spent on the interesting problems. One senior developer described it as "finally, the computer is doing the mechanical work while I do the thinking." Whether or not that makes you measurably "faster," it makes the work more enjoyable.

On the flip side, some developers report a creeping sense of anxiety about skill atrophy. "I used to know the syntax for setting up a WebSocket connection in Node," one developer told me. "Now I just ask Copilot and it fills it in. Am I actually learning, or am I becoming dependent?" It's a legitimate concern, particularly for junior developers building their foundational skills.

What the Numbers Mean for Your Career

If AI makes developers 15-30% more productive on average (my honest estimate across all tasks), what does that mean for the industry?

It doesn't mean companies need 15-30% fewer developers. That's the naive interpretation. What it actually means is that developers can tackle more ambitious projects, teams can ship faster with the same headcount, and companies can afford to invest in quality (testing, documentation, refactoring) that they used to cut for time reasons.

The developers who benefit most from AI tools are the ones who already have strong fundamentals. They can quickly evaluate AI suggestions, know when something looks wrong, and use AI as a multiplier for their existing skills. If your skills are strong, AI makes you stronger.

If your skills are weak, AI becomes a crutch that masks your gaps in the short term and makes them worse in the long term. The developer who copies AI output without understanding it is building a career on borrowed knowledge that will eventually be called in.

The Honest Bottom Line

AI coding tools make developers meaningfully more productive for specific kinds of work. The gains are real but more modest than the marketing suggests. They're most impactful for experienced developers who know how to wield them effectively, and they work best for well-defined, repetitive tasks.

The 55% number is a best case for a narrow scenario. The real-world average is probably 15-30%, which is still significant over the course of a career. Use these tools. Learn to use them well. But don't outsource your thinking to them. The developers who'll thrive in the AI era are the ones who use AI to handle the mechanical work while they focus on what matters: solving hard problems for real users.
