Photo by Blaz Photo on Unsplash

AI isn’t making people dumber. It’s exposing who was already coasting.

The Dunning-Kruger effect, amplified by AI: shallow operators look polished while real expertise gets a force multiplier.

Two conversations in 24 hours convinced me the entire “AI is making us dumber” debate is looking at the wrong thing.

The first was with my best friend Guillaume. We were talking about building together, riffing on ideas, and he said something that stopped me: working with AI feels like finally having a collaborator that can keep up with the pace of your thoughts. Not better than you. Not replacing you. Just… fast enough to match the speed your brain actually operates at. For the first time, the bottleneck isn’t the tool. It’s you.

The next day, a coworker asked me point-blank: “Isn’t AI just making people dumber?”

I’ve been chewing on the gap between those two conversations ever since. Because they’re both talking about the same technology. And they’re arriving at completely opposite conclusions. The difference isn’t the AI. It’s what each person is bringing to the table.

The amplifier problem

Here’s the thing nobody wants to say out loud: AI doesn’t make you anything. It makes you more of what you already are.

If you’re a clear thinker with domain expertise and good judgment, AI is the best multiplier you’ve ever had. You can move faster, test ideas quicker, produce higher-quality work in less time. You’re not outsourcing your thinking. You’re accelerating it.

But if you were already cutting corners, copying surface-level ideas, and skating by on confidence instead of competence… AI just lets you do that at scale. Faster. With better grammar.

And here’s where it gets interesting: the people in that second group often can’t tell the difference. They look at their AI-assisted output and think it’s brilliant, because it sounds polished. It reads well. The sentences are clean. But anyone with actual expertise can spot it in seconds. The ideas are shallow. The reasoning is circular. The specifics are wrong in ways that only matter if you know the subject.

This isn’t a new problem. AI didn’t create it. AI just made it impossible to ignore.

The research backs this up (and it’s more nuanced than the headlines)

There’s a study from Aalto University in Finland that I keep coming back to. Researchers gave about 500 people a set of logical reasoning problems from the Law School Admission Test. Half used ChatGPT to help. Half didn’t. Then everyone was asked to rate how well they did.

Here’s what happened: the Dunning-Kruger effect, that classic pattern where the least competent people are the most confident, disappeared when AI was involved. Instead, everyone overestimated their performance. But the people with the highest “AI literacy,” the ones who felt most comfortable with the tools, were the most overconfident. Not because they were smarter. Because the AI made their output look good enough that they stopped questioning it.

The researchers called it cognitive offloading. You trust the system’s output, skip the reflection, and walk away thinking you nailed it. Sound familiar? It should. We’ve all seen the LinkedIn post that reads like a million bucks and says absolutely nothing. We’ve all sat in the meeting where someone presented AI-generated analysis and couldn’t answer the first follow-up question.

That’s not AI making people dumber. That’s AI giving a megaphone to people who didn’t have much to say in the first place.

But what about the people who actually know what they’re doing?

This is the part that gets buried in the panic. Because while everyone’s wringing their hands about cognitive decline, something else is happening quietly: experienced professionals are becoming absurdly productive.

A 2026 study published in Science found that entry-level software developers showed almost no productivity gain from AI tools compared to their experienced counterparts. The experienced developers? They took off. The researchers explained it this way: if productivity means generating more lines of code, AI helps everyone. But if productivity means generating code that has market value, that actually works, that solves the right problem… then AI benefits the people who already have the judgment to know what “right” looks like.

Erik Brynjolfsson’s data tells a similar story at the macro level. US productivity grew roughly 2.7% in 2025, nearly double the average of the prior decade. But that growth isn’t evenly distributed. A small group of what he calls “power users” are compressing weeks of work into hours. They’re not using AI to think for them. They’re using it to execute at the speed of their thinking.

That phrase keeps coming back to me. The speed of their thinking. Guillaume wasn’t excited because AI made the work easier. He was excited because it finally stopped being the thing that slowed him down.

PwC’s 2025 Global AI Jobs Barometer found something that should make the “AI is making us dumber” crowd uncomfortable: wages are rising twice as fast in industries most exposed to AI compared to those least exposed. Even in the most highly automatable roles. AI isn’t devaluing people. It’s revealing who was already valuable.

The real digital divide

I got this wrong for a long time. I used to think the risk with AI was access. That the divide would be between people who had AI tools and people who didn’t. That’s not the divide.

The divide is between people who have something worth amplifying and people who don’t.

If you’ve spent years building genuine expertise, developing taste, learning to think critically about your domain… AI is rocket fuel. It takes all that accumulated knowledge and lets you deploy it faster than you ever could alone. You’re not asking AI what to think. You’re asking it to help you think faster.

If you skipped that part, if you coasted on charisma or credentials or just never developed the deep skills… AI doesn’t help you. It hurts you. Because now you’re competing against people who have both the expertise AND the force multiplier. The gap that was always there is suddenly visible, and it’s widening every day.

This is what my coworker was sensing when he asked if AI was making people dumber. He wasn’t wrong to notice that something was shifting. He was just looking at it backwards. The shift isn’t that smart people are getting dumber. The shift is that the difference between “knows what they’re doing” and “sounds like they know what they’re doing” is getting harder to fake.

The Dunning-Kruger multiplier

Let me back up for a second and be more precise about what I mean, because I think this is the core of the whole thing.

The Dunning-Kruger effect has always been about the gap between competence and self-assessment. People who don’t know much about a topic overestimate their knowledge. People who know a lot tend to underestimate theirs. That asymmetry is baked into human cognition.

AI amplifies that asymmetry in a way we’ve never seen before.

The person with shallow knowledge prompts AI, gets a polished response, and thinks: look what I made. They don’t have the expertise to evaluate whether the output is actually good. They just know it looks professional. So their confidence goes up, their competence stays flat, and the gap between the two gets wider.

The expert prompts AI, gets a response, and immediately starts editing. They spot the hallucinations. They catch the subtle errors. They know which parts are strong and which parts are filler. They use the output as a starting point, not a finished product. Their confidence stays calibrated because they have the knowledge to assess what they’re looking at.

Same tool. Same technology. Completely different outcomes. Not because of the AI, but because of the human.

So what do we actually do about this?

Look, I’m not going to pretend there’s a clean five-step framework for this. But I do think there are a few things worth saying.

First, stop blaming the tool. AI isn’t making anyone dumber any more than Google made people dumber or calculators made people worse at math. Every generation has this panic when a new tool comes along, and every generation mistakes “I can see the problem more clearly now” for “the tool created the problem.”

Second, invest in the stuff AI can’t fake. Domain expertise. Critical thinking. The ability to look at a polished output and ask, “Is this actually right, or does it just sound right?” The ability to ask a better question. That’s the skill that separates the people AI is helping from the people AI is exposing.

Third, be honest about which category you’re in. This is the uncomfortable one. If you’re using AI and your work is getting better, faster, and more creative… you’re probably in the first group. If you’re using AI and your work looks better but you can’t explain it, defend it, or build on it without the tool… that’s worth sitting with.

The truth is that AI is the most honest mirror the professional world has ever had. It reflects what you bring to it. And some people don’t love what they’re seeing.

It was never about the AI

I keep coming back to those two conversations. Guillaume, buzzing with energy, talking about AI like he’d found the collaborator he’d been waiting for his entire career. And my coworker the next day, genuinely worried that AI was eroding something fundamental about how we think.

They’re both right, in a way. Guillaume is right that AI is transformative for people who have the skills, the taste, and the judgment to direct it. My coworker is right that something is being lost… but the thing being lost isn’t intelligence. It’s the ability to hide.

For years, a lot of people got by on sounding competent. AI has made competent output cheap. Which means the only thing that still commands a premium is being actually competent. Knowing the difference between a good idea and a good sentence. Being able to think, not just produce.

AI didn’t make anyone dumber. It just made it really, really obvious who was thinking in the first place.
