After a year of using Cursor, Claude Code, Antigravity, and Copilot daily — I think AI tools are making a lot of devs slower, not faster. Here's why.

I know this is going to be controversial, but hear me out.

I've been using AI coding tools heavily for the past year. Cursor Pro, Claude Code (Max), Copilot, Windsurf, and recently Antigravity. I build production apps, not toy projects. And I've come to a conclusion that I don't see discussed enough:

A lot of us are slower with AI tools than without them, and we don't realize it because generating code feels fast even when shipping doesn't.

Here's what I've noticed:

1. The illusion of velocity

AI spits out 200 lines in 8 seconds. You feel productive. Then you spend 40 minutes reading, debugging, and fixing hallucinations. You could've written the 30 lines you actually needed in 10 minutes. I started tracking this, and on days I used AI heavily for complex logic, I shipped fewer features than on days I used it only for boilerplate and tests.

2. Credit anxiety is real cognitive overhead

Ever catch yourself thinking "should I use Sonnet or switch to Gemini to save credits?" or "I've burned 60% of my credits and it's only the 15th"? Cursor's $20 credit pool drains 2.4x faster with Claude vs Gemini. That's ~225 Claude requests vs ~550 Gemini. You're now running a micro-budget alongside your codebase and that mental load is real.
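For what it's worth, the numbers above are internally consistent. A back-of-envelope check (request counts are from my own usage on the $20 plan, not official pricing):

```python
# Rough requests-per-credit-pool observed on a $20 Cursor plan (my usage, not official pricing).
claude_requests = 225
gemini_requests = 550

# How much faster the pool drains when you default to Claude.
ratio = gemini_requests / claude_requests
print(round(ratio, 1))  # ~2.4
```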

3. The sycophancy trap

You write mid code, ask AI to review it, and it says "Great implementation! Clean and well-structured." You move on. Bug ships to production. Remember when OpenAI had to roll back a GPT-4o update in April 2025 because it was praising users for dangerous decisions? That problem hasn't gone away. I now always add "grade this harshly" or "what would a hostile code reviewer find?" The difference in feedback quality is night and day.

4. IDE-hopping is killing your productivity

All these IDEs use the same models. Cursor, Windsurf, Antigravity, Copilot: they all have access to Claude and GPT-5. The differences come from context window management, agent architecture, system prompts, and integration depth. But devs spend weeks switching between them, losing their .cursorrules, their muscle memory, their workflows. You're perpetually a beginner.
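On the .cursorrules point: one mitigation is to keep the rules file in the repo and treat it as portable project documentation, so a tool switch doesn't reset everything. A minimal sketch (the rules themselves are just my conventions, not a required format — it's plain text in the repo root):

```
# .cursorrules
- Prefer small diffs; never rewrite whole files unprompted.
- Reuse the project's existing error-handling helpers.
- Ask before adding new dependencies.
- Match the surrounding code style, not your defaults.
```

Most of these rules transfer almost verbatim to the equivalent config in other tools, which blunts the "perpetual beginner" cost a bit.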

5. Delegation requires clarity most of us don't have

When you code yourself, vagueness resolves naturally. When you delegate to an AI agent, vagueness compounds. The agent confidently builds the wrong thing across 15 files and now you're debugging code you didn't write and don't fully understand. The devs who benefit most from agent mode were already good at writing specs and decomposing problems.

6. Knowledge atrophy is real

If AI writes all your error handling, DB queries, and API integrations, do you still understand them? Senior devs with deep fundamentals can review AI output critically. But I'm genuinely worried about junior/mid devs building on foundations they don't understand. When the AI generates a subtle race condition or an N+1 query, you need the knowledge to catch it.
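To make the N+1 case concrete, here's a minimal sketch (Python with an in-memory SQLite DB and a made-up users/posts schema). Both functions return the same data, but the first issues one query per user, which is exactly the shape an assistant will happily generate without comment:

```python
import sqlite3

# Made-up schema, only to illustrate the pattern.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
    INSERT INTO users VALUES (1, 'ann'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'hello'), (2, 1, 'again'), (3, 2, 'hi');
""")

# N+1: one query for the users, then one more query PER user.
def titles_n_plus_one(conn):
    result = {}
    for user_id, name in conn.execute("SELECT id, name FROM users"):
        rows = conn.execute(
            "SELECT title FROM posts WHERE user_id = ? ORDER BY id", (user_id,)
        ).fetchall()
        result[name] = [title for (title,) in rows]
    return result

# Same data in a single JOIN: one round trip instead of N+1.
def titles_joined(conn):
    result = {}
    rows = conn.execute("""
        SELECT u.name, p.title FROM users u
        JOIN posts p ON p.user_id = u.id
        ORDER BY p.id
    """)
    for name, title in rows:
        result.setdefault(name, []).append(title)
    return result

assert titles_n_plus_one(conn) == titles_joined(conn)
```

With two users the difference is invisible; with 10k it's a pager alert. ORMs hide this even better: a lazy-loaded relationship inside a loop is the same N+1 in disguise.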

7. Tool sprawl

Cursor, Windsurf, Antigravity, Copilot, TRAE, Kiro, Kilo for IDEs. Claude, GPT-5, Gemini, DeepSeek, Mistral, Kimi for models. Then image gen, OCR, automation tools, code review bots... That's not a toolkit, it's a part-time job in subscription management.

What actually works (for me):

- Depth with one tool > breadth across eight.
- Fight sycophancy: ask for harsh reviews, not validation.
- Write the hard parts yourself; save AI for boilerplate and tests.

TL;DR: AI coding tools are incredible, but generating code fast ≠ shipping fast. Most devs are in the "impressed by the chainsaw but haven't learned technique" phase.

Curious if others are experiencing similar things or if I'm just doing it wrong. What's your honest take?

submitted by /u/riturajpokhriyal