What Developers Learned From the 2025 Stack Overflow Survey About AI
I went through the 2025 Stack Overflow Developer Survey and pulled a few AI notes that stood out to me. This is a short take with links if you want to dig deeper.
Read the full report here: Stack Overflow Developer Survey 2025
AI Used vs Admired
OpenAI's GPT models dominate actual use, while Claude Sonnet leads in admiration. Usage often reflects availability and integrations; admiration tends to reflect output quality, tone, and trust.
How to choose
- Start with the job to be done: refactor code, draft docs, write tests, extract data
- Run the same prompt on two models you shortlist
- Score results for accuracy, edit effort, and repeatability
- Track cost per accepted output, not cost per token (a rough scoring sketch follows this list)
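To make the last two points concrete, here is a minimal sketch of the scoring step, assuming you record each trial by hand after review. The model names, per-1K-token prices, and numbers are placeholders; swap in whatever clients and rates you actually use.

```python
from dataclasses import dataclass

# Hypothetical per-1K-token prices; substitute your vendors' real rates.
PRICE_PER_1K_TOKENS = {"model-a": 0.01, "model-b": 0.003}

@dataclass
class Trial:
    model: str
    tokens_used: int
    accepted: bool       # did the output pass your acceptance criteria?
    edit_minutes: float  # how long you spent fixing it

def cost(trial: Trial) -> float:
    return trial.tokens_used / 1000 * PRICE_PER_1K_TOKENS[trial.model]

def cost_per_accepted_output(trials: list[Trial], model: str) -> float:
    """Total spend on a model divided by the outputs you actually kept."""
    spent = sum(cost(t) for t in trials if t.model == model)
    accepted = sum(1 for t in trials if t.model == model and t.accepted)
    return spent / accepted if accepted else float("inf")

# Example: the same three prompts run against both shortlisted models,
# with acceptance and edit effort recorded by hand after review.
trials = [
    Trial("model-a", 1200, True, 2.0),
    Trial("model-a", 900, False, 10.0),
    Trial("model-a", 1500, True, 1.0),
    Trial("model-b", 1100, True, 4.0),
    Trial("model-b", 950, True, 3.5),
    Trial("model-b", 1400, False, 12.0),
]

for model in PRICE_PER_1K_TOKENS:
    print(model, round(cost_per_accepted_output(trials, model), 4))
```

The same table of trials also answers the other two scoring questions: accuracy is the acceptance rate, and edit effort is the average of edit_minutes on accepted outputs.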
Why there is a gap
- Defaults drive behavior. If your editor or platform ships with one model, usage follows
- Procurement and compliance shape access. Some orgs allow only certain vendors
- Training and team habits matter. People keep using what they already know
- Practicalities count. Rate limits, latency, and pricing can nudge choices
AI Agents Reality Check
Most developers do not use agents yet, but those who do report real gains: about 69 percent of agent users say their productivity improved, and about 70 percent say time spent on specific tasks went down.
When to try agents
- You have repeatable multi-step tasks with written steps
- You can fence data access and tool use
- You can measure outputs and time saved
When to wait
- You lack guardrails for credentials or production writes
- Your tasks vary a lot and need hand-holding
A simple pattern that helps
- Start with a checklist and a tiny toolset. Fewer actions, fewer surprises
- Run agents in dry-run mode first to see decisions before actions
- Log actions and outcomes so you can tune prompts and tools safely (a minimal example follows this list)
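As a rough illustration of that pattern (not a real agent framework), here is a tiny Python loop with an explicit toolset, a dry-run flag, and action logging. The tool names, the checklist format, and the plan itself are made up for the example.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("agent")

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

def append_note(text: str) -> str:
    with open("notes.md", "a") as f:
        f.write(text)
    return "ok"

# Tiny, explicit toolset: fewer actions, fewer surprises.
TOOLS = {"read_file": read_file, "append_note": append_note}

def run_agent(plan: list[dict], dry_run: bool = True) -> None:
    """Walk a checklist of {'tool': ..., 'arg': ...} steps.

    In dry-run mode the proposed actions are only logged, so you can
    review the decisions before letting anything execute.
    """
    for step in plan:
        tool, arg = step["tool"], step["arg"]
        if tool not in TOOLS:
            log.warning("skipping unknown tool %s", tool)
            continue
        if dry_run:
            log.info("DRY RUN: would call %s(%r)", tool, arg)
            continue
        result = TOOLS[tool](arg)
        log.info("ran %s(%r) -> %.80s", tool, arg, str(result))

# A checklist produced by you or the agent; review the dry run, then flip the flag.
plan = [
    {"tool": "read_file", "arg": "README.md"},
    {"tool": "append_note", "arg": "\n- reviewed README\n"},
]
run_agent(plan, dry_run=True)
```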
The Almost Right Problem
Sixty-six percent say near-miss answers slow them down, and forty-five percent say debugging AI-generated code takes more time. A process tweak usually helps more than chasing prompts.
How to review faster
- Set acceptance criteria up front: inputs, outputs, edge cases
- Add tiny tests first: unit and property checks catch drift (see the example after this list)
- Use two-pass review: a quick plausibility scan, then targeted checks on risky parts
- Log rejects and reasons: build a prompt and pattern library for your team
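For instance, if you asked a model for a slugify helper, the acceptance criteria can live as tests you write before pasting in its answer. The slugify function below is only a stand-in for the generated code; the unit checks use plain pytest conventions and the property check uses the hypothesis library, so adjust for whatever tools you already run.

```python
import re
from hypothesis import given, strategies as st

# Stand-in for the model's answer; replace with the generated code under review.
def slugify(title: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Acceptance criteria written first: expected inputs, outputs, and edge cases.
def test_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_edge_cases():
    assert slugify("") == ""
    assert slugify("---") == ""
    assert slugify("  Spaces   everywhere ") == "spaces-everywhere"

# A small property check catches drift when the function is regenerated later.
@given(st.text())
def test_output_is_always_url_safe(title):
    assert re.fullmatch(r"[a-z0-9]*(-[a-z0-9]+)*", slugify(title))
```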
Helpful additions
- Keep a small calibration set of problems and re-run them monthly to spot drift (sketch after this list)
- Add simple evals for accuracy and safety where you can
- Ask for citations when summarizing docs so you can click and verify
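A calibration run can be as small as the script below. ask_model is a deliberate placeholder for whichever client you use, and the prompts and expected answers are only examples; the useful part is the append-only log that lets you compare months.

```python
import datetime
import json

# A small, stable set of problems with answers you have already verified.
CALIBRATION_SET = [
    {"prompt": "What HTTP status code means 'Too Many Requests'?", "expect": "429"},
    {"prompt": "In Python, what does list.sort() return?", "expect": "None"},
]

def ask_model(prompt: str) -> str:
    """Placeholder: call your hosted or local model here."""
    raise NotImplementedError

def run_calibration(outfile: str = "calibration_log.jsonl") -> None:
    """Re-run the fixed set and append pass/fail so drift shows up over time."""
    stamp = datetime.date.today().isoformat()
    with open(outfile, "a") as f:
        for case in CALIBRATION_SET:
            answer = ask_model(case["prompt"])
            passed = case["expect"].lower() in answer.lower()
            f.write(json.dumps({"date": stamp, "prompt": case["prompt"],
                                "passed": passed}) + "\n")
```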
AI Learning Paths
Gemini users also explore LLMs, RAG, and local tools like Ollama. The pattern is breadth first, then depth where it pays off.
How to skill up
- Start with one hosted model to learn prompt patterns and evaluation
- Add retrieval when you have private context: small RAG often beats bigger raw models for policy and docs (see the sketch after this list)
- Try local runs with Ollama for privacy and quick iteration
- Keep a notebook of wins and misses to build a team recipe book
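Here is a rough sketch of a small, well-scoped RAG loop against a local Ollama server. Retrieval is naive word overlap to keep it dependency-free, the snippets and model name are examples, and the /api/generate call follows Ollama's documented HTTP API; check the docs for your installed version before relying on it.

```python
import json
import urllib.request

# A deliberately small, well-scoped corpus: a few policy/doc snippets.
DOCS = [
    "Expense reports must be filed within 30 days of the purchase date.",
    "Production database credentials are rotated every 90 days.",
    "All customer data exports require approval from the data owner.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Naive lexical retrieval: rank snippets by word overlap with the question.
    Swap in an embedding model later if this stops being good enough."""
    q_words = set(question.lower().split())
    scored = sorted(DOCS, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def ask_ollama(prompt: str, model: str = "llama3") -> str:
    """Call a local Ollama server via its /api/generate endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

question = "How soon do expense reports need to be filed?"
context = "\n".join(retrieve(question))
print(ask_ollama(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```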
Practical notes
- Write down data rules early: what models can see and what must stay private
- Prefer small, well scoped RAG over dumping entire drives
IDE Reality
VS Code and Visual Studio still lead. Paid AI IDEs did not dethrone extensible editors, which suggests many developers prefer AI where they already work, as add-on extensions and commands.
What to do in the editor
- Pick one editor and standardize extensions to keep the stack simple
- Use inline tools for small edits and an external chat for bigger changes
- Treat generated code like any code: tests, reviews, and ownership stay the same
See more charts and details in the full report: Stack Overflow Developer Survey 2025