R. Torrico

AI Is Redefining What It Means to Be a Developer

A few months ago I shipped a feature in a single afternoon that would have taken my team the better part of a sprint. Not a toy — a full data export pipeline with filtering, pagination, background processing, and tests. I closed the laptop and felt unsettled instead of proud. Something had shifted, and I had not yet decided what to think about it.

I have had time to think about it now. The shift is not coming. It already happened. The question is not whether AI changed how we work — it did. The question is whether you have noticed, and whether you are working with that reality or around it. The job title is the same. The job is not.

The biggest shift since open source

I remember when npm made me feel like I was cheating. You could pull in a library that took someone years to build, drop it into a project in thirty seconds, and ship it as if you had written it yourself. It felt almost unfair. That feeling wore off fast, because everyone had the same access, and the industry adapted around it.

That is how every real shift in software has worked. Open source did not make developers obsolete — it made the gatekeeping obsolete. Before it, production-grade libraries required budget, legal sign-off, or internal teams rebuilding things from scratch. After it, that cost went to near zero. The capability was always there. The access was the problem.

Cloud did the same thing to infrastructure. The sysadmin role did not vanish — it reoriented. DevOps and SRE appeared. People who adapted moved up the stack. People who did not spent years defending work that was getting automated under them.

AI tools are doing this to implementation speed. The work of writing boilerplate, wiring known patterns, scaffolding services — a significant portion of it can now be generated. The developers writing those features did not disappear either. But what they do in a day looks different, and pretending otherwise is the sysadmin mistake all over again.

That is not an incremental improvement. That is a different game.

Writing code is no longer the bottleneck

Here is the uncomfortable version of what I noticed that afternoon: the pipeline was not hard. The hard part happened in the two hours before I opened my editor — deciding how it should work, what the failure modes were, what the data shape needed to look like downstream. The implementation took a few hours with AI. The thinking took exactly as long as it always did.

GitHub ran a controlled study on this in 2022. Developers with Copilot completed the same task 55% faster than those working without it. That number is striking but slightly misleading if you stop there. Research tracking real developer behavior has consistently found that actually writing code is already a minority of the workday — most time goes to understanding code, communication, and decisions. If that was already true, what AI accelerated was a fraction of the total work.

The bottleneck did not disappear. It moved upstream. It lives now in the decisions you make before the first line, and in your ability to evaluate what comes out the other end.

AI did not speed up the work. It made visible which part of the work was always the hard part.

What a senior developer actually does now

I have been paying attention to what I actually do in a day, and it looks different from what I would have described two years ago.

I write less code directly. I spend more time writing descriptions of what I want — precise, bounded, explicit enough that the output is useful and not just plausible. I spend more time reading code I did not write and asking whether it is correct, not just whether it runs. I spend more time earlier in the process, thinking through constraints before anything exists to review.

The judgment moved in two directions at once. Upstream: what should we build and how should it fit together? Downstream: is this output right, and is it safe to ship? The implementation layer in the middle — which used to be where most of the visible work happened — is thinner now.

The shift I keep coming back to is this: the job used to be mostly “how do I write this.” Now it is mostly “is this the right thing to write, and did the AI get it right.”

That is a bigger change than it sounds. “How do I write this” has a correct answer you can look up. “Did the AI get it right” requires you to hold the whole system in your head, know the edge cases, understand what the code is supposed to do well enough to catch something that looks right but is not. You cannot Google that. It requires the kind of judgment that only comes from having built enough things to see the failure modes before they happen.
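To make "looks right but is not" concrete, here is a minimal, hypothetical sketch in TypeScript (the names and the nullable field are invented for illustration, not taken from the pipeline I shipped). The first version compiles, runs, and passes a happy-path test; it still silently drops rows that only domain knowledge would tell you to keep.

```typescript
// Hypothetical example: filtering export rows by a cutoff date.
// In this invented legacy data, createdAt is null for rows imported
// before timestamps existed, and the business rule is to include them.
interface ExportRow {
  id: string;
  createdAt: Date | null;
}

// Plausible generated version: type-checks thanks to the non-null
// assertion, and passes a test where every row has a timestamp.
// At runtime a null createdAt coerces to 0 in the comparison, so
// those rows silently vanish from the export.
function filterAfterNaive(rows: ExportRow[], cutoff: Date): ExportRow[] {
  return rows.filter((r) => r.createdAt! > cutoff);
}

// The version you only write if you know what null means here.
function filterAfter(rows: ExportRow[], cutoff: Date): ExportRow[] {
  return rows.filter((r) => r.createdAt === null || r.createdAt > cutoff);
}
```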

When I try to compress it into a list, it looks like this:

  1. Writing precise problem descriptions before generating anything
  2. Evaluating output for correctness, edge cases, and security — not just whether it runs
  3. Setting architecture constraints that keep AI output coherent across the codebase
  4. Knowing when to override, rewrite, or discard what was generated, and being able to explain why
  5. Documenting decisions so the next context — human or AI — understands why things are the way they are

A senior developer’s most valuable output is no longer a pull request. It is a decision.

One developer can now outship a team

Last year a developer I know shipped a SaaS product alone in about six weeks. Authentication, billing, a dashboard, a background job system, an API. I asked him how. He said the honest answer was that he stopped writing things AI could write and spent the time on things it could not. Two years earlier, that project would have needed a team and a runway. He did it on evenings and weekends.

He was not exceptional. He just had full context — no handoff cost, no alignment meetings, no waiting on someone else’s PR. GitHub’s 2023 data found that 46% of code in Copilot-enabled files is completed by AI, and 92% of developers are already using these tools at work or personally. The solo developer outshipping a team is no longer a story about a prodigy. It is increasingly just what focused work looks like now.

But there are real ceilings and I do not want to paper over them.

Legacy codebases are harder. AI degrades without context, and old systems are full of undocumented assumptions baked into data shapes, implicit conventions held in nobody’s head in particular, and decades of decisions that made sense at the time. The older and messier the code, the less useful AI becomes. I have seen this personally — drop an AI into a ten-year-old Rails monolith and the output is immediately worse than greenfield.

Ambiguous requirements are worse, not better. AI does not resolve ambiguity — it fills it with confident-sounding guesses. Garbage in, garbage out, but faster and at greater volume.

Security is the sharpest limit. A 2022 study by researchers at NYU found that roughly 40% of the Copilot-generated programs in their security-relevant scenarios contained vulnerabilities. The tool does not know it is wrong. You have to catch it. That requires judgment the tool cannot supply.
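To illustrate the category, here is a hedged, hypothetical sketch in TypeScript (the Db interface is a placeholder, not a specific library): the most familiar version of this kind of flaw is a completion that interpolates user input straight into a query.

```typescript
// Hypothetical database interface, standing in for whatever client you use.
interface Db {
  query(sql: string, params?: unknown[]): Promise<unknown[]>;
}

// The kind of completion that "works": the query runs, tests on friendly
// input pass, and the injection only shows up when someone hostile
// controls the email string.
async function findUserUnsafe(db: Db, email: string) {
  return db.query(`SELECT * FROM users WHERE email = '${email}'`);
}

// The fix is mechanical: parameterize. The hard part is noticing that the
// generated version needed it.
async function findUser(db: Db, email: string) {
  return db.query("SELECT * FROM users WHERE email = $1", [email]);
}
```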

Your codebase is a conversation with an AI agent

Think about what a new hire does on their first day. They open the repo, read whatever is there, look at how the existing code is structured, and try to infer the rules from the evidence. What they cannot figure out they ask about. They are slow at first because the context is missing, and they get better as they accumulate it.

An AI agent does the same thing, except it cannot ask questions. It reads your file structure, your naming, your existing code, your documentation — or the absence of those things — and infers what to do from whatever it finds. There is no follow-up. No “hey, what’s the convention for error handling here?” It works with what it has.

This means a messy project produces messy AI output. Not because the model is bad. Because the input is bad. Vague naming generates vague code. Missing conventions generate inconsistent code. No documentation of architectural intent generates code that is locally plausible and globally wrong.

Files like AGENTS.md and Cursor rules exist because people figured this out the hard way. They describe the conventions, the constraints, the patterns — and they serve two audiences simultaneously: the human reading them to understand the codebase, and the AI using them as context for generation. Writing for both at once is a new skill. Most codebases were not designed for it.
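What goes in such a file matters less than that it states the rules at all. A rough, invented sketch (the specific conventions below are illustrative, not a standard format):

```markdown
# AGENTS.md (hypothetical example)

- Language: TypeScript, strict mode. No `any` in new code.
- Errors: throw the typed errors defined in src/errors.ts; never return error codes.
- Routes live in src/routes/, one file per resource, named <resource>.routes.ts.
- Database access goes through the repository layer in src/db/; no raw SQL in handlers.
- Every new route gets a test under tests/ mirroring its path.
- Architectural intent: this service is read-heavy; prefer caching over widening queries.
```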

The most practical thing you can do today: open one of your projects and ask whether an AI dropped into it cold would understand the rules. If the answer is no, the new developer who joined last month probably does not either.

Your codebase is documentation. It always was. Now it is also instructions.

The uncomfortable part

I want to say something I have been thinking about for a while and that feels uncomfortable to say plainly.

I spent years proud of code that, in retrospect, was mostly assembly of known parts. Authentication flows. CRUD endpoints. Data transformation pipelines. Background jobs. Pagination logic. I wrote all of it carefully, and most of it followed patterns I had written before, or seen in other codebases, or could have described in a paragraph. It was not uncreative — but it was not where the actual thinking happened.

A Stripe survey of over 11,000 developers found that roughly 70% of development time goes to maintenance, debugging, and technical debt rather than building new things. That is not laziness or bad process. That is just what the work is, most of the time.

In a 2022 survey, 87% of developers said AI tools help them preserve mental effort on repetitive tasks. Not eliminate the tasks — preserve mental effort on them. Developers themselves already know which parts of the job are mechanical. The AI just made the ratio impossible to ignore.

The uncomfortable implication is that a significant portion of what the industry paid for — and what many developers built their careers on — was the implementation layer. The cost of doing business. That layer is getting cheaper.

Most of the code most of us wrote was assembly of known parts. AI just does that faster now.

The part that always mattered — the decisions, the design, the debugging of things that should not be possible — did not change. There is just more room for it now, which is either exciting or threatening depending on how honest you are willing to be about where you spent most of your time.

The skills that actually matter now

The skills that made you good at this job are not going away. They are just finally the whole job.

System design was always the senior skill, but it used to share the stage with implementation. If you could write complex code fast, you were valuable even if your architectural instincts were ordinary. That trade-off is gone. AI can fill in the pieces. It cannot decide where they go. Knowing how a system should fit together — where the boundaries are, what the failure modes will be, which trade-offs are actually trade-offs — is now the primary thing.

Code review changed too. It was always important. Now it is the core of the job. Evaluating output you did not write, understanding it well enough to catch what is subtly wrong rather than just what fails a test — that is the main activity. Most developers practiced this as a side task. It is no longer a side task.

Then there is something I do not see talked about enough: problem articulation. Describing what you want with enough precision that AI produces something useful is genuinely hard and takes practice. It is closer to writing a spec than to writing code. Developers who treat it as obvious are producing mediocre output and blaming the model.
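For a sense of the difference, here is an invented before-and-after (the feature and constraints are hypothetical):

```
Vague:   "Add CSV export to the reports page."

Bounded: "Add CSV export for the currently filtered report view. Reuse the
         existing report query rather than refetching unfiltered data.
         Stream rows instead of buffering; exports can exceed 100k rows.
         Dates in ISO 8601, UTC. Exclude soft-deleted records. File name:
         report-<YYYY-MM-DD>.csv. No new dependencies."
```

The second one is a spec in miniature, which is the point.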

What is losing value is the counterpart to all this: memorizing syntax, writing boilerplate from scratch, configuring standard tooling by hand. These things filled a lot of hours. They were never the point. The point was always the judgment underneath them, and now there is nothing left to hide it.

Where to go from here

The shift already happened. You are probably already doing parts of this without naming it. The question is whether you do it deliberately.

The most useful thing I did was pick one real project — not a tutorial, something with actual stakes — and use AI on it every day for a month. Not to generate everything. To figure out where it helped and where it made things worse, and to build a sense of when to trust the output and when to throw it away. That calibration does not come from reading about AI. It comes from being wrong about it enough times on things that mattered.

The second thing is harder to practice but more important: get precise about what you want before you ask for it. Most bad AI output is not the model’s fault. It is the prompt’s fault — vague, underspecified, missing the constraints that would have made the output useful. Treat writing a problem description like writing a spec. The discipline is the same.

Beyond that: read more code than you write. Review deliberately — AI output, open source, your own old code. The skill of reading critically is what separates developers who use AI well from developers who just trust it. Study system design explicitly. The architectural judgment that used to be optional for mid-level developers is now the thing. And document your projects as if an AI will work in them — because one will.

None of this requires a particular tool. The tools change every six months. The underlying shift does not.

The developer who shipped in an afternoon did not get lucky. They just noticed what changed.


References

Kalliamvakou, E. (2022). “Research: quantifying GitHub Copilot’s impact on developer productivity and happiness.” GitHub Blog, September 2022. https://github.blog/news-insights/research/research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/

Meyer, A. N., Barton, L. E., Murphy, G. C., Zimmermann, T., & Fritz, T. (2017). “The Work Life of Developers: Activities, Switches and Perceived Productivity.” IEEE Transactions on Software Engineering, 43(12), 1178–1193. https://doi.org/10.1109/TSE.2017.2656886

Kapser, C. J., & Godfrey, M. W. (2008). “‘Cloning Considered Harmful’ Considered Harmful: Patterns of Cloning in Software.” Empirical Software Engineering, 13(6), 645–692. https://doi.org/10.1007/s10664-007-9064-x

Stripe. (2018). “The Developer Coefficient.” Harris Poll survey of ~11,000 developers and business leaders. (Original page no longer resolves; findings widely reported in 2018 tech press.)

GitHub. (2023). “Octoverse: The state of open source and AI.” GitHub, Inc. https://github.blog/news-insights/octoverse/octoverse-2023/

Pearce, H., Ahmad, B., Tan, B., Dolan-Gavitt, B., & Karri, R. (2022). “Asleep at the Keyboard? Assessing the Security of GitHub Copilot’s Code Contributions.” IEEE Symposium on Security and Privacy (S&P 2022). https://doi.org/10.1109/SP46214.2022.9833571