Over a decade ago, I was working as a web developer. Back when jQuery was the bee's knees, before JavaScript had concepts like let or const, back when things like AngularJS, Handlebars, and EmberJS were just picking up steam. Back when MVC was a big thing.
Yes, back in this time. Do you remember? Around this time a little thing called TypeScript was just entering the scene. As a JavaScript expert, I took one look at it and scoffed. "Why would I need that?" I said to myself. "I know exactly what it's doing behind the scenes. Why would I ever want to write JavaScript like that? It takes the fun out of it." No self-respecting JavaScript engineer would ever use TypeScript, right? We can track our own types, thank you very much!
Fast forward to today and you're hard-pressed to find a web company that doesn't use TypeScript. Ten years ago, I wouldn't have guessed that TypeScript would have caught on like it did, but here we are, and I was wrong.
We have another change going on right now. This one is much bigger and much more impactful than a shift in the JavaScript ecosystem. But there are still many people who aren't noticing or, if they are noticing, they're scoffing at it like I did at TypeScript.
What change is going on right now?
You may be sick of hearing about this, but the change is AI.
Specifically AI coding tools.
I've spent a lot of time recently working with Cursor and, to a lesser extent, Claude Code in an effort to help my team adopt these tools. Having spent so much time with them, it's clear to me that they are part of the future. The people who use them will be more efficient. The people who don't? They'll either need to adopt them or risk becoming obsolete.
That may seem like a bold claim to some, but let's look at what these tools are capable of to get a better idea.
First, the obvious question: which coding tools? I'm referring to IDEs and AI agents that produce code or help with the coding process. Claude Code, Cursor, Windsurf, and GitHub Copilot are the tools I have in mind specifically, though GitHub Copilot leaves a lot to be desired once you've used the others.
Next, when, how, and why do we use these tools? This is where the true benefit shows up.
On my current team, we've seen benefit in using these tools in every part of the SDLC. And I mean every part. From ideation and design to tech specs and Jira issues to coding and testing and integrating. We've used AI at every step in the process. Here's what we've learned.
Context is everything
The more relevant information you can give your AI about the task at hand, and the more clearly you express your intent, the better it will be at doing what you want. If you don't get what you want from your AI, the first thing to do isn't to blame the AI; it's to look at your prompt and the context you've provided. Chances are, changing your prompt or adding more context will improve the results.

There is a lot to say about context and prompting, but I'll keep it simple in this post. Here's how I think about managing context. I treat each chat as a new employee in the org. They're capable but lack the context of what we do and why we do it. They don't know the 1, 2, or even 10 or 20+ years of decisions that have been enshrined in our codebase. So I need to provide them with the relevant parts and the necessary information, and clearly explain the desired outcome. When I do that, the AI does a pretty fantastic job at generating most of the code I need.
Here's what that looks like for me. If I'm starting a new task, I'll instruct the AI to read the ticket (this is accomplished via the Atlassian MCP). I'll then point it at any designs or documentation related to this work, and have it examine the files in the area where we'll be working. Usually by this point, it's already starting to suggest code edits. If the story is simple, I may jump into those. But if the story is more complex, I'll usually ask it to "think deeply" about the information it has and propose a way forward. I may even instruct it to pressure-test that plan by trying to disprove it, then adjust to account for any gaps it finds.
Once I've built that context, I jump into coding. I may have it create a todo list of things to accomplish for the task, then have it execute on those items. When I approach it this way, I usually get at least 80% of the way there very quickly, even in an existing codebase. The times I haven't done this, it usually goes a little off the rails and produces things that might be fine in a generic app but don't work or fit in mine. In my experience, spending the time to build this context is definitely worth it.
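That kickoff routine is repeatable enough to capture in a template. Here's a minimal sketch of a prompt builder that assembles the steps above into one message; the issue key, design links, and file paths are all hypothetical placeholders, and you'd adapt the wording to your own tracker and repo layout.

```python
# Sketch of a reusable "kickoff prompt" builder for starting a task.
# Every concrete value below (issue key, URLs, file paths) is made up.

KICKOFF_TEMPLATE = """\
Read Jira issue {issue_key} via the Atlassian MCP and summarize the goal.
Review these designs and docs before proposing anything:
{references}
Then examine these files, which cover the area we'll be changing:
{files}
Think deeply about what you've gathered and propose a plan.
Before we code, try to disprove your own plan and adjust for any gaps.
"""

def build_kickoff_prompt(issue_key, references, files):
    """Assemble the context-building prompt for a new task."""
    return KICKOFF_TEMPLATE.format(
        issue_key=issue_key,
        references="\n".join(f"- {r}" for r in references),
        files="\n".join(f"- {f}" for f in files),
    )

prompt = build_kickoff_prompt(
    "PROJ-123",
    ["https://figma.example.com/file/abc (checkout redesign)"],
    ["src/checkout/cart.ts", "src/checkout/api.ts"],
)
print(prompt)
```

The point isn't the code itself; it's that the same context-building ritual happens every time, so it's worth writing down once and reusing.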
Context Storage
Context is an essential part of working with AI tools. It's even better if you can store the important parts of it in persistent places like Jira issues, Confluence documents, or another source you can point your agents to so they can quickly get up to speed. This can really upgrade your workflows and your usage of AI.
In the last section, I shared how I build the context the AI needs to accomplish a task. What I didn't share was all the work I do with AI before we even get to that point. That's right, we're using Sonnet 4.1 and GPT-5 to help us get work defined and ready. We use AI in every stage of the process that we reasonably can.
So, what does that look like? Let's assume UX has provided a design and we know at a high level what we'll build, but we haven't figured out the details yet (I'm simplifying here, since AI can be used in both of those activities as well). At this point, I'll share the Jira issue with the AI and get it up to speed on what's captured there. I'll then send it the Figma designs and have it examine those. If possible, I'll also have it look generally at the code in the areas where it'll be working.
Once that context is established, I'll have it start working on technical design documents. I'm talking a design spec, API spec, ERD diagrams, sequence diagrams, component diagrams, and anything else that might be useful for us to understand and implement this feature. I'll have it create these documents as markdown files in the repository so it can refer to them later on in the process.
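Keeping those documents in the repository can itself be scripted. Below is a small sketch that scaffolds stub files for the artifacts listed above; the `docs/design/<feature>/` layout and the file names are assumptions, not a standard, so rename them to match your own conventions.

```python
from pathlib import Path

# Hypothetical layout: one docs/design/<feature>/ folder per feature,
# with a stub file per artifact for the AI to fill in and refer back to.
ARTIFACTS = {
    "design-spec.md": "# Design Spec",
    "api-spec.md": "# API Spec",
    "erd.md": "# ERD",
    "sequence-diagrams.md": "# Sequence Diagrams",
    "component-diagram.md": "# Component Diagram",
}

def scaffold_design_docs(repo_root, feature):
    """Create empty design-doc stubs; returns the paths that now exist."""
    folder = Path(repo_root) / "docs" / "design" / feature
    folder.mkdir(parents=True, exist_ok=True)
    paths = []
    for name, heading in ARTIFACTS.items():
        path = folder / name
        if not path.exists():
            # Mark the stub so reviewers know it started life AI-drafted.
            path.write_text(f"{heading}\n\n<!-- AI-drafted; human-reviewed -->\n")
        paths.append(path)
    return paths
```

Because the stubs live in the repo, later chats can be pointed straight at them instead of rebuilding the context from scratch.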
Once those documents are created, my work begins. The AI will do a great job getting 80% of the way there, but it'll either make assumptions, miss portions of the requirements, or make mistakes. So my job is now to review these documents for accuracy and completeness. If there are any problems, I prompt the AI to make the necessary adjustments, so that the correction lives in its context and not just in the document. I may ask it why it included things I didn't think of, or have it remove things that aren't necessary for the first iteration of the feature.
Once the documents are to my liking, the work shifts back to the AI. At this point, I'll give the AI specific instructions to break this design down into frontend and backend designs. I'll review those for accuracy as well. Once those are good to go, I give the AI specific instructions on how to log this work in Jira. It'll create the stories, fill out the descriptions, write acceptance criteria, and provide links to all the relevant documentation. And you know what? It does a fantastic job at it. The stories it writes are way better than just about any story I've ever seen in over a decade of software engineering.
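Even great AI-written stories deserve a mechanical sanity check before they land in the backlog. Here's a tiny, hedged sketch of one: the required sections are a made-up checklist, not my team's actual template, so swap in whatever your stories must contain.

```python
# Hypothetical checklist for AI-drafted Jira stories; adjust the
# required sections to match your own story template.
REQUIRED_SECTIONS = ["Description", "Acceptance Criteria", "Links"]

def validate_story(story_text):
    """Return any required sections missing from a drafted story."""
    return [s for s in REQUIRED_SECTIONS if s not in story_text]

draft = "Description: ...\nAcceptance Criteria: ...\nLinks: ..."
missing = validate_story(draft)  # [] means the draft is structurally complete
```

A check like this won't judge quality, but it catches the most common failure mode of generated stories: a section silently dropped.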
With those kinds of stories, I can then run the process I described in the previous section with even greater success and speed.
It's Iterative
We're seeing a rise in one-shot prompting (which is exciting!), but from what I've seen that's still the exception, especially with existing codebases. I've noticed that AI seems to do better with easily digestible steps rather than the entire process all at once. As I work with AI, I think of the next task to accomplish or the next problem to solve along the way. While I do provide the overall goal, the focus is usually on the next step, then the next, then the next, until the job is done. This generally results in better output from the AI.
AI as a Collaborator, Not a Replacement
One of the biggest mindset shifts when working with AI tools is to stop thinking of them as vending machines and start thinking of them as collaborators. They’re not here to replace you—they’re here to work with you.
The truth is, these tools can do a shocking amount of the actual “hands on keyboard” work. But if you unplug, zone out, and let them run wild, the results won’t be what you want. You’re still the engineer in the room. You understand the business, the tradeoffs, and the long tail of decisions baked into your codebase. The AI doesn’t.
Treat it like a junior developer who happens to work at superhuman speed. The better you guide, review, and course-correct, the stronger the output. And, much like a junior dev, over time you’ll learn when to trust it and when to double-check.
Expect Hallucinations
AI will hallucinate. It will make up APIs that don’t exist, reference libraries that aren’t in your stack, or confidently explain solutions that don’t actually work. And that’s okay.
When this happens, don’t fight it or get frustrated. Just reset. Start over, add clarity, refine your prompts. Think of it like a whiteboard session that went sideways: erase, reframe, and keep moving forward. The speed these tools work at makes failure cheap—much cheaper than burning hours writing the wrong code yourself.
Precision Matters
If you want the AI to take as much off your plate as possible, you need to set it up for success. Vagueness breeds weak results. Precision breeds strong ones.
Be specific. Provide the constraints, the desired architecture, the file paths, the libraries you expect, and the coding patterns you want to follow. When you do, you’ll often be shocked at how close the first draft is to what you actually wanted.
Think of it this way: every missing detail is a decision the AI will happily make for you—and odds are, it won’t be the decision you would have made.
Reuse Your Best Prompts
Some of the most underrated productivity gains come from not reinventing the wheel. If you find a prompt that consistently generates great test cases, or a way of asking for API docs that always produces what you need, save it.
Treat prompts like code snippets or utility functions. Build a library of them. Standardize them for your team. Over time, you’ll find that this library of prompts becomes almost as valuable as your actual codebase.
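Treating prompts like code can be as simple as a folder of markdown files plus a couple of helpers. This is a sketch under those assumptions; the folder layout and prompt names are illustrative, not a real convention.

```python
from pathlib import Path

def save_prompt(folder, name, text):
    """Add or update a named prompt in the shared library folder."""
    folder = Path(folder)
    folder.mkdir(parents=True, exist_ok=True)
    (folder / f"{name}.md").write_text(text)

def load_prompt_library(folder):
    """Load every .md prompt in the folder into a name -> text mapping."""
    return {p.stem: p.read_text() for p in sorted(Path(folder).glob("*.md"))}

# Hypothetical usage: a team-shared prompt, versioned alongside the code.
save_prompt("prompts", "gen-tests",
            "Write unit tests for {file}, covering edge cases and failures.")
library = load_prompt_library("prompts")
```

Storing them as plain files means they ride along in version control, get reviewed in PRs, and evolve with the codebase, exactly like any other team standard.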
The Human Edge
There’s one final point that often gets overlooked: your judgment is irreplaceable. AI can generate endless possibilities, but it can’t tell which solution best fits your team’s priorities, your company’s goals, or your user’s needs. That’s your job.
The engineers who thrive in this new world won’t be the ones who fight AI or ignore it. They’ll be the ones who embrace it as a partner while sharpening their ability to make good decisions, give clear direction, and provide strong guardrails.
Pitfalls to Avoid
As exciting as these tools are, there are traps that can slow you down if you’re not careful.
The first is overtrusting the outputs. Just because the AI gives you something that looks polished doesn’t mean it’s correct. Always run it, test it, and review it with a critical eye. Think of it as reviewing a PR from a teammate you don’t fully trust yet.
The second is underdocumenting AI-assisted work. If you let the AI generate big chunks of code or specs without capturing the why behind decisions, you’ll end up with a knowledge gap later. Make sure the reasoning, tradeoffs, and context live somewhere your team can find them.
The third is letting your own skills atrophy. These tools are incredible, but they can’t replace your understanding of systems, architecture, or problem-solving. If you stop engaging and let the AI make all the calls, your engineering instincts will dull. Stay sharp.
Where This Is Heading
We’re only at the start of what these tools will become. Agent workflows—where the AI can plan, execute, and self-correct across multiple steps—are already making their way into IDEs. Soon, you won’t just be asking the AI for a function or test—you’ll be asking it to own an entire ticket from spec to PR.
On a team level, I think we’ll see the rise of company-wide prompt standards. Just like we have coding standards today, we’ll have libraries of prompts and context setups that let anyone on the team get consistent, high-quality results from AI.
And the IDE itself? It’s going to feel less like an editor and more like a cockpit. You’ll be steering, guiding, and making judgment calls, while the AI does the heavy lifting behind the scenes.
The shift we’re in right now is massive. Bigger than TypeScript. Bigger than any framework war. The people who learn to work with AI as a partner—not a crutch—are the ones who are going to define the next decade of software engineering.
Coming Full Circle
When I look back at my younger self scoffing at TypeScript, I smile a little. I was confident I didn’t need it, that it was just noise, that “real” engineers didn’t need that kind of crutch. I was wrong then, and a lot of people are wrong now about AI.
The pattern is the same: a new tool shows up, it feels foreign, maybe even unnecessary. But slowly, it changes the way we build. Ten years later, it’s hard to imagine working without it.
AI is that shift—except this time, it’s bigger. Way bigger. If TypeScript was a new pair of glasses, AI is night vision goggles.
The question is, do you want to be the engineer who embraced it early and leveled up, or the one still scoffing on the sidelines while the industry moves on?
Because this isn’t just another framework or library. This is the next era of software engineering. And it’s already here.