Building AI-native software has changed how I work. It’s less about what I can fully delegate to the AI tools I work with and what I want to keep for myself. It’s about finding the best way to work with AI tools at different stages of the development process. And about figuring out how to get the best work out of myself.
I remain a huge fan of the “diamond-shaped” creative process lens: understand the problem deeply through primary source material, brainstorm solutions, narrow down options, then polish the final product. Happily, I feel like I have learned to work with AI tools at every stage. But how I work with AI to get the best results — from them and from me — is evolving into a new personal playbook that works.
Seeking to Understand: The Sponge
At the start of any thought process, I let the AI act as a sponge. I just ask it to soak up all the context, all the research, and all the ideas. Then I ask it to summarize the situation. Where do we stand?
This is the first point on the diamond — the narrowest place at the start.
Say I need to review a component in the codebase that I haven’t looked at in a while. Instead of diving straight into making changes, I’ll ask the AI (3.5 Sonnet via Cursor, Windsurf, Roo Code, or Claude Web) to explain the component's full specification and control flow.
This review gives me a quick and accurate overview of the situation, and it also “teaches” the AI something about the context we’ll be working on in this session. Both are important. Both the AI and I need to get up to speed.
I've found this move particularly valuable when starting on a customer-sourced idea or jumping back into code I wrote months (which feels like decades) ago. Instead of trying to reconstruct the logic in my head, I can have a conversation with the AI about what the code does and why certain decisions were made, or the details and context of the customer issue or request.
Do not pass Go, do not collect $200 until the AI and I both have a clear understanding of what needs to be done, and why.
Brainstorm Solutions: The Vibe and Jam
Once we both understand the situation, I’m a huge fan of the traditional (weeks-old) "vibe and jam" routine with AI. This is just a form of rapid prototyping through natural conversation. With AI tools like those I mentioned, I can use plain language to talk about different things we might do. The AI can give me the code blocks for each idea I give it, but Claude can also build a prototype for me right there in the thread. And I can tilt my head and squint… deciding quickly to accept it, iterate on it, or toss it.
This is the widest part of the diamond, where we’re exploring all the possibilities for how we might approach the problem.
Just last week, I did a session like this with the Day.ai team on custom property definitions. We vibed and jammed our way to a working implementation, then slept on it. Then we threw it all out the next day.
That might sound inefficient, but it's actually the opposite — we learned more in that quick, rapid prototyping session than we would have in hours of theoretical discussion. Or in manual prototyping that might have taken weeks to complete in the past. The final version that we can live with for years only took a couple hours, given how much we’d learned with the brainstormed (fully vibe-d and jamm-y) prototype.
Narrow Down Options: The Car Wash
After the creative phase, you often end up with code that works and reflects the approach you want to take, but that needs to be cleaned up — standardized and brought up to the level of the rest of your code.
That's when I use what I call "the car wash" — when you have some clear, repetitive cleanup tasks that need to be applied consistently across the new code. Like running a dirty car through an automated car wash. You need to give it a good scrub.
The car wash works great for tasks that, like cleaning a car, are straightforward: the standards are known and clear, and the work mostly amounts to a lot of elbow grease and repetition.
For this stage of the process, I keep a collection of "notepads" — markdown files with standard instructions for some of my common cleanup tasks. Cursor has recently done a nice job of supporting these natively.
For example, we recently identified a pattern where certain component styles were causing unnecessary re-renders. Instead of manually finding and fixing each instance, I created a notepad with clear instructions for how to correct the pattern we saw. Now I can run any component through that "car wash" and get consistently clean results.
Car washes are just simple, mechanical updates that would be tedious to do manually, but that AI handles beautifully, reliably, and fast.
The Final Polish: Gilding the Lily
The trickiest move is what I call "gilding the lily" — those final touches that take something from good to great. This is the part of the process that can be the biggest hurdle. You can spend days just trying to cover that last, hardest mile.
Just last night, I was working on a last-mile project like this. I wanted an interaction to work a specific way, but it just wasn’t coming out right on the screen. In these situations, I've learned that AI will do one of two things: either solve the problem elegantly or send me down a deep rabbit hole.
There's rarely a middle ground here. I've wasted entire days chasing down that last bit of polish. So I've learned to recognize quickly which path I'm on. If the AI starts to struggle at this stage, it's often better to either do it myself or do without. You have to know when to cut bait and move on.
Building Better Together
So in different ways, I work with various AI tools as partners and collaborators at each development stage. And I’ve learned some lessons about how I can be the best collaborator to the AI to get the best end result. It comes down to knowing what I can control and what I can’t.
What’s under my control and I’m getting better at: how well I express my ideas, how specific my requirements are, how clearly I communicate what I actually want. These are all things I can improve on, and that will have a material impact on the quality of my results.
Other things will always be outside my control, like how much the AI knows about a certain domain area, or how well it’s performing at that particular time. I've learned to focus on what I can control, and be flexible about what I can't.
The point isn't to replace my (or my team’s) good human judgment, but to augment and accelerate it in ways that make us better at what we do. Sometimes that means letting the AI take the lead, and sometimes it means thanking it for its help and finishing the job on our own.
That's the real art of working with AI — when you realize it’s not about learning to think more like a computer. It’s about learning to be a more thoughtful human at work.