I'm a Frontend Engineer Who Learned to Build with AI — Here's What Actually Changed

6 min read
AI · Frontend Development · Software Engineering · AI Tools · Side Projects

I've been writing React and TypeScript for years. Design systems, component libraries, mobile apps with React Native and Expo, the whole thing. I know how to build. That was never the problem.

The problem was always everything around the building. The script for a video idea I had. The backend logic I didn't want to context-switch into. The image I needed but couldn't design. The gap between "I have an idea" and "this thing actually exists."

AI closed that gap for me. But not in the way I expected.


The Mindset Shift Nobody Talks About

When I first started using AI tools seriously, I treated them like a faster Stack Overflow. Ask a question, get an answer, move on. That's fine, but it's also kind of missing the point.

The shift that actually changed how I work is this: I stopped thinking about AI as a tool that answers questions and started thinking about it as a collaborator that helps me build across disciplines I'm not an expert in.

That sounds abstract, so here's what it looks like in practice.

I'm building an app called Leftover Chef. The idea is simple: you tell it what ingredients you have, and it suggests recipes. The frontend is React Native with Expo Router, which is squarely in my lane. But the matching logic? Figuring out how to take a messy, unstructured list of ingredients and map them to recipes in a way that actually makes sense? That's where things get interesting.

I'm not a machine learning engineer. I don't have a background in NLP. But I know enough to describe the problem clearly, and it turns out that's most of the work. I worked through the matching logic with Claude, treating it less like a chatbot and more like a technical partner who happens to know things I don't. The key was being specific about constraints: "I want to prioritize recipes where the user has at least 70% of the ingredients, deprioritize recipes with a missing protein if the user explicitly said they have no meat," and so on.

That's not prompt engineering in the buzzword sense. It's just clear communication. Which is something engineers are supposed to be good at anyway.
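To make the constraints above concrete, here's a minimal sketch of that kind of scoring rule in TypeScript. This is illustrative only, not Leftover Chef's actual code; the `Recipe` shape and `scoreRecipe` function are hypothetical names I'm using for the example.

```typescript
// A recipe with a flat ingredient list; `protein` is the key protein
// ingredient, if any (undefined for recipes without one).
type Recipe = {
  name: string;
  ingredients: string[];
  protein?: string;
};

function scoreRecipe(recipe: Recipe, pantry: string[], noMeat: boolean): number {
  const have = recipe.ingredients.filter((i) => pantry.includes(i)).length;
  const coverage = have / recipe.ingredients.length;

  let score = coverage;
  // Prioritize recipes where the user has at least 70% of the ingredients.
  if (coverage >= 0.7) score += 1;
  // Deprioritize recipes whose protein is missing when the user
  // explicitly said they have no meat.
  if (noMeat && recipe.protein && !pantry.includes(recipe.protein)) score -= 1;
  return score;
}
```

The point isn't this exact formula. It's that once the constraints are stated this clearly, turning them into code is almost mechanical, whether a model writes it or you do.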


What's Actually in My Stack

For anyone who wants the concrete version, here's what I'm using on the coding side and why:

Claude is my main thinking partner for engineering work. Architecture decisions, writing logic I'm not sure about, talking through tradeoffs, reviewing code structure. The reason it works better than just searching docs is that it holds context across a conversation. I can say "okay now make that validation also handle the case where the timestamp is server-synced" and it knows what I'm talking about without me re-explaining the whole feature. That continuity is underrated.

OpenAI's API is what I'm using for the matching logic in Leftover Chef. I experimented with both Claude and GPT-4o for this and landed on OpenAI for that specific use case based on how each one handled structured output formatting. The point is that I'm not loyal to a single model; I'm picking the right tool for the job. That's a different way of thinking than most frontend engineers are used to.

Cursor changed how I write code day to day. The autocomplete isn't the interesting part. The interesting part is being able to highlight a block of code, describe what's wrong with it, and get a targeted fix without breaking context across the file. It's especially useful in a React Native codebase where you're juggling navigation, state, and native modules all at once.

TypeScript + strong typing as a forcing function isn't a tool exactly, but it matters more when you're shipping AI-generated code. If the types are tight, you catch the places where Claude or Cursor went slightly off faster. It's like having a second review pass baked into the build. Engineers who are sloppy with types and leaning on AI generation are quietly accumulating bugs they won't find until production.
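Here's a small sketch of what I mean by types as a second review pass. The names (`MatchResult`, `describe`) are made up for the example, but the pattern is standard TypeScript: a discriminated union with an exhaustiveness check, so if generated code adds a new variant without handling it, the build fails instead of production.

```typescript
// Two possible outcomes of a recipe match, distinguished by `kind`.
type MatchResult =
  | { kind: "full"; recipe: string }
  | { kind: "partial"; recipe: string; missing: string[] };

function describe(result: MatchResult): string {
  switch (result.kind) {
    case "full":
      return `You can make ${result.recipe} right now.`;
    case "partial":
      return `${result.recipe} needs ${result.missing.length} more item(s).`;
    default: {
      // Exhaustiveness check: if an AI-generated change adds a new `kind`
      // to MatchResult without a case here, this assignment stops compiling.
      const unreachable: never = result;
      return unreachable;
    }
  }
}
```

Loose types would let that new variant slip through as a silent `undefined` at runtime. Tight ones turn the model's drift into a compile error you see immediately.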


What Didn't Change (And Why That Matters)

Here's the thing I want to be honest about: AI didn't make me a better engineer in the traditional sense. I'm not writing better React hooks because of Claude. My component architecture isn't sharper because of Midjourney.

What changed is the surface area of what I can ship.

Before, I could build great frontend experiences. After, I can build frontend experiences and produce video content and write the backend logic and design the system prompt for an AI feature and generate the visuals for marketing. The engineering fundamentals are the same. The range is just wider now.

And honestly, that range is becoming a real differentiator. There's a version of a software engineer that's great at one thing and a version that can move fluidly across a product from the data layer to the user's eyes. AI is making the second version much more achievable for people who are willing to put in the reps to learn how to use these tools well.


The Part People Skip

A lot of tutorials about AI-assisted development make it sound like you just describe what you want and magic happens. That's not really how it works, at least not consistently.

The engineers I've seen get the most out of AI tools are the ones who bring real problem-solving skills to the conversation. You still need to know what good looks like. You still need to recognize when the output is wrong. You still need to debug, refine, and take ownership of what gets shipped.

AI is good at getting you to a first draft fast. The gap between a first draft and something you'd actually be proud to ship is still yours to close.

That's not a criticism. It's actually kind of reassuring. The skills that made you a good engineer don't become worthless because Claude exists. They become the thing that lets you use Claude well.


The honest summary is this: I build more, ship faster, and operate across more disciplines than I could before. The job title is the same. The scope of what's possible isn't.

If you're a frontend engineer who's been treating AI as a fancy autocomplete, it might be worth reconsidering what you're actually using it for.