Kirill Zorin

What Actually Happens When You Code with AI

As I continue to experiment with LLM-powered development tools in my pet projects, I'm documenting both the impressive capabilities and the surprising challenges these AI assistants bring to software development. What follows are my unfiltered observations from weeks of integrating these tools into real projects - lessons that could save you time, money, and frustration.

Architecture Matters More Than Ever

Fortunately, LLMs let us skip a lot of boring coding. But as soon as your product becomes more complex, LLM-driven changes introduce more mistakes.

You have to be more precise with prompting and careful with the underlying instructions (like those in .cursorrules). For example, there's a classic development trap (especially in a monolithic architecture with a lot of spaghetti code): one change breaks another part of the app, and fixing that breaks yet another feature. This happens pretty often with LLM-generated code.
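
To illustrate the kind of guardrail I mean: .cursorrules is just a free-form instruction file that Cursor feeds to the model, so the exact wording below is my own hypothetical example, not an official format:

    Only touch files that are directly relevant to the requested change.
    Do not modify public function signatures or API contracts unless the task explicitly asks for it.
    Never "clean up" or refactor unrelated code as a side effect of another task.
    If a change would require edits across more than a few files, stop and explain the plan first.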

If you try to solve it iteratively with LLM tools, you end up sending more and more requests, which also costs money.

So my point here is that proper system design and architecture are a crucial part of building a product, especially if you want to rely on a fancy vibe-coding approach, because they set clear constraints and prevent uncontrolled breaking changes.
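
As a minimal sketch of what I mean by constraints (the module and names below are made up for illustration): if the rest of the app talks to persistence only through one narrow, typed interface, an LLM asked to change storage internals has far less surface area it can accidentally break.

    # storage.py: the only module the rest of the app imports for persistence.
    # Hypothetical example; the point is the narrow, typed surface, not the details.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class User:
        id: int
        email: str

    class UserStore:
        """Single entry point for user persistence; internals can change behind it."""

        def __init__(self) -> None:
            self._users: dict[int, User] = {}

        def get(self, user_id: int) -> Optional[User]:
            return self._users.get(user_id)

        def save(self, user: User) -> None:
            self._users[user.id] = user

As long as callers stick to get and save, the model can rewrite the internals (say, swap the in-memory dict for a database) without a cascade of breakages elsewhere.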

The Snowball Effect Is Real

Yesterday, I ran a simple code refactoring via Cursor in agent mode to reorganize API endpoints. It produced 50+ code changes and ruined everything, even the UI. Each step introduced small bugs or dependent changes, and the LLM kept "improving" them, snowballing with every pass. It was funny to observe. Be careful what you wish for with LLM agents.

One of the most annoying glitches of AI-driven coding is when these agents replace or modify existing code blocks that have nothing to do with the requested changes. Collateral bugging.

Finding the Right Balance

Designing the LLM agents themselves already takes a pretty big effort, so I'm glad I can also delegate almost all of the UI and API development to LLMs; that's simply what it takes to build rich apps. So no worries about developers losing their jobs to AI.

The reality is that these tools require significant oversight. They're powerful accelerators when properly directed, but can quickly spiral into complexity when given too much freedom. As developers, we need to maintain control of architecture and design decisions while leveraging LLMs for implementation details.

The future isn't about replacing developers—it's about augmenting them with new superpowers while demanding even greater attention to system design fundamentals.

Kirill Zorin © 2024