Insights

What Building a 68,000-Line System Taught Me About Steering AI

By Bård Windingstad, Founder of Aioli

I have spent 20 years as a product owner building digital ecosystems — at Retriever, EVO, SNØ, and Friio. Some of those projects went very well. Others less so. Through that experience I learned things that turned out to be decisive when I started building with AI: the importance of data models, master data management, single points of data entry, the don't-repeat-yourself principle, and above all, not building too much too fast.

I took all of that with me when I started. What I did not understand was what I was actually working with.

·

The nailgun that learns

In the beginning, I thought of the AI agent as a tool — like a nailgun for a carpenter. Point it at the problem, pull the trigger, move on. And the analogy partly holds. But there is one critical difference: this nailgun can learn. Traditional tools do not.

That realization changed everything. It meant the tool could get better over time — but only if I taught it what "better" meant.

·

Swept away by speed

The first shock was that the AI agent was trained to drive progress. After every iteration, it suggested a next step. In the beginning, I let myself get swept along by that momentum.

I also underestimated the raw speed of the coding itself. When I had properly outlined and documented what needed to be built, the implementation happened so fast it was almost disorienting. It genuinely took me by storm.

But speed without direction is just expensive chaos.

·

The cost of fixing without learning

Early on, I did not know the tool well enough. When I found a bug, I asked the agent to fix it. It did — quickly. Problem solved, or so I thought.

What I eventually understood was that the bug itself was never the real problem. Bugs will always happen. What matters is that you learn from them. I had to start instructing the agent differently: don't just fix this error — find similar errors across the codebase, and make sure we never make this mistake again.

Before I reached that understanding, I lost many hours and many credits. It cost me real money. But it was through those expensive failures that I developed the FORGE protocol — a structured method to prevent the same mistakes from repeating.

·

From document to living skills

When I started building the flagship, I had heard about skills — persistent instructions that shape how the AI agent works. I knew I needed to establish a way for us to collaborate. But instead of using the skill system, I wrote a document I called "working principles."

It was only quite late in the development process that I discovered the skill creator built into the platform. When I did, I spent considerable time migrating my working principles into proper skills.

That was the moment the nailgun stopped being a static tool. Skills are what make it get smarter over time — and what might eventually allow it to drive more of the work on its own. Not because it guesses, but because it has been taught.

·

Discipline over speed

I have built websites with AI in 15 minutes. I uploaded a well-prepared presentation, and minutes later I had a finished site. That was impressive. I built another one with a demo animation reminiscent of ordering an Uber. Also impressive.

But I have no doubt: if you want to use AI tools properly, you must steer with discipline. Speed must be secondary. You can build a website in a quarter of an hour, and it can be good enough — if the input to the agent is good enough.

When you are building a complex digital ecosystem, however, it is essential to hold back on speed and ensure human-in-the-loop governance. The human must make the important decisions.

·

What I know now

The most important insight is that steering an AI agent is remarkably similar to steering a development team. Many of the same principles apply. The team must understand what friction they are solving for, not how to solve it. You tell them what, not how.

I also spent time understanding the resistance in the market toward agentic AI. It is significant. Many developers feel threatened by an AI that can code a hundred or a thousand times faster than they can. That resistance is real, and it is worth acknowledging.

But my deepest takeaway is this: it is not dangerous to make mistakes, as long as you learn from them. Hold back on speed. Make sure you are the one making every important decision. Make sure you understand the decisions you are making. And make sure the agent does not just fix errors — but builds a method that prevents them from happening again.

That is what Aioli is built on. Not theory. Experience.