The first time I tried — and why I came back
I had tried coding assistants at the beginning of 2025, but after a few trials I decided it was still not the right time. Somehow I could not control the agent well — maybe my approach was wrong or my expectations were too high — but it was slowing me down. I knew how to code efficiently on my own, so I stopped using them. In early 2026 I wanted to give them another chance — the Claude hype was building, and it seemed worth a proper test.
The codebase I used as the test
I was working on a side project: a video processing platform. Quite a lot of code, organised into modules, each responsible for a different part of the system and assembled as a modular monolith. All of it written by hand (no AI assistance) in .NET, on top of the ABP Framework.
I had been writing code without reviewing it for some time, which made it a good real-world test: I wrote every line of it, I knew what was in there, and I remembered what needed fixing. Claude couldn't trick me with bogus findings.
How Claude mapped the codebase
I introduced the codebase to Claude. It did a good job — it created a CLAUDE.md covering build commands, frontend, architecture (DDD layers and key directories), deployable services, key technologies, authentication, module communication, and configuration.
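To make the structure concrete, here is a sketch of what a CLAUDE.md along those lines might look like. The section headings follow what Claude generated for me; the solution name, commands, and entries under each heading are hypothetical placeholders, not the real file:

```markdown
# CLAUDE.md

## Build commands
- `dotnet build VideoPlatform.sln` (hypothetical solution name)
- `dotnet test`

## Architecture
- Modular monolith; each module follows DDD layering
  (Domain, Application, Infrastructure, HttpApi)
- Key directories: `modules/`, `apps/`, `shared/` (illustrative)

## Key technologies
- .NET, ABP Framework

## Module communication
- Modules communicate through application-service interfaces,
  not direct references to other modules' internals

## Configuration
- Per-environment appsettings; secrets kept out of the repo
```

The value of a file like this is less the contents than the fact that the assistant reads it at the start of every session, so the architectural ground rules don't have to be restated each time.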
The review plan
I knew the review would take multiple sessions and I could easily lose context if I wasn't careful, so I asked for a code review plan based on module dependencies. It came back with a dependency-tiered plan: foundation modules with no dependencies first, then single-dependency modules, up to the high-integration tier. Each module had a review checklist covering DDD compliance, security, null safety, naming consistency, error handling, validation, and unused code. Progress was tracked per module.
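The plan it produced can be sketched roughly as follows. The module names here are invented for illustration; the tier structure, the per-module checklist, and the checkbox tracking match what I described above:

```markdown
# Code Review Plan

## Tier 1: foundation modules (no dependencies)
- [x] SharedKernel
- [ ] MediaStorage

## Tier 2: single-dependency modules
- [ ] Transcoding (depends on MediaStorage)

## Tier 3: high-integration modules
- [ ] Workflow (depends on Transcoding, MediaStorage)

## Per-module checklist
- DDD compliance
- Security
- Null safety
- Naming consistency
- Error handling
- Validation
- Unused code
```

Reviewing from the bottom of the dependency graph upward meant that by the time a high-integration module was opened, everything it depended on had already been reviewed.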
It identified the dependencies, prioritised them, and took responsibility for tracking progress — exactly what I needed to go through it one module at a time.
How it actually reviewed the code
As each module review started, Claude produced a findings file. Findings were organised into five severity tiers (Critical, High, Medium, Low, Architecture), each with a unique ID. The report was treated as a living document, updated across sessions rather than produced as a one-shot output.
Then we went through the fixes in order of severity — security-critical first, then working down through the tiers — with Claude implementing and me auditing each change before it was committed. Architectural concerns were the only ones deferred to a later phase. Not every finding was enforced; we decided to accept certain risks when the tradeoff was reasonable. Several findings were resolved by removing the problematic code entirely rather than patching it. Each resolution was tied to a specific commit, making the whole process auditable and the changes verifiable.
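A findings file for one module looked roughly like the sketch below. The IDs, finding titles, and commit hashes are invented for illustration; the five severity tiers, unique IDs, and commit-linked resolution statuses are as described above:

```markdown
# Findings: MediaStorage module

## Critical
- MS-C1: Presigned upload URL never expires. Fixed in commit abc1234.

## High
- MS-H1: Missing null check on upload metadata. Fixed in commit def5678.

## Medium
- MS-M1: Inconsistent naming across storage providers. Accepted risk.

## Low
- MS-L1: Unused legacy import endpoint. Resolved by deleting the code.

## Architecture
- MS-A1: Storage abstraction leaks provider details. Deferred to a later phase.
```

Because each ID maps to a status and, where applicable, a commit, the file doubles as an audit trail: anyone can re-check a resolution by looking at the referenced commit.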
Key takeaways
This time I was able to control it and get the output I wanted. Without AI I would have postponed this review to a much later stage. Since then I have been integrating AI assistance into my work more often. If you have not tried it yet, do — and try it on something you have control over.
What's next
Right now I am working on a project where all the code is written by Claude. I am monitoring the process and learning proper planning, not using agents or skills yet. I am trying to discover the language of the assistant: getting it to do the work the way I would, with proper guidance. I even took down old books to refresh forgotten terminology so I can communicate more precisely. The goal is to clarify the process in words and discover what is possible, and how.
Written by me, edited with Claude.