
AI Coding Agents Turn Engineers into Planners and Overseers

AI assistants are reducing coding time for engineers, moving focus to planning, review, and managing AI output. Learn the implications for software development.

Alex Mercer · 3 min read

Senior Tech Correspondent

AI Agents: Planning vs. Reviewing

Source: Startuphub

AI coding agents are reducing the time engineers spend typing, but the saved minutes are being reallocated to planning, reviewing and supervising the tools.

Context: At the AI Engineer Europe conference, Louis Knight‑Webb—founder of Vibe Kanban and AI Tinkers London—asked a simple question: *What are we even going to do all day?* His talk traced how AI assistants are reshaping the daily workflow of software developers in the UK and beyond.

Key Facts: Knight‑Webb reminded the audience that a traditional engineer's day consists of planning, writing code, reviewing their own work, and reviewing others' code. Historical data showed that writing code dominated that schedule. Early AI helpers such as GitHub Copilot could only suggest single lines, leaving most of the work untouched. Newer tools—Cursor and Claude Code—can now generate whole files or large code blocks without line‑by‑line human prompting.

He outlined two contrasting strategies for working with these agents. A *plan‑heavy* approach invests time up front in detailed specifications, allowing the AI to run longer with minimal interruption and reducing later review effort. A *review‑heavy* approach relies on looser prompts and frequent human feedback, delivering faster initial output but demanding more back‑and‑forth checking. Knight‑Webb suggested the former suits refactoring or migration projects, while the latter fits exploratory feature work.

The rise of long‑running agents means engineers spend less time typing and more time managing AI output. Knight‑Webb questioned whether the time saved on coding truly returns to engineers or simply shifts to new tasks like writing precise prompts, quality‑assuring APIs, and shepherding AI‑generated code through deployment.

What It Means: Engineers are becoming de facto managers of autonomous coding agents. Skills in deep focus, strategic planning and clear task definition are now as valuable as proficiency in a programming language. Human code review remains essential to catch logical errors and security flaws that AI may overlook. Companies will need to redesign workflows to allocate time for AI oversight, quality assurance and continuous integration of AI‑produced artifacts.

The next frontier will be measuring how much of the traditional coding workload AI can absorb before human intervention becomes a bottleneck. Watching the evolution of prompt‑engineering tools and AI‑driven testing frameworks will indicate how quickly the balance tips from manual coding to AI supervision.
