
Onboarding the AI Was the Easy Part

Why your agent’s job description is already obsolete by Tuesday, and what to give it instead.

HBR is right that AI agents need onboarding plans. It is also wrong about when onboarding ends.

In its March 2026 issue, Harvard Business Review published “Create an Onboarding Plan for AI Agents.” The argument is simple and correct: companies are deploying autonomous agents the way they used to install software, and that is why most of them fail. Agents need job descriptions. They need defined responsibilities, authority limits, and a clear answer to the question of when to escalate to a human. “The big challenge in adopting agentic AI,” HBR argues, “isn’t figuring out how to adapt to a new and important technology. It’s primarily about managing work.”

We agree. We also think the analogy is half-built.

What HBR gets right, and where it stops

Treat your agent like a new hire. Write its job description, set its guardrails, define its escalation path. The numbers say organisations that skip this step fail more often than they succeed.

Pertama Partners’ 2026 review of agentic AI projects found that 88% of AI agents never reach production. Among the organisations whose agents do survive to production, only one in five has a mature governance model. MIT Media Lab’s figure is starker still: 95% of generative AI pilots produce no measurable ROI.

The reflexive response to those numbers is “write better plans.” Define the role more carefully. Add more guardrails. Spend longer on week one.

That is where the new-hire analogy starts to mislead. New human hires do not actually onboard from the document. They onboard from daily exposure to live context. The first week is paperwork. Weeks two through six are sitting in meetings, watching decisions get made and reversed, hearing the customer call that changed the roadmap, noticing which deadline slipped and why. By month three, the new hire knows the team’s real situation, not because they read it but because they were in it.

Your AI agent has no week two.

Where the analogy breaks

Most agent onboarding is a one-time document dump: a handbook, a few example tickets, a list of API endpoints, a guardrail policy. The agent reads it once, gets sent into the wild, and from that moment its understanding of the team is a fixed snapshot of the day it was deployed.

Then Tuesday happens. Scope changes in a customer call. A deadline moves. The product team renames the project. The CFO declines next quarter’s hiring plan. Engineering quietly drops a feature.

None of this reaches the agent. The handbook does not update itself. The document the agent was onboarded with is now wrong, and the agent has no idea. So it does what its training made it do: it writes a confident, well-formatted output based on last week’s reality. That output looks fine. It is fine, until someone downstream notices it is working from a stale picture and spends two hours untangling it.

This is the Coordination Tax in a new costume: the hidden cost of keeping everyone aligned when reality moves faster than your artefacts (see The 2-Hour Tax). The agent is no longer just generating noisy output. It is making decisions and taking actions on a snapshot that is three days old. The risk has gone up, not down.

Gartner’s 2026 Hype Cycle for Agentic AI puts it bluntly: “Context is emerging as one of the most critical differentiators for successful agent deployments,” and the firm predicts that by 2030, half of all agent deployment failures will be governance and runtime context failures. Not capability failures. Context failures.

What real onboarding looks like

If you accept that onboarding is continuous for humans, you have to accept it is continuous for agents too. The unit of onboarding is not a document. It is access to a living record of decisions, plans, and drift.

A new human hire becomes useful by reading what the team is currently dealing with — the open decisions, the contested priorities, the action items from yesterday’s meeting that nobody wrote down properly. That information already exists. It is just scattered across Slack threads, Notion pages, transcripts the notetaker captured, and Jira tickets that lag the meeting by a week. The new hire reconstructs it manually.

We have measured this cost on the human side: knowledge workers in our customer base spend four to eight weeks doing onboarding archaeology before they are operating at full capacity. About 95% of decisions never reach the systems of record where an agent could find them.

Your agent is doing the same archaeology. Less successfully, because it cannot ask the team why a decision got reversed.

The fix is not a better job description. It is a single live record the agent can read continuously: what was decided, who owns it, what changed today, what is drifting.

We call this shared context. It sits between the messy reality of meetings and the tidied-up artefacts in Notion or Atlassian, and it updates with every decision the team actually makes. Same Claude. Same Copilot. Same Cursor. Onboarded once, current forever.
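
To make the shape of that record concrete, here is a minimal sketch of what one entry might look like. This is an illustration, not In Parallel’s actual schema: the field names and the staleness heuristic are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class DecisionRecord:
    """One entry in the team's live record (illustrative schema)."""
    decision: str                                     # what was decided
    owner: str                                        # who owns it
    decided_at: datetime                              # when it was decided (tz-aware)
    superseded_by: str | None = None                  # set when a later decision reverses this one
    sources: list[str] = field(default_factory=list)  # the meeting, thread, or ticket it came from

    def is_stale(self, max_age_days: int = 7) -> bool:
        # Crude drift signal: reversed, or untouched for longer than the team's
        # typical decision half-life. A real system would use richer signals.
        age = datetime.now(timezone.utc) - self.decided_at
        return self.superseded_by is not None or age > timedelta(days=max_age_days)
```

The point is not the schema. It is that every field here already exists somewhere in your meetings; the record just makes it machine-readable the day it happens.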

What to stop doing, and what to start

Stop writing onboarding plans for your agents as if they were a one-time event. The plan is the easy part. It is also the part that does not matter past Tuesday.

Start treating context as the actual deliverable. Three concrete moves.

First, make your decisions legible. Most teams hold decisions in people’s heads, in DMs, and in the gap between the meeting and the project tracker. An agent cannot read any of those. A shared context layer captures the decision the moment it is made, not the moment someone finds time to update Jira.

Second, wire the agent to the same record your humans use. If your team’s reality lives in three meetings a day, the agent should be reading from those meetings, not from a quarterly handbook. An MCP-native context layer makes this practical without ripping out the tools you already pay for.
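
As a rough sketch of what “MCP-native” can mean in practice, the reference Python MCP SDK lets you expose that record as a tool any MCP client can call. The tool name and the stubbed store below are hypothetical, not In Parallel’s API.

```python
# Minimal MCP server exposing the live record as a tool (pip install mcp).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("team-context")

def load_current_record(since_days: int) -> list[dict]:
    # Hypothetical stand-in for your own store; in practice this would
    # query the shared context layer, not return a hard-coded list.
    return [{"decision": "Drop feature X", "owner": "eng", "superseded_by": None}]

@mcp.tool()
def current_decisions(since_days: int = 7) -> list[dict]:
    """Return the team's decisions from the last `since_days` days,
    including owners and anything superseded or drifting."""
    return load_current_record(since_days)

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; MCP clients can attach
```

Point Claude, Copilot, or Cursor at a server like this and the agent reads the same record your humans do, on every request rather than on day one.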

Third, measure the agent the way you would measure a new hire: on whether its outputs match the team’s current situation, not on whether they match the playbook from week one.
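
One hedged way to score that, reusing the illustrative DecisionRecord above: treat each output as a set of decisions it relied on, and measure what fraction are still current at review time. The metric below is an assumption for illustration, not a standard benchmark.

```python
def context_freshness(cited: list[DecisionRecord]) -> float:
    # Fraction of the decisions an output relied on that are still current.
    # 1.0 means the agent worked from today's reality; a decaying score means
    # it is still answering from the day-one snapshot.
    if not cited:
        return 1.0  # the output made no claims tied to team decisions
    return sum(1 for d in cited if not d.is_stale()) / len(cited)
```

Tracked over weeks, this looks like a new hire’s ramp curve: it should hold near 1.0 as reality moves, not sag between re-onboardings.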

The HBR piece is the floor, not the ceiling

HBR is right. Companies should write onboarding plans for their AI agents. They should define responsibilities, authority, and escalation paths. They should treat agents like co-workers, not software.

That is the floor. The ceiling is whether the agent has the same week-two-through-six experience your humans do — continuous access to a current record of what the team is actually deciding, building, and changing. Without it, every agent in your company is operating on the day-one snapshot, watching reality drift further away with every meeting that does not reach it.

The onboarding plan is the easy part. The hard part is making sure your agent never has to be onboarded twice.

Book a walkthrough — see how In Parallel makes your AI 10× more relevant →
