The Knowability Problem
A response to Gary Hamel's question about AI and the legible organization.
Earlier this year, Gary Hamel posted a question on LinkedIn that reads like a thought experiment but lands like a diagnosis: is there a possibility that AI will dramatically increase the “knowability” of an organization to everyone working therein? Imagine, he wrote, what would happen if an LLM ingested every email, Slack conversation, project review, planning document, milestone, and meeting summary in your company.
It’s a seductive image. It is also a question hiding a much harder one underneath.
What the data already says about your company
Here is the awkward starting point. In its landmark study of knowledge work, the McKinsey Global Institute found that “interaction workers” — the managers, professionals, and operators who run modern companies — spend 19% of their workweek searching for information and another 14% coordinating internally with colleagues. Add the 28% spent on email, and only about 39% of the average week actually goes to role-specific work.
Put differently: in every five-day workweek, roughly one full day is spent hunting for things that already exist somewhere inside the organization. The information is there. People simply cannot see it.
The picture has not improved with newer tools. Microsoft’s 2024 Work Trend Index found that 60% of a typical knowledge worker’s time now goes to email, chat, and meetings — leaving 40% for the actual creation work that supposedly justifies their job. Sixty-eight percent of those workers report feeling overwhelmed by the pace and volume of work. Seventy-five percent are already using generative AI on their own to cope.
This is the data Hamel’s question runs into. Long before AI, organizations were not knowable to themselves — not because no one had captured the data, but because the data was never shaped into anything readable. Every status update, decision, and dependency vanished the moment the meeting ended. People did not lack access. They lacked structure.
Hamel’s three reasons, rewritten
Hamel has long argued that there are only three reasons one human needs to “manage” another: a lack of competence, a lack of context, or a lack of conscientiousness. He suspects AI will compress all three. He is right about competence — copilots are visibly closing skill gaps. He is right about conscientiousness, in a quieter way: visibility tends to nudge behavior more than supervision ever did.
But the one that matters most, and the one his question quietly circles, is context.
Context is where modern management actually lives. Teams are rarely blocked because someone is incompetent or lazy. They are blocked because they do not know what was decided in the meeting they were not in, which deliverable shifted yesterday, who else needs the same engineer next week, or whether the strategy upstairs still wants the thing they are currently building. Context is not a soft skill. It is the coordination signal that tells everyone what reality looks like right now.
When that signal is missing, what fills the gap is more management. More status meetings. More update threads. More executive offsites trying to reconstruct what should have been visible the entire time. We call this the Coordination Tax, and it is the single largest hidden line item on most companies’ P&Ls. It is also exactly the cost Hamel intuits AI could collapse.
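To get a feel for the scale, here is a back-of-envelope sketch that applies the McKinsey time shares cited above to a hypothetical company. The headcount and cost figures are invented placeholders, not data; only the two time-share constants come from the study.

```python
# Back-of-envelope estimate of the Coordination Tax, using the McKinsey
# time-allocation figures cited above. Headcount and cost are hypothetical
# placeholders; substitute your own numbers.
SEARCH_SHARE = 0.19        # workweek share spent searching for information
COORDINATION_SHARE = 0.14  # workweek share spent coordinating internally

headcount = 500            # hypothetical: interaction workers on payroll
avg_loaded_cost = 150_000  # hypothetical: fully loaded annual cost, USD

tax_share = SEARCH_SHARE + COORDINATION_SHARE         # 0.33 of every week
annual_tax = headcount * avg_loaded_cost * tax_share  # payroll spent hunting and aligning

print(f"{tax_share:.0%} of every workweek, roughly ${annual_tax:,.0f} per year")
# 33% of every workweek, roughly $24,750,000 per year
```

Even at these made-up numbers, a third of payroll is going to finding and aligning rather than doing, which is why the line item stays hidden: it is smeared across everyone’s calendar instead of sitting in a budget.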
He is right that it could. But only if it has something to read.
Why “ingest everything” doesn’t make a company knowable
This is the part of his question that needs sharpening. You can hand an LLM every Slack channel, every Gmail thread, and every meeting transcript in the company, and the result will not be a knowable organization. It will be a very expensive search engine over conversational exhaust.
Conversations are not decisions. Threads are not plans. A meeting summary is not a status update. The reason organizations feel illegible is not that the data is uncaptured — most of it now is. It is that the structured artifacts that would let anyone — human or model — answer questions like “what did we commit to?”, “what changed since last week?”, or “who is currently blocked on what?” do not exist as data. They exist as memories, dispersed across the people who happened to be in the room.
A meeting is not a communication event. It is an execution event — a moment where new commitments are made, dependencies shift, and status changes. If those outputs are not captured as structured signal, the company has just paid the highest-priced human time on the planet to produce information it will lose by Friday.
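What would that structured signal look like as data? Here is a minimal sketch in Python, with invented field names, to make the idea concrete. It is an illustration of the principle, not any particular product’s schema.

```python
# A minimal sketch of "structured signal" from a single meeting.
# Field names are illustrative assumptions, not a real schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Commitment:
    owner: str
    deliverable: str
    due: date

@dataclass
class Dependency:
    blocked: str      # who is waiting
    blocked_on: str   # what they are waiting for

@dataclass
class MeetingRecord:
    decisions: list[str] = field(default_factory=list)
    commitments: list[Commitment] = field(default_factory=list)
    dependencies: list[Dependency] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)

def blocked_items(records: list[MeetingRecord]) -> list[Dependency]:
    """'Who is currently blocked on what?' as a query, not archaeology."""
    return [d for record in records for d in record.dependencies]
```

The specific fields do not matter. What matters is that once commitments and dependencies exist as records rather than memories, the questions above stop requiring a reconstruction effort and start being answerable by anyone, or anything, that can read.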
Knowability, in other words, is not an ingestion problem. It is an architecture problem.
What a self-reading organization actually looks like
This is exactly the gap an Intelligent Management System is built to close. Instead of asking AI to reconstruct a coherent reality from a mountain of unstructured text after the fact, an IMS captures the outputs of every meeting, decision, and review as structured artifacts in real time — decisions, commitments, dependencies, risks, status — and rolls them into a Living Plan for every team, project, and initiative.
A Living Plan is not a document. It is the current best answer to three questions, maintained continuously: what are we trying to achieve, how are we trying to achieve it given what we know today, and where do we actually stand right now? When any of those answers change — a decision in a meeting, a slipped dependency, a new risk — the plan recalculates automatically.
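One way to picture that recalculation, again as an illustrative sketch with invented names rather than an actual implementation, is a small piece of state that folds each structured event into the current answers:

```python
# A toy model of a Living Plan's three questions as state that updates
# on each structured event. Names and logic are illustrative only.
from dataclasses import dataclass, field

@dataclass
class LivingPlan:
    objective: str                  # what are we trying to achieve?
    approach: str                   # how, given what we know today?
    status: dict[str, str] = field(default_factory=dict)  # where do we stand?

    def apply(self, event: dict) -> None:
        """Fold a decision, slipped dependency, or status change
        into the plan's current state."""
        if event["type"] == "decision":
            self.approach = event["new_approach"]
        elif event["type"] == "status_change":
            self.status[event["workstream"]] = event["state"]

plan = LivingPlan("Ship v2 by Q3", "Two teams building in parallel")
plan.apply({"type": "status_change", "workstream": "API", "state": "blocked"})
print(plan.status)  # {'API': 'blocked'}
```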
When an organization runs this way, Hamel’s question stops being hypothetical. The organization actually is knowable to anyone working in it, because the data already lives in a shape anyone — and any AI — can read. You do not need an LLM to scrape your past to figure out what is true. The truth is being maintained as you go.
That is the difference between an LLM that has ingested your company and an organization that knows itself.
The question worth asking
Hamel ends his post with “probably unlikely, but maybe not.” We would flip the framing. AI making your organization knowable is not unlikely — it is table stakes for the next decade. But it will not happen because models got bigger. It will happen because companies finally decided to produce the kind of signal AI can actually read.
Knowability is an architecture decision before it is an AI decision. The companies that figure that out will not just have better dashboards. They will have less management — and the management that remains will finally be doing the work it was supposed to do all along.