On the Uncanny Valley and AI Capabilities
AI looks human-like enough to fool us — but it's interpolation, not intelligence. Are we at risk of automating the work that made us good at work?
In January 2026, an essay called Something Big Is Happening went viral. Written by AI founder and investor Matt Shumer, it spread through LinkedIn, got reprinted by Fortune, and was called a must-read by Inc. Millions of people read it as a wake-up call. The formula was familiar: reluctant insider, escalating urgency, COVID comparison, call to follow the author on X.
On LinkedIn, Fyma CEO Karen K. Burns pointed out that in September 2024, Shumer had released an AI model he called "the world's top open-source model." Within days, independent researchers found they couldn't replicate his results — and that the model appeared to be running competitors' products under a different name. When prompted, it identified itself as OpenAI's language model. He was publicly accused of fraud. He said he had "gotten ahead of himself." The corrected model was never delivered.
Sixteen months later, millions of people were treating his judgment about their careers as authoritative.
In the discussion that followed, Anna Haverinen pointed me to Helen Edwards's essay at the Artificiality Institute on the same theme — on unpredictability and the limits of probabilistic systems.[1] When I read the two pieces together, something clicked. Parts of Shumer's essay are absolutely right. We can automate enormous amounts of work that used to consume significant time. This is what Edwards calls combinatorial creativity — the recombination of existing knowledge into new arrangements — and as pattern-matching probability engines, LLMs are genuinely good at it. But they work only from what they know. The catch is that 80% of McKinsey-style knowledge work, including design thinking and lean startup playbooks, does exactly this too. And because LLM outputs look so much like human reasoning, we confuse the two — mistaking shape and quantity for actual quality.
This is the uncanny valley of artificial intelligence — not the robotics kind, where synthetic faces creep us out, but a cognitive one. The line between what AI does and what we believe it does has become dangerously blurred.
On automation, expertise, and the scaffold we are removing
Here is the uncomfortable truth that very few say out loud: roughly 80% of what passes for elite knowledge work — the slide decks, the market analyses, the competitive benchmarks, the standard e-commerce design patterns — is combinatorial creativity. Recombining existing knowledge into new arrangements. McKinsey's own research makes the point uncomfortably well: their 2023 analysis found that generative AI's potential to automate "the application of expertise" jumped 34 percentage points in a single cycle, and the activities most exposed were precisely those previously considered immune from automation.[2]
LLMs do this kind of work well. They work from what they know — the statistical patterns of their training data. They pattern-match across vast corpora, synthesize information, and produce polished outputs that would have taken a junior associate days to assemble. When a machine can do this at scale, in seconds, for near-zero marginal cost, the value of the human doing the same work collapses. Not all of it, but most. The first tasks to go are exactly those that used to be the training ground for early-career professionals: research synthesis, first-pass analysis, the summary memo.
The implications for how we develop expertise are enormous, and genuinely unresolved. The traditional path to professional judgment ran through the work AI now handles first. A junior consultant learned to think strategically by doing research synthesis badly, getting it corrected, doing it better. A junior designer developed taste by executing briefs they hadn't shaped, noticing what worked and why. A junior lawyer built judgment by drafting the memo, having it redlined, internalizing the gap. The apprenticeship model was not just about learning tasks: it was about developing the instincts that live underneath the tasks. The pattern recognition that eventually becomes wisdom.
Remove the tasks, and you remove the scaffold. Which raises a question that nobody in the AI acceleration conversation seems willing to sit with: how do you develop the senior judgment we still need, when the junior work that used to produce it is gone? You cannot skip straight to the top of the pyramid and expect it to hold. The final 5–10% — the actual expertise — is not a set of facts you can download. It is a capability that grows through struggle, failure, and correction over time. The very messiness of early-career work was the point.
So to rephrase: the organizations that will struggle are not those that automate too much, but those that automate too fast, without asking how expertise gets built in the first place.
What remains stubbornly resistant to automation is everything outside the probability distribution: the novel insight that comes from lived experience in a specific context, the eye-brain connection trained on what makes a design land emotionally, the ability to read a room and sense what is not being said. Human decisions are always emotional at some level — and that doesn't scale through pattern matching.
Probability engines and the illusion of sentience
As a species, humans are very good at anthropomorphizing: attributing human characteristics, emotions, intentions, or behaviors to non-human entities and lifeless objects, including technology. We know that LLMs are sophisticated prediction machines, estimating the most likely next token based on everything that came before. They are not intelligent, and definitely not sentient, at least for now.
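To make "prediction machine" concrete, here is a minimal, illustrative sketch: a toy four-word vocabulary and invented scores stand in for what a real model computes over tens of thousands of tokens, conditioned on the entire preceding context.

```python
import numpy as np

# Toy example: a real LLM produces a score (logit) for every token in its
# vocabulary at each step. The vocabulary and scores here are invented
# purely for illustration.
vocab = ["work", "intelligence", "pattern", "judgment"]
logits = np.array([2.1, 0.3, 1.7, 0.5])  # the model's raw scores per token

def next_token_distribution(logits, temperature=1.0):
    """Turn raw scores into a probability distribution (softmax)."""
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

probs = next_token_distribution(logits)
for token, p in zip(vocab, probs):
    print(f"{token!r}: {p:.2f}")

# Sampling from this distribution is the entire "decision":
# no beliefs, no intentions, just the statistically likeliest next word.
print("sampled:", np.random.choice(vocab, p=probs))
```

That is the whole mechanism, repeated one token at a time. The fluency we perceive emerges from doing this at enormous scale, not from intent.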
And yet human cognition relies on similar mechanisms far more than we like to admit — we pattern-match constantly, reach for heuristics, take shortcuts and mistake familiarity for understanding. When an LLM produces output that feels insightful, what is actually happening is a collision between the model's statistical fluency and our own cognitive biases. We are wired to detect agency and intention in patterns. The LLM provides the pattern; our brains supply the rest.
This is what the Shumer essay illustrates so well. It is well-written, cites real data, and feels authoritative. It also does exactly what LLMs do — pattern-matches across existing discourse and produces confident output that projects understanding it does not actually have. The viral spread is not incidental. It is the mechanism. Fluency triggers trust. Trust bypasses scrutiny. And a man who could not explain what happened to his own AI model gets reprinted by Fortune as an insider authority on the future of your career.
It is a bit of a cliché to reach for the Cynefin framework here, but it fits: we are treating AI outputs as if they belong in the "clear" or "complicated" domains, where cause and effect are knowable and best practices apply. Most of the real problems we face with AI sit in the "complex" domain — emergent, nonlinear, context-dependent. The map is not the territory, and in complex systems, understanding only comes through emergence, from engaging with the actual feedback loop. The map is shaped by the territory, in real time.
Amplification over automation
The real opportunity is not in automation but in amplification. The distinction matters more than it seems. Automation replaces human effort with machine effort. Amplification enhances human capability by providing context-specific, reliable information at the right moment. These are different design problems with different implications: one optimizes for cost reduction, the other for decision quality and reduced cognitive load.
Amplification is also harder. As Edwards puts it, it requires systems that understand your specific situation rather than a generic one — systems that provide trustworthy outputs and track the actual pace and shape of your work.[1] This is the territory we are exploring at In Parallel — building systems that make human work better, more informed, and more effective rather than simply replacing it with a cheaper alternative.
The automation narrative is seductive because it is simple. The amplification narrative is where the transformational potential actually lives. When you give a professional exactly the right context at exactly the right moment — not a generic summary but a situated, trustworthy insight — you do not just save time. You change the quality of decisions being made. You change what is possible. The system adapts to the work rather than forcing the work to adapt to the system. That inversion is the design challenge worth pursuing.
The skills that compound
As AI handles the combinatorial layer, human capabilities become the scarce resource — not the soft ones like communication, but the hard, experiential ones: judgment, the ability to act under uncertainty, ethical reasoning, the capacity to hold conflicting perspectives and act anyway. In many cases, these were always underneath the pretty pictures, the synthesis, and the slide decks. We just had the surface in the way.
Investing in these new core capabilities is not nostalgic. It is strategic. The organizations that figure out how to combine AI's pattern-matching power with human judgment — in ways that make both better — are building something that cannot be easily replicated. That combination is a business problem of the highest order. It does not fit neatly into a feature backlog or a sprint cycle. It requires the kind of systemic thinking that most organizations, with their built-in incentives, are structurally unprepared to do.
Navigating the valley
The uncanny valley of AI capability is not a problem to be solved. It is a condition, and a territory to be navigated. We will be here for a while — possibly a long while — in a space where AI is powerful enough to reshape every profession but not capable of replacing the situated, contextual intelligence the most important work requires.
That means being honest about what LLMs actually are and are not. They are tools for combinatorial creativity and pattern synthesis. They are not reasoning engines. Holding both truths simultaneously — that AI is genuinely powerful and genuinely limited — is the prerequisite for making any good decision about where and how to deploy it.
Shumer's essay ends with a call to follow him on X. That is the tell. The people who understand this moment most clearly are not the ones selling urgency — they are the ones doing the slow, difficult work of building systems that actually help. The world itself is becoming a wicked problem, where everything we knew might or might not matter anymore. In that complexity, the ability to make sense of what is happening, to hold multiple futures simultaneously, and to act with clarity despite uncertainty is not a nice-to-have.
It is the work.
This article was written in human-AI collaboration with Claude and Obsidian, through an experimental agentic pipeline I use to save, expand, and connect my numerous in-the-moment notes and scribbles into overarching themes, and finally into blog posts and articles like this. The final words and the voice are mine; the grammar is by my colleague Claude.
- [1] Helen Edwards, "On Unpredictability and the Work of Being Human," Artificiality Institute.
- [2] McKinsey Global Institute, "The economic potential of generative AI: The next productivity frontier," June 2023.