Employment & Moravec’s Paradox

[Image: A factory worker in the United States using VR technology. Credit: Eddie Kopp on Unsplash]

Every technological wave triggers a round of job anxiety, and AI agents are no exception. But despite impressive demos and product updates, employees shouldn’t assume a wholesale takeover is imminent. The near-term story is not replacement; it’s reconfiguration. Understanding what AI is good at—and where it still struggles—reveals a more balanced outlook.

Moravec’s Paradox

Start with Moravec’s Paradox. In the 1980s, Hans Moravec and others observed that what we consider “hard” intellectual tasks—chess, algebra, coding—are relatively easy for computers, while the “easy” things humans do effortlessly—perception, dexterous manipulation, social nuance—are devilishly hard to automate. Decades of progress haven’t erased this gap. Agents excel in narrow, well-specified environments, but stumble in messy, open-ended contexts filled with ambiguity, shifting goals, and unwritten rules. Much of everyday work—coordinating across teams, sensing organizational mood, adapting to exceptions—lives precisely in that messy middle.

Polymathic and Generalist Thinking

Humans also bring broader vision. We don’t just execute tasks; we set direction, weigh trade-offs, and anticipate second-order effects. An AI agent can draft a plan, but it doesn’t own the stakes of that plan or the politics around it. Strategy is about choosing what not to do, and those choices hinge on context, values, and timing. Employees who frame problems, define “good enough,” and manage conflicting constraints create value that narrow optimization can’t match.

Generalist thinking is another human advantage. Organizations rarely face one-dimensional problems: a pricing change affects brand, operations, compliance, and morale. Specialists see slices; generalists connect them. AI systems are improving at pattern matching, yet they typically lack the lived experience and tacit knowledge that let a person say, “This will break support next quarter,” or “Legal will flag that phrasing.” The ability to integrate multiple weak signals into a coherent judgment remains distinctly human.

Then there’s polymathic problem-solving—the knack for importing tools from one domain to another. Breakthroughs often come from analogies: applying epidemiology to cybersecurity, game design to onboarding, or supply-chain thinking to content moderation. AI can surface similar patterns, but humans decide which analogy is trustworthy, ethically acceptable, and culturally resonant. Creativity isn’t just novel output; it’s selecting ideas that will work in a social system.

Disruption Is Coming… Just Not Yet

None of this means AI won’t change jobs. It will—and already is—by compressing routine work and raising the bar for uniquely human contributions. That shift is an opportunity. Employees should lean into skills that compound with AI: problem framing, stakeholder management, cross-domain synthesis, rapid experimentation, and clear communication. Treat agents as force multipliers—use them to draft, simulate, and summarize—while you do the parts that require judgment, credibility, and leadership.

In short, AI agents are powerful specialists. Most jobs, however, are ensembles of tasks woven together by human intuition and coordination. Until agents can reliably navigate ambiguity, negotiate values, and integrate context across domains, they will augment more than replace. Don’t panic. Get curious, get specific about where AI helps, and invest in the human strengths that machines still struggle to imitate.
