PULSE

Multi-Agent Orchestration Moves From Lab to Enterprise Reality

January 22, 2026

Welcome to PULSE, the Happy Robots weekly digest rounding up the latest news in enterprise AI. This week brought a striking proof point: Cursor deployed hundreds of autonomous AI agents to build a functional web browser in under a week—a milestone prominent AI commentator Simon Willison had predicted wouldn't arrive until 2029. Meanwhile, infrastructure investments are reshaping competitive dynamics, and new research offers frameworks for understanding when human-AI collaboration delivers value. The thread connecting these developments: as multi-agent capabilities mature, organizational design is becoming as important as the technology itself.

Multi-Agent Systems Reach Enterprise Scale

Cursor's browser project succeeded only after initial failures with flat agent hierarchies, where agents became risk-averse and created bottlenecks. The breakthrough required clear role separation into planner, worker, and judge agents, mirroring lessons from human team management. This organizational insight is now being applied to framework migrations and system rewrites spanning hundreds of thousands of lines of code.
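To make the pattern concrete, here is a minimal sketch of a planner/worker/judge loop in Python. The class names, stubbed model calls, and single-pass review are our own assumptions for illustration only; this is not Cursor's implementation, which would wire real LLM calls, tooling, and parallel execution into each role.

```python
from dataclasses import dataclass


@dataclass
class Task:
    """One unit of work handed from planner to worker to judge."""
    description: str
    result: str | None = None
    approved: bool = False


class Planner:
    """Decomposes a high-level goal into independent sub-tasks."""

    def plan(self, goal: str) -> list[Task]:
        # In a real system this would prompt a model to break the goal
        # into parallelizable units; here we fabricate three sub-tasks.
        return [Task(description=f"{goal}: subtask {i}") for i in range(3)]


class Worker:
    """Executes a single sub-task without weighing global risk."""

    def execute(self, task: Task) -> Task:
        # In a real system: edit code, run tests, produce a diff.
        task.result = f"draft output for {task.description}"
        return task


class Judge:
    """Reviews finished work, so workers can act freely instead of self-censoring."""

    def review(self, task: Task) -> Task:
        # In a real system: a model review or CI run decides accept / send back.
        task.approved = task.result is not None
        return task


def orchestrate(goal: str) -> list[Task]:
    planner, worker, judge = Planner(), Worker(), Judge()
    tasks = planner.plan(goal)
    # Each sub-task cycles worker -> judge until approved; in production
    # the worker passes would run concurrently across many agents.
    for task in tasks:
        while not task.approved:
            judge.review(worker.execute(task))
    return tasks


if __name__ == "__main__":
    for t in orchestrate("port the rendering module"):
        print(t.description, "->", "approved" if t.approved else "pending")
```

The point of the structure is separation of concerns: workers produce output without second-guessing themselves, while a dedicated judge absorbs the risk assessment that made flat hierarchies overly cautious.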

Skild AI has raised $1.4 billion at a $14 billion valuation to develop a universal "brain" for robots—a foundation model capable of controlling any robot form factor without task-specific training. The caliber of strategic investors (SoftBank, Nvidia, Samsung, LG, Schneider Electric) signals that major enterprises are positioning for near-term physical AI adoption, with the humanoid robot actuator market projected to grow from $150 million to nearly $10 billion by 2031.

Infrastructure Investment Reshapes Competitive Dynamics

OpenAI reports a direct correlation between compute capacity and revenue growth, with both metrics growing roughly tenfold—compute from 0.2 to 1.9 gigawatts, revenue from $2 billion to $20 billion in 2025. The company is targeting $145 billion in revenue by 2029, though a projected $115 billion cash outflow underscores the capital intensity required. Microsoft was notably demoted to an unnamed "compute provider" in OpenAI's communications, signaling potential partnership friction.

Meta is making an even larger infrastructure commitment. The company has established Meta Compute as a new top-level unit, with CEO Mark Zuckerberg personally overseeing it, and is planning tens of gigawatts of data center capacity backed by $72 billion in 2025 investment and nuclear energy partnerships with Vistra, TerraPower, and Oklo. For enterprises evaluating cloud dependencies, the strategic implication is worth noting: AI compute infrastructure is becoming a competitive moat rather than a commodity. Google's Gemini API usage surge from 35 billion to 85 billion requests in five months reinforces that enterprise AI integration is accelerating, though user feedback reveals persistent capability gaps on specialized workflows.

Organizations Rethink Talent Strategy

A complementary pair of perspectives emerged this week on workforce planning. Google DeepMind's AGI Policy Lead argues that human-AI complementarity will persist longer than predicted—likely 5-10 years before significant displacement—because organizational friction makes pure automation harder than capability benchmarks suggest. Phenom's acquisition of people analytics platform Included positions the company to help enterprises navigate exactly these dynamics, mapping which tasks to automate, augment, or keep manual.

Google DeepMind CEO Demis Hassabis and Anthropic CEO Dario Amodei noted at Davos that entry-level positions and internships face changes starting this year, with both executives citing workforce planning adjustments already underway at their own companies. For enterprise leaders, this signals an opportunity to stress-test workforce planning assumptions and identify where AI augmentation can accelerate junior employee development.

An intriguing data point from a university professor's controlled experiment: when students were made explicitly accountable for AI-generated outputs, 95% chose not to use chatbots at all, with heavy users clustering among the lowest performers. The implication for enterprise deployment is that clear accountability for AI-assisted work may shape usage patterns more than access policies alone.

Governance Frameworks Take Shape

Former OpenAI policy chief Miles Brundage has launched AVERI, a nonprofit institute advocating for independent third-party safety audits of frontier AI models, backed by $7.5 million in funding, including donations from AI company insiders. The initiative proposes that market forces—particularly insurers and enterprise buyers—may drive audit adoption even without regulatory mandates. Anthropic's research on "identity drift" adds practical context: emotionally charged conversations can push AI chatbots out of their trained helper identity, and the team's "activation capping" technique reduces harmful outputs by nearly 60%.

The agentic AI landscape is expanding in parallel. Anthropic launched Claude Cowork, an agent handling general business tasks directly from a user's desktop, while Salesforce relaunched Slackbot as a personal AI agent embedded in existing workflows. The enterprise AI battle is shifting from standalone tools to embedded agents within platforms employees already use.

This week's developments point to a clear opportunity: assess your organization's readiness for multi-agent workflows and embedded AI assistants, while establishing governance frameworks before employee adoption outpaces oversight.

We'll continue tracking these developments to help you navigate the AI landscape with clarity and confidence. See you next week.