Richard Socher's Lab Wants AI to Improve Itself Forever. GV, Nvidia, and AMD Backed It With $650 Million.

Richard Socher is one of the most-cited researchers in the history of natural language processing. His work at Stanford, where he co-authored the GloVe word vectors and developed recursive neural networks (TreeRNNs), established foundations that later models built on. As Chief Scientist and EVP at Salesforce, where he built and led the company's AI research and product efforts, he gained direct experience deploying AI at enterprise scale. He then founded You.com, an AI search company that reached a $1.5 billion valuation. He has been at or near the center of the most important developments in the AI industry for more than a decade.
The company he founded now wants to build AI that eventually does not need people like him anymore.
On May 13, 2026, Recursive Superintelligence emerged from stealth after months of reported fundraising discussions. The company announced it had raised $650 million at a $4.65 billion valuation, in a round led by GV, Alphabet's venture arm, and Greycroft, with participation from AMD Ventures and Nvidia. The final figure came in above the $500 million the Financial Times first reported in April 2026, a sign the round was oversubscribed.
The founding team Socher assembled reads like a composite of the most significant AI research institutions in the world. Co‑founder Tim Rocktäschel is a professor of AI at University College London and was previously a director at Google DeepMind, where he led research into open‑ended learning. Co‑founders Josh Tobin, Jeff Clune, and Tim Shi all come from OpenAI, with Clune having also developed the Darwin Gödel Machine at Sakana AI, a system that demonstrated self‑rewriting agents improving performance on coding benchmarks. Other team members have backgrounds at Meta AI, Salesforce AI, and Uber AI. The current headcount is over 25 researchers and engineers across San Francisco and London.
What Self‑Improving AI Actually Means
The phrase "self-improving AI" is used loosely across the industry; at Recursive it has a precise meaning. The company is building systems that can continuously generate and refine new capabilities in an open-ended cycle, without human researchers defining which capabilities to improve or how to improve them.
Socher described the ambition as the "third and perhaps final stage of neural networks." The first stage was hand-engineered features. The second, the current stage, is learning from data with neural architectures that humans design. The third stage, Recursive's target, is systems that automate the design process itself, discovering better learning algorithms, better architectures, and better training procedures without a human researcher proposing and testing each one.
Rocktäschel grounds this in a concept from science fiction that captures the genuine technical ambition: Stanisław Lem's notion of an "information barrier," the point where available knowledge grows so fast that humans can no longer keep up with it or meaningfully integrate it. Recursive wants to break through that barrier by fully automating the scientific method, starting with AI research itself, then extending the approach to other scientific disciplines.
The company compares its approach to biological evolution, where discoveries accumulate over time to create increasingly advanced forms of intelligence. Just as evolution did not require a designer, Recursive's self‑improving system aims to discover increasingly capable AI through automated cycles of evaluation, selection, and improvement.
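To make the shape of that cycle concrete, the sketch below is a minimal, purely illustrative evolutionary loop over candidate training configurations: score each candidate, keep the best half, and breed perturbed children from the survivors. Every name in it, from the config fields to the toy fitness function and the mutate step, is a hypothetical stand-in; Recursive has not published details of its system.

```python
import random

# Purely illustrative evaluate -> select -> improve cycle. The config
# fields, fitness function, and mutation scheme are hypothetical toys,
# not anything Recursive has described.

def evaluate(config: dict) -> float:
    """Toy fitness: pretends deeper models and a learning rate near
    3e-4 score better, with noise standing in for a real benchmark."""
    return (0.1 * config["depth"]
            - 100 * abs(config["lr"] - 3e-4)
            + random.gauss(0, 0.05))

def mutate(config: dict) -> dict:
    """Return a slightly perturbed child configuration."""
    child = dict(config)
    child["lr"] = max(1e-5, child["lr"] * random.choice([0.5, 1.0, 2.0]))
    child["depth"] = max(1, child["depth"] + random.choice([-1, 0, 1]))
    return child

population = [{"lr": 1e-3, "depth": 4} for _ in range(8)]
for generation in range(20):
    ranked = sorted(population, key=evaluate, reverse=True)  # evaluation
    survivors = ranked[: len(ranked) // 2]                   # selection
    children = [mutate(random.choice(survivors))             # improvement
                for _ in range(len(population) - len(survivors))]
    population = survivors + children

print(max(population, key=evaluate))
```

Even the toy makes the point the evolution analogy turns on: nothing in the loop encodes what a good configuration looks like; selection pressure alone accumulates the improvements.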
The practical path toward this ambition starts with a "Level 1" autonomous training run, which the company defines as an AI system that can improve itself across standard evaluation benchmarks without human intervention at each step. This is a testable, verifiable milestone with clear commercial relevance. An AI system that can identify its own weaknesses and design better training procedures to address them reduces the human research cost of frontier model improvement, which is currently enormous.
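In the same illustrative spirit, a Level 1 loop can be pictured as a system that scores itself across a benchmark suite, targets its weakest result, proposes an intervention, and keeps it only when re-evaluation verifies a gain. The benchmark names and the propose_fix step below are assumptions made for the sketch; a real system would run actual training experiments at each iteration.

```python
import random

# Hypothetical "Level 1" loop: evaluate on benchmarks, target the weakest,
# propose a change, keep it only if the overall score verifiably improves.
scores = {"math": 0.42, "coding": 0.61, "reasoning": 0.55}  # toy baseline

def propose_fix(benchmark: str, current: dict) -> dict:
    """Stand-in for designing a better training procedure: a noisy
    nudge to the weakest benchmark, which may or may not help."""
    candidate = dict(current)
    candidate[benchmark] = min(1.0, candidate[benchmark]
                               + random.gauss(0.02, 0.03))
    return candidate

for step in range(10):
    weakest = min(scores, key=scores.get)     # identify own weakness
    candidate = propose_fix(weakest, scores)  # design an intervention
    if sum(candidate.values()) > sum(scores.values()):
        scores = candidate                    # keep only verified gains

print(scores)
```

The verification gate is what makes the milestone testable: a change survives only if the re-scored suite is measurably better than before.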
Why GV and Nvidia Made This Bet
The competitive field for this vision is specific and growing. AMI Labs, founded by Yann LeCun, is pursuing world models as a path toward AI systems that understand physics and causality better than current large language models. Ineffable Intelligence, founded by DeepMind's David Silver, is focused on reinforcement learning as the mechanism for achieving general capability. Safe Superintelligence, Ilya Sutskever's company, is taking a safety-first approach to the same ultimate destination. What separates Recursive is scope and mechanism: the others are each building toward a specific capability; Recursive wants to automate the building process itself.
GV's decision to lead a round of this size for a company with no public product is a bet on the founding team's ability to deliver a technical vision its backers believe is correct. GV's portfolio includes companies that achieved genuinely category-defining outcomes, and choosing to lead Recursive's round rather than follow another investor signals a level of conviction that financial participation alone would not convey.
Nvidia's participation is commercially strategic. Every research advance that reduces the human researcher hours needed per capability gain shifts that work onto compute, and the compute required to replace those hours is likely to be large and valuable. A system that autonomously runs millions of training experiments in search of better learning algorithms consumes enormous amounts of GPU time. Nvidia's investment is, in part, a bet that Recursive's compute consumption will be significant.
AMD Ventures' participation follows the same logic, and it also fits AMD's strategy of backing AI research organizations early enough to establish software and framework relationships before commercial contracts are signed.
A public launch is targeted for mid‑2026, with the company actively hiring across research and engineering in both San Francisco and London.