DeepSeek Eyes $7.35 Billion in Its First‑Ever Funding Round — and the Open‑Source Era of Chinese AI Is Shifting

For more than two years, Liang Wenfeng said no.
The billionaire founder and CEO of DeepSeek, the Hangzhou‑based AI lab that shocked the global technology industry in early 2025 with the release of models that matched frontier performance at a fraction of frontier cost, declined outside investment consistently and publicly. His reasoning was philosophical and specific: DeepSeek was funded by High‑Flyer, his quantitative hedge fund, and that arrangement allowed the lab to pursue a research‑first culture without the quarterly revenue pressure that external investors inevitably introduce. DeepSeek was not a startup trying to become a business. It was a research organization that happened to also produce models people wanted to use.
That arrangement is now changing.
The Information reported on May 8, 2026, citing two people with direct knowledge of the discussions, that DeepSeek is seeking to raise up to 50 billion yuan, equivalent to approximately $7.35 billion at current exchange rates, in its first external funding round. Liang Wenfeng plans to contribute the largest share of capital in the round personally. China's national AI fund, the National Integrated Circuit Industry Investment Fund, is in discussions to serve as the lead external investor. Chinese technology conglomerate Tencent is also reportedly in talks to participate with a contribution in the range of $3 billion to $4 billion.
The Financial Times and Bloomberg have separately confirmed the broad outline of the discussions, though the $45 billion target valuation in some reports and the specific fundraising target have not been formally announced by DeepSeek itself.
What Changed DeepSeek's Mind About Outside Capital
DeepSeek's self‑funding arrangement with High‑Flyer was genuinely unusual in the AI industry. Most AI labs, whatever their philosophical position on commercial versus research orientation, eventually require more capital than any single fund or corporation can supply, because the cost of training frontier models at competitive scale is so high. Even DeepSeek's famously efficient final training run for V3 consumed several million dollars of compute, a figure that excludes the research experiments and failed runs that precede any successful frontier model. Serving that model to users at the near‑zero prices DeepSeek charged required ongoing GPU procurement and data‑center investment that High‑Flyer's trading profits funded but that would grow without a natural ceiling as adoption expanded.
The Information's reporting identifies two specific shifts driving the decision to raise externally. The first is that the funding effort is itself accelerating DeepSeek's plans to generate revenue and achieve commercial viability, suggesting the round is less about survival than about the transition from a research‑first organization to one with a more conventional commercial model. The second is a plan to increase the frequency of model releases to align more closely with industry norms, which implies DeepSeek has committed to competing on the commercial product cadence of OpenAI and Anthropic rather than the academic publication cadence it has followed until now.
What DeepSeek Has Actually Built
Understanding why $7.35 billion is being raised for a company with no known commercial revenue requires understanding what DeepSeek has built, and why it is considered strategically significant enough for the National Integrated Circuit Industry Investment Fund, a state‑backed vehicle with 60 billion yuan in committed capital, to consider making its first‑ever investment in a Chinese LLM company.
DeepSeek's technical contributions to the AI field have been genuinely consequential and globally influential, not just commercially interesting within China. The V2 model, released in May 2024, introduced multi‑head latent attention and a sparse mixture‑of‑experts architecture that delivered competitive performance at substantially lower inference cost than dense transformer designs. The V3 model, released in December 2024 and trained on a cluster of 2,048 Nvidia H800 chips, achieved benchmark performance competitive with Claude 3.5 Sonnet and GPT‑4o at a reported training cost of approximately $5‑6 million for the final run, compared with estimates of $40‑100 million for comparable Western frontier models.
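The widely cited cost figure traces to DeepSeek's own V3 technical report, which priced the final run's GPU‑hours at an assumed H800 rental rate of $2 per hour, and which explicitly excludes research, ablations, and data costs. The arithmetic is simple enough to check:

```python
# GPU-hour figures as reported in the DeepSeek-V3 technical report
# (final training run only; research and failed runs are excluded).
pretraining_hours = 2_664_000   # H800 GPU-hours for pre-training
context_ext_hours = 119_000     # long-context extension
post_training_hours = 5_000     # SFT and RL post-training
rate_per_gpu_hour = 2.0         # assumed H800 rental price, USD

total_hours = pretraining_hours + context_ext_hours + post_training_hours
cost = total_hours * rate_per_gpu_hour
print(f"{total_hours:,} GPU-hours -> ${cost / 1e6:.3f}M")
# 2,788,000 GPU-hours -> $5.576M
```

The $2-per-hour rate is the report's own assumption, not a market quote, which is one reason independent estimates of the "true" cost vary.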
The efficiency advantage is not a marketing claim. It is a reproducible technical result that researchers at OpenAI, Google, and Anthropic have acknowledged in public papers and commentary. DeepSeek's training methods, particularly its mixture‑of‑experts routing and the FP8 mixed‑precision training and multi‑token prediction techniques detailed in the V3 technical report, have been adopted or cited by AI researchers globally.
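For readers unfamiliar with the technique, the core mixture‑of‑experts idea is to route each input to only a few "expert" subnetworks, so most parameters sit idle on any given token. A minimal sketch in plain Python might look like the following; this is an illustrative toy, not DeepSeek's implementation, and the gate, experts, and top‑k value are all invented for the example:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, experts, gate_weights, top_k=2):
    """Route x to the top_k highest-scoring experts and mix their outputs.

    Only the selected experts execute, which is where the compute savings
    of sparse MoE models come from: total parameter count can be large
    while per-token FLOPs stay small.
    """
    # One gate logit per expert (here just a dot product with the input).
    logits = [sum(w * v for w, v in zip(row, x)) for row in gate_weights]
    probs = softmax(logits)
    selected = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in selected)
    out = [0.0] * len(x)
    for i in selected:
        y = experts[i](x)          # run only the chosen experts
        weight = probs[i] / norm   # renormalize gate weights over the top-k
        out = [o + weight * v for o, v in zip(out, y)]
    return out, selected

# Toy demo: four "experts" that just scale the input by different constants.
experts = [lambda x, c=c: [c * v for v in x] for c in (1.0, 2.0, 3.0, 4.0)]
gate_weights = [[2.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
out, selected = moe_forward([1.0, 0.0], experts, gate_weights, top_k=2)
print(selected)  # [0, 1] — only two of the four experts ran
```

Production MoE systems add learned gates, load-balancing losses, and expert parallelism across GPUs, but the routing skeleton is the same.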
An update to the V4 model, which DeepSeek plans to release in June 2026, is expected to continue this trajectory. According to the two sources cited by The Information, one commitment DeepSeek has made to prospective investors is that model releases will move from the current irregular cadence to something closer to the monthly or quarterly cycles that Anthropic and OpenAI have demonstrated commercially.
The National IC Fund's Strategic Logic
The potential participation of China's National Integrated Circuit Industry Investment Fund in a DeepSeek funding round would represent a significant departure from the fund's historical focus. The fund was established with 60 billion yuan primarily to support Chinese semiconductor companies in developing domestic chip manufacturing and design capabilities that can compete with TSMC, Samsung, and Nvidia under export control conditions.
An investment in a software‑side AI lab rather than a hardware company reflects a specific strategic judgment: that DeepSeek's model efficiency techniques are as strategically significant for China's AI independence as domestic chip capacity. A Chinese AI lab that has demonstrated the ability to train competitive frontier models on export‑compliant H800 chips, rather than requiring the H100 or H200 hardware that US export controls restrict, is reducing China's dependency on foreign hardware faster than any domestic chip manufacturer can.
DeepSeek's open‑weight releases have also had a direct impact on Western AI market dynamics. When DeepSeek demonstrated in early 2025 that a model trained at a fraction of OpenAI's cost could achieve competitive performance, the resulting selloff erased roughly $600 billion from Nvidia's market capitalization in a single day as investors reassessed the compute‑demand assumptions underlying their AI infrastructure positions. A company that can move global semiconductor markets with a research paper is, from a national strategic perspective, as important as a chip factory.
The Commercial Transition That $7.35 Billion Funds
Gary Marcus, the AI researcher and NYU professor emeritus, offered a pointed critique of DeepSeek's commercial position in a piece published in April: "The company charges almost nothing per token and releases weights for free. That is defensible as a research mission but not as a commercial thesis at this price." His critique anticipated exactly the transition that the funding round and its accompanying revenue push are designed to execute.
DeepSeek's monetization shift will likely follow the pattern established by Moonshot AI's Kimi platform: tiered consumer subscription pricing, enterprise API access, and specialized agents. The infrastructure for all three exists in DeepSeek's current model architecture. What has been missing is the organizational commitment to treating commercial revenue as a priority rather than a by‑product of research.
The $7.35 billion, if raised on the terms The Information describes, funds that organizational transition: more compute to train larger future models on a faster cadence; more enterprise sales infrastructure to convert developer adoption into contracted API revenue; and more customer‑success resources to support the enterprise clients that will pay premium prices for custom fine‑tuned models and dedicated serving infrastructure.
The open‑source era of DeepSeek, where models were released freely because the research mission required broad external validation, is not ending. The open weights will continue. But the commercial era, where DeepSeek competes for the enterprise AI contracts that OpenAI, Anthropic, and Google are accumulating, is beginning. The $7.35 billion is the starting capital for that competition.
More at deepseek.com