The AI landscape continues to evolve at a rapid pace, with recent developments challenging established paradigms. Early in 2025, Chinese AI lab DeepSeek unveiled a new model that sent shockwaves through the AI industry and resulted in a 17% drop in Nvidia’s stock, along with other stocks related to AI data center demand.
This market reaction was widely reported to stem from DeepSeek’s apparent ability to deliver high-performance models at a fraction of the cost of rivals in the U.S., sparking discussion about the implications for AI data centers.
To contextualize DeepSeek’s disruption, we think it’s useful to consider a broader shift in the AI landscape being driven by the scarcity of additional training data. Because the major AI labs have already trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training. As a result, model providers are turning to “test-time compute” (TTC), in which reasoning models (such as OpenAI’s “o” series of models) “think” before responding to a question at inference time, as an alternative method of improving overall model performance.
The current thinking is that TTC may exhibit scaling-law improvements similar to those that once propelled pre-training, potentially enabling the next wave of transformative AI advancements. These developments indicate two significant shifts: first, labs operating on smaller (reported) budgets are now capable of releasing state-of-the-art models; second, the focus is moving to TTC as the next potential driver of AI progress.
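To make the idea concrete, here is a minimal, hypothetical sketch of best-of-N sampling, one common form of test-time compute: drawing more candidate answers at inference time and keeping the best one, so that answer quality scales with inference compute rather than with additional pre-training. The `model_answer` and `score` functions are stand-ins for a model call and a verifier, not any lab's actual API.

```python
import random

def model_answer(prompt, rng):
    # Stand-in for one model sample: returns a candidate answer with a
    # latent quality score in [0, 1]. A real system would call an LLM.
    return {"text": f"candidate for {prompt!r}", "quality": rng.random()}

def score(candidate):
    # Stand-in for a verifier or reward model; here we just read the
    # latent quality directly.
    return candidate["quality"]

def best_of_n(prompt, n, seed=0):
    """Best-of-N sampling: draw n candidates and keep the highest-scoring
    one. Expected quality rises with n, i.e. with the amount of compute
    spent at inference time."""
    rng = random.Random(seed)
    candidates = [model_answer(prompt, rng) for _ in range(n)]
    return max(candidates, key=score)

# More test-time compute (larger n) yields a better expected answer:
single = best_of_n("What is 2+2?", n=1)
best = best_of_n("What is 2+2?", n=16)
```

The same principle underlies more sophisticated TTC schemes (longer chains of thought, search over reasoning steps): spend more inference-time compute per query, get a better answer.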
Below, we unpack both of these trends and their potential implications for the competitive landscape and the broader AI market.

Implications for the AI industry

We believe that the shift towards TTC and the increased competition among reasoning models may have a number of implications for the wider AI landscape across hardware, cloud platforms, foundation models and enterprise software:

1. Hardware (GPUs, dedicated chips and compute infrastructure)
2. Cloud platforms: hyperscalers (AWS, Azure, GCP) and cloud compute
3. Foundation model providers (OpenAI, Anthropic, Cohere, DeepSeek, Mistral)
4. Enterprise AI adoption and SaaS (application layer)

However, if advancements in train-time compute are indeed plateauing, the threat of rapid displacement diminishes. In a world where gains in model performance come from TTC optimizations, new opportunities may open up for application-layer players. Innovations in domain-specific post-training algorithms, such as structured prompt optimization, latency-aware reasoning strategies and efficient sampling techniques, may provide significant performance improvements within targeted verticals.
Any performance improvement would be especially relevant in the context of reasoning-focused models like OpenAI’s o1 and DeepSeek-R1, which often exhibit multi-second response times. In real-time applications, reducing latency and improving the quality of inference within a given domain could provide a competitive advantage. As a result, application-layer companies with domain expertise may play a pivotal role in optimizing inference efficiency and fine-tuning outputs.
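As one illustration of what a latency-aware reasoning strategy might look like, here is a hypothetical sketch of an inference loop that keeps drawing candidate answers until one clears a confidence bar or a latency budget runs out. `sample_candidate` is a stand-in for a model call, and the budget/confidence mechanics are an assumption for illustration, not any vendor's actual technique.

```python
import random
import time

def sample_candidate(prompt, rng):
    # Hypothetical stand-in for one model call; returns (answer, confidence).
    return f"answer to {prompt!r}", rng.random()

def latency_aware_answer(prompt, budget_s=0.05, confidence=0.9, seed=0):
    """Sketch of a latency-aware inference loop: keep drawing candidates
    until one clears the confidence threshold or the latency budget is
    exhausted, then return the best answer seen so far."""
    rng = random.Random(seed)
    deadline = time.monotonic() + budget_s
    best_ans, best_conf = None, -1.0
    while time.monotonic() < deadline:
        ans, conf = sample_candidate(prompt, rng)
        if conf > best_conf:
            best_ans, best_conf = ans, conf
        if best_conf >= confidence:
            break  # good enough: stop early and save latency
    return best_ans, best_conf
```

A vertical-specific application could tune the budget and threshold per query type, spending extra test-time compute only where its domain knowledge says the answer quality justifies the added latency.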
DeepSeek’s release demonstrates a declining emphasis on ever-increasing amounts of pre-training as the sole driver of model quality; instead, it underscores the growing importance of TTC. While the direct adoption of DeepSeek models in enterprise software applications remains uncertain due to ongoing scrutiny, their impact on driving improvements in other existing models is becoming clearer.
We believe that DeepSeek’s advancements have prompted established AI labs to incorporate similar techniques into their engineering and research processes, supplementing their existing hardware advantages. The resulting reduction in model costs appears to be contributing to increased model usage, consistent with Jevons Paradox: as a resource becomes cheaper to use, total consumption of it tends to rise.

Pashootan Vaezipoor is technical lead at Georgian.
DeepSeek jolts AI industry: Why AI’s next leap may not come from more data, but more compute at inference
