The shift away from cloud-based LLMs to on-device ones is happening faster than many in the industry anticipated, according to a top tech analyst.

Chipmaking giants like TSMC and Nvidia had expected on-device AI trends to pick up pace by 2026, but DeepSeek’s explosive growth appears to have significantly sped up the transition, Ming-Chi Kuo said in a note on Monday.

DeepSeek has fueled growing interest in localized LLM deployment, as data-security concerns with cloud-based models push users to seek alternatives.

Various tools and hardware specifications have emerged to support this on-device trend, accommodating devices from low-end laptops to high-end PCs equipped with Nvidia GPUs.

While applications are currently limited to niche users, the trend suggests a potential paradigm shift that could gradually chip away at Nvidia’s dominance in cloud computing, according to Kuo.

Though Nvidia has tied DeepSeek’s potential to its own offerings, the rise of on-device AI is creating fierce competition, and cloud growth may slow. That could temporarily weigh on investor sentiment until further clarity emerges from advances such as mass production of Nvidia’s GB200 NVL72 and its AI commercialization ventures in sectors like robotics and autonomous driving, the analyst said.

Meanwhile, Taiwanese chip giant TSMC, which supplies the likes of Tesla and Apple, stands to gain as a key beneficiary of the on-device AI surge thanks to the processor upgrades it requires.

Kuo notes that both cloud-based and on-device LLMs will continue to grow in parallel as part of a comprehensive ecosystem; the question is which segment will see the bigger growth.

Kuo’s forecast echoes sentiments from other industry experts, such as Aswath Damodaran, who previously expressed skepticism about Nvidia’s continued AI market dominance following DeepSeek’s entry.

Meanwhile, cloud-based alternatives are also emerging to address concerns that DeepSeek’s native app compromises privacy and poses geopolitical challenges.

Besides Nvidia, Aravind Srinivas’ Perplexity, for example, has integrated the open-source LLM into its service.