Perplexity, the AI-driven search platform, has now added Google’s Gemini 2.0 Flash model to its arsenal, making it available to all Perplexity Pro users.
Aravind Srinivas, CEO of Perplexity, announced on social media that Gemini 2.0 Flash is a groundbreaking, cost-efficient multimodal model.
This development follows Google’s decision to elevate Gemini to production status, swiftly followed by Perplexity’s integration.
Gemini 2.0 Flash is available to Perplexity Pro users across platforms, including web, iOS, Android, and Mac.
The model can be accessed by updating the Perplexity app and navigating to the “rewrite” option at the end of an answer or within the settings.
This move mirrors Perplexity’s rapid adoption of the DeepSeek reasoning model R1, which ruffled a few feathers due to the company’s China-based origins.
To quell security concerns, Perplexity enlisted Jimmy O. Yang, best known as Jian-Yang from HBO’s “Silicon Valley,” for promotions.
Perplexity has emphasized that all servers hosting the DeepSeek model are located in the Western world, and that using R1 on its platform carries none of the security or privacy risks associated with DeepSeek’s native apps.
Gemini 2.0 Flash is the first Google LLM ever made available on Perplexity.
Google parent Alphabet’s stock tanked on Wednesday, falling 7% to $192.91 in the aftermath of an earnings report that showed a revenue miss.
The tech giant has been trying to allay concerns that it is falling behind in the AI race, but has yet to win over the mainstream audience.
Nevertheless, Google has produced some impressive LLMs of late, especially, in the author’s view, the Gemini 2.0 Flash Thinking model, which is currently available only to developers at an experimental stage.
Besides competing on benchmarks, Google’s models offer better value than Sam Altman’s OpenAI or Elon Musk’s Grok, with lower pricing and very large context windows.