Google, seen as a laggard in the artificial intelligence race until three months ago, is now shipping new models at a pace that shows no sign of slowing.
The search engine giant on Thursday released the ‘Gemini 2.0 Flash Thinking’ large language model (LLM) on an experimental basis.
The LLM is available to developers in Google AI Studio and via the Vertex AI API.
The release extends the Gemini 2.0 series that Google began rolling out to the public on Dec. 11, which includes Flash and Flash Advanced, both also in an experimental stage.
Google engineers shared examples of the model tackling tricky probability questions, as its strength lies primarily in detailed, sustained reasoning.
Curious how it works? Check out this demo where the model solves a tricky probability problem. pic.twitter.com/F3kJv4R9Gy
— Noam Shazeer (@NoamShazeer) December 19, 2024
Google CEO Sundar Pichai said the model was its most “thoughtful” one yet, a pun on the model’s intended behavior.
It’s still an early version, but check out how the model handles a challenging puzzle involving both visual and textual clues: (2/3) pic.twitter.com/JltHeK7Fo7
— Logan Kilpatrick (@OfficialLoganK) December 19, 2024
In Dzambhala’s tests, which matched prompts between the new thinking Gemini model and competitors such as OpenAI’s o1 Pro, the models’ ability to answer tricky questions appeared similar across a small sample. What sets Gemini 2.0 Flash Thinking apart is that it displays its actual “thinking” process alongside the answer.
The model lists out its reasoning in detail as it works through a problem, which is likely to be helpful in learning use cases across verticals.