Google, seen as a laggard in the artificial intelligence race until three months ago, is now so supercharged that it just can't stop shipping.

The search engine giant on Thursday released the ‘Gemini 2.0 Flash Thinking’ large language model (LLM) on an experimental basis.

The LLM is available to developers in Google AI Studio and through the Vertex AI API.
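For developers who want to try it, a minimal sketch of querying the model through the Python google-generativeai SDK might look like the following; the experimental model identifier and the prompt are assumptions for illustration, and an API key from Google AI Studio is required.

```python
# Minimal sketch: calling the experimental thinking model via the
# google-generativeai Python SDK. The model name below is an assumption;
# check Google AI Studio for the exact experimental identifier.
import os

import google.generativeai as genai

# Authenticate with an API key generated in Google AI Studio.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Hypothetical experimental identifier for Gemini 2.0 Flash Thinking.
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")

# Ask a tricky probability question, the kind Google engineers showcased.
response = model.generate_content(
    "Three fair coins are tossed. Given that at least one is heads, "
    "what is the probability that all three are heads?"
)

print(response.text)
```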

The release extends the Gemini 2.0 series, which Google began rolling out to the public on Dec. 11 and which includes Flash and Flash Advanced, also in an experimental stage.

Google engineers shared examples of testing the model on tricky probability questions, since the model's strength lies primarily in detailed, step-by-step reasoning.

Google CEO Sundar Pichai called the model its most "thoughtful" one yet, a pun on the model's intended behavior.

Based on the prompts Dzambhala ran against both the new thinking Gemini model and competitors such as OpenAI's o1 Pro, the ability to answer tricky questions appears comparable across a small sample, but the explicit "thinking" process that Gemini 2.0 Flash Thinking shares is an interesting added feature.

A puzzle we gave to Gemini 2.0 to try out its thinking capacity.

The model lists out its thinking in detail as it works through the problem, which is likely to be helpful for learning use-cases across verticals.