If you’ve been tapping your fingers waiting for ChatGPT’s response to finally come in today, you’re not alone.
Users across the board have been experiencing sluggish response times for a prolonged period, and OpenAI has confirmed they’re investigating the issue.
OpenAI posted an official incident report about two hours before this writing.
What could be causing the slowdown? The introduction of OpenAI's new o3-mini model might be adding stress to the system.
This next-gen reasoning model, which offers three reasoning effort levels, was designed to strengthen STEM capabilities in areas such as science, math, and coding.
The o3-mini model is available to free users, a surprising move that made sense in the context of DeepSeek's popularity but may now be overwhelming OpenAI's servers.
DeepSeek's reasoning model R1 shook the industry, offered at a fraction of the cost of OpenAI's o1 while outperforming it on several benchmarks.
The Chinese company temporarily shut off new signups as it dealt with the surge in demand, and it has yet to resume new registrations for its API service after facing record downtime.
OpenAI shipped its response to DeepSeek in the form of the o3-mini series as fast as it could, alongside two agents, Operator and Deep Research.