OpenAI recently released its newest AI model, o3. The release is a significant step in the company’s drive to make generative AI a more powerful and productive engine. Alongside o3, OpenAI shipped a smaller companion model, o4-mini, aimed at more resource-constrained use cases. Built to read and interpret visual information, o3 can work with rough sketches, diagrams, and even low-quality images. The model is currently rolling out to subscribers of ChatGPT Plus, Pro, and Team plans.
With o3, OpenAI introduces a groundbreaking capability: the ability for AI to “think with images.” It is the company’s first model to incorporate visual information directly into its reasoning process. Users can upload images of nearly anything, from whiteboard notes to rough scribbles, and o3 can analyze, critique, and contextualize them as part of its answers.
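To give a sense of how a developer might exercise this image-based reasoning outside the ChatGPT interface, here is a minimal sketch using OpenAI’s official Python SDK. It assumes the model is exposed under the identifier "o3" and uses a placeholder image URL; neither detail comes from the announcement itself.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumption: the model identifier "o3" and the image URL below are
# illustrative stand-ins, not details confirmed by the announcement.
response = client.chat.completions.create(
    model="o3",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Explain the architecture sketched on this whiteboard."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/whiteboard-sketch.png"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The point of the sketch is simply that the image is passed in the same message as the text prompt, so the model can reason over both together rather than treating the picture as a separate attachment.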
In addition, OpenAI says it has stress-tested o3 under its most stringent safety program to date, a sign of the company’s continued commitment to safe AI deployment. OpenAI’s first reasoning model, o1, released last year, focused on solving complex problems through multi-step thinking; o3 builds on that foundation.
OpenAI faces strong competition from other major players such as Google DeepMind, Anthropic, and Elon Musk’s xAI. To maintain its lead, it is pushing beyond a purely text-based AI experience, and the company considers o3 a major leap forward from its previous products.
“They don’t just see an image; they can integrate visual information directly into the reasoning chain,” said OpenAI representatives.
Furthermore, o3 is the first OpenAI model that can independently use all of ChatGPT’s tools, including web browsing, Python code execution, image analysis, and image generation. The release marks another milestone in OpenAI’s effort to keep expanding what generative AI is capable of.
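As a rough illustration of what programmatic, tool-enabled use of a reasoning model can look like, the sketch below uses OpenAI’s Responses API with its built-in web search tool. The tool name "web_search_preview" and the availability of o3 on this endpoint are assumptions for the sake of the example; the announcement itself only describes tool use inside ChatGPT.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumption: o3 is reachable via the Responses API and supports the
# built-in web search tool; these specifics are not confirmed by the
# announcement, which covers tool use within ChatGPT.
response = client.responses.create(
    model="o3",
    tools=[{"type": "web_search_preview"}],
    input="Summarize the most significant AI research news from this week.",
)

# The model decides on its own whether and when to invoke the tool;
# output_text contains the final answer after any tool calls.
print(response.output_text)
```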
In light of rapidly evolving technology and the potential risks associated with AI development, OpenAI has indicated that it may alter its safety protocols if “another frontier AI developer releases a high-risk system without comparable safeguards.” This is yet another sign of the company’s effort to position itself at the leading edge of AI safety.