OpenAI has announced a new AI model, designated o1 and internally codenamed “Strawberry,” which the company says is the first in a planned series of “reasoning” models.

The newly unveiled o1 model is designed to tackle intricate tasks that demand advanced reasoning. Unlike its predecessors, o1 spends more time processing and evaluating a query before generating a response. This extended computation phase is meant to improve the model’s handling of multi-step problems, such as complex mathematical equations and demanding coding challenges. The approach, known as “chain of thought” reasoning, has the model methodically work through the parts of a problem before arriving at a conclusion, a technique that mimics human cognitive processes and can yield more accurate and nuanced outputs.
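Outside of o1 itself, the chain-of-thought idea is commonly approximated with plain prompting: instructing a model to write out intermediate steps before its final answer. The sketch below is illustrative only; the helper name and prompt wording are generic examples of the pattern, not OpenAI’s internal mechanism.

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question in a chain-of-thought style instruction.

    Illustrative sketch of the general prompting pattern, not
    how o1 implements reasoning internally.
    """
    return (
        "Solve the following problem. Reason step by step, writing out "
        "each intermediate step before stating the final answer on its "
        "own line.\n\n"
        f"Problem: {question}"
    )

print(build_cot_prompt("If 3x + 5 = 20, what is x?"))
```

The instruction nudges the model to surface its intermediate reasoning, which is the behavior o1 is described as performing by default rather than on request.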


Announcing the release, the company acknowledged that o1 is still in its early stages. “As an early model, it doesn’t yet have many of the features that make ChatGPT useful, like browsing the web for information and uploading files and images,” OpenAI said. “But for complex reasoning tasks this is a significant advancement and represents a new level of AI capability. Given this, we are resetting the counter back to 1 and naming this series OpenAI o1.”

With o1, OpenAI has employed reinforcement learning techniques that reward the model for accurate problem-solving steps rather than only for correct final answers, an approach intended to refine how the AI works through complex queries. The same chain-of-thought process lets the model break a problem into smaller, manageable steps, improving overall accuracy and reliability.
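The contrast between rewarding only final answers and rewarding intermediate steps can be sketched with a toy example. Everything below is a simplified illustration under assumed names (`outcome_reward`, `process_reward`, a hand-labeled derivation), not OpenAI’s training setup.

```python
def outcome_reward(final_answer, target):
    """Outcome supervision: reward only a correct final answer."""
    return 1.0 if final_answer == target else 0.0

def process_reward(steps, verify):
    """Process supervision: reward the fraction of intermediate
    steps a verifier accepts, giving partial credit for sound
    reasoning even when the final answer is wrong."""
    if not steps:
        return 0.0
    return sum(1.0 for s in steps if verify(s)) / len(steps)

# Toy derivation for "3x + 5 = 20"; each step is (claim, is_valid).
steps = [
    ("3x = 15", True),   # subtracting 5 from both sides is correct
    ("x = 6", False),    # division error; the correct value is 5
]
verify = lambda step: step[1]

print(outcome_reward(6, 5))           # 0.0 — final answer is wrong
print(process_reward(steps, verify))  # 0.5 — one of two steps is valid
```

The process-level signal distinguishes a model that reasoned mostly correctly but slipped at the end from one that was wrong throughout, which is the distinction the step-based reward is meant to capture.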


OpenAI has demonstrated o1’s performance across a range of tests and benchmarks, where the model has shown notable proficiency in mathematics and coding. On a qualifying exam for the International Mathematics Olympiad, o1 scored 83%, a substantial improvement over the 13% achieved by its predecessor, GPT-4o. The gains extend to competitive programming, where o1 reached the 89th percentile of participants.

Alongside o1, OpenAI is releasing a scaled-down version, o1-mini, designed for more cost-effective code generation. Both models are initially available to ChatGPT Plus and Team users, with broader access planned for educational and enterprise users in the near future. Developer access to o1 is priced notably higher than previous models. “We’re releasing OpenAI o1-mini, a cost-efficient reasoning model. o1-mini excels at STEM, especially math and coding—nearly matching the performance of OpenAI o1 on evaluation benchmarks such as AIME and Codeforces. We expect o1-mini will be a faster, cost-effective model for applications that require reasoning without broad world knowledge,” OpenAI noted in a blog post.