OpenAI is now bringing its “reasoning” model, o1, to select developers through its application programming interface (API). This gives developers access to one of the company’s most advanced AI models, which is designed to handle complex, multi-step reasoning tasks. The rollout began on December 17 and is initially limited to developers in OpenAI’s “Tier 5” usage category, meaning those who meet certain spending and account-age requirements.

To qualify for Tier 5 access, developers must have spent at least $1,000 with OpenAI and hold an account that is at least 30 days old, counted from their first successful payment. This restriction ensures that only developers with substantial, established usage of the platform can begin working with o1. Given the model’s resource-intensive nature, it carries a high price tag: OpenAI charges $15 for every 750,000 words the model analyzes and $60 for every 750,000 words it generates. These costs are significantly higher than those of other OpenAI models, such as GPT-4o, which are considerably cheaper.
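To make those rates concrete, here is a minimal back-of-the-envelope cost estimator. The function name and the example workload are illustrative; the rate constants come directly from the figures quoted above, though actual billing is metered per token rather than per word.

```python
# Rough cost estimator using the rates quoted above:
# $15 per 750,000 words analyzed (input), $60 per 750,000 words generated (output).
INPUT_RATE_USD = 15.0
OUTPUT_RATE_USD = 60.0
WORDS_PER_UNIT = 750_000

def estimate_cost(words_in: int, words_out: int) -> float:
    """Estimate the USD cost of a workload at the quoted o1 rates."""
    return (words_in / WORDS_PER_UNIT) * INPUT_RATE_USD + \
           (words_out / WORDS_PER_UNIT) * OUTPUT_RATE_USD

# Example: a batch job that feeds o1 1.5 million words and gets back 150,000 words.
# Input: 2 x $15 = $30; output: 0.2 x $60 = $12; total $42.
print(round(estimate_cost(1_500_000, 150_000), 2))  # 42.0
```

At these prices, even a modest batch workload adds up quickly, which helps explain why access starts with the highest-spending tier.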

“OpenAI o1, our reasoning model designed to handle complex multi-step tasks with advanced accuracy, is rolling out to developers on usage tier 5 in the API. o1 is the successor to OpenAI o1-preview, which developers have already used to build agentic applications to streamline customer support, optimize supply chain decisions, and forecast complex financial trends,” OpenAI noted.

One of the key features of the o1 model is a new “reasoning_effort” parameter, which lets developers control how long the model spends thinking through a problem before delivering a response. o1 also supports function calling, so developers can link the model to external APIs and databases, making it easier to integrate into real-world applications. Finally, o1 adds image analysis capabilities, which expand the range of potential use cases: the model can process visual inputs, opening new possibilities in fields like manufacturing, science, and even coding.
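The two developer-facing features above can be sketched as a single request body. This is an illustration, not an official snippet: the `get_order_status` tool and its schema are hypothetical, and the exact accepted values should be checked against OpenAI’s API reference. The sketch stays offline, so no API key is needed.

```python
import json

# Illustrative chat-completions-style request body exercising the two features
# described above: the "reasoning_effort" parameter and function calling.
# The get_order_status tool and its JSON-schema parameters are hypothetical.
request_body = {
    "model": "o1-2024-12-17",
    "reasoning_effort": "high",  # how much thinking time to allow before answering
    "messages": [
        {"role": "user", "content": "Where is order #1234?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_order_status",  # hypothetical hook into an external API
                "description": "Look up an order in the store database.",
                "parameters": {
                    "type": "object",
                    "properties": {"order_id": {"type": "string"}},
                    "required": ["order_id"],
                },
            },
        }
    ],
}

# Serialize the body as it would be sent over the wire.
print(json.dumps(request_body, indent=2))
```

With the official Python SDK, a body like this would map onto a call such as `client.chat.completions.create(**request_body)`, and the model could respond with a tool call naming `get_order_status` instead of a plain text answer.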

Compared to the earlier o1-preview model, the new version of o1, labeled “o1-2024-12-17,” is said to bring improvements in both performance and accuracy. OpenAI reported gains in the model’s ability to handle various tasks, including coding, mathematics, and visual reasoning. For example, on the SWE-bench Verified coding benchmark, o1’s score improved from 41.3 to 48.9, and on the AIME test, which measures mathematical problem-solving, the score jumped from 42 to 79.2. According to OpenAI, developers can therefore rely on o1 for tasks that require precision and careful analysis, such as customer support automation, while the model itself is said to be less prone to “hallucinations.”

In addition to o1, OpenAI has also improved its Realtime API by integrating WebRTC, which lets developers build smoother voice interfaces. “Our WebRTC integration is designed to enable smooth and responsive interactions in real-world conditions, even with variable network quality,” OpenAI wrote in its official statement. “It handles audio encoding, streaming, noise suppression, and congestion control.” OpenAI is also cutting prices for certain services, including a 60% decrease in the cost of GPT-4o audio tokens and a 90% reduction for GPT-4o mini audio tokens.