OpenAI, a leading company in the field of artificial intelligence, recently announced that it has launched a new "reasoning" AI model, o1-pro, in its developer API. Compared with the existing o1 model, o1-pro uses more computing resources, with the aim of delivering more consistent, higher-quality responses. OpenAI says the launch of o1-pro answers developers' demand for higher-performing AI models, particularly ones that give more reliable results on complex tasks.
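For developers curious what using the model looks like, the sketch below shows a minimal call with the official openai Python SDK. It is only an illustration: the Responses-style endpoint and the exact model identifier ("o1-pro") are assumptions based on OpenAI's current API conventions, so check the API reference before relying on them.

```python
# Minimal sketch: calling o1-pro through the OpenAI developer API.
# Assumptions: the official `openai` Python SDK is installed (pip install openai),
# OPENAI_API_KEY is set in the environment, and the model is exposed under the
# identifier "o1-pro" via the Responses endpoint -- verify against current docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o1-pro",  # assumed model identifier
    input="Prove that the square root of 2 is irrational.",
)

# The SDK exposes the concatenated text output for convenience.
print(response.output_text)
```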

However, the model is currently available only to certain developers: those who have spent at least $5 on OpenAI API services. More striking is the price. OpenAI charges $150 per million tokens of text fed into the model (billing is per token, with one million tokens equal to roughly 750,000 words) and $600 per million tokens of text the model generates. That makes its input price twice that of GPT-4.5, another of OpenAI's advanced models, and its output price ten times that of the standard o1.
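To put those rates in concrete terms, the back-of-the-envelope calculation below estimates the cost of a single request. Only the per-million-token prices come from OpenAI's published pricing; the token counts are hypothetical figures chosen for illustration.

```python
# Rough cost estimate for one o1-pro request, using the published rates:
# $150 per 1M input tokens, $600 per 1M output tokens.
# The token counts below are hypothetical, chosen only for illustration.
INPUT_PRICE_PER_M = 150.00   # USD per 1,000,000 input tokens
OUTPUT_PRICE_PER_M = 600.00  # USD per 1,000,000 output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at o1-pro's listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 2,000-token prompt that produces a 10,000-token answer
# (reasoning models can generate long internal reasoning, which is
# typically billed as output tokens).
print(f"${request_cost(2_000, 10_000):.2f}")  # -> $6.30
```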
Despite the high price, OpenAI has high hopes for o1-pro and believes its performance will convince developers to pay for it. An OpenAI spokesperson told TechCrunch: “o1-pro in the API is a version of o1 that uses more computing to think harder and provide better answers to the most difficult questions. After receiving many requests from our developer community, we're happy to bring it to the API to offer more reliable responses.”
It is worth noting that o1-pro was first made available to ChatGPT Pro subscribers on OpenAI's chatbot platform, ChatGPT, in December of last year. Early impressions, however, were mixed. Some users reported that the model struggled with Sudoku puzzles and could even be tripped up by simple optical-illusion jokes. In addition, internal benchmarks OpenAI ran at the end of last year showed that o1-pro performed only slightly better than the standard o1 on coding and math problems, though the same benchmarks found it answered those questions more reliably.
OpenAI is betting that spending more compute will improve the performance and reliability of AI models on complex tasks, a bet reflected directly in o1-pro's pricing. Whether developers are willing to pay such a premium for the improvement remains to be seen. The o1-pro model represents a notable step in AI technology, but the balance between its high cost and its actual performance will determine its success in the market.
Key points:
- OpenAI has released a more powerful AI model, o1-pro, aimed at providing better reasoning capabilities.
- o1-pro is extremely expensive: its input price is twice that of GPT-4.5 and its output price ten times that of the standard o1.
- Early user feedback and internal testing showed that o1-pro performs poorly in some areas but answers coding and math problems more reliably.