OpenAI has launched its most powerful GPT model to date, GPT-4.5, now available as a research preview to Pro users and developers worldwide.
GPT-4.5 builds on its predecessor by further scaling up both pre-training and post-training. By expanding its unsupervised learning capabilities, it is better at recognizing patterns and understanding relationships, and it generates more creatively.
Early tests indicate that GPT-4.5 offers a significantly more natural and fluent communication experience. With a broader knowledge base, more precise understanding of user intent, and higher "emotional intelligence" (EQ), it can more effectively assist with writing, programming, and solving real-life problems.
Breakthrough in scaling unsupervised learning
OpenAI enhances AI capabilities by expanding two complementary training paradigms: unsupervised learning and reasoning ability. These two paradigms form the two dimensions of intelligence.
Scaling reasoning ability teaches models to "think" before answering questions and to form complete reasoning chains, enabling AI to solve complex science, technology, engineering, and mathematics (STEM) and logic problems. OpenAI's o1 and o3-mini models are representatives of this paradigm.
GPT-4.5 represents the paradigm of scaling unsupervised learning. By significantly increasing computational resources and data scale, combined with innovations in architecture and optimization methods, GPT-4.5 achieves higher accuracy and a deeper intuitive understanding of the world.
GPT-4.5 was trained on Microsoft Azure AI supercomputers, resulting in a model with broader knowledge reserves and deeper world comprehension, effectively reducing "hallucinations" and demonstrating higher reliability across various use cases.
Deeper, broader knowledge reserves
SimpleQA is an evaluation method used to measure the knowledge accuracy of large language models (LLMs). It assesses a model's grasp of real-world information through a series of seemingly simple yet challenging knowledge questions.
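To make the evaluation idea concrete, here is a toy sketch of a SimpleQA-style accuracy check: compare a model's short answers against gold answers and report the fraction correct. The naive case-insensitive exact-match grading below is an assumption for illustration; the real benchmark uses more careful answer grading.

```python
def simpleqa_accuracy(predictions: list[str], gold: list[str]) -> float:
    """Fraction of questions answered correctly.

    Grading here is naive case-insensitive exact match; the actual
    SimpleQA benchmark grades answers more carefully.
    """
    correct = sum(
        p.strip().lower() == g.strip().lower()
        for p, g in zip(predictions, gold)
    )
    return correct / len(gold)

# One correct answer ("Paris") and one wrong year gives 50% accuracy.
print(simpleqa_accuracy(["Paris", "1969"], ["paris", "1968"]))  # → 0.5
```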

The newly released GPT-4.5 performed exceptionally well on the SimpleQA test, demonstrating deeper and broader knowledge. It provides more accurate and truthful answers across many domains, significantly reducing the factual errors ("hallucinations") common in previous models and giving users more reliable, trustworthy responses.

Enhancing "emotional intelligence"
As language models continue to grow in scale, the problems they can solve become increasingly complex, making it crucial for them to better understand human needs and intentions.

To address this, OpenAI developed new, scalable training methods for GPT-4.5 that use data derived from smaller models to help train larger, more powerful ones. This approach significantly improves GPT-4.5's steerability, grasp of nuance, and conversational naturalness.

By combining deep world understanding with stronger collaborative abilities, GPT-4.5 can more naturally and warmly integrate ideas into interactions with humans, closely resembling authentic human communication scenarios. Specifically, GPT-4.5:
- understands more accurately what humans truly mean, picking up subtle hints and latent expectations, with higher "emotional intelligence";
- possesses stronger aesthetic intuition and creativity, excelling particularly at writing and design.
Performance in standard academic benchmark testing
GPT-4.5 was also tested on conventional academic benchmarks, which are typically used to evaluate a model's ability to solve traditional reasoning problems.
Although GPT-4.5 only scales up unsupervised learning without specifically reinforcing reasoning abilities, its performance across various tasks significantly surpasses that of the previous GPT-4o model.

GPT-4.5 practical application cases
GPT-4.5 not only has broader and deeper knowledge but also knows how to interact appropriately with users: when to provide more information and when to guide users to further engage in conversation.
🌱 Accompanying you through tough times
When users fall into emotional lows or face setbacks, GPT-4.5 can sensitively detect subtle emotional signals, express concern and empathy at the right time, patiently listen and gently guide users to open up topics, helping them relieve stress and overcome difficulties.

🖼️ Easily identifying works of art
When users provide an unknown painting image or description, GPT-4.5 can quickly identify the style, genre, artist, and creation period of the artwork, providing detailed information while knowing when to ask about the user's interests, creating a rich and engaging artistic exchange experience.

🚀 Exploring the vast universe
Whether users ask about cosmic mysteries, astronomical knowledge, or the progress of specific space missions, GPT-4.5 can precisely understand what they really want to know and provide detailed, authoritative information. At the same time, it can judge the user's conversational intent, deciding whether to explore space exploration in greater depth or deliver rich information directly, so users efficiently get what they need.

Safety
Each enhancement in AI capability is an opportunity to strengthen model safety.
The training of GPT-4.5 introduced new supervision techniques, combined with the traditional supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) methods that were also used to train GPT-4o.
Comprehensive safety stress test results show that this capability expansion significantly improves the overall performance and reliability of the model.

How to use GPT-4.5?
🌐 Using in ChatGPT
ChatGPT Pro users can select GPT-4.5 from the model picker on web, mobile, and desktop starting today. It will roll out to ChatGPT Plus and Team users next week, and to Enterprise and Edu users the week after. GPT-4.5 supports search for up-to-date information, file and image uploads, and the canvas feature for efficient writing and coding collaboration. It does not currently support multimodal features such as Voice Mode, video, or screen sharing.
🛠️ Using via API
It is also available in preview to developers on all paid usage tiers in the following APIs:
- Chat Completions API
- Assistants API
- Batch API
GPT-4.5 supports function calling, structured outputs, streaming, system messages, and vision capabilities for image inputs.
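As a minimal sketch of calling GPT-4.5 through the Chat Completions API with a system message: the model identifier `"gpt-4.5-preview"` below is an assumption, so check the official model list for the exact name available to your account.

```python
import os

def build_request(user_prompt: str,
                  system_prompt: str = "You are a helpful assistant.") -> dict:
    """Assemble a Chat Completions request body with a system message."""
    return {
        "model": "gpt-4.5-preview",  # assumed identifier; verify in the model list
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "stream": False,  # set True to receive tokens incrementally
    }

# Only send the request if an API key is configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        **build_request("Draft a friendly welcome email.")
    )
    print(resp.choices[0].message.content)
```

The same request body works for streaming by setting `"stream": True` and iterating over the returned chunks.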
Based on early tests, GPT-4.5 is especially suitable for applications requiring higher emotional intelligence and creativity, such as:
- Writing assistance
- Communication
- Learning guidance
- Psychological counseling
- Creative brainstorming
Additionally, it performs outstandingly in multi-step coding workflows and complex task automation in AI Agent scenarios.
Model cost and future planning
GPT-4.5 is a highly compute-intensive model, so it costs more than GPT-4o, and it is not a replacement for GPT-4o. OpenAI will evaluate whether to keep offering GPT-4.5 in the API long term, balancing support for current capabilities against the development of future models.
