OpenAI
https://openai.com
AI research lab focused on developing safe and beneficial artificial general intelligence.
Models by OpenAI (21)
GPT-4o
128K ctx
OpenAI's flagship multimodal model with advanced reasoning, vision, and audio capabilities. Fast and versatile for most tasks.
GPT-4o Mini
128K ctx
Affordable and intelligent small model for fast, lightweight tasks. Best cost-efficiency in its class.
o1
200K ctx
OpenAI's reasoning model designed to solve hard problems across science, coding, and math using chain-of-thought reasoning.
o3-mini
200K ctx
Fast, cost-efficient reasoning model excelling at STEM tasks. Offers adjustable reasoning effort levels.
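The adjustable effort levels are exposed in the OpenAI Python SDK as a reasoning_effort parameter on chat completions. A minimal sketch, assuming the openai package is installed and an API key is configured; the request parameters are built first so their shape can be inspected without a network call:

```python
import os

# Request parameters for an o3-mini call. "reasoning_effort" accepts
# "low", "medium", or "high", trading latency and cost for reasoning depth.
params = {
    "model": "o3-mini",
    "reasoning_effort": "high",
    "messages": [
        {"role": "user", "content": "Prove that the sum of two even integers is even."}
    ],
}

# Only issue the request when credentials are actually configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(**params)
    print(response.choices[0].message.content)
```

Lower effort settings return faster and cost less; "high" is the usual choice for the hard math and coding problems this entry describes.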
DALL-E 3
4K ctx
OpenAI's latest image generation model with improved prompt following and photorealistic outputs. Integrated with ChatGPT.
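DALL-E 3 is reachable through the SDK's image generation endpoint. A hedged sketch, assuming the openai package; the prompt and size values are illustrative, and the request is only sent when an API key is present:

```python
import os

# Image generation request for DALL-E 3; the size shown is one of the
# square/landscape/portrait options the API accepts for this model.
params = {
    "model": "dall-e-3",
    "prompt": "A photorealistic red fox in fresh snow at dawn",
    "size": "1024x1024",
    "n": 1,
}

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    result = client.images.generate(**params)
    print(result.data[0].url)  # URL of the generated image
```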
Whisper
OpenAI's automatic speech recognition model supporting 99 languages. Robust to accents, background noise, and technical language.
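The hosted Whisper model is available via the SDK's audio transcription endpoint as whisper-1. A minimal sketch, assuming the openai package and a local audio file (the path is illustrative); nothing runs against the network unless credentials and the file exist:

```python
import os

# Transcription request for the hosted Whisper model. The "language"
# hint is optional; Whisper auto-detects among its supported languages.
audio_path = "meeting.mp3"  # illustrative path, not part of the source
params = {"model": "whisper-1", "language": "en"}

if os.environ.get("OPENAI_API_KEY") and os.path.exists(audio_path):
    from openai import OpenAI

    client = OpenAI()
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(file=audio_file, **params)
    print(transcript.text)
```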
Sora
OpenAI's text-to-video model capable of generating realistic videos up to one minute long. Currently in limited release.
GPT-4.1
1M ctx
OpenAI's latest flagship model with improved coding, instruction following, and long-context understanding. Excels at complex multi-step tasks with a 1M token context window.
GPT-4.1 mini
1M ctx
A smaller, faster, and more affordable version of GPT-4.1. Ideal for tasks requiring quick responses while maintaining strong performance.
GPT-4.1 nano
1M ctx
The most efficient GPT-4.1 variant, optimized for high-volume, low-latency applications. Best for simple tasks and real-time applications.
o3
200K ctx
OpenAI's most powerful reasoning model. Uses extended thinking time to solve complex problems in math, science, and coding. Achieves expert-level performance on technical benchmarks.
o4-mini
200K ctx
A cost-effective reasoning model that balances strong logical capabilities with faster response times. Great for everyday reasoning tasks.
GPT-4 Turbo
128K ctx
An optimized version of GPT-4 with vision capabilities and improved performance. Supports both text and image inputs with a 128K context window.
GPT-4
8K ctx
OpenAI's original GPT-4 model. A highly capable large language model for complex tasks requiring advanced reasoning and broad knowledge.
GPT-3.5 Turbo
16K ctx
A fast and cost-effective model suitable for many everyday tasks. Good balance of capability and affordability for simpler use cases.
o1-mini
128K ctx
A smaller, faster reasoning model optimized for coding and STEM tasks. Offers strong logical capabilities at a lower cost than o1.
o1-pro
200K ctx
The enhanced version of o1 with more compute for complex reasoning. Best for the most challenging problems requiring deep analysis.
GPT-5
1M ctx
OpenAI's most advanced language model to date. Features unprecedented reasoning, creativity, and multimodal understanding. Represents a major leap in AI capabilities across all domains.
GPT-5.1
1M ctx
The latest iteration of GPT-5 with improved instruction following, reduced hallucinations, and enhanced safety. Offers the best balance of capability and reliability for production use.
Codex
4K ctx
OpenAI Codex is an AI system that translates natural language to code. Codex powers GitHub Copilot and is a descendant of GPT-3, trained on both natural language and billions of lines of code from publicly available sources, including GitHub repositories.
GPT-5.2
GPT-5.2 is OpenAI's flagship model series for 2025, with major gains in reasoning, coding, and mathematics. It ships in three variants: Instant (optimized for speed), Thinking (step-by-step reasoning), and Pro (maximum capability). Reported benchmark results include 100% on AIME 2025 and 55.6% on SWE-Bench Pro. The model excels at professional knowledge work, including complex spreadsheets, presentations, and business documents, and demonstrates 30% fewer hallucinations than GPT-5.1. It also introduces improved agentic capabilities for executing multi-step tasks with high reliability, with enhanced tool calling, superior front-end code generation, and better long-context reasoning.