OpenAI priced GPT-5.5 at $5 per million input tokens and $30 per million output tokens in today's API rollout, a premium over GPT-5.4 for builders running production workloads. GPT-5.5 Pro runs $30 input and $180 output per million tokens.
The sticker shock is real, but OpenAI’s efficiency argument is specific: the model completes the same Codex tasks in fewer tokens than GPT-5.4, which compresses the actual bill in agentic pipelines. On SWE-Bench Pro, it resolves real GitHub issues end-to-end at 58.6% accuracy, and it scores 82.7% on Terminal-Bench 2.0’s complex command-line workflows requiring multi-step planning.
Dan Shipper, Founder and CEO of Every, called it “the first coding model I’ve used that has serious conceptual clarity.” Pietro Schirano, CEO of MagicPath, watched it resolve a branch merge with hundreds of frontend and refactor changes in one shot in about 20 minutes. Michael Truell, co-founder of Cursor, said it stays on task for long-running work where GPT-5.4 used to stop early.
GPT-5.5 is rolling out to ChatGPT Plus, Pro, Business, and Enterprise today; it’s also now live in the API.
Batch pricing cuts the standard rate in half; priority processing runs 2.5x it. The context window is 1M tokens.
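A small helper makes the tier arithmetic concrete. The rates are the GPT-5.5 standard figures above; the token counts in the usage lines are purely illustrative, not measured workloads:

```python
# GPT-5.5 standard API rates from the rollout, in dollars per million tokens.
INPUT_PER_MTOK = 5.0
OUTPUT_PER_MTOK = 30.0

# Tier multipliers applied to the standard rate.
BATCH = 0.5      # batch pricing: half the standard rate
PRIORITY = 2.5   # priority processing: 2.5x the standard rate

def request_cost_usd(input_tokens: int, output_tokens: int, tier: float = 1.0) -> float:
    """Dollar cost of one request at the given tier multiplier."""
    return tier * (input_tokens / 1e6 * INPUT_PER_MTOK
                   + output_tokens / 1e6 * OUTPUT_PER_MTOK)

# Illustrative: an agentic run with 200k input tokens and 50k output tokens.
standard = request_cost_usd(200_000, 50_000)             # $2.50
batched = request_cost_usd(200_000, 50_000, tier=BATCH)  # $1.25
```

The same helper prices GPT-5.5 Pro by swapping in its $30/$180 rates.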
Here’s the practical math: if GPT-5.4 output costs you $3,000/month today, expect a higher bill for GPT-5.5 at the same token volume; any savings come from the efficiency gains. Benchmark it on your specific task mix before migrating production traffic, so you know whether the token reduction actually closes that gap.
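The break-even logic above reduces to a price ratio: GPT-5.5 holds your output bill flat only if it finishes the same work in proportionally fewer tokens. A sketch, where the GPT-5.4 output rate is a placeholder to replace with your actual contracted price (the article does not state GPT-5.4's rates):

```python
GPT55_OUTPUT_PER_MTOK = 30.0  # from the rollout

def break_even_token_fraction(old_rate_per_mtok: float,
                              new_rate_per_mtok: float = GPT55_OUTPUT_PER_MTOK) -> float:
    """Fraction of the old model's output tokens the new model must use
    for cost parity: old_rate * old_tokens == new_rate * new_tokens."""
    return old_rate_per_mtok / new_rate_per_mtok

# Hypothetical GPT-5.4 output rate -- substitute your real one.
old_rate = 20.0
frac = break_even_token_fraction(old_rate)
# If frac is ~0.67, GPT-5.5 must complete the same tasks in roughly
# two-thirds the output tokens just to hold the monthly bill steady.
```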
Nathan Zakhary