New GPT-4.1 coding family drastically reduces costs

The GPT-4.1 models are smarter than GPT-4o, but are otherwise squarely average. (Picture: OpenAI)
Sam Altman of OpenAI teased this weekend that we were in for a week of big launches, and the chatter centred on an open-source model that would be better than anything else available.

On Monday night, though, the AI company launched GPT-4.1, GPT-4.1 mini and GPT-4.1 nano, which at first glance seem a bit underwhelming.

While you might be getting tired of OpenAI's word-salad lineup, these latest models mostly offer cost efficiency and much larger inputs for coding tasks. They won't be available in the app; instead they are rolled out on the API.
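For developers, that means the models are reached the same way as OpenAI's other API models. Here is a minimal sketch using the official OpenAI Python SDK, assuming the API identifiers follow OpenAI's usual naming (gpt-4.1, gpt-4.1-mini, gpt-4.1-nano) and that an OPENAI_API_KEY environment variable is set:

```python
# Minimal sketch: calling one of the new models through the OpenAI API.
# Assumes the model identifier "gpt-4.1-mini" and that OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)

print(response.choices[0].message.content)
```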

Pretty good on coding benchmarks
The models are, however, tailored especially to the coding crowd, and score slightly lower than Google's and Anthropic's models on the SWE-bench Verified coding benchmark, according to TechCrunch.

OpenAI claims the score of 54.6% on SWE-bench Verified is an improvement of 21.4 percentage points over GPT-4o and 26.6 percentage points over GPT-4.5.

Here’s what OpenAI says about benchmarks on their launch page, though:

— While benchmarks provide valuable insights, we trained these models with a focus on real-world utility. Close collaboration and partnership with the developer community enabled us to optimize these models for the tasks that matter most to their applications.

But the real shine is in the costs and inputs.

Can tackle a lot more data
First, these models get a 1 million token «context window», which works out to roughly 750,000 words of input and output, closing the gap on rivals such as Google's Gemini 2.5 Pro, Claude 3.7 Sonnet and DeepSeek's latest.

This means you can work on really, really large documents or code bases.
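If you want a rough sense of whether your own project fits, you can count tokens locally before sending anything. Here is a small sketch using the tiktoken library, assuming (and this is an assumption, so check OpenAI's docs) that the new models use roughly the same o200k_base tokenizer as GPT-4o; the folder name is hypothetical:

```python
# Rough sketch: estimate how much of a 1M-token context window a code base uses.
# Assumption: the o200k_base tokenizer (used by GPT-4o) is a reasonable proxy.
from pathlib import Path
import tiktoken

CONTEXT_WINDOW = 1_000_000  # tokens, per OpenAI's GPT-4.1 announcement
enc = tiktoken.get_encoding("o200k_base")

total_tokens = 0
for path in Path("my_project").rglob("*.py"):  # hypothetical project folder
    text = path.read_text(encoding="utf-8", errors="ignore")
    total_tokens += len(enc.encode(text))

print(f"{total_tokens:,} tokens "
      f"(~{total_tokens / CONTEXT_WINDOW:.0%} of the 1M-token window)")
```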

— These improvements enable developers to build agents that are considerably better at real-world software engineering tasks, an OpenAI spokesperson tells TechCrunch.

A whole lot cheaper
Secondly, the costs have been drastically reduced: all the way down to $0.10 per million input tokens on GPT-4.1 nano, compared to a whopping $75 per million input tokens on GPT-4.5.

GPT-4.1 itself costs $2 per million input tokens, while GPT-4.1 mini costs $0.40.

This compares to the $1.25 Google charges per million input tokens for Gemini 2.5 Pro.
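To put those prices side by side, here is a small back-of-the-envelope calculation of what a job with, say, 5 million input tokens would cost at the per-million rates quoted above (output tokens are billed separately and not included in this sketch):

```python
# Back-of-the-envelope input-token costs, using the per-million prices quoted above.
# Output tokens are priced separately and ignored here.
PRICE_PER_MILLION_INPUT = {
    "gpt-4.5": 75.00,
    "gpt-4.1": 2.00,
    "gpt-4.1-mini": 0.40,
    "gpt-4.1-nano": 0.10,
    "gemini-2.5-pro": 1.25,
}

input_tokens = 5_000_000  # e.g. a handful of large code bases

for model, price in PRICE_PER_MILLION_INPUT.items():
    cost = input_tokens / 1_000_000 * price
    print(f"{model:>15}: ${cost:,.2f}")
```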

So if you do a whole lot of coding on a whole lot of really big data sets, the GPT-4.1 family might look attractive to you.

A whole week coming
This was just Monday. OpenAI says it will continue all week with «exciting» launches, so even though this launch was targeted at coders rather than consumers and won't be available in the chat interfaces, it will be interesting to see what the rest of the week brings.

OpenAI also said it will discontinue GPT-4.5 in the API, most likely due to the excessive cost of running it.

Read more: OpenAI launch page, TechCrunch, Ars Technica, and r/singularity