gpt-oss: OpenAI releases open-weight models after extensive testing

The models perform at about the level of o3 and o4-mini.
Two agentic reasoning models will fit on high-end consumer hardware, if you have the specs. (Picture: Adobe)
The two models will run on a high-end laptop or a phone, and perform at the level of o4-mini.

Sam Altman says "We believe far more good than bad will come from it," choosing to release the models after a series of delays and worries about releasing the weights.

After "billions of dollars of research" and extensive red-team testing, the models were found to be no more dangerous than o3 and not to move the needle on chemical or biological capabilities.

Two models are released under an Apache 2.0 license, which allows extensive modification and commercialization: gpt-oss-120b, made to run on a performant laptop with 80 GB of RAM, and gpt-oss-20b, made to run on a phone with 16 GB of RAM.

Both are text-only reasoning and agentic models that can use tools such as Python or web search, and both are optimized for use on consumer hardware.

Pretty good performance
Benchmarks place them slightly below o3 and o4-mini on most tests, but the real story is that people can now modify the models themselves, release their own variants, and enjoy the privacy of not routing every query through OpenAI's servers.

"We are quite hopeful that this release will enable new kinds of research and the creation of new kinds of products. We expect a meaningful uptick in the rate of innovation in our field, and for many more people to do important work than were able to before," Sam Altman said at the release.

The models are available for download at Hugging Face.
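For readers who want to try one locally, here is a minimal sketch of loading the smaller model through the Hugging Face transformers library. The repo id openai/gpt-oss-20b, the pipeline settings, and the prompt are assumptions for illustration, not steps taken from OpenAI's launch material.

```python
# Minimal sketch: running the 20B model locally with Hugging Face transformers.
# Assumes the repo id "openai/gpt-oss-20b" and enough free memory (~16 GB)
# to hold the weights; the accelerate package is needed for device_map.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed Hugging Face repo id
    device_map="auto",           # use a GPU if available, otherwise CPU
)

prompt = "Explain in two sentences what an open-weight model is."
result = generator(prompt, max_new_tokens=200)
print(result[0]["generated_text"])
```

The first call downloads the weights from Hugging Face, so expect a sizeable download before the model responds.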

Someone has also put together gpt-oss.com, where you can take the models for a spin before downloading them.

Read more: OpenAI launch post, OpenAI’s risk analysis, Launch thread on x.com, and writeups on Wired and The Verge.