OpenAI goes open-weight for the first time since GPT-2
Product Announcement: OpenAI released gpt-oss-120b and gpt-oss-20b, its first open-weight models since GPT-2 (2019), under the Apache 2.0 license. The 120B MoE model matches o4-mini on reasoning benchmarks while fitting on a single H100; the 20B runs on consumer hardware with 16 GB of RAM.
gpt-oss-120b (117B parameters) and gpt-oss-20b (21B parameters) are mixture-of-experts models released under the Apache 2.0 license. Both support instruction following, tool use, and reasoning — capabilities previously locked behind OpenAI's API.
gpt-oss-120b achieves near-parity with o4-mini on core reasoning benchmarks using 4-bit quantization (MXFP4), enabling deployment on a single 80 GB H100 GPU. The 20B model runs on consumer hardware with just 16 GB of memory.
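The memory claims follow from simple arithmetic. As a back-of-envelope sketch (an illustration, not an official sizing guide): MXFP4 stores 4-bit weight values with a shared 8-bit scale per 32-weight block, roughly 4.25 bits per weight, so weights alone fit comfortably within the stated budgets. Real serving needs extra headroom for activations and the KV cache, which this estimate deliberately ignores.

```python
def quantized_weight_gib(n_params: float, bits_per_weight: float = 4.25) -> float:
    """Approximate weight-only memory footprint in GiB.

    bits_per_weight=4.25 assumes MXFP4-style blocks: 4-bit elements
    plus one 8-bit scale shared across 32 weights (8/32 = 0.25 extra
    bits per weight). Activation and KV-cache memory are excluded.
    """
    total_bits = n_params * bits_per_weight
    return total_bits / 8 / 2**30  # bits -> bytes -> GiB


# gpt-oss-120b: 117B parameters, quantized weights under an 80 GiB H100
print(f"120b: {quantized_weight_gib(117e9):.1f} GiB")
# gpt-oss-20b: 21B parameters, quantized weights under a 16 GiB budget
print(f"20b:  {quantized_weight_gib(21e9):.1f} GiB")
```

At full 16-bit precision the same 117B parameters would need well over 200 GiB, which is why the quantized release is the single-GPU story.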
The models were trained with reinforcement learning and techniques informed by OpenAI's most advanced internal models, including o3 and other frontier systems. This marks a new approach to open release: distilling frontier capabilities into deployable open-weight models.