OpenAI: gpt-oss-20b
Description
gpt-oss-20b is an open-weight 21B-parameter model released by OpenAI under the Apache 2.0 license. It uses a Mixture-of-Experts (MoE) architecture with 3.6B active parameters per forward pass, optimized for lower-latency inference and for deployment on consumer or single-GPU hardware. The model is trained on OpenAI’s Harmony response format and supports configurable reasoning levels, fine-tuning, and agentic capabilities including function calling, tool use, and structured outputs.
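A minimal usage sketch, assuming the model is served behind an OpenAI-compatible endpoint (for example a local vLLM or Ollama server). The base URL, API key handling, and model identifier below are placeholder assumptions that depend on your provider, and the "Reasoning: high" system line reflects how the Harmony format is commonly used to steer the reasoning level:

```python
# Sketch: querying gpt-oss-20b through an OpenAI-compatible endpoint.
# base_url, api_key, and the model ID are assumptions; substitute your
# provider's values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local server URL
    api_key="EMPTY",                      # local servers often ignore the key
)

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",           # model ID varies by provider
    messages=[
        # The Harmony chat template maps this line to the reasoning level.
        {"role": "system", "content": "Reasoning: high"},
        {"role": "user", "content": "Explain mixture-of-experts routing in two sentences."},
    ],
    temperature=0,                        # matches the listed default
)
print(response.choices[0].message.content)
```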
Architecture
- Modality: text->text
- Input Modalities: text
- Output Modalities: text
- Tokenizer: GPT
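To make the MoE figures above concrete, here is a generic top-k expert-routing step in NumPy. This is an illustrative sketch of the technique, not gpt-oss-20b's actual implementation; the expert count, k, and dimensions are invented for the example:

```python
# Illustrative top-k mixture-of-experts routing for a single token.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, k = 16, 8, 2          # toy sizes, not the real model's

x = rng.normal(size=(d_model,))                            # token hidden state
router_w = rng.normal(size=(d_model, n_experts))           # router projection
expert_w = rng.normal(size=(n_experts, d_model, d_model))  # one weight per expert

logits = x @ router_w                     # score each expert for this token
top = np.argsort(logits)[-k:]             # indices of the k best experts
gates = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over chosen experts

# Only the selected experts run, so the "active" parameter count per token
# is a small fraction of the total -- the property that lets a 21B-parameter
# model spend only ~3.6B parameters per forward pass.
y = sum(g * (x @ expert_w[e]) for g, e in zip(gates, top))
print(y.shape)  # (16,)
```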
Context and Limits
- Context Length: 131,072 tokens
- Max Response Tokens: 0 tokens
- Moderation: Disabled
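A short sketch of budgeting a prompt against the 131,072-token context window. tiktoken's o200k_base encoding is used here as an approximation of the model's GPT-style tokenizer; the exact encoding shipped with gpt-oss may differ, so treat the counts as estimates:

```python
# Sketch: estimate prompt size against the context window.
import tiktoken

CONTEXT_LENGTH = 131_072

enc = tiktoken.get_encoding("o200k_base")  # approximation of the GPT tokenizer
prompt = "Summarize the following document..."
n_tokens = len(enc.encode(prompt))

# Prompt and completion share the window, so leave headroom for the response.
budget_left = CONTEXT_LENGTH - n_tokens
print(f"prompt uses {n_tokens} tokens; {budget_left} remain for the response")
```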
Pricing (RUB)
- Request: ₽
- Image: ₽
- Web Search: ₽
- Internal Reasoning: ₽
- Prompt (per 1K tokens): ₽
- Completion (per 1K tokens): ₽
Default Parameters
- Temperature: 0
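Since the description advertises function calling, a hedged sketch of a tool-use request with the listed default temperature follows. The tool schema uses the standard OpenAI chat-completions "tools" format, which OpenAI-compatible gpt-oss servers generally accept; the endpoint, model identifier, and the get_weather tool are all assumptions for illustration:

```python
# Sketch: function calling against an assumed OpenAI-compatible server.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # assumed

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",            # hypothetical tool for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",           # model ID varies by provider
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
    temperature=0,                        # the listed default
)

msg = response.choices[0].message
if msg.tool_calls:  # the model may answer directly instead of calling the tool
    call = msg.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
```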
User Comments