TNG: DeepSeek R1T Chimera
Description
DeepSeek-R1T-Chimera was created by merging DeepSeek-R1 and DeepSeek-V3 (0324), combining the reasoning capabilities of R1 with the token-efficiency improvements of V3. It is based on the DeepSeek-MoE Transformer architecture and is optimized for general text-generation tasks. The model merges pretrained weights from both source models to balance performance across reasoning, efficiency, and instruction-following tasks. It is released under the MIT license and is intended for research and commercial use.
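The weight merge described above can be illustrated with a deliberately simplistic sketch: per-tensor linear interpolation between two checkpoints with the same layout. This is an illustration only, not TNG's actual merge procedure; the tensor names, toy values, and the single interpolation weight `alpha` are all assumptions.

```python
def merge_checkpoints(weights_a, weights_b, alpha=0.5):
    """Linearly interpolate two checkpoints with matching tensor names.

    weights_a / weights_b map tensor names to lists of floats.
    alpha = 1.0 returns model A unchanged; alpha = 0.0 returns model B.
    """
    assert weights_a.keys() == weights_b.keys(), "checkpoints must share a layout"
    return {
        name: [alpha * a + (1.0 - alpha) * b
               for a, b in zip(weights_a[name], weights_b[name])]
        for name in weights_a
    }

# Toy "checkpoints" standing in for R1 and V3 (0324) tensors (hypothetical values).
r1 = {"layer.0.weight": [1.0, 2.0], "layer.0.bias": [0.0, 0.0]}
v3 = {"layer.0.weight": [3.0, 4.0], "layer.0.bias": [2.0, 2.0]}

chimera = merge_checkpoints(r1, v3, alpha=0.5)
print(chimera["layer.0.weight"])  # [2.0, 3.0]
```

A real merge operates on billions of parameters per expert and, per TNG's description, balances the two parents rather than blending them uniformly; the sketch only shows the basic mechanics of combining pretrained weights.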
Architecture
- Modality: text -> text
- Input Modalities: text
- Output Modalities: text
- Tokenizer: DeepSeek
Context and Limits
- Context Length: 163,840 tokens
- Max Response Tokens: 163,840 tokens
- Moderation: disabled
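Because the 163,840-token context window must hold both the prompt and the response, a client typically caps its `max_tokens` request at whatever the prompt leaves over. A minimal sketch of that bookkeeping (the helper name is ours, not part of any SDK):

```python
CONTEXT_LENGTH = 163_840       # model context window, in tokens
MAX_RESPONSE_TOKENS = 163_840  # upper bound on a single response

def completion_budget(prompt_tokens: int) -> int:
    """Largest max_tokens value that still fits in the context window."""
    if prompt_tokens >= CONTEXT_LENGTH:
        raise ValueError("prompt alone exceeds the context window")
    return min(MAX_RESPONSE_TOKENS, CONTEXT_LENGTH - prompt_tokens)

print(completion_budget(4_000))  # 159840
```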
Pricing (RUB)
- Request: ₽
- Image: ₽
- Web Search: ₽
- Internal Reasoning: ₽
- Prompt (1K tokens): ₽
- Completion (1K tokens): ₽
Default Parameters
- Temperature: 0
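With a default temperature of 0, decoding is effectively greedy unless the caller overrides it. A minimal request payload for an OpenAI-compatible chat endpoint might look like the following; the model id and the exact field set are assumptions, so check your provider's documentation:

```python
payload = {
    "model": "tngtech/deepseek-r1t-chimera",  # assumed id; verify with your provider
    "messages": [
        {"role": "user", "content": "Summarize MoE routing in two sentences."}
    ],
    "temperature": 0,   # the listed default: deterministic, greedy-style decoding
    "max_tokens": 1024, # must fit within the 163,840-token context window
}
print(payload["temperature"])  # 0
```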
User Comments