Mistral: Mixtral 8x7B Instruct

Description

Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture-of-Experts model from Mistral AI, instruction-tuned by Mistral for chat and instruction-following use. Each layer incorporates 8 experts (feed-forward networks), for a total of 47 billion parameters; a router activates only 2 experts per token, so roughly 13 billion parameters are exercised on any given token. #moe
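
To make the routing concrete, here is a minimal PyTorch sketch of a top-2-of-8 sparse MoE feed-forward layer. The layer sizes and names are illustrative assumptions, not Mixtral's actual dimensions or implementation:

```python
import torch
import torch.nn.functional as F

class SparseMoEFeedForward(torch.nn.Module):
    """Illustrative sparse MoE layer: top-2 routing over 8 expert FFNs.
    Hyperparameters are placeholders, not Mixtral's real dimensions."""

    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = torch.nn.Linear(d_model, n_experts, bias=False)
        self.experts = torch.nn.ModuleList(
            torch.nn.Sequential(
                torch.nn.Linear(d_model, d_ff),
                torch.nn.SiLU(),
                torch.nn.Linear(d_ff, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x):
        # x: (n_tokens, d_model)
        scores = self.router(x)                           # (n_tokens, n_experts)
        weights, chosen = torch.topk(scores, self.top_k)  # top-2 experts per token
        weights = F.softmax(weights, dim=-1)              # renormalize over the chosen 2
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e               # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = SparseMoEFeedForward()
tokens = torch.randn(4, 512)
print(layer(tokens).shape)  # torch.Size([4, 512])
```

Because only 2 of the 8 expert feed-forward networks run per token, inference touches roughly 13 billion of the 47 billion parameters.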

Architecture

Modality:
text->text
Input Modalities:
text
Output Modalities:
text
Tokenizer:
Mistral
Instruction Type:
mistral
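
The "mistral" instruction type refers to the [INST] … [/INST] chat template. Most serving stacks apply it automatically; as a sketch, a hypothetical helper that assembles it by hand might look like this:

```python
def build_mistral_prompt(turns):
    """Assemble a Mistral-style instruct prompt from (user, assistant) turns.
    Leave the final assistant reply as None so the model completes it."""
    prompt = "<s>"
    for user_msg, assistant_msg in turns:
        prompt += f"[INST] {user_msg} [/INST]"
        if assistant_msg is not None:
            prompt += f" {assistant_msg}</s>"
    return prompt

# Single-turn example:
print(build_mistral_prompt([("Summarize Mixtral in one sentence.", None)]))
# -> <s>[INST] Summarize Mixtral in one sentence. [/INST]
```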

Context and Limits

Context Length:
32768 tokens
Max Response Tokens:
16384 tokens
Moderation:
Disabled
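
The two limits interact: prompt plus completion must fit in the 32768-token context, while the completion alone is capped at 16384 tokens. A trivial budget check (the prompt token count would come from the model's tokenizer; this only does the arithmetic):

```python
CONTEXT_LENGTH = 32768       # total tokens the model can attend to
MAX_RESPONSE_TOKENS = 16384  # hard cap on a single completion

def completion_budget(prompt_tokens: int) -> int:
    """Largest max_tokens value a request with this prompt can ask for."""
    remaining = CONTEXT_LENGTH - prompt_tokens
    if remaining <= 0:
        raise ValueError(f"Prompt of {prompt_tokens} tokens exceeds the context window")
    return min(remaining, MAX_RESPONSE_TOKENS)

print(completion_budget(1000))   # 16384 -- capped by the response limit
print(completion_budget(20000))  # 12768 -- capped by the remaining context
```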

Pricing (RUB)

Request:
Image:
Web Search:
Internal Reasoning:
Prompt (per 1K tokens):
Completion (per 1K tokens):
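
The per-1K-token rates were not captured above. Whatever the current RUB prices are, per-request cost follows the usual per-1K formula; the rates in the example below are purely hypothetical placeholders:

```python
def request_cost_rub(prompt_tokens: int, completion_tokens: int,
                     prompt_price_per_1k: float, completion_price_per_1k: float) -> float:
    """Standard per-1K-token billing: prompt and completion are priced separately."""
    return (prompt_tokens / 1000) * prompt_price_per_1k \
         + (completion_tokens / 1000) * completion_price_per_1k

# Hypothetical rates for illustration only -- substitute the real prices:
print(request_cost_rub(2500, 800, prompt_price_per_1k=0.05,
                       completion_price_per_1k=0.10))  # 0.205
```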

Default Parameters

Temperature:
0.3
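
A temperature of 0.3 biases sampling toward the most likely tokens; individual requests can override it. A sketch using the OpenAI Python client against an OpenAI-compatible endpoint (the base URL, API key, and model slug below are placeholders for whichever gateway serves this model):

```python
from openai import OpenAI

# Placeholder endpoint and credentials -- substitute your provider's values.
client = OpenAI(base_url="https://example.com/api/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="mistralai/mixtral-8x7b-instruct",  # assumed slug; check your provider
    messages=[{"role": "user", "content": "Explain sparse MoE in two sentences."}],
    temperature=0.3,  # the default shown above; raise it for more varied output
    max_tokens=512,   # well within the 16384-token response cap
)
print(response.choices[0].message.content)
```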