// model_detail
legacy open-weight
Mixtral 8x7B
by Mistral
Pioneering open-weight MoE model. 8 experts, 7B each. 32K context.
Context window: 32K tokens
Max output: 8K tokens
Pricing: $0.10 in / $0.30 out per 1M tokens (OpenRouter)
Released: Dec 11, 2023
// specifications
Specs & capabilities
API details
- API model ID: open-mixtral-8x7b
- Internal ID: mixtral-8x7b
- Status: legacy
- Type: open-weight
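A minimal sketch of calling this model through Mistral's chat completions endpoint, using the API model ID above. The payload shape follows Mistral's OpenAI-style chat API; the prompt and the `MISTRAL_API_KEY` environment variable name are illustrative.

```python
import os
import requests

# Minimal chat completion request against Mistral's hosted API.
resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "open-mixtral-8x7b",  # API model ID from the table above
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 128,  # well under the 8K output cap
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```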
Capabilities
- Vision: No
- Function calling: Yes
- Knowledge cutoff: 2023-12
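Since function calling is listed as supported, here is a hedged sketch of a tool-call request. The `tools` schema is the OpenAI-style format Mistral's API accepts; `get_weather` is a hypothetical tool, not a real function.

```python
import os
import requests

# Sketch of a function-calling request; get_weather is hypothetical.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "open-mixtral-8x7b",
        "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
        "tools": tools,
        "tool_choice": "auto",  # let the model decide whether to call the tool
    },
    timeout=30,
)
resp.raise_for_status()
msg = resp.json()["choices"][0]["message"]
# If the model chose the tool, the call arrives in tool_calls.
print(msg.get("tool_calls") or msg["content"])
```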
Pricing
- Input: $0.10 / 1M tokens
- Output: $0.30 / 1M tokens
- Source: OpenRouter pricing
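As a quick sanity check on these rates, a per-request cost estimate (token counts below are made up for illustration):

```python
# Cost estimate at the OpenRouter rates listed above ($ per 1M tokens).
INPUT_RATE, OUTPUT_RATE = 0.10, 0.30

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request at per-million-token rates."""
    return input_tokens / 1e6 * INPUT_RATE + output_tokens / 1e6 * OUTPUT_RATE

# e.g. a request filling the 32K context and the full 8K output:
print(f"${request_cost(32_000, 8_000):.4f}")  # -> $0.0056
```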
// data_confidence
Data notes
⚠️ Pricing is sourced from OpenRouter, not first-party Mistral rates
// notes
Additional notes
Pioneering open-weight sparse mixture-of-experts (MoE) model: each layer routes every token to 2 of its 8 experts.
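A minimal sketch of the top-2 expert routing that defines this architecture, assuming NumPy; the dimensions, gate weights, and "experts" are random stand-ins, not the real model's parameters.

```python
import numpy as np

def moe_layer(x, experts, gate_w, top_k=2):
    """One token through a sparse MoE layer: route to top_k of the experts."""
    logits = x @ gate_w                # gating scores, one per expert
    top = np.argsort(logits)[-top_k:]  # indices of the top-k experts
    weights = np.exp(logits[top])      # softmax over the selected logits only
    weights /= weights.sum()
    # weighted sum of the chosen experts' outputs
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy usage: 8 "experts", each a random linear map (Mixtral's are full FFNs).
rng = np.random.default_rng(0)
dim, num_experts = 16, 8
mats = [rng.normal(size=(dim, dim)) for _ in range(num_experts)]
experts = [lambda x, m=m: x @ m for m in mats]
gate_w = rng.normal(size=(dim, num_experts))
y = moe_layer(rng.normal(size=dim), experts, gate_w)
```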
// more_from_mistral
More Mistral models
Devstral Small 2 active coding
Mistral's latest coding-focused model. Agentic coding capabilities.
Magistral Medium 1.1 active reasoning
Mistral's flagship reasoning model with chain-of-thought capabilities. 128K context.
Magistral Small 1.1 active reasoning
Smaller reasoning model. Open-source. 128K context.
Devstral Medium 1.0 active coding
Medium-size coding model. Balanced capability and cost.
Mistral Small 3.1 active open-weight
Open-weight small model. 24B parameters, 128K context, vision support.
Codestral 25.01 active coding
Mistral's coding specialist. 256K context, strong code generation.
// external_links