
Mistral debuts Mistral Small 3.1, a 24B-parameter multimodal and multilingual open-source model it says outperforms Gemma 3 and GPT-4o Mini and runs on 32GB of RAM.

Today we announce Mistral Small 3.1: the best model in its weight class.

Building on Mistral Small 3, this new model comes with improved text performance, multimodal understanding, and an expanded context window of up to 128k tokens. The model outperforms comparable models like Gemma 3 and GPT-4o Mini, while delivering inference speeds of 150 tokens per second.
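The "24B parameters on 32GB RAM" claim is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below (parameter count from the announcement; bytes-per-parameter figures are standard assumptions for common precisions, not stated by Mistral) shows why quantization matters: full bf16 weights alone exceed 32 GB, while 8-bit or 4-bit weights leave headroom for the KV cache and runtime.

```python
PARAMS = 24e9  # Mistral Small 3.1 parameter count, per the announcement

def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Approximate memory to hold just the weights, in GB (1 GB = 1e9 bytes)."""
    return params * bytes_per_param / 1e9

# Typical storage costs per parameter (assumed, not from the announcement):
#   bf16/fp16 -> 2 bytes, int8 quantization -> 1 byte, 4-bit -> 0.5 bytes
for label, bpp in [("bf16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
    print(f"{label}: ~{weight_memory_gb(PARAMS, bpp):.0f} GB")
# bf16: ~48 GB, int8: ~24 GB, 4-bit: ~12 GB
```

So the 32GB figure implies running the model quantized (8-bit or lower); unquantized bf16 weights would need roughly 48 GB before accounting for activations or the 128k-token context's KV cache.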

Mistral Small 3.1 | Mistral AI