Mistral Small 3.1 AI Model With Improved Text and Multimodal Performance Released

The Mistral Small 3.1 artificial intelligence (AI) model was released on Monday. The Paris-based AI firm introduced two open-source variants of the latest model: chat and instruct. The model succeeds Mistral Small 3 and offers improved text performance and multimodal understanding. The company claims that it outperforms comparable models such as Google's Gemma 3 and OpenAI's GPT-4o mini on several benchmarks. One of the key advantages of the newly introduced model is its rapid response time.

Mistral Small 3.1 AI Model Released

In a newsroom post, the AI firm detailed the new models. Mistral Small 3.1 comes with an expanded context window of up to 128,000 tokens and is said to deliver inference speeds of 150 tokens per second, which makes the model's response time quite fast. It arrives in two variants: chat and instruct. The former works as a typical chatbot, whereas the latter is fine-tuned to follow user instructions and is useful when building an application with a specific purpose.

Mistral Small 3.1 benchmark
Photo Credit: Mistral

Similar to its previous releases, Mistral Small 3.1 is openly available. The open weights can be downloaded from the firm's Hugging Face listing. The AI model comes with an Apache 2.0 licence, which permits academic, research, and commercial usage.

Mistral said that the large language model (LLM) is optimised to run on a single Nvidia RTX 4090 GPU or a Mac device with 32GB RAM. This means enthusiasts can download and run it without an expensive AI-focused setup. The model also offers low-latency function calling and function execution, which can be useful for building automation and agentic workflows. The company also allows developers to fine-tune Mistral Small 3.1 to fit the use cases of specialised domains.
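To illustrate what function calling involves, here is a minimal sketch of the two pieces a developer supplies: a tool definition the model can choose to invoke, and local code that executes the call the model returns. The JSON-schema "tools" shape shown is the OpenAI-style format such APIs commonly accept; the tool name, parameters, and stubbed lookup are illustrative, not taken from Mistral's documentation.

```python
# Sketch of a function-calling workflow (tool names are hypothetical).

def weather_tool() -> dict:
    """Declare a tool the model may choose to call."""
    return {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }

def dispatch(tool_call: dict) -> str:
    """Execute a tool call the model returned (stubbed locally)."""
    if tool_call["name"] == "get_weather":
        args = tool_call["arguments"]
        return f"Sunny in {args['city']}"  # stand-in for a real weather lookup
    raise ValueError(f"unknown tool: {tool_call['name']}")

# Handling a tool call the model might emit:
print(dispatch({"name": "get_weather", "arguments": {"city": "Paris"}}))
```

In an agentic loop, the result string would be sent back to the model as a tool message so it can compose its final answer.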

Coming to performance, the AI firm shared various benchmark scores based on internal testing. The Mistral Small 3.1 is said to outperform Gemma 3 and GPT-4o mini on the Graduate-Level Google-Proof Q&A (GPQA) Main and Diamond, HumanEval, MathVista, and the DocVQA benchmarks. However, GPT-4o mini performed better on the Massive Multitask Language Understanding (MMLU) benchmark, and Gemma 3 outperformed it on the MATH benchmark.

Apart from Hugging Face, the new model is also available via the application programming interface (API) on Mistral AI’s developer playground La Plateforme, as well as on Google Cloud’s Vertex AI. It will also be made available on Nvidia’s NIM and Microsoft’s Azure AI Foundry in the coming weeks.
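For developers going the API route, the sketch below shows how a chat request to La Plateforme might be assembled and sent, assuming the OpenAI-style chat-completions schema Mistral's API uses; the `mistral-small-latest` alias is an assumption about which name resolves to the new model, and the network call only runs when an API key is present.

```python
import json
import os
import urllib.request

API_URL = "https://api.mistral.ai/v1/chat/completions"
MODEL = "mistral-small-latest"  # assumed alias for the newest Small release

def build_request(prompt: str, model: str = MODEL) -> dict:
    """Assemble an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

def ask(prompt: str, api_key: str) -> str:
    """Send the prompt to La Plateforme and return the reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    key = os.environ.get("MISTRAL_API_KEY")
    if key:
        print(ask("Summarise Mistral Small 3.1 in one sentence.", key))
```

The same payload shape should work against Vertex AI's OpenAI-compatible endpoints, though the URL and authentication differ per platform.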
