French AI startup Mistral is releasing a new AI model, Mistral Medium 3, that’s focused on efficiency without compromising performance.
Available in Mistral’s API priced at $0.40 per million input tokens and $2 per million output tokens, Mistral Medium 3 performs “at or above” 90% of Anthropic’s costlier Claude 3.7 Sonnet model on “benchmarks across the board,” Mistral claims. It also surpasses recent open models, including Meta’s Llama 4 Maverick and Cohere’s Command A, on popular AI performance evaluations.
Tokens are the raw bits of data models work with, with a million tokens equivalent to about 750,000 words (roughly 163,000 words longer than “War and Peace”).
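For a rough sense of what that pricing means in practice, here is a minimal back-of-the-envelope sketch in Python. It assumes the per-token prices quoted above and the approximate ratio of one million tokens to 750,000 words; the real ratio varies by tokenizer, language, and content, so treat the numbers as illustrative rather than an official calculator.

```python
# Illustrative cost estimate for Mistral Medium 3 API usage, based on the
# prices quoted in this article. Assumes ~1.33 tokens per word (1,000,000
# tokens ≈ 750,000 words), which is a rough average, not an exact figure.

INPUT_PRICE_PER_M = 0.40   # USD per million input tokens
OUTPUT_PRICE_PER_M = 2.00  # USD per million output tokens
TOKENS_PER_WORD = 1_000_000 / 750_000  # ≈ 1.33, an assumed average

def estimate_cost(input_words: int, output_words: int) -> float:
    """Rough API cost in USD for a given number of input and output words."""
    input_tokens = input_words * TOKENS_PER_WORD
    output_tokens = output_words * TOKENS_PER_WORD
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

# Example: summarizing a 10,000-word report into a 500-word brief
print(f"${estimate_cost(10_000, 500):.4f}")  # ≈ $0.0067
```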
“Mistral Medium 3 can […] be deployed on any cloud, including self-hosted environments of four GPUs and above,” explained Mistral in a blog post sent to TechCrunch. “On pricing, the model beats cost leaders such as DeepSeek v3, both in API and self-deployed systems.”

Mistral, founded in 2023, is a frontier model lab that aims to build a range of AI-powered services, including a chatbot platform, Le Chat, and mobile apps. It’s backed by VCs including General Catalyst and has raised over €1.1 billion (roughly $1.24 billion) to date. Mistral’s customers include BNP Paribas, AXA, and Mirakl.
According to Mistral, Mistral Medium 3 is best for coding and STEM tasks, and excels at multimodal understanding. The company says that clients in financial services, energy, and healthcare have been beta testing the model for use cases like customer service, workflow automation, and analyzing complex data sets.
In addition to Mistral’s API, where enterprise customers can work with Mistral to fine-tune the model, Mistral Medium 3 is available on Amazon’s SageMaker platform starting Wednesday. It’ll soon come to other hosts, including Microsoft’s Azure AI Foundry and Google’s Vertex AI platforms, the company added.
The launch of Mistral Medium 3 follows on the heels of Mistral Small 3.1 in March. In its blog post, the company teased the release of a much larger model in the coming weeks.