Paris-based startup Mistral AI, which recently claimed a $2 billion valuation, has released Mixtral, an open large language model (LLM) that it says outperforms OpenAI's GPT-3.5 on several benchmarks while being far more efficient.
Mistral drew a substantial Series A investment from Andreessen Horowitz (a16z), a venture capital firm known for its strategic bets on transformative technology sectors, especially AI. Tech giants including Nvidia and Salesforce also participated in the funding round.
“Mistral is at the center of a small but passionate developer community growing up around open source AI,” Andreessen Horowitz said in its funding announcement. “Community fine-tuned models now routinely dominate open source leaderboards (and even beat closed source models on some tasks).”
Mixtral uses a technique called sparse mixture of experts (MoE), which Mistral says makes the model more powerful and efficient than its predecessor, Mistral 7B, and even than larger competing models.
A mixture of experts is a machine learning technique in which developers train or assemble multiple specialized "expert" sub-models to solve complex problems. Each expert model is trained on a particular subset of the data or type of task, and a gating network routes each input to the handful of experts best suited to it, so only a fraction of the model's parameters are used for any given token.
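To make the idea concrete, here is a minimal PyTorch sketch of a sparse MoE layer with top-2 routing, the general pattern Mixtral is described as using. The class name, expert structure, and all sizes below are illustrative assumptions for demonstration, not Mistral's actual implementation.

```python
# Illustrative sparse mixture-of-experts layer with top-2 routing.
# All names and dimensions are hypothetical, chosen for clarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Each "expert" is a small feed-forward network.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])
        # The gating network scores every expert for each token.
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, dim). The gate picks the top-k experts per token,
        # so only a fraction of the layer's parameters run for any one token.
        scores = self.gate(x)                                # (tokens, experts)
        weights, indices = scores.topk(self.top_k, dim=-1)   # keep the best k experts
        weights = F.softmax(weights, dim=-1)                 # normalize their weights

        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e                 # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Example: route 4 token embeddings of width 16 through 8 experts, 2 per token.
layer = SparseMoELayer(dim=16)
tokens = torch.randn(4, 16)
print(layer(tokens).shape)  # torch.Size([4, 16])
```

The efficiency claim comes from this routing step: although the layer holds the parameters of all eight experts, each token only activates two of them, so compute per token stays close to that of a much smaller dense model.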
Author: Jose Antonio Lanz