The new 3B "fully open source" model from AMD
AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs). In this blog we explain how the Instella models were trained, and how to access them.
This is another big win for the red team, at least in my view. They developed a "fully open" 3B-parameter model family trained from scratch on AMD Instinct™ MI300X GPUs.
AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) [...]. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B [...].
As shown in this image (https://rocm.blogs.amd.com/_images/scaling_perf_instruct.png), this model family outperforms the other current "fully open" models and comes close to the open-weight-only ones.
A step further for open source. Thank you, AMD.
PS: I'm not doing AMD propaganda, but I do thank them for helping and contributing to the open-source world.