Open-source models from AMD GenAI on Hugging Face
State-of-the-art 32B reasoning model fine-tuned on 14K synthetic math samples, outperforming models trained on 5–50× larger datasets
32B reasoning model that reaches 78.33% on AIME25 with only 27K synthetic training samples, surpassing Qwen3-32B
Supervised fine-tuned (SFT) version of Instella-3B, optimized for mathematical reasoning tasks
AMD's first fully open reasoning model trained with long chain-of-thought RL on MI300X GPUs
Instruction-tuned 3B model with a 128K-token context length for long-context tasks
Instruction-tuned version of Instella-3B for chat and instruction following
Supervised fine-tuned version of Instella-3B base model
Fully open 3B language model trained from scratch on MI300X GPUs
Stage 1 pre-training checkpoint of Instella-3B
DPO-aligned version of AMD-OLMo-1B for improved helpfulness and safety
Supervised fine-tuned version of AMD-OLMo-1B
Fully open 1B language model trained on 1.3T tokens using MI250 GPUs
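
All of these checkpoints are distributed through Hugging Face, so they share the standard transformers loading path. Below is a minimal usage sketch, assuming the repositories are hosted under the amd organization and that the instruction-tuned checkpoints ship a chat template; the repo id amd/Instella-3B-Instruct is an illustrative choice, and any of the models above can be substituted.

```python
# Minimal sketch: load one of the AMD GenAI checkpoints from Hugging Face.
# Assumptions: the repo lives under the "amd" org and exposes the standard
# transformers causal-LM interface; the exact repo id may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amd/Instella-3B-Instruct"  # assumed id; swap in any model above

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # bf16 keeps a 3B model on a single GPU
    device_map="auto",
    trust_remote_code=True,       # some checkpoints define custom architectures
)

# Instruction-tuned checkpoints expect their chat template; base and stage-1
# checkpoints instead take plain prompts through tokenizer(...) directly.
messages = [{"role": "user", "content": "Solve: what is 12 * 17?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same pattern covers the reasoning and long-context models; only the repo id, prompt, and max_new_tokens budget change.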