An open 30B MoE model from NVIDIA with 3B activated parameters that delivers strong reasoning and agentic capabilities.
30.7K Pulls 3 Tags Updated 1 week ago
MiniMax's M2-series model for coding, agentic workflows, and professional productivity.
34K Pulls 1 Tag Updated 1 week ago
Qwen 3.5 is a family of open-source multimodal models that delivers exceptional utility and performance.
3.7M Pulls 30 Tags Updated 3 weeks ago
LFM2 is a family of hybrid models designed for on-device deployment. LFM2-24B-A2B is the largest model in the family, scaling the architecture to 24 billion parameters while keeping inference efficient.
999.1K Pulls 6 Tags Updated 1 month ago
Qwen3-Coder-Next is a coding-focused language model from Alibaba's Qwen team, optimized for agentic coding workflows and local development.
920.8K Pulls 4 Tags Updated 1 month ago
LFM2.5 is a new family of hybrid models designed for on-device deployment.
1M Pulls 5 Tags Updated 2 months ago
As the strongest model in the 30B class, GLM-4.7-Flash offers a new option for lightweight deployment that balances performance and efficiency.
912K Pulls 4 Tags Updated 2 months ago
A new collection of open translation models built on Gemma 3, helping people communicate across 55 languages.
822.8K Pulls 13 Tags Updated 2 months ago
The most powerful vision-language model in the Qwen model family to date.
2.6M Pulls 59 Tags Updated 4 months ago
The Ministral 3 family is designed for edge deployment, capable of running on a wide range of hardware.
761K Pulls 16 Tags Updated 3 months ago
A 24B model that excels at using tools to explore codebases, edit multiple files, and power software engineering agents.
696.9K Pulls 6 Tags Updated 3 months ago
Granite 4 features improved instruction following (IF) and tool-calling capabilities, making it more effective in enterprise applications.
987.3K Pulls 17 Tags Updated 4 months ago
MiniMax-M2.5 is a state-of-the-art large language model designed for real-world productivity and coding tasks.
146.7K Pulls 1 Tag Updated 1 month ago
NVIDIA Nemotron 3 Super is a 120B open MoE model activating just 12B parameters to deliver maximum compute efficiency and accuracy for complex multi-agent applications.
106.2K Pulls 7 Tags Updated 2 weeks ago
The first installment in the Qwen3-Next series, with strong parameter efficiency and inference speed.
446.5K Pulls 10 Tags Updated 3 months ago
A strong reasoning and agentic model from Z.ai with 744B total parameters (40B active), built for complex systems engineering and long-horizon tasks.
140.5K Pulls 1 Tag Updated 1 month ago
Kimi K2.5 is an open-source, native multimodal agentic model that seamlessly integrates vision and language understanding with advanced agentic capabilities, supporting both instant and thinking modes as well as conversational and agentic paradigms.
192.2K Pulls 1 Tag Updated 2 months ago
Rnj-1 is a family of 8B parameter open-weight, dense models trained from scratch by Essential AI, optimized for code and STEM with capabilities on par with SOTA open-weight models.
398.5K Pulls 6 Tags Updated 3 months ago
GLM-OCR is a multimodal OCR model for complex document understanding, built on the GLM-V encoder–decoder architecture.
161.1K Pulls 3 Tags Updated 1 month ago
Nemotron-3-Nano is a new standard for efficient, open, and intelligent agentic models, now updated with a 4B-parameter model.
301.7K Pulls 9 Tags Updated 1 week ago