All Models
79 models · Page 3 of 3
Qwen3 235B A22B
Qwen3 235B A22B is Alibaba's flagship mixture-of-experts model, with 235B total parameters and 22B activated per token. It delivers frontier-level performance on coding, reasoning, and multilingual tasks at a significantly lower inference cost than a comparable dense model.
Falcon 180B
Falcon 180B is one of the largest openly available language models, trained by TII on 3.5 trillion tokens from its RefinedWeb dataset. It excels at reasoning, summarization, and generation tasks, delivering state-of-the-art quality among open models.
Databricks DBRX Instruct
DBRX Instruct is an open, general-purpose LLM from Databricks. Built on a fine-grained mixture-of-experts (MoE) architecture, it was among the most capable open LLMs at launch and excels at code, math, and language tasks.
Stable Diffusion 3.5 Large
Stable Diffusion 3.5 Large is Stability AI's most capable text-to-image model, delivering photorealistic and creative imagery with strong prompt adherence and fine detail. It is built on a Multimodal Diffusion Transformer (MMDiT) architecture.
Microsoft Phi-4 Mini
Microsoft Phi-4 Mini is a compact 3.8B-parameter model that delivers impressive reasoning for edge and mobile deployment scenarios, with strong performance on math and coding tasks relative to its size.
IBM Granite 3.0 2B Instruct
IBM Granite 3.0 2B Instruct is an ultra-compact enterprise model excelling at summarization, extraction, and classification. As the smallest model in the Granite family, it is well suited to edge deployments and resource-constrained environments.
IBM Granite 3.0 8B Instruct
IBM Granite 3.0 8B Instruct is a lightweight enterprise-grade language model trained on a carefully curated enterprise corpus and optimized for RAG, summarization, classification, and code generation in business contexts.
