All Models
1,316 models · Page 28 of 37
Qwen/Qwen2.5-3B-Instruct
Qwen/Qwen2.5-3B-Instruct is a text generation model on Hugging Face with ~8,750,507 monthly downloads. Open access.
meta-llama/Llama-3.1-8B-Instruct
meta-llama/Llama-3.1-8B-Instruct is a text generation model on Hugging Face with ~9,266,275 monthly downloads. (Gated access: requires a Hugging Face login.)
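Gated models such as the one above require an authenticated request to the Hugging Face Hub. A minimal sketch of the authentication header, assuming a user access token (the `"hf_xxx"` value is a placeholder, and no network call is made here):

```python
# Minimal sketch: the Hugging Face Hub accepts a user access token as a
# standard Bearer token on API requests. "hf_xxx" is a placeholder only.

def auth_headers(token: str) -> dict:
    # Requests to gated repos authenticate via this Authorization header.
    return {"Authorization": f"Bearer {token}"}

headers = auth_headers("hf_xxx")  # replace "hf_xxx" with your own token
print(headers["Authorization"])
```

In practice, libraries like `huggingface_hub` manage this token for you after login; the sketch only shows what the credential looks like on the wire.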
Qwen/Qwen2.5-1.5B-Instruct
Qwen/Qwen2.5-1.5B-Instruct is a text generation model on Hugging Face with ~9,915,572 monthly downloads. Open access.
Qwen/Qwen2.5-7B-Instruct
Qwen/Qwen2.5-7B-Instruct is a text generation model on Hugging Face with ~12,936,213 monthly downloads. Open access.
gpt2
Open-source gpt2 model from openai-community, available for download and self-hosting on Hugging Face.
Qwen/Qwen3-0.6B
Qwen/Qwen3-0.6B is a text generation model on Hugging Face with ~15,133,638 monthly downloads. Open access.
Samsung Gauss 2 54B Instruct
Samsung Gauss 2 is Samsung's large language model optimized for on-device and cloud workloads. Trained on multilingual data with a focus on Korean and English, covering general conversation, summarization, and code assistance.
Qwen3 235B A22B
Qwen3 235B A22B is Alibaba's flagship mixture-of-experts model with 235B total parameters and 22B active per token. Delivers frontier-level performance on coding, reasoning, and multilingual tasks at significantly lower inference cost.
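The "lower inference cost" claim follows from the mixture-of-experts design: per-token compute scales roughly with active parameters, not total parameters. A back-of-envelope sketch (a simplification, not an official cost model), using the 235B-total / 22B-active figures quoted above:

```python
# Back-of-envelope sketch: in a mixture-of-experts model, each token is
# routed to a subset of experts, so only the "active" parameters run per
# forward pass. This estimates the fraction of weights touched per token.

def active_fraction(total_params_b: float, active_params_b: float) -> float:
    """Fraction of total parameters that run on each forward token."""
    return active_params_b / total_params_b

ratio = active_fraction(235.0, 22.0)
print(f"{ratio:.1%}")  # under 10% of the weights per token
```

Memory footprint still scales with total parameters (all experts must be resident), which is why MoE models trade VRAM for compute.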
Falcon 180B
Falcon 180B is one of the largest openly available language models, trained on 3.5 trillion tokens with TII's custom RefinedWeb dataset. Excels at reasoning, summarization, and generation tasks at state-of-the-art quality for open models.
Databricks DBRX Instruct
DBRX Instruct is an open, general-purpose LLM from Databricks. Built with a fine-grained mixture-of-experts (MoE) architecture, it was the most capable open LLM at launch and excels at code, math, and language tasks.
NVIDIA Nemotron-4 340B Instruct
NVIDIA Nemotron-4 340B Instruct is a large open language model trained to generate diverse synthetic data for training other LLMs. Strong at following instructions, classification, and generating reward model training data.
Stable Diffusion 3.5 Large
Stable Diffusion 3.5 Large is Stability AI's most capable text-to-image model, delivering photorealistic and creative imagery with excellent prompt adherence and detail. Features multimodal diffusion transformer architecture.
AI21 Jamba 1.6 Mini
AI21 Jamba 1.6 Mini is a lightweight Mamba-Transformer hybrid optimized for cost-effective, high-throughput inference with an impressive 256K context window. An excellent choice for document-heavy workloads on a budget.
AI21 Jamba 1.6 Large
AI21 Jamba 1.6 Large uses a hybrid Mamba-Transformer architecture offering low memory footprint and high throughput compared to equivalent Transformer models. Features 256K context at a fraction of the inference cost.
Microsoft Phi-4 Mini
Microsoft Phi-4 Mini is a 3.8B parameter compact model from Microsoft. Delivers impressive reasoning capabilities for edge and mobile deployment scenarios, with strong performance on math and coding tasks relative to its size.
IBM Granite 3.0 2B Instruct
IBM Granite 3.0 2B Instruct is an ultra-compact enterprise model excelling at summarization, extraction, and classification. The smallest model in the Granite family, suitable for edge deployments and constrained environments.
IBM Granite 3.0 8B Instruct
IBM Granite 3.0 8B Instruct is a lightweight enterprise-grade language model trained on a carefully curated enterprise corpus and optimized for RAG, summarization, classification, and code generation in business contexts.
Amazon Titan Text Express
Amazon Titan Text Express is a generative LLM for summarization, text generation, classification, open-ended Q&A, and information extraction. Optimized for enterprise workloads via AWS Bedrock.
Amazon Nova Pro
Amazon Nova Pro is a highly capable multimodal model with the best combination of accuracy, speed, and cost across a wide range of tasks. Supports text, image, and video inputs.
Amazon Nova Lite
Amazon Nova Lite is a very low-cost multimodal model that can process image, video, and text inputs. Fast and accurate for a wide range of tasks requiring visual and language understanding.
Amazon Nova Micro
Amazon Nova Micro is the fastest and most cost-effective text-only model in the Nova family, optimized for speed and low latency. Ideal for customer service, summarization, and translation at scale.
OpenAI: GPT-4
OpenAI's flagship model, GPT-4 is a large-scale multimodal language model capable of solving difficult problems with greater accuracy than previous models due to its broader general knowledge and advanced reasoning capabilities.
OpenAI: GPT-3.5 Turbo
GPT-3.5 Turbo is OpenAI's fastest model. It can understand and generate natural language or code, and is optimized for chat and traditional completion tasks. Training data: up to Sep 2021.
OpenAI: GPT-4 (older v0314)
GPT-4-0314 is the first version of GPT-4 released, with a context length of 8,192 tokens, and was supported until June 14. Training data: up to Sep 2021.
MythoMax 13B
One of the highest performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay. #merge
ReMM SLERP 13B
A recreation of the original MythoMax-L2-13B, but with updated models. #merge
Mancer: Weaver (alpha)
An attempt to recreate Claude-style verbosity, but don't expect the same level of coherence or memory. Meant for use in roleplay/narrative situations.
OpenAI: GPT-3.5 Turbo 16k
This model offers four times the context length of gpt-3.5-turbo, allowing it to support approximately 20 pages of text in a single request at a higher cost. Training data: up to Sep 2021.
OpenAI: GPT-3.5 Turbo Instruct
This model is a variant of GPT-3.5 Turbo tuned for instructional prompts and omitting chat-related optimizations. Training data: up to Sep 2021.
Mistral: Mistral 7B Instruct v0.1
A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length.
OpenAI: GPT-4 Turbo (older v1106)
An older GPT-4 Turbo snapshot (v1106) with vision capabilities. Vision requests can use JSON mode and function calling. Training data: up to April 2023.
Auto Router
Your prompt will be processed by a meta-model and routed to one of dozens of models (see below), optimizing for the best possible output. To see which model was used,...
Goliath 120B
A large LLM created by combining two fine-tuned Llama 70B models into one 120B model. Combines Xwin and Euryale. Credits to [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge these models.
Mistral: Mixtral 8x7B Instruct
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts, by Mistral AI, for chat and instruction use. Incorporates 8 experts (feed-forward networks) for a total of 47 billion parameters.
OpenAI: GPT-4 Turbo Preview
The preview GPT-4 model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Training data: up to Dec 2023. **Note:** heavily rate limited by OpenAI while in preview.
OpenAI: GPT-3.5 Turbo (older v0613)
GPT-3.5 Turbo v0613 is an older snapshot of OpenAI's fast chat model. It can understand and generate natural language or code, and is optimized for chat and traditional completion tasks. Training data: up to Sep 2021.
