modelstop.top

AI Model Catalogue

Browse 287 models across providers, modalities, and use cases.

🌐 All Models

287 models · Page 5 of 8

EleutherAI: Llemma 7B

eleutherai

Llemma 7B is a language model for mathematics. It was initialized with Code Llama 7B weights, and trained on the Proof-Pile-2 for 200B tokens. Llemma models are particularly strong at...

code · cheap
4,096 ctx · $0.80/1M in
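Each card lists the context window ("ctx") and the input price per million tokens ("/1M in"). As a rough sketch of how to read that notation, estimated input cost is simply tokens ÷ 1,000,000 × the listed rate (a helper written for illustration, not part of any site API):

```python
def input_cost_usd(n_tokens: int, price_per_million: float) -> float:
    """Estimated input-side cost: tokens / 1e6 * the card's $/1M rate."""
    return n_tokens / 1_000_000 * price_per_million

# e.g. a full 4,096-token prompt at Llemma 7B's listed $0.80/1M input rate
cost = input_cost_usd(4096, 0.80)  # ≈ $0.0033
```

Output-token pricing, where a provider charges it separately, is not shown on these cards.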

OpenAI: GPT-4.1 Nano

openai

For tasks that demand low latency, GPT‑4.1 nano is the fastest and cheapest model in the GPT-4.1 series. It delivers exceptional performance at a small size with its 1 million...

text · vision · multimodal
1,047,576 ctx · $0.10/1M in

OpenAI: GPT-4.1 Mini

openai

GPT-4.1 Mini is a mid-sized model delivering performance competitive with GPT-4o at substantially lower latency and cost. It retains a 1 million token context window and scores 45.1% on hard...

text · vision · multimodal
1,047,576 ctx · $0.40/1M in

Qwen: Qwen2.5 Coder 7B Instruct

qwen

Qwen2.5-Coder-7B-Instruct is a 7B parameter instruction-tuned language model optimized for code-related tasks such as code generation, reasoning, and bug fixing. Based on the Qwen2.5 architecture, it incorporates enhancements like RoPE,...

code · reasoning · instruct
32,768 ctx · $0.03/1M in

Qwen: Qwen3 235B A22B

qwen

Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and...

text · reasoning · cheap
131,072 ctx · $0.46/1M in

Qwen: Qwen3 32B

qwen

Qwen3-32B is a dense 32.8B parameter causal language model from the Qwen3 series, optimized for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for...

text · reasoning · cheap
40,960 ctx · $0.08/1M in

Qwen: Qwen3 14B

qwen

Qwen3-14B is a dense 14.8B parameter causal language model from the Qwen3 series, designed for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for...

text · reasoning · cheap
40,960 ctx · $0.06/1M in

Qwen: Qwen3 8B

qwen

Qwen3-8B is a dense 8.2B parameter causal language model from the Qwen3 series, designed for both reasoning-heavy tasks and efficient dialogue. It supports seamless switching between "thinking" mode for math,...

text · reasoning · cheap
40,960 ctx · $0.05/1M in
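Several Qwen3 cards above mention seamless switching between a "thinking" mode and a faster dialogue mode. Qwen's published usage describes per-turn soft switches, `/think` and `/no_think`, appended to the user message; a minimal sketch of building such a request (the model id and payload shape are illustrative placeholders, not an exact provider API):

```python
def build_qwen3_request(prompt: str, thinking: bool) -> dict:
    """Append Qwen3's documented soft-switch tag to toggle thinking per turn."""
    switch = "/think" if thinking else "/no_think"
    return {
        "model": "qwen3-8b",  # placeholder model id
        "messages": [{"role": "user", "content": f"{prompt} {switch}"}],
    }

req = build_qwen3_request("Factor 391 into primes.", thinking=True)
```

Offline, the same toggle is exposed as an `enable_thinking` argument to the Hugging Face chat template, per Qwen's model cards.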

Qwen: Qwen3 30B A3B

qwen

Qwen3, the latest generation in the Qwen large language model series, features both dense and mixture-of-experts (MoE) architectures to excel in reasoning, multilingual support, and advanced agent tasks. Its unique...

text · reasoning · agents
40,960 ctx · $0.08/1M in

Meta: Llama Guard 4 12B

meta-llama

Llama Guard 4 is a Llama 4 Scout-derived multimodal pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to classify content in both LLM...

text · vision · multimodal
163,840 ctx · $0.18/1M in

Inception: Mercury Coder

inception

Mercury Coder is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed-optimized models like Claude 3.5 Haiku...

code · cheap · long-context
128,000 ctx · $0.25/1M in

Arcee AI: Coder Large

arcee-ai

Coder‑Large is a 32 B‑parameter offspring of Qwen 2.5‑Instruct that has been further trained on permissively‑licensed GitHub, CodeSearchNet and synthetic bug‑fix corpora. It supports a 32k context window, enabling multi‑file...

text · code · instruct
32,768 ctx · $0.50/1M in

Arcee AI: Virtuoso Large

arcee-ai

Virtuoso‑Large is Arcee's top‑tier general‑purpose LLM at 72 B parameters, tuned to tackle cross‑domain reasoning, creative writing and enterprise QA. Unlike many 70 B peers, it retains the 128 k...

text · reasoning · cheap
131,072 ctx · $0.75/1M in

Arcee AI: Maestro Reasoning

arcee-ai

Maestro Reasoning is Arcee's flagship analysis model: a 32 B‑parameter derivative of Qwen 2.5‑32 B tuned with DPO and chain‑of‑thought RL for step‑by‑step logic. Compared to the earlier 7 B...

text · reasoning · cheap
131,072 ctx · $0.90/1M in

Arcee AI: Spotlight

arcee-ai

Spotlight is a 7‑billion‑parameter vision‑language model derived from Qwen 2.5‑VL and fine‑tuned by Arcee AI for tight image‑text grounding tasks. It offers a 32 k‑token context window, enabling rich multimodal...

text · vision · multimodal
131,072 ctx · $0.18/1M in

Mistral: Mistral Medium 3

mistralai

Mistral Medium 3 is a high-performance enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost. It balances state-of-the-art reasoning and multimodal performance with 8× lower cost...

text · vision · multimodal
131,072 ctx · $0.40/1M in

DeepSeek: R1 0528

deepseek

May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active...

text · reasoning · cheap
163,840 ctx · $0.45/1M in

xAI: Grok 3 Mini

x-ai

A lightweight model that thinks before responding. Fast, smart, and great for logic-based tasks that do not require deep domain knowledge. The raw thinking traces are accessible.

text · reasoning · cheap
131,072 ctx · $0.30/1M in

Google: Gemini 2.5 Flash

google

Gemini 2.5 Flash is Google's state-of-the-art workhorse model, specifically designed for advanced reasoning, coding, mathematics, and scientific tasks. It includes built-in "thinking" capabilities, enabling it to provide responses with greater...

text · vision · multimodal
1,048,576 ctx · $0.30/1M in

MiniMax: MiniMax M1

minimax

MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It leverages a hybrid Mixture-of-Experts (MoE) architecture paired with a custom "lightning attention" mechanism, allowing it...

text · reasoning · cheap
1,000,000 ctx · $0.40/1M in

Mistral: Mistral Small 3.2 24B

mistralai

Mistral-Small-3.2-24B-Instruct-2506 is an updated 24B parameter model from Mistral optimized for instruction following, repetition reduction, and improved function calling. Compared to the 3.1 release, version 3.2 significantly improves accuracy on...

text · vision · multimodal
128,000 ctx · $0.07/1M in

Inception: Mercury

inception

Mercury is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed-optimized models like GPT-4.1 Nano and Claude...

cheap · long-context
128,000 ctx · $0.25/1M in

Baidu: ERNIE 4.5 300B A47B

baidu

ERNIE-4.5-300B-A47B is a 300B parameter Mixture-of-Experts (MoE) language model developed by Baidu as part of the ERNIE 4.5 series. It activates 47B parameters per token and supports text generation in...

text · cheap · long-context
123,000 ctx · $0.28/1M in

Baidu: ERNIE 4.5 VL 424B A47B

baidu

ERNIE-4.5-VL-424B-A47B is a multimodal Mixture-of-Experts (MoE) model from Baidu’s ERNIE 4.5 series, featuring 424B total parameters with 47B active per token. It is trained jointly on text and image data...

text · vision · multimodal
123,000 ctx · $0.42/1M in

Morph: Morph V3 Fast

morph

Morph's fastest apply model for code edits. ~10,500 tokens/sec with 96% accuracy for rapid code transformations. The model requires the prompt to be in the following format: <instruction>{instruction}</instruction> <code>{initial_code}</code> <update>{edit_snippet}</update>...

text · code · cheap
81,920 ctx · $0.80/1M in
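Both Morph cards state that the prompt must carry the instruction, the initial code, and the edit snippet in tagged sections. A minimal sketch of assembling that format (the wrapper function and its example inputs are illustrative; only the tag layout comes from the card):

```python
def build_morph_prompt(instruction: str, initial_code: str, edit_snippet: str) -> str:
    """Wrap the three fields in the tagged sections Morph's apply models expect."""
    return (
        f"<instruction>{instruction}</instruction>\n"
        f"<code>{initial_code}</code>\n"
        f"<update>{edit_snippet}</update>"
    )

prompt = build_morph_prompt(
    "Rename the function to add_two",
    "def add(a, b):\n    return a + b",
    "def add_two(a, b):\n    return a + b",
)
```

The same format applies to Morph V3 Large below; the two models trade speed (~10,500 vs ~4,500 tokens/sec) against apply accuracy.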

Morph: Morph V3 Large

morph

Morph's high-accuracy apply model for complex code edits. ~4,500 tokens/sec with 98% accuracy for precise code transformations. The model requires the prompt to be in the following format: <instruction>{instruction}</instruction> <code>{initial_code}</code>...

text · code · cheap
262,144 ctx · $0.90/1M in

TNG: DeepSeek R1T2 Chimera

tngtech

DeepSeek-TNG-R1T2-Chimera is the second-generation Chimera model from TNG Tech. It is a 671 B-parameter mixture-of-experts text-generation model assembled from DeepSeek-AI’s R1-0528, R1, and V3-0324 checkpoints with an Assembly-of-Experts merge. The...

text · cheap · long-context
163,840 ctx · $0.30/1M in

Tencent: Hunyuan A13B Instruct

tencent

Hunyuan-A13B is a 13B active parameter Mixture-of-Experts (MoE) language model developed by Tencent, with a total parameter count of 80B and support for reasoning via Chain-of-Thought. It offers competitive benchmark...

text · reasoning · instruct
131,072 ctx · $0.14/1M in

Mistral: Devstral Small 1.1

mistralai

Devstral Small 1.1 is a 24B parameter open-weight language model for software engineering agents, developed by Mistral AI in collaboration with All Hands AI. Finetuned from Mistral Small 3.1 and...

text · agents · cheap
131,072 ctx · $0.10/1M in

Mistral: Devstral Medium

mistralai

Devstral Medium is a high-performance code generation and agentic reasoning model developed jointly by Mistral AI and All Hands AI. Positioned as a step up from Devstral Small, it achieves...

text · code · reasoning
131,072 ctx · $0.40/1M in

MoonshotAI: Kimi K2 0711

moonshotai

Kimi K2 Instruct is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. It is optimized for...

text · instruct · cheap
131,072 ctx · $0.57/1M in

Switchpoint Router

switchpoint

Switchpoint AI's router instantly analyzes your request and directs it to the optimal AI from an ever-evolving library. As the world of LLMs advances, our router gets smarter, ensuring you...

text · cheap · long-context
131,072 ctx · $0.85/1M in

Qwen: Qwen3 235B A22B Instruct 2507

qwen

Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized for general-purpose text generation, including instruction following,...

text · multilingual · instruct
262,144 ctx · $0.07/1M in

Google: Gemini 2.5 Flash Lite

google

Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance...

text · vision · image
1,048,576 ctx · $0.10/1M in

ByteDance: UI-TARS 7B

bytedance

UI-TARS-1.5 is a multimodal vision-language agent optimized for GUI-based environments, including desktop interfaces, web browsers, mobile systems, and games. Built by ByteDance, it builds upon the UI-TARS framework with reinforcement...

text · vision · multimodal
128,000 ctx · $0.10/1M in

Z.ai: GLM 4 32B

z-ai

GLM 4 32B is a cost-effective foundation language model. It can efficiently perform complex tasks and has significantly enhanced capabilities in tool use, online search, and code-related intelligent tasks. It...

text · code · agents
128,000 ctx · $0.10/1M in