All Models
1,316 models · Page 10 of 37
Qwen/Qwen3.5-397B-A17B
mistralai/Mixtral-8x7B-Instruct-v0.1
Mixtral 8×7B Instruct on DeepInfra – popular MoE model with 32K context and strong multilingual performance.
embed-english-light-v3.0-image
thenlper/gte-base
thenlper/gte-large
black-forest-labs/FLUX-pro
black-forest-labs/FLUX-2-klein-4b
Qwen/Qwen2.5-VL-32B-Instruct
embed-english-v3.0
State-of-the-art English text embedding model for semantic search, clustering, and classification.
embed-english-v3.0-image
Qwen/Qwen3-Embedding-0.6B-batch
google/gemma-3-27b-it
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities...
intfloat/e5-base-v2
Qwen/Qwen3-VL-30B-A3B-Instruct
Qwen3-VL-30B-A3B-Instruct is a multimodal model that unifies strong text generation with visual understanding for images and videos. Its Instruct variant optimizes instruction-following for general multimodal tasks. It excels in perception...
Wan-AI/Wan2.6-T2I
nvidia/NVIDIA-Nemotron-3-Super-120B-A12B
Bria/expand
meta-llama/Meta-Llama-3.1-8B-Instruct
Meta Llama 3.1 8B Instruct on DeepInfra – fast, affordable open-source model with 128K context.
stepfun-ai/Step-3.5-Flash
Qwen/Qwen3-235B-A22B-Thinking-2507
Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass and natively supports up to 262,144...
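The sparse-activation figures quoted for the MoE entries above (22B of 235B parameters per forward pass) can be turned into a quick efficiency estimate. A minimal sketch; the helper function is illustrative, and only the 22B/235B numbers come from the listing:

```python
def active_fraction(active_params_b: float, total_params_b: float) -> float:
    """Fraction of total parameters an MoE model activates per forward pass."""
    return active_params_b / total_params_b

# Qwen3-235B-A22B: 22B of 235B parameters active per token
frac = active_fraction(22, 235)
print(f"{frac:.1%}")  # roughly 9.4% of the weights are touched per forward pass
```

This is why an MoE model with hundreds of billions of parameters can serve tokens at a cost closer to that of a much smaller dense model: per-token compute scales with the active parameters, not the total.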
