Free & Open
837 models · Page 8 of 24
zai-org/GLM-4.7
allenai/Olmo-3.1-32B-Instruct
google/gemini-2.5-flash
Bria/expand
nvidia/Nemotron-3-Nano-30B-A3B
NVIDIA Nemotron 3 Nano 30B A3B is a small Mixture-of-Experts (MoE) language model offering high compute efficiency and accuracy for developers building specialized agentic AI systems. The model is fully...
meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
Qwen/Qwen3-Embedding-4B-batch
meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo
ClarityAI/creative
NousResearch/Hermes-3-Llama-3.1-405B
zai-org/GLM-5.1
google/gemini-1.5-flash-8b
mistralai/Mixtral-8x7B-Instruct-v0.1
Mixtral 8×7B Instruct on DeepInfra – popular MoE model with 32K context and strong multilingual performance.
ClarityAI/crystal
nvidia/NVIDIA-Nemotron-Nano-9B-v2
Qwen/Qwen3.5-35B-A3B
nvidia/NVIDIA-Nemotron-Nano-12B-v2-VL
mistralai/Mistral-Small-3.2-24B-Instruct-2506
embed-multilingual-light-v3.0-image
black-forest-labs/FLUX-pro
Qwen/Qwen3-Embedding-0.6B
PaddlePaddle/PaddleOCR-VL-0.9B
Qwen/Qwen3-Embedding-0.6B-batch
google/gemma-3-27b-it
Gemma 3 introduces multimodality, supporting vision-language input and text output. It handles context windows up to 128K tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities...
Sao10K/L3-8B-Lunaris-v1-Turbo
Bria/fibo_edit
embed-english-light-v3.0
Qwen/Qwen2.5-VL-32B-Instruct
thenlper/gte-large
Wan-AI/Wan2.6-Image-Edit
BAAI/bge-m3-multi
black-forest-labs/FLUX-2-klein-4b
embed-multilingual-light-v3.0
black-forest-labs/FLUX-1-dev
openai/gpt-oss-120b
gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized...
