💬 Text Generation
850 models · Page 6 of 24
Kling 1.6 Standard
Rime Labs Arcana v3 Turbo
Deepgram Nova 3 Multilingual
nim/meta/llama-3.1-8b-instruct
nim/meta/llama-3.1-8b-instruct – Meta's Llama open-source language model, one of the most widely deployed open models.
Google Veo 3.0
Cogito V1 Preview Llama 8B
Cogito V1 Preview Llama 8B – Meta's Llama open-source language model, one of the most widely deployed open models.
DeepSeek R1 Distill Qwen 1.5B
Qwen2.5 32B
Qwen2.5 32B – Alibaba's Qwen series language model with strong multilingual and coding capabilities.
MiniMax M2.5 FP4
Qwen 2 (1.5B)
Qwen 2 (1.5B) – Alibaba's Qwen series language model with strong multilingual and coding capabilities.
Deepgram Flux
Qwen3 235B A22B Thinking 2507 FP8
Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass and natively supports up to 262,144...
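Models of this size are usually consumed through a hosted, OpenAI-compatible chat completions endpoint rather than run locally. The snippet below is a minimal sketch of such a call; the base URL, API key placeholder, and exact model ID are illustrative assumptions, not details taken from this listing.

```python
# Minimal sketch: querying an OpenAI-compatible endpoint that serves
# Qwen3-235B-A22B-Thinking-2507. The base_url, API key, and exact model ID
# are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # hypothetical provider endpoint
    api_key="YOUR_API_KEY",                      # placeholder credential
)

response = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B-Thinking-2507",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Walk through a proof that sqrt(2) is irrational."}
    ],
    max_tokens=2048,  # reasoning-oriented models tend to emit long traces
)
print(response.choices[0].message.content)
```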
Deepgram Aura 2
Mixtral-8x7B Instruct v0.1
Deepgram Nova 3
Kling 1.6 Pro
Cogito V1 Preview Llama 70B Turbo
Sora 2
Vidu Q3 Turbo
GLM 5 FP4
intfloat/e5-large-v2
google/gemma-3-4b-it
Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities,...
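As a sketch of what vision-language input can look like with this checkpoint, the snippet below uses the Hugging Face transformers `image-text-to-text` pipeline; the image URL, prompt, and generation settings are illustrative assumptions.

```python
# Minimal sketch, assuming a recent transformers release with Gemma 3 support.
# The image URL and prompt are placeholders for illustration.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="google/gemma-3-4b-it")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/cat.png"},  # placeholder image
            {"type": "text", "text": "Describe this picture in one sentence."},
        ],
    }
]

# The pipeline returns the conversation with the model's reply appended;
# printing the raw result keeps this sketch independent of the exact output shape.
out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"])
```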
sentence-transformers/multi-qa-mpnet-base-dot-v1
Qwen/Qwen3-32B
Qwen3-32B is a dense 32.8B parameter causal language model from the Qwen3 series, optimized for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for...
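The thinking/non-thinking switch is exposed at the chat-template level for Qwen3 checkpoints. The sketch below follows that documented pattern with transformers; the prompt and generation length are illustrative assumptions.

```python
# Minimal sketch of toggling Qwen3's "thinking" mode via the chat template.
# Prompt and max_new_tokens are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-32B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many prime numbers are below 30?"}]

# enable_thinking=True makes the model emit an explicit reasoning trace before
# its answer; set it to False for plain, lower-latency dialogue.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```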
nvidia/Llama-3.3-Nemotron-Super-49B-v1.5
anthropic/claude-4-sonnet
black-forest-labs/FLUX-2-klein-9b
meta-llama/Meta-Llama-3.1-70B-Instruct
Meta Llama 3.1 70B Instruct on DeepInfra – a powerful open-source model for complex reasoning tasks.
PrunaAI/p-image
Qwen/Qwen3-235B-A22B-Instruct-2507
deepseek-ai/DeepSeek-R1-Distill-Llama-70B
Qwen/Qwen3-30B-A3B
Qwen3, the latest generation in the Qwen large language model series, features both dense and mixture-of-experts (MoE) architectures to excel in reasoning, multilingual support, and advanced agent tasks. Its unique...
