meta-llama/llama-prompt-guard-2-86m
groq
meta-llama/llama-prompt-guard-2-86m — Meta's Llama open-source language model, one of the most widely deployed open models.
- Context window
- 512 tokens
- Input cost
- —
- Output cost
- —
- Latency (p50)
- —
Run side-by-side checks for pricing, context window, and latency.
groq
meta-llama/llama-prompt-guard-2-86m — Meta's Llama open-source language model, one of the most widely deployed open models.