Simple, usage-based pricing
Start free. Scale to millions. You only pay for verified answers.
No credit card required · Upgrade or downgrade anytime
Get started
- ✓ 500 Guard calls/mo
- ✓ 100 RAG queries/mo
- ✓ 50 LLM chat/mo
- ✓ All verification modes
- ✓ Community support
For developers shipping AI
- ✓ 5,000 Guard calls/mo
- ✓ 200 RAG queries/mo
- ✓ 100 LLM chat/mo
- ✓ All verification modes
- ✓ Priority support
- ✓ Full audit trail
For teams in production
- ✓ 20,000 Guard calls/mo
- ✓ 2,000 RAG queries/mo
- ✓ 1,000 LLM chat/mo
- ✓ Premium models included
- ✓ Priority support
- ✓ Full observability
For growing startups
- ✓ 100,000 Guard calls/mo
- ✓ 10,000 RAG queries/mo
- ✓ 5,000 LLM chat/mo
- ✓ Premium models included
- ✓ Dedicated support
- ✓ SLA 99.9%
For large organizations
- ✓ Unlimited everything
- ✓ Dedicated infrastructure
- ✓ Custom SLA
- ✓ Custom integrations
Guard is free to verify
Guard's lexical mode costs nothing. 26% of all queries are answered in under 100ms without any LLM call. You only pay when value is delivered.
Frequently asked questions
What counts as a billable request?
Any POST to /v1/fact-check. Lexical mode responds in under 1ms and is the most efficient. Hybrid and semantic modes use BGE embeddings for deeper verification.
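For illustration, a request body for POST /v1/fact-check might be shaped like this. The field names (`claim`, `source`, `mode`) are assumptions for the sketch, not confirmed API parameters — check the API reference for the actual schema:

```python
import json

# Hypothetical request body for POST /v1/fact-check.
# Field names ("claim", "source", "mode") are illustrative assumptions.
payload = {
    "claim": "The Eiffel Tower is 330 metres tall.",
    "source": "The Eiffel Tower stands 330 metres (1,083 ft) tall.",
    "mode": "lexical",  # or "hybrid" / "semantic" for embedding-based checks
}

body = json.dumps(payload)
print(body)
```

Lexical mode is the cheapest place to start; switch `mode` only when a claim needs deeper semantic verification.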
Does Guard work with any LLM?
Yes. Guard verifies outputs from OpenAI, Claude, Gemini, Llama, Mistral — any model. It checks the output, not the provider.
What happens when I hit my plan limit?
Requests return a 429 status. Upgrade anytime from your RapidAPI dashboard — no downtime, changes take effect immediately.
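A minimal sketch of how a client might handle that 429, assuming simple exponential backoff (the helper name and retry policy are illustrative, not part of the API):

```python
import time

def call_with_backoff(send, max_retries=3, base_delay=1.0):
    """Retry a request when the API answers 429 (plan limit reached).

    `send` is any zero-argument callable returning an HTTP status code.
    This is a sketch with exponential backoff, not an official client.
    """
    for attempt in range(max_retries):
        status = send()
        if status != 429:
            return status
        time.sleep(base_delay * (2 ** attempt))  # back off: 1s, 2s, 4s...
    return 429  # still rate-limited after all retries: time to upgrade

# Simulate an API that is rate-limited twice, then succeeds.
responses = iter([429, 429, 200])
print(call_with_backoff(lambda: next(responses), base_delay=0.01))  # → 200
```

If the 429s persist across retries, no amount of backoff helps — that is the signal to move up a tier.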
Do unused calls roll over?
No. Monthly limits reset each billing cycle. Pick the plan that matches your expected usage.
What's the difference between Guard and RAG?
Guard verifies any LLM output against source text you provide. RAG stores your documents and returns verified answers. Guard is the verification layer; RAG is the full pipeline.
Start catching hallucinations
Free to start. No credit card. Upgrade when you're ready.