01 Quick start
Three steps. Sixty seconds. No code.
Create an agent
Go to + New agent. Give it a name (e.g. support-bot) and instructions ("answer strictly from these docs, refuse off-topic").
Drop your docs
Drag in PDF, Markdown, or text files. They get indexed in your private tenant. Your agent will only answer from these.
Ask questions
Type a question. Get back: a grounded answer, a support_score, claims with citations, and a verdict. If the docs don't cover it, the agent says so.
02 How it works under the hood
Three layers run on every question.
Retrieval
Your question is embedded and matched against your indexed documents (BM25 + dense vector hybrid). The top relevant chunks are pulled into the LLM's context.
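A toy sketch of what "BM25 + dense vector hybrid" means in practice: fuse two score lists over the same candidate chunks and keep the top k. The 50/50 weighting and min-max normalisation here are illustrative assumptions, not Studio's actual fusion formula.

```python
# Hybrid score fusion over chunks already scored by both retrievers.
def minmax(scores):
    lo, hi = min(scores), max(scores)
    return [0.0 if hi == lo else (s - lo) / (hi - lo) for s in scores]

def hybrid_rank(chunks, bm25_scores, dense_scores, k=3, w=0.5):
    # Normalise each retriever's scores, then take a weighted sum.
    fused = [
        w * b + (1 - w) * d
        for b, d in zip(minmax(bm25_scores), minmax(dense_scores))
    ]
    ranked = sorted(zip(chunks, fused), key=lambda pair: -pair[1])
    return [chunk for chunk, _ in ranked[:k]]

chunks = ["refund policy", "api limits", "pricing table", "sso setup"]
top = hybrid_rank(chunks, [2.1, 0.3, 1.4, 0.2], [0.9, 0.2, 0.4, 0.8], k=2)
# → ["refund policy", "pricing table"]
```

A chunk that scores well on either lexical overlap or semantic similarity can make the cut, which is the point of running both retrievers.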
Grounded answer
The LLM is told to answer only from the retrieved chunks. If the chunks don't support an answer, it says "I don't have that information in the provided context."
Fact-checking
The answer is split into atomic claims. Each claim is verified against the source chunks. The result is a verdict:
- SAFE — every claim is supported by your documents.
- PARTIAL — some claims are supported, others aren't.
- UNVERIFIED — no source context was retrieved (likely off-topic or missing docs).
- CONFLICT — a claim contradicts your documents.
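The four verdicts above follow a simple precedence, which can be sketched as a minimal decision function. The real fact-checker's rules are more involved; treat this as an illustration of the taxonomy, not the production logic.

```python
# Minimal sketch of deriving a verdict from per-claim results.
def verdict(claims, retrieved_context):
    if not retrieved_context:
        return "UNVERIFIED"  # nothing was retrieved to check against
    if any(c["status"] == "contradicted" for c in claims):
        return "CONFLICT"    # a claim clashes with the docs
    if all(c["status"] == "supported" for c in claims):
        return "SAFE"
    return "PARTIAL"         # some supported, some not

mixed = [{"status": "supported"}, {"status": "unsupported"}]
```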
03 Agent examples
Six patterns we see working in the wild. Copy any setup as a starting point.
Customer support bot
Answers FAQs, billing, refunds, plans — strictly from your help center. Refuses to invent policies.
New-hire onboarding agent
Answers HR / IT / culture questions for new joiners from your handbook + setup guides.
Pricing & product Q&A
Pre-sales agent that answers prospects from your pricing page, comparison docs, security PDFs.
Compliance & legal helper
Answers strictly from policy / regulation docs. Hallucinations here = liability.
Runbook / SOP agent
On-call helper. Answers "how do I rotate the X key?" / "what's the rollback playbook?" from your runbooks.
Personal research agent
Project-scoped agent on top of papers, transcripts, notes. Lets you ask "where did I read X?" with citations.
04 What your agent can & can't do
Studio agents are RAG-grounded by design. No external tools, no web access — just your docs and a verified answer.
| Can do | Can't do (yet) |
|---|---|
| Answer from your docs with grounded citations | Browse the web in real time |
| Refuse off-topic questions when given the right system prompt | Run code or call external APIs |
| Quote source paragraphs verbatim with file + section | Update its own knowledge base — re-upload to refresh |
| Fact-check itself claim by claim, with a verdict | Send emails, post to Slack, hit your DB |
| Detect contradictions between your own docs | Remember past conversations across runs (each run is independent) |
| Refuse to invent when the docs don't cover a question | Ingest images / video — text only for now |
05 Limits
| Resource | Trial | Pro · $29/mo |
|---|---|---|
| Agents | 2 | 5 |
| Runs | 10 per rolling 24 h | 500 per rolling 30 d |
| Document upload | 10 MB total | 100 MB total |
| Per-file size | 10 MB | 10 MB |
| Verification (SAFE / PARTIAL / UNVERIFIED / CONFLICT) | ✓ included | ✓ included |
| Source citations on every answer | ✓ included | ✓ included |
| Stats & verdict distribution dashboard | ✓ after 5 runs | ✓ included |
| Email support | — | contact@wauldo.com |
Beyond Pro, the same engine is exposed via the Wauldo Agents API with a 30-tool runtime, parallel workflows, and per-tenant rate limits.
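Creating an agent through that API might look like the sketch below. Only the `/v1/agents` path appears in these docs; the host, auth header, and payload field names are assumptions for illustration.

```python
# Hedged sketch of an Agents API request body. Field names and the
# endpoint host are assumptions; only /v1/agents comes from the docs.
import json

payload = {
    "name": "support-bot",
    "instructions": "Answer strictly from the indexed help-center docs.",
    "tools": ["rag_retrieve", "slack_webhook"],  # tool whitelist
}

body = json.dumps(payload)

# With an HTTP client you would then POST it, e.g.:
# requests.post("https://api.wauldo.com/v1/agents", data=body,
#               headers={"Authorization": "Bearer <API_KEY>"})
```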
06 Tips for great agents
Write the system prompt like a job description
Be explicit about scope, tone, refusals. Bad: "answer questions". Good: "answer billing questions strictly from billing.pdf. Quote section numbers. Refuse anything outside billing — route to a human."
Curate your docs hard
Three clean, current PDFs beat thirty stale ones. Outdated docs = wrong answers (with high confidence). Re-upload after every policy change.
Test with adversarial questions
After deploying, run 5–10 prompts you know the answer to, then 2–3 you know are not covered. Verify the agent refuses cleanly on the second batch — that's where trust is earned.
Read the verdict, not just the answer
SAFE 0.95 + 5/5 claims supported = trust the answer. PARTIAL 0.6 + 2/5 supported = read the answer carefully. CONFLICT = your docs disagree with each other; fix them, not the agent.
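Those rules of thumb can be written down as a small triage function. The thresholds are the examples above, not official cutoffs, and the return labels are illustrative.

```python
# Sketch of turning (verdict, support_score) into a handling decision,
# following the rules of thumb above. Thresholds are illustrative.
def handling(verdict, support_score):
    if verdict == "SAFE" and support_score >= 0.9:
        return "trust"
    if verdict == "CONFLICT":
        return "fix the docs"         # your docs disagree with each other
    if verdict in ("PARTIAL", "SAFE"):
        return "read carefully"
    return "escalate to a human"      # UNVERIFIED: docs don't cover it
```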
Use one agent per use case
Don't pile customer-support docs and engineering runbooks into the same agent. The retrieval gets fuzzy. Two focused agents beat one overloaded one.
07 Tools (Agents API)
Studio agents are RAG-only. The full Wauldo Agents API gives your agent a 30-tool runtime: web search, SQL, Slack, email, GitHub, scheduler, file system. Ship it via /v1/agents when chat-with-docs isn't enough.
Tool categories: Compute & data · File & storage · Research & web · Communication & automation · Integrations.
Dashed pills = env-gated (need an API key). The MCP bridge auto-registers any external server's tools (filesystem, memory, brave, …) via MCP_CONFIG_PATH.
Example agents that combine tools
Each row below is a real agent shape — a system prompt + a tool whitelist. The Agents API runs the workflow, retries on failure, and verifies every claim.
Reads your industry RSS feeds every morning, enriches each headline with a 1-line web search summary, posts the digest to a Slack channel.
Tools: rss_feed · web_search · slack_webhook · scheduler
"Find recent papers on X, cross-reference with my notes, email me a summary with citations."
Tools: arxiv · wikipedia · rag_retrieve · email_smtp
Watches a repo's open PRs, queries your CI metrics DB for risk flags, posts a "needs review" digest to engineering Slack.
Tools: github · sql_query · slack_webhook · scheduler
"Weekly recap: top 5 plans, MRR delta, churn warnings — emailed to founders Monday 9am."
Tools: sql_query · analytics_fetcher · pricing_manager · email_smtp · scheduler
Indexes your policy docs, monitors Slack for risky language, alerts compliance team with the policy section that was potentially crossed.
Tools: rag_retrieve · slack_webhook · scheduler
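One of these agent shapes, written out as a config sketch: a system prompt plus a tool whitelist. The field names and the cron-style schedule field are assumptions; the tool ids come from the examples above.

```python
# Hedged sketch of the "morning RSS digest" agent shape as a config.
morning_digest = {
    "system_prompt": (
        "Every morning, read the configured RSS feeds, add a one-line "
        "web-search summary per headline, and post the digest to Slack."
    ),
    "tools": ["rss_feed", "web_search", "slack_webhook", "scheduler"],
    "schedule": "0 7 * * *",  # hypothetical cron field: daily at 07:00
}
```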
08 FAQ
Are my documents private?
Yes. Each Studio account is a tenant. Your docs are indexed in a tenant-scoped knowledge base — no other Studio user can retrieve them. Documents are not used to train any model.
What models does Studio use?
Studio routes through the Wauldo backend, which uses Qwen / Claude / GPT depending on the question complexity (auto-selected for cost-quality balance). All inference is server-side; the user-facing experience is consistent.
What file types are supported?
PDF, Markdown (.md), plain text (.txt). Up to 10 MB per file. PDFs are extracted to text — formatting (tables, images) may be lost.
Why did my answer come back UNVERIFIED?
Either no source context was retrieved (the docs don't cover that question), or the docs aren't indexed yet (rare — usually a transient backend issue). Try rephrasing closer to the language used in your docs.
Why did my answer come back CONFLICT?
The fact-checker found a claim that contradicts what's in your retrieved chunks. Often this means your docs disagree with each other (a v1 and v2 of the same policy both indexed). Audit the cited sources and clean up.
Can I delete an agent or my account?
Delete an agent: hit "Delete" on the agents page. Delete your whole account / data: email contact@wauldo.com — full GDPR export & delete on request.