// wauldo code · vs code extension

Code with an AI that shows its work.

Every answer from the chat, every inline suggestion, carries a verdict pill and a support score. Click the pill to open a drawer that lists each claim, which source chunk supports it, and where the model drifted. When the shape looks hallucinated — confident absolutes, invented citations, suspiciously precise numbers — the verdict downgrades before the answer ever reaches you.

Free tier 500 requests/month · no credit card · 30-second install

// why it's different

Most coding assistants guess. This one grounds.

01 · VERDICT ON EVERY ANSWER

Every chat response returns a verdict — SAFE (grounded), PARTIAL (some claims unverified), UNVERIFIED (no sources matched), or CONFLICT (contradicts a source). The verdict is a pill you click to open the evidence drawer. No black-box "trust me".
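The four verdicts can be read as a straightforward fold over claim-level results. A minimal sketch of that mapping — the names and the exact precedence are illustrative, not the shipped logic:

```typescript
// Illustrative derivation of the four verdicts from per-claim results.
type ClaimStatus = "supported" | "unverified" | "contradicted";
type Verdict = "SAFE" | "PARTIAL" | "UNVERIFIED" | "CONFLICT";

function verdictFor(claims: ClaimStatus[]): Verdict {
  // Any contradiction wins outright.
  if (claims.some((c) => c === "contradicted")) return "CONFLICT";
  // Everything grounded: safe.
  if (claims.length > 0 && claims.every((c) => c === "supported")) return "SAFE";
  // Mixed support: partial.
  if (claims.some((c) => c === "supported")) return "PARTIAL";
  // Nothing matched a source.
  return "UNVERIFIED";
}
```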

02 · SUPPORT SCORE, NOT VIBES

A numeric support_score from 0 to 1 measures the fraction of claims in the answer that are supported by the retrieved sources. Independent of the model's stated confidence. The same score the API returns is what the extension surfaces — no dumbing down.
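"Fraction of supported claims" is all the arithmetic there is. A sketch, assuming a minimal claim shape:

```typescript
// Illustrative support_score: supported claims over total claims.
// The claim object shape is an assumption, not the API schema.
interface Claim {
  text: string;
  supported: boolean;
}

function supportScore(claims: Claim[]): number {
  if (claims.length === 0) return 0; // no claims: nothing to support
  return claims.filter((c) => c.supported).length / claims.length;
}
```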

03 · SHAPE-AWARE DOWNGRADE

An internal classifier reads the shape of the answer — absolute language, fake citations like [Smith 2024], suspiciously precise statistics — and downgrades the verdict if the shape contradicts the grounding. F1 of 0.97 on a 50-example offline benchmark.
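A toy version of the idea, using pattern checks for the three red flags named above. The real classifier is internal; these regexes are illustrative only:

```typescript
// Toy shape check: flags the patterns the copy describes.
const ABSOLUTES = /\b(always|never|guaranteed|impossible|definitely)\b/i;
const FAKE_CITATION = /\[[A-Z][a-z]+ \d{4}\]/; // e.g. [Smith 2024]
const PRECISE_STAT = /\b\d{2,}\.\d{2,}%/;      // e.g. 97.43%

function shapeLooksHallucinated(answer: string): boolean {
  return (
    ABSOLUTES.test(answer) ||
    FAKE_CITATION.test(answer) ||
    PRECISE_STAT.test(answer)
  );
}
```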

// what you get

Four surfaces, one verification layer.

A · INLINE COMPLETIONS

Ghost-text suggestions as you type. Configurable per-language. Fast path: 15s timeout, streams through the router, cancelled the moment you keep typing. When the completion cites external facts, you see the verdict on the companion chat message.

B · CHAT WITH VERDICT DRAWER

Command palette or sidebar. Every reply renders a verdict pill. Click it: full breakdown of claims (supported / contradicted / unverified) with the source chunk excerpt and a similarity score. The drawer is the same JSON the API returns, rendered natively.
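A plausible shape for that payload, with a helper grouping claims into the three drawer sections. Field names here are assumptions — the real schema is whatever the API returns:

```typescript
// Hypothetical verdict payload the drawer could render.
interface DrawerClaim {
  claim: string;
  status: "supported" | "contradicted" | "unverified";
  source_excerpt: string | null;
  similarity: number | null; // similarity to the matched source chunk
}

interface VerdictPayload {
  verdict: "SAFE" | "PARTIAL" | "UNVERIFIED" | "CONFLICT";
  support_score: number;
  claims: DrawerClaim[];
}

// Group claims by status for the drawer's three sections.
function groupClaims(payload: VerdictPayload): Record<string, DrawerClaim[]> {
  const groups: Record<string, DrawerClaim[]> = {
    supported: [],
    contradicted: [],
    unverified: [],
  };
  for (const c of payload.claims) groups[c.status].push(c);
  return groups;
}
```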

C · MULTI-FILE CONTEXT

Attach open files, the active selection, or a workspace folder as context. The extension streams them to the tenant-scoped RAG index on the backend, then queries with your prompt. Nothing persisted beyond your session unless you explicitly upload.
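An illustrative request body for that flow. The real wire format of the tenant-scoped RAG endpoint is not documented here; the field names are assumptions:

```typescript
// Hypothetical context-attach payload for the RAG-backed query.
interface ContextFile {
  path: string;
  content: string;
}

function buildContextRequest(prompt: string, files: ContextFile[]) {
  return {
    prompt,
    context: files.map((f) => ({ path: f.path, content: f.content })),
    ephemeral: true, // session-only: nothing persisted unless explicitly uploaded
  };
}
```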

D · DUAL-MODE CLIENT

Marketplace install points to the RapidAPI gateway by default (BYO key). Enterprise / self-host users can switch to a direct api.wauldo.com endpoint with a tig_live_* bearer and hit the same backend — same features, same verdicts.
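The mode switch reduces to endpoint selection. A sketch, where `wauldo.apiUrl` and the `tig_live_*` prefix come from the copy; the function name and gateway placeholder are assumptions:

```typescript
// Sketch of dual-mode endpoint selection.
const GATEWAY_URL = "https://<rapidapi-gateway>"; // placeholder, not a real URL

interface ClientConfig {
  apiUrl?: string; // set by enterprise / self-host users (direct mode)
  apiKey: string;  // RapidAPI key, or a tig_live_* bearer in direct mode
}

function resolveEndpoint(cfg: ClientConfig): { baseUrl: string; direct: boolean } {
  // Direct mode when an explicit endpoint is configured; otherwise
  // route through the marketplace-default gateway.
  if (cfg.apiUrl) return { baseUrl: cfg.apiUrl, direct: true };
  return { baseUrl: GATEWAY_URL, direct: false };
}
```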

// 30 seconds

Install, paste a key, ship.

# 1. Install the extension
code --install-extension wauldo.wauldo-code

# 2. Grab a free key — BASIC tier, 500 requests/month
open https://wauldo.com/pricing

# 3. Paste the key in the extension: "Wauldo: Configure API Key"
#    → Done. Open a file. Type. The verdict pill does the rest.

Keys are stored in the OS keychain via VS Code SecretStorage — never written to settings.json.

Full step-by-step tutorial →

// pricing

Same tiers as the API. No surcharge for the extension.

BASIC

$0 / month

500 requests / month. Full verdict pill, full drawer, full shape-downgrade. No feature gating — just a monthly quota. Good for prototypes, weekend projects, evaluation.

PRO

$29 / month

10,000 requests / month. Solo developers shipping real features. Auto-rollover on quota exhaustion with clear in-extension messaging — no silent failures.

ULTRA

$99 / month

100,000 requests / month. Recommended. Small teams, heavy RAG use, CI integrations. Pay-per-request above quota at $0.008 / request (MEGA overage plan).

Full pricing table → · Enterprise / self-host off-platform: contact@wauldo.com

// faq

VS Code users ask.

How is this different from Copilot, Cursor, or Continue?

All three produce answers. None report how grounded the answer is. Wauldo's extension surfaces the same support score and claim-level drawer the API returns, so you can tell a hedged, well-grounded suggestion apart from a confidently hallucinated one — without running the code first. For everything else — autocomplete speed, IDE integration, language coverage — it's competitive, not differentiated.

Can I self-host the backend?

Yes. The extension is dual-mode: point wauldo.apiUrl to your own api.wauldo.com equivalent and supply a tig_live_* bearer. The backend is a Rust workspace — deployable on Fly.io, Render, or bare metal. Enterprise licensing and deployment support off-platform.

What data leaves my machine?

By default: your prompt, any files you attach or select, and metadata (language, request ID). Nothing else is read from your workspace. API keys live in VS Code SecretStorage (OS keychain), not in settings. Telemetry is off by default — first launch shows a modal asking explicit consent before any anonymous usage event is sent.

Does the verdict add latency?

Streaming chat: no perceived latency — tokens arrive as usual, the verdict pill fills in once the response is complete. Inline completions: the source-grounded check runs in parallel with the generation, so the suggestion appears as fast as any other assistant. The shape-downgrade classifier is sub-millisecond after warmup.
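The concurrency claim can be sketched with `Promise.all`: generation and the grounding check run side by side, and the pill fills in once both settle. Function names are illustrative:

```typescript
// Sketch: run generation and verification concurrently.
async function answerWithVerdict(
  generate: () => Promise<string>,
  verify: () => Promise<string>, // resolves to a verdict label
): Promise<{ answer: string; verdict: string }> {
  // Neither call waits on the other; total latency is max, not sum.
  const [answer, verdict] = await Promise.all([generate(), verify()]);
  return { answer, verdict };
}
```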

What happens when the key is invalid or the API is down?

The extension fails loudly, not silently. Invalid key: clear error with a link to paste a new one. API down: the status pill turns yellow and the chat shows a retry banner. No cached answers served with stale verdicts — a wrong score is worse than no score.
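The two failure paths above amount to a small mapping from response status to surfaced UI state. A sketch — the state names and status handling are assumptions:

```typescript
// Illustrative "fail loudly" mapping from HTTP status to UI state.
type UiState = "invalid_key" | "api_down" | "ok";

function uiStateFor(status: number | null): UiState {
  if (status === 401 || status === 403) return "invalid_key"; // clear error + re-paste link
  if (status === null || status >= 500) return "api_down";    // yellow pill + retry banner
  return "ok";
}
```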

Install it. Read the verdict. Trust less, ship more.

Free tier. 500 requests/month. No credit card. 30-second install.

$ code --install-extension wauldo.wauldo-code