A microsite of wot.technology · Grounded in the graph · EU AI Act Article 86 by construction

wot.if

What could be — with receipts.
An LLM paired with a graph it cannot escape. Every claim carries a because-line back to the signed thoughts it drew on. If the line is missing, the answer is flagged — and you can see exactly where.


Conversational AI that cites its sources. Every answer traceable. Every claim auditable.

You are already using AI for real work -- summaries, briefs, drafts, recommendations. The problem is not the AI. The problem is that when someone asks "how did you get there?" the answer is "the model said so." That is not good enough for regulated industries. It is not good enough for board papers. And under the EU AI Act Article 86 right to explanation, it may not be legal.

If your AI cannot explain how it reached its answer, you are liable. Not eventually. Now. The Act is in force, the fines scale to revenue, and "we use a third-party model" is not a defence.

wot.if pairs the LLM with a graph it cannot escape. The model reads from a content-addressed thought graph. Every claim it generates is required to cite its sources by because-line. The output is a signed thought that includes the because-lines back to the source data the LLM drew on. If the lines are missing, the answer is flagged -- and the user knows exactly where the gap is. The maths handles the trust. You get the creative reasoning of AI without the hallucination risk.
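The because-line requirement can be sketched as a simple graph check. This is an illustrative sketch only, not the actual wot.if implementation: the field names (`claims`, `because`, `signed_by`) and the SHA-256 content addressing are assumptions standing in for the real ThoughtSchema.

```python
# Minimal sketch of the because-line check. Field names and hashing
# scheme are illustrative assumptions, not the real ThoughtSchema.
import hashlib
import json

def content_address(record: dict) -> str:
    """Derive a content address by hashing the canonical JSON form."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def check_thought(thought: dict, graph: dict) -> list[str]:
    """Return flags; empty means every claim carries a because-line
    that resolves to a known thought in the graph."""
    flags = []
    for claim in thought.get("claims", []):
        because = claim.get("because", [])
        if not because:
            flags.append(f"claim {claim['id']!r}: no because-line")
            continue
        for ref in because:
            if ref not in graph:
                flags.append(f"claim {claim['id']!r}: dangling ref {ref!r}")
    return flags

# A tiny graph: one source thought, addressed by its content hash.
source = {"claims": [{"id": "src-1", "text": "Q3 revenue was flat."}]}
graph = {content_address(source): source}

answer = {
    "claims": [
        {"id": "c1", "text": "Revenue held steady.",
         "because": [content_address(source)]},   # grounded
        {"id": "c2", "text": "Growth will resume."},  # no receipt
    ]
}

print(check_thought(answer, graph))  # flags c2: missing because-line
```

The point the sketch makes: a flagged answer is not rejected silently; the flag names the exact claim and the exact missing link, which is where "the user knows exactly where the gap is" comes from.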

Valkyries dispatch Ravens -- the agent model. Voice in, conversation through, action out, receipts the whole way. The explanation is not a post-hoc reconstruction. The explanation IS the output.


Built for anyone using AI for work that has consequences.

If you're using AI for real work — summaries, briefs, drafts, recommendations — and need to defend every claim to whoever reads it, this is built for you. wot.if cites every source; the brief comes with because-lines, not hand-waving.

For regulated and compliance-facing businesses, where EU AI Act Article 86 right-to-explanation is an audit risk on the balance sheet, the explanation IS the output.

For operators dispatching AI agents — voice in, Valkyrie out, Ravens doing the signed work — every action carries a because-line back to your original intent.

For Solid-adjacent and decentralised-web deployments, local inference and signed audit trails are structural, not optional.


Worked example

Voice in. Signed action out. Receipts the whole way.

You say:

"Take this RSS feed and put it in my pod."

The wot.if assistant matches the intent against a known stemline (rss-to-solid-dts), spawns a Valkyrie that dispatches three single-purpose Ravens: a feed-poller, a thought-writer, and a Solid-pod adapter. Each Raven signs its work. Every new RSS item lands in your Solid pod as a signed ThoughtSchema record with a because-line back to the feed source, the operator intent, and the stemline that ran.
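In outline, the dispatch is a small pipeline. The callables below are hypothetical stand-ins for the three Ravens; the real Valkyrie/Raven API and the exact ThoughtSchema fields are not shown here, so every name is an assumption.

```python
# Hedged sketch of the three-Raven pipeline. Functions and field
# names are illustrative assumptions, not the published API.
def feed_poller(feed_url: str) -> list[dict]:
    # Would fetch and parse the RSS feed; stubbed for illustration.
    return [{"title": "New post", "link": feed_url + "#item-1"}]

def thought_writer(item: dict, because: list[str]) -> dict:
    # Wraps the item as a ThoughtSchema-shaped record with its receipts.
    return {"type": "ThoughtSchema", "body": item, "because": because}

def solid_pod_adapter(pod: list, thought: dict) -> None:
    # Would write to the Solid pod; here the pod is a plain list.
    pod.append(thought)

pod = []
feed = "https://example.org/rss.xml"
because = [
    f"feed:{feed}",                   # the feed source
    "intent:put-rss-in-pod",          # the operator intent
    "stemline:rss-to-solid-dts",      # the stemline that ran
]
for item in feed_poller(feed):
    solid_pod_adapter(pod, thought_writer(item, because))
```

Each record lands in the pod already carrying its three receipts, so provenance is attached at write time rather than reconstructed later.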

Want to know why a particular item is in your pod? Walk the because-line: feed identity, poll trace, fetch event, ingest thought, pod write. Every step a signed receipt.
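Walking the because-line is just a graph traversal over signed receipts. The store layout and field names below are assumptions for illustration; the chain of steps matches the one above.

```python
# Sketch of a because-line walk. The receipt store shape is an
# illustrative assumption, not the real wot.if storage format.
def walk_because(thought_id: str, store: dict, depth: int = 0) -> list[str]:
    """Return the chain of signed receipts behind one thought,
    indented by depth in the because-line."""
    thought = store[thought_id]
    lines = ["  " * depth + f"{thought['step']} (signed by {thought['signed_by']})"]
    for parent in thought.get("because", []):
        lines += walk_because(parent, store, depth + 1)
    return lines

store = {
    "pod-write-1": {"step": "pod write", "signed_by": "raven:solid-pod-adapter",
                    "because": ["ingest-1"]},
    "ingest-1":    {"step": "ingest thought", "signed_by": "raven:thought-writer",
                    "because": ["fetch-1"]},
    "fetch-1":     {"step": "fetch event", "signed_by": "raven:feed-poller",
                    "because": ["poll-1"]},
    "poll-1":      {"step": "poll trace", "signed_by": "raven:feed-poller",
                    "because": ["feed-1"]},
    "feed-1":      {"step": "feed identity", "signed_by": "raven:feed-poller",
                    "because": []},
}

for line in walk_because("pod-write-1", store):
    print(line)
```

Reading the output bottom-up gives the same chain as above: feed identity, poll trace, fetch event, ingest thought, pod write, with the signing Raven named at every hop.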

Want to revoke it? Say "forget that one." FlowSpace publishes a rework with valid_until=now and the nestled-key is closed. GDPR Article 17 right to erasure -- satisfied structurally, not by a policy promise you hope someone follows.
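Structural revocation can be sketched as closing a validity window rather than deleting data. The `valid_until` semantics below are an assumption drawn from the description above; the rework record shape and function names are illustrative, not the FlowSpace API.

```python
# Sketch of revocation via valid_until. Field names and the rework
# record shape are illustrative assumptions, not the FlowSpace API.
import time

def is_live(thought: dict, now: float) -> bool:
    """A thought is live while its validity window is open."""
    return thought.get("valid_until", float("inf")) > now

def revoke(graph: dict, thought_id: str, now: float) -> None:
    """Close the target's validity window and publish a rework
    record pointing at it; nothing is deleted from the graph."""
    graph[thought_id]["valid_until"] = now
    graph[f"rework-of-{thought_id}"] = {
        "type": "rework", "target": thought_id, "published": now,
    }

now = time.time()
graph = {"item-1": {"type": "ThoughtSchema", "body": "an RSS item"}}
assert is_live(graph["item-1"], now)

revoke(graph, "item-1", now)
assert not is_live(graph["item-1"], now + 1)  # closed, not deleted
```

The design point: erasure is enforced by the validity check every reader runs, not by trusting each downstream copy to honour a deletion request, which is why it is "satisfied structurally".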

The same pattern works for any "do this for me" request. The agent picks the stemline, the Ravens execute, the receipts accumulate in your graph. Every environment is different, but the pattern is always the same: intent in, action out, because-lines connecting them.