CLADE: Building Proof into Every Decision

The world runs on claims.
From product reviews to research papers, every statement carries an assumption and a leap of faith.
We trust what sounds confident, not always what’s proven.
CLADE challenges that: it’s designed to separate belief from evidence.
But how many of those statements survive real scrutiny? The problem isn’t a lack of information; it’s a lack of traceability. We rarely know why a decision was made or what evidence supports it. That’s where CLADE comes in: the new reasoning layer Shinkai is developing.
CLADE stands for Claims, Links, Alternatives, Decisions, Evidence. Behind the acronym is a simple but radical idea: every claim should be backed, versioned, and evolved like living knowledge. It’s a “living proof-of-decision”.
While other systems store results, CLADE stores reasoning. Each claim carries its evidence, each decision its alternatives, and everything can be audited like code. You can literally run a git diff on a decision.
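To make that concrete, here’s a minimal sketch of what such a record could look like. CLADE is unreleased, so the class names and fields below are illustrative assumptions, not Shinkai’s actual schema; the point is that a decision serialized deterministically diffs cleanly under version control.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical CLADE-style record: names and fields are illustrative
# assumptions, not Shinkai's actual schema.
@dataclass
class Claim:
    text: str
    evidence: list[str] = field(default_factory=list)  # sources backing the claim

@dataclass
class Decision:
    chosen: Claim
    alternatives: list[Claim] = field(default_factory=list)  # options considered and rejected

def to_versionable_json(decision: Decision) -> str:
    """Serialize deterministically (sorted keys) so successive versions diff cleanly in git."""
    return json.dumps(asdict(decision), indent=2, sort_keys=True)

d = Decision(
    chosen=Claim("Use caching layer X", evidence=["benchmark-2024.md"]),
    alternatives=[Claim("No caching", evidence=[])],
)
print(to_versionable_json(d))
```

Commit that JSON alongside the code it justifies, and a later `git diff` shows exactly which claim, evidence item, or alternative changed between two versions of a decision.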
In today’s AI and research landscape, information repeats itself without context. One paper says one thing, another contradicts it, and the AI averages them both. CLADE breaks that cycle: it automatically extracts the core claims from a document, evaluates their evidence, and measures consistency across sources.
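As a toy illustration of “measuring consistency across sources”: imagine each source is labeled as supporting or contradicting a claim, and the score is the fraction of stance-taking sources that agree. The labels and the scoring rule here are assumptions for the sake of the example, not CLADE’s actual method.

```python
# Illustrative only: a toy consistency score for one claim across sources.
# The stance labels and scoring rule are assumptions, not CLADE's method.
def consistency_score(stances: list[str]) -> float:
    """Fraction of sources supporting the claim, among those taking a stance."""
    supports = stances.count("supports")
    contradicts = stances.count("contradicts")
    total = supports + contradicts
    return supports / total if total else 0.0

print(consistency_score(["supports", "supports", "contradicts"]))  # ≈ 0.667
```

Instead of averaging contradictory sources into mush, a score like this keeps the disagreement visible and attributable to specific documents.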
This isn’t limited to papers. Imagine a living Wikipedia, where articles correct each other when a claim loses support, or an AI capable of citing and auditing its own reasoning. CLADE turns knowledge from a static archive into a dynamic network of verified statements.
Its modular design lets small proofs connect into larger arguments the way atoms form molecules and molecules fold into proteins. Many small, verifiable claims grow into systems of reasoning you can defend to customers, boards, or regulators. No more “trust us”: every decision carries a cryptographic receipt of why it exists.
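The “cryptographic receipt” idea can be sketched in a few lines: hash a canonical serialization of the decision record, so any later edit to the claim, its evidence, or its alternatives is detectable. This is a generic content-hash pattern with assumed field names, not CLADE’s actual receipt format.

```python
import hashlib
import json

# A minimal sketch of a decision "receipt": hash a canonical serialization
# so any later change is detectable. Field names are illustrative assumptions.
def receipt(decision: dict) -> str:
    canonical = json.dumps(decision, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

d = {"claim": "Adopt policy X", "evidence": ["report-q3"], "alternatives": ["policy Y"]}
r = receipt(d)
assert receipt(d) == r   # same content, same receipt
d["evidence"].append("new-doc")
assert receipt(d) != r   # any change breaks the receipt
```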
Within the Shinkai ecosystem, this marks the next logical step. Shinkai Agents already moved AI from talk to action: agents that can code, fetch data, and execute workflows. CLADE extends that evolution from action to justification. Each action can be backed by a verifiable chain of claims and evidence, making Shinkai’s agents not only capable, but accountable.
CLADE is currently under active development: an ambitious research project shaping the foundation of verifiable reasoning. Stay tuned for its upcoming release: a framework that will redefine how truth is structured, proven, and shared within AI systems.
Shinkai isn’t just building tools for automation.
It’s building the logic of trust itself.