Causal Reasoning
The Reasoning Layer supports causal reasoning based on Judea Pearl’s causal hierarchy. This enables moving beyond correlation to understand cause and effect — answering not just “what happened?” but “why did it happen?” and “what would happen if…?”
Why causal reasoning?
Most data systems can only answer associational questions: “Patients who took drug X had better outcomes.” But association is not causation — maybe healthier patients chose drug X in the first place.
Causal reasoning lets you answer harder questions:
- Intervention: “If I give drug X to this patient, will they recover?” (accounting for confounders)
- Counterfactual: “Would this patient have recovered if they had taken drug X?” (reasoning about alternative histories)
- Root cause: “What actually caused this system outage?” (tracing backwards through a causal graph)
These questions require a causal model — a directed graph where edges represent cause-and-effect relationships, not just correlations.
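Conceptually, such a model is just a small directed graph whose edges carry a confidence weight. A minimal in-memory sketch follows; the types and methods here are illustrative only, not the SDK's own:

```typescript
// A minimal in-memory causal graph: directed edges with a certainty weight.
// Illustrative types only -- these are not the SDK's types.
interface CausalEdge {
  cause: string;
  effect: string;
  certainty: number; // 0..1: confidence that the edge is real
}

class CausalGraph {
  private edges: CausalEdge[] = [];

  addRelation(edge: CausalEdge): void {
    this.edges.push(edge);
  }

  // Direct causes of a variable (its parents in the DAG).
  parents(variable: string): string[] {
    return this.edges.filter(e => e.effect === variable).map(e => e.cause);
  }

  // All transitive causes, found by walking edges backwards.
  ancestors(variable: string): Set<string> {
    const seen = new Set<string>();
    const stack = this.parents(variable);
    while (stack.length > 0) {
      const v = stack.pop()!;
      if (!seen.has(v)) {
        seen.add(v);
        stack.push(...this.parents(v));
      }
    }
    return seen;
  }
}
```

The distinction between `parents` (direct causation) and `ancestors` (transitive causation) is the same one the SDK exposes as `checkCauses` versus `checkAncestor` below.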
Pearl’s causal hierarchy
Pearl’s Ladder of Causation describes three levels of causal reasoning, each more powerful than the last:
Level 1: Association (seeing)
“What is?” — Observational data and correlations.
At this level you observe patterns in data. For example, “patients who take drug X tend to recover.” This is what standard queries and fuzzy search already provide through the existing SDK endpoints.
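Level 1 requires nothing more than counting in observed data. A toy illustration with invented records (not an SDK call):

```typescript
// Level 1 is just counting: estimate P(recovered | tookDrug) from records.
type Obs = { tookDrug: boolean; recovered: boolean };

function recoveryRate(records: Obs[], tookDrug: boolean): number {
  const group = records.filter(r => r.tookDrug === tookDrug);
  const recovered = group.filter(r => r.recovered).length;
  return recovered / group.length;
}

const data: Obs[] = [
  { tookDrug: true, recovered: true },
  { tookDrug: true, recovered: true },
  { tookDrug: true, recovered: false },
  { tookDrug: false, recovered: true },
  { tookDrug: false, recovered: false },
  { tookDrug: false, recovered: false },
];

// Pure association: says nothing about whether the drug CAUSED recovery.
console.log(recoveryRate(data, true));  // ~0.67 among takers
console.log(recoveryRate(data, false)); // ~0.33 among non-takers
```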
Level 2: Intervention (doing)
“What if I do?” — Active manipulation of variables.
Interventions go beyond observation. Rather than asking “do people who take drug X recover?”, you ask “if I give drug X to a patient, will they recover?” This accounts for confounding variables and selection bias.
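The difference can be made concrete with a toy discrete model containing a confounder. All probabilities below are invented for illustration; the point is that conditioning on X keeps the biased selection into treatment, while do(X) cuts it:

```typescript
// Toy model with a confounder: health H -> drug choice X, and H -> recovery R.
// Healthier patients are more likely to choose the drug (selection bias).
const pHealthy = 0.5;
const pDrugGivenHealth = (h: boolean) => (h ? 0.8 : 0.2);
const pRecover = (h: boolean, drug: boolean) =>
  h ? (drug ? 0.9 : 0.8) : (drug ? 0.6 : 0.4);

// Level 1: condition on observing X = true (selection bias leaks in via H).
function observational(): number {
  let num = 0, den = 0;
  for (const h of [true, false]) {
    const ph = h ? pHealthy : 1 - pHealthy;
    num += pRecover(h, true) * pDrugGivenHealth(h) * ph;
    den += pDrugGivenHealth(h) * ph;
  }
  return num / den;
}

// Level 2: do(X = true) severs the H -> X edge, so H keeps its prior.
function interventional(): number {
  let p = 0;
  for (const h of [true, false]) {
    const ph = h ? pHealthy : 1 - pHealthy;
    p += pRecover(h, true) * ph;
  }
  return p;
}

console.log(observational());  // ≈ 0.84: inflated by healthy self-selection
console.log(interventional()); // ≈ 0.75: the drug's actual effect
```

The observational estimate overstates the drug because healthy patients, who recover more often anyway, disproportionately choose it.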
Level 3: Counterfactual (imagining)
“What if I had done differently?” — Reasoning about alternative histories.
Counterfactuals answer questions like “would the patient have recovered if they had taken drug X, given that they did not?” This is the most powerful level, enabling attribution, explanation, and blame assignment.
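The standard three-step procedure (abduction, action, prediction) can be sketched on a one-equation structural model. This is a conceptual illustration, not the service's implementation:

```typescript
// Pearl's three-step counterfactual on a toy structural causal model:
//   lung_cancer := smoking AND susceptible   (susceptible is exogenous noise U)
// Question: given that the patient smoked and got cancer, would they
// have gotten cancer had they NOT smoked?

type Evidence = { smoking: boolean; lung_cancer: boolean };

const cancerFn = (smoking: boolean, susceptible: boolean) =>
  smoking && susceptible;

function counterfactual(evidence: Evidence, smokingHadBeen: boolean): boolean {
  // Step 1 (abduction): infer the exogenous term from the evidence.
  // With smoking = true observed, lung_cancer = susceptible, so:
  const susceptible = evidence.lung_cancer;

  // Step 2 (action): override the antecedent, cutting its usual causes.
  const smoking = smokingHadBeen;

  // Step 3 (prediction): re-run the mechanism with U held fixed.
  return cancerFn(smoking, susceptible);
}

const result = counterfactual({ smoking: true, lung_cancer: true }, false);
console.log(result); // false: no smoking, no cancer in this toy model
```

Holding the inferred noise term fixed while changing the antecedent is what distinguishes a counterfactual from a plain intervention.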
Using the Causal Client
The causal reasoning endpoints are available through client.causal.* methods.
Adding causal relations
Build a causal model by adding directed relationships between variables:
import { ReasoningLayerClient } from '@kortexya/reasoninglayer';

const client = new ReasoningLayerClient({
  baseUrl: 'https://platform.ovh.reasoninglayer.ai',
  tenantId: 'my-tenant-uuid',
  auth: { mode: 'cookie' },
});

// Add causal relations with certainty
await client.causal.addRelation({
  cause: 'smoking',
  effect: 'lung_cancer',
  certainty: 0.85,
  mechanism: 'carcinogen exposure',
});
await client.causal.addRelation({
  cause: 'asbestos',
  effect: 'lung_cancer',
  certainty: 0.70,
});

Querying the causal model
// Get the full causal model (all variables and edges)
const model = await client.causal.getModel();
console.log(`Variables: ${model.variables.join(', ')}`);
console.log(`Total relations: ${model.totalRelations}`);

for (const edge of model.edges) {
  console.log(`${edge.cause} -> ${edge.effect} (certainty: ${edge.certainty})`);
}

Checking causation
// Check direct causation
const direct = await client.causal.checkCauses({
  cause: 'smoking',
  effect: 'lung_cancer',
});
console.log(`Causes: ${direct.causes}, Certainty: ${direct.certainty}`);

// Check ancestry (transitive causation)
const ancestry = await client.causal.checkAncestor({
  ancestor: 'smoking',
  descendant: 'lung_cancer',
});
console.log(`Is ancestor: ${ancestry.isAncestor}`);
console.log(`Path: ${ancestry.path.join(' -> ')}`);

Interventions (Level 2)
Apply do-calculus interventions to simulate active manipulation:
const result = await client.causal.intervene({
  variable: 'smoking',
  value: false,
  queryVariable: 'lung_cancer',
});

console.log(`Success: ${result.success}`);
console.log(`Disabled rules: ${result.disabledRulesCount}`);
console.log(`Explanation: ${result.explanation}`);

Counterfactuals (Level 3)
Evaluate counterfactual queries using Pearl’s three-step algorithm (abduction, action, prediction):
const cf = await client.causal.counterfactual({
  antecedentVariable: 'smoking',
  antecedentValue: false,
  consequentVariable: 'lung_cancer',
  evidence: { smoking: true, lung_cancer: true },
});

console.log(`Query: ${cf.query}`);
console.log(`Counterfactual value: ${cf.counterfactualValue}`);
console.log(`Trace:`);
console.log(`  Abduction: ${cf.trace.abduction}`);
console.log(`  Action: ${cf.trace.action}`);
console.log(`  Prediction: ${cf.trace.prediction}`);

D-Separation
Check conditional independence between variables:
const dsep = await client.causal.checkDSeparated({
  x: 'smoking',
  y: 'lung_cancer',
  conditioningSet: ['tar_deposits'],
});

console.log(`D-separated: ${dsep.dSeparated}`);
console.log(`Explanation: ${dsep.explanation}`);

Root cause analysis
Trace backwards from an observed effect to find root causes:
const rca = await client.causal.rootCause({
  effect: 'lung_cancer',
  maxDepth: 5,
});

for (const rc of rca.rootCauses) {
  console.log(`Root cause: ${rc.cause} (certainty: ${rc.certainty})`);
  console.log(`  Path: ${rc.path.join(' -> ')}`);
}

// With a full proof tree
const rcaProof = await client.causal.rootCauseWithProof({
  effect: 'lung_cancer',
  maxDepth: 5,
});

console.log(`Proof nodes: ${rcaProof.proofTree.statistics.totalNodes}`);
console.log(`Max depth: ${rcaProof.proofTree.statistics.maxDepth}`);
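Conceptually, root-cause analysis amounts to a backward walk over the causal graph, collecting ancestors that have no causes of their own. A minimal sketch (not the SDK's actual algorithm), assuming an acyclic graph and multiplying certainties along each path:

```typescript
// Illustrative root-cause search: walk edges backwards from the effect,
// multiplying edge certainties, and report ancestors with no incoming
// edges (the roots). Assumes the graph is a DAG.
type Edge = { cause: string; effect: string; certainty: number };

function rootCauses(
  edges: Edge[],
  effect: string,
  maxDepth: number,
): { cause: string; certainty: number; path: string[] }[] {
  const results: { cause: string; certainty: number; path: string[] }[] = [];
  const walk = (node: string, certainty: number, path: string[], depth: number) => {
    const incoming = edges.filter(e => e.effect === node);
    if (incoming.length === 0 && node !== effect) {
      results.push({ cause: node, certainty, path });
      return;
    }
    if (depth >= maxDepth) return;
    for (const e of incoming) {
      walk(e.cause, certainty * e.certainty, [e.cause, ...path], depth + 1);
    }
  };
  walk(effect, 1, [effect], 0);
  return results;
}

const toyEdges: Edge[] = [
  { cause: 'smoking', effect: 'tar_deposits', certainty: 0.9 },
  { cause: 'tar_deposits', effect: 'lung_cancer', certainty: 0.8 },
  { cause: 'asbestos', effect: 'lung_cancer', certainty: 0.7 },
];

for (const rc of rootCauses(toyEdges, 'lung_cancer', 5)) {
  console.log(`${rc.cause} (${rc.certainty.toFixed(2)}): ${rc.path.join(' -> ')}`);
}
```

Multiplying certainties means longer causal chains score lower, which matches the intuition that each uncertain link weakens the overall attribution.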