# Inference
The inference engine is the computational heart of the Reasoning Layer. Rules and facts are terms, and the engine supports backward chaining (goal-directed), forward chaining (data-driven), fuzzy proving, Bayesian prediction, and negation as failure.
## What is inference?
Inference means deriving new conclusions from existing knowledge. You provide facts (things that are true) and rules (logical relationships), and the engine computes what follows.
Think of it like a chain of deductions:
```
Fact:       "Alice is an employee with salary 120000"
Rule:       "An employee with salary > 100000 is a high earner"
─────────────────────────────────────────────────
Conclusion: "Alice is a high earner"
```

The engine automates this process, handling thousands of rules and facts, chaining deductions through multiple steps, and reporting confidence levels.
## Two directions of reasoning
Backward chaining (goal-directed): You ask a question, and the engine works backward through rules to find answers. “Who are the high earners?” → engine finds rules that conclude high_earner, then finds facts that satisfy those rules.
Forward chaining (data-driven): You provide data, and the engine applies all applicable rules to derive everything that follows. “Given these sensor readings, what alerts should fire?” → engine applies all rules and materializes new facts.
Most applications use backward chaining for queries and forward chaining for materialization.
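The two directions can be illustrated with a toy rule set in plain TypeScript. This is only an illustration of the concepts (the names and data here are invented, and the real engine unifies terms rather than filtering arrays):

```typescript
// Toy knowledge base: some facts plus one rule,
// "an employee with salary > 100000 is a high earner".
type Employee = { name: string; salary: number };

const facts: Employee[] = [
  { name: 'Alice', salary: 120000 },
  { name: 'Bob', salary: 80000 },
];

const isHighEarner = (e: Employee) => e.salary > 100000;

// Backward chaining: start from the question "who is a high earner?"
// and work backward, testing the rule against each fact on demand.
function queryHighEarners(): string[] {
  return facts.filter(isHighEarner).map((e) => e.name);
}

// Forward chaining: apply every rule to every fact up front and
// materialize the derived facts for later lookup.
function materialize(): Set<string> {
  const derived = new Set<string>();
  for (const e of facts) {
    if (isHighEarner(e)) derived.add(`high_earner(${e.name})`);
  }
  return derived;
}

console.log(queryHighEarners()); // → [ 'Alice' ]
console.log(materialize());
```

The trade-off is the same at scale: backward chaining does work only for the goal you ask about, while forward chaining pays the derivation cost once and makes every consequence available as a stored fact.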
All inference endpoints use the untagged serialization format. Build inputs with `psi()` and plain JS values, not `Value.*`.
## Rules and facts are terms
In the Reasoning Layer, rules and facts are not separate constructs — they are terms. Data, code, and constraints share the same representation.
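To make "everything is a term" concrete, here is a minimal sketch of the idea in plain TypeScript. The `Term`, `fact`, and `rule` names below are illustrative, not SDK types — the point is only that a fact and a rule can share one recursive representation:

```typescript
// A term is a sort name plus named features; a feature may itself be a term.
type Feature = string | number | boolean | Term;
interface Term {
  sortName: string;
  features: Record<string, Feature>;
}

// A fact is a ground term (no variables)...
const fact: Term = {
  sortName: 'person',
  features: { name: 'Alice', age: 30 },
};

// ...and a rule is *also* a term: its head and body are just features.
const rule: Term = {
  sortName: 'rule',
  features: {
    head: { sortName: 'high_earner', features: { name: '?Name' } },
    body: { sortName: 'employee', features: { name: '?Name' } },
  },
};

// Because rules are data, the same code can traverse facts and rules alike.
const sortOf = (t: Term) => t.sortName;
console.log(sortOf(fact), sortOf(rule)); // → person rule
```

This homoiconicity is what lets the engine treat data, code, and constraints uniformly: a rule can be queried, stored, and transformed with the same machinery as any fact.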
### Adding facts
A fact is a ground term (a term with no variables):
```typescript
import { psi, ReasoningLayerClient } from '@kortexya/reasoninglayer';

const client = new ReasoningLayerClient({
  baseUrl: 'https://platform.ovh.reasoninglayer.ai',
  tenantId: 'your-tenant-uuid',
  auth: { mode: 'cookie' },
});
```
```typescript
await client.inference.addFact({
  term: psi('person', {
    name: 'Alice',
    age: 30,
  }),
});

await client.inference.addFact({
  term: psi('person', {
    name: 'Bob',
    age: 45,
  }),
});
```

### Bulk-adding facts
```typescript
await client.inference.bulkAddFacts({
  facts: [
    psi('person', {
      name: 'Charlie',
      age: 28,
    }),
    psi('person', {
      name: 'Diana',
      age: 35,
    }),
  ],
});
```

### Adding rules
A rule has a head (conclusion) and a body (antecedents). When all antecedents are satisfied, the head can be derived:
```typescript
await client.inference.addRule({
  term: psi('senior_employee', {
    name: '?Name',
    years: '?Years',
  }),
  antecedents: [
    psi('employee', {
      name: '?Name',
      years_of_service: '?Years',
    }),
  ],
  certainty: 0.9,
});
```

### Rules with guards
Use the `guard()` builder with `constrained()` to add constraints on variables:
```typescript
import { psi, constrained, guard } from '@kortexya/reasoninglayer';

await client.inference.addRule({
  term: psi('high_earner', {
    name: '?Name',
    salary: '?Salary',
  }),
  antecedents: [
    psi('employee', {
      name: '?Name',
      salary: constrained('?Salary', guard('gt', 100000)),
    }),
  ],
});
```

The `guard()` function creates a `TermInputDto` representing the guard sort:
```typescript
guard('gt', 100000);
// Produces: { sortName: "guard_constraint", features: { op: "gt", right: 100000 } }
```

## Backward chaining
Backward chaining searches for matching data by working backwards through rules from a goal pattern. You specify what you want to prove, and the engine finds solutions.
### Inline goal
```typescript
const result = await client.inference.backwardChain({
  goal: psi('high_earner', {
    name: '?Name',
    salary: '?Salary',
  }),
  maxSolutions: 10,
  timeoutMs: 5000,
});

for (const solution of result.solutions) {
  console.log('Certainty:', solution.certainty);
  for (const binding of solution.substitution.bindings) {
    console.log(`  ${binding.variableName} = ${binding.boundToDisplay}`);
  }
}
```

### Saved goal reference
You can create a goal once and reference it by ID in subsequent queries:
```typescript
// Create and save a goal
const goalResult = await client.inference.createGoal({
  clauses: [
    psi('high_earner', {
      name: '?Name',
      salary: '?Salary',
    }),
  ],
});

// Use the saved goal ID
const result = await client.inference.backwardChain({
  goalId: goalResult.goalId,
  maxSolutions: 10,
});
```

### Dual input mode
`BackwardChainRequest` accepts either `goal` (an inline `TermInputDto`) or `goalId` (a reference to a saved goal), but not both. This dual-input pattern applies to `FuzzyProveRequest`, `BayesianPredictRequest`, and `NafProveRequest` as well.
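The "exactly one of two fields" contract can be pictured as a small validation helper. This is a sketch of the rule, not the SDK's actual validation code (`DualInput` and `validateDualInput` are hypothetical names):

```typescript
// Hypothetical shape covering the shared part of the dual-input requests.
interface DualInput {
  goal?: unknown;  // inline term
  goalId?: string; // reference to a saved goal
}

// Exactly one of goal / goalId must be present.
function validateDualInput(req: DualInput): void {
  const hasGoal = req.goal !== undefined;
  const hasGoalId = req.goalId !== undefined;
  if (hasGoal === hasGoalId) {
    throw new Error('Provide exactly one of "goal" or "goalId"');
  }
}

validateDualInput({ goalId: 'abc' });              // ok
// validateDualInput({});                          // throws: neither provided
// validateDualInput({ goal: {}, goalId: 'abc' }); // throws: both provided
```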
### Open-world reasoning
By default, backward chaining uses closed-world semantics (if a fact cannot be proved, it is false). Open-world mode treats unprovable facts as unknown rather than false:
```typescript
const result = await client.inference.backwardChain({
  goal: psi('reachable', {
    from: 'A',
    to: '?Dest',
  }),
  openWorld: true,
});

// Solutions include evidenceRatio and residuatedSorts
for (const solution of result.solutions) {
  console.log('Evidence ratio:', solution.evidenceRatio);
  console.log('Residuated sorts:', solution.residuatedSorts);
}
```

## Forward chaining
Forward chaining is data-driven materialization. The engine applies rules to existing facts to derive new facts:
```typescript
const result = await client.inference.forwardChain({
  maxIterations: 100,
  persistDerived: true,
  enableProvenanceTags: true,
});

console.log(`Derived ${result.derivedCount} new facts in ${result.iterations} iterations`);
console.log(`Total facts after materialization: ${result.totalFacts}`);

// Derived facts
for (const fact of result.derivedFacts) {
  console.log(`${fact.sortName}: ${fact.display}`);
}

// Provenance tags (confidence scores for derived facts)
if (result.provenanceTags) {
  for (const tag of result.provenanceTags) {
    console.log(`Fact #${tag.factIndex}: confidence ${tag.confidence}`);
  }
}
```

### Initial facts
You can provide initial facts inline rather than using the stored fact base:
```typescript
const result = await client.inference.forwardChain({
  initialFacts: [
    psi('temperature', { value: 38 }),
    psi('symptom', { type: 'cough' }),
  ],
  maxIterations: 50,
});
```

## Solutions and proofs
### `SolutionDto`
Each solution from backward chaining contains:
| Field | Type | Description |
|---|---|---|
| `substitution` | `HomoiconicSubstitutionDto` | Variable bindings |
| `proof` | `ProofDto?` | Proof tree (`null` if proof generation disabled) |
| `certainty` | `number` | Certainty score (0.0 to 1.0) |
| `evidenceMatched` | `number?` | Matched antecedents (open-world only) |
| `evidenceRatio` | `number?` | Matched/total antecedent ratio (open-world only) |
| `residuatedSorts` | `string[]?` | Sort names that residuated (open-world only) |
### `BindingDto`
Each binding maps a variable to its resolved value:
```typescript
interface BindingDto {
  variableTermId: string; // Variable term UUID
  variableName?: string;  // Human-readable name (e.g., "?Name")
  boundToTermId: string;  // Bound-to term UUID
  boundToDisplay: string; // Human-readable display of the bound value
}
```

### `ProofDto`
Proof trees are recursive structures showing derivation steps:
```typescript
interface ProofDto {
  goalTermId: string;                      // Goal proved at this step
  ruleTermId?: string;                     // Rule used
  substitution: HomoiconicSubstitutionDto; // Substitution at this step
  subproofs?: ProofDto[];                  // Sub-proofs for body goals
  certainty: number;                       // Certainty of this step
}
```

## Guard operators
The `GuardOp` type defines the comparison operators available for guard constraints:
| Operator | Meaning |
|---|---|
| `"lt"` | Less than |
| `"lte"` | Less than or equal |
| `"gt"` | Greater than |
| `"gte"` | Greater than or equal |
| `"eq"` | Equal |
| `"ne"` | Not equal |
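Conceptually, a guard is a deferred comparison: it is checked once the constrained variable is bound. A minimal evaluator sketch (illustrative only — the engine evaluates guards itself; this sketch compares ordering guards numerically):

```typescript
type GuardOp = 'lt' | 'lte' | 'gt' | 'gte' | 'eq' | 'ne';

// Evaluate a guard against a candidate binding for the constrained variable.
function evalGuard(op: GuardOp, left: number | string, right: number | string): boolean {
  switch (op) {
    case 'eq':  return left === right;
    case 'ne':  return left !== right;
    case 'lt':  return Number(left) < Number(right);
    case 'lte': return Number(left) <= Number(right);
    case 'gt':  return Number(left) > Number(right);
    case 'gte': return Number(left) >= Number(right);
  }
}

console.log(evalGuard('gt', 120000, 100000));         // → true
console.log(evalGuard('ne', 'inactive', 'inactive')); // → false
```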
```typescript
import { psi, constrained, guard } from '@kortexya/reasoninglayer';

const salary = constrained('?Salary', guard('gte', 50000));
const age = constrained('?Age', guard('lt', 65));
const status = constrained('?Status', guard('ne', 'inactive'));
```

## Allen temporal relations
The SDK supports Allen’s 13 interval algebra relations for temporal reasoning. Use the `allen()` builder to create temporal constraints:
```typescript
import { allen } from '@kortexya/reasoninglayer';

// Event A must happen before Event B
const constraint = allen('before', 'event_a_time', 'event-b-term-uuid');

// Meeting overlaps with work hours
const overlap = allen('overlaps', 'meeting_time', 'work-hours-term-uuid');
```

The 13 Allen relations:
| Relation | Meaning |
|---|---|
| `before` | A ends before B starts |
| `after` | A starts after B ends |
| `meets` | A ends exactly when B starts |
| `met_by` | A starts exactly when B ends |
| `overlaps` | A starts before B and ends during B |
| `overlapped_by` | B starts before A and ends during A |
| `during` | A is entirely contained within B |
| `contains` | B is entirely contained within A |
| `starts` | A and B start at the same time, A ends first |
| `started_by` | A and B start at the same time, B ends first |
| `finishes` | A and B end at the same time, A starts later |
| `finished_by` | A and B end at the same time, B starts later |
| `equals` | A and B have the same start and end |
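For intuition, the relation between two concrete intervals is fully determined by comparing their endpoints. A compact classifier in plain TypeScript (an illustration of the algebra, not part of the SDK; it assumes each interval has `start < end`):

```typescript
type Interval = { start: number; end: number };

// Classify the Allen relation that holds between intervals A and B.
function allenRelation(a: Interval, b: Interval): string {
  if (a.end < b.start) return 'before';
  if (b.end < a.start) return 'after';
  if (a.end === b.start) return 'meets';
  if (b.end === a.start) return 'met_by';
  if (a.start === b.start && a.end === b.end) return 'equals';
  if (a.start === b.start) return a.end < b.end ? 'starts' : 'started_by';
  if (a.end === b.end) return a.start > b.start ? 'finishes' : 'finished_by';
  if (a.start > b.start && a.end < b.end) return 'during';
  if (b.start > a.start && b.end < a.end) return 'contains';
  return a.start < b.start ? 'overlaps' : 'overlapped_by';
}

console.log(allenRelation({ start: 1, end: 3 }, { start: 4, end: 6 })); // → before
console.log(allenRelation({ start: 1, end: 5 }, { start: 3, end: 8 })); // → overlaps
```

Exactly one of the 13 relations holds for any pair of proper intervals, which is why the algebra is useful for constraint propagation.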
## Fuzzy proving
Fuzzy proof search propagates truth degrees through rules. Each fact carries a truth degree in [0, 1], and antecedent degrees are combined using the t-norm you select.
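As a point of reference, the three `tnorm` options combine two truth degrees as follows (a standalone illustration in plain TypeScript, not SDK code):

```typescript
// The three t-norms offered by fuzzyProve, applied to two truth degrees.
const tnorms = {
  min: (a: number, b: number) => Math.min(a, b),
  product: (a: number, b: number) => a * b,
  lukasiewicz: (a: number, b: number) => Math.max(0, a + b - 1),
};

// Combining antecedent degrees 0.8 and 0.7:
console.log(tnorms.min(0.8, 0.7));         // → 0.7
console.log(tnorms.product(0.8, 0.7));     // ≈ 0.56
console.log(tnorms.lukasiewicz(0.8, 0.7)); // ≈ 0.5
```

The choice matters for long derivation chains: `min` is idempotent (degrees never decay below the weakest antecedent), `product` decays with chain length, and Łukasiewicz can bottom out at 0 when the combined degrees are low.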
```typescript
const result = await client.inference.fuzzyProve({
  goal: psi('comfortable_temperature', {
    location: '?Location',
  }),
  minDegree: 0.5,
  maxSolutions: 5,
  tnorm: 'product', // "min", "product", or "lukasiewicz"
});

for (const solution of result.solutions) {
  console.log(`Location: ${solution.substitution.bindings[0]?.boundToDisplay}`);
  console.log(`Truth degree: ${solution.certainty}`);
}
```

## Bayesian prediction
Bayesian prediction requires an `effectVar` naming the variable that holds a `FuzzyNumber` effect:
```typescript
const result = await client.inference.bayesianPredict({
  effectVar: '?Effect',
  goal: psi('treatment_outcome', {
    treatment: 'drug_a',
    effect: '?Effect',
  }),
  conformal: 0.9,
  robust: true,
});
```

## Negation as failure
NAF proving supports negated literals. A negated literal succeeds when its positive form cannot be proved:
```typescript
import { toTermInputDto } from '@kortexya/reasoninglayer';

const result = await client.inference.nafProve({
  literals: [
    {
      term: toTermInputDto(
        psi('employee', {
          name: '?Name',
        }),
      ),
      negated: false,
    },
    {
      term: toTermInputDto(
        psi('manager', {
          name: '?Name',
        }),
      ),
      negated: true, // NOT a manager
    },
  ],
  maxSolutions: 10,
  timeoutMs: 5000,
});
```

## Goal management
Goals can be saved and reused across multiple inference requests:
```typescript
// Create a saved goal
const goal = await client.inference.createGoal({
  clauses: [
    psi('reachable', {
      from: 'A',
      to: '?Dest',
    }),
  ],
  minDegree: 0.7,
});

// List all saved goals
const goals = await client.inference.listGoals();

// Use the saved goal
const result = await client.inference.backwardChain({
  goalId: goal.goalId,
});

// Delete the goal when done
await client.inference.deleteGoal(goal.goalId);
```

## Retrieving and clearing facts
```typescript
// Get all stored facts
const factsResult = await client.inference.getFacts();
for (const fact of factsResult.facts) {
  console.log(`${fact.sortName}: ${fact.display}`);
}

// Clear all facts
const cleared = await client.inference.clearFacts();
console.log(`Cleared ${cleared.factsCleared} facts`);
```