# Homoiconicity
Homoiconicity is one of the most powerful — and most unusual — properties of the Reasoning Layer. It means that data and logic use the same representation. Rules are data. Facts are data. Goals are data. Constraints are data. Everything is a Psi-term.
## What does homoiconicity mean?
In most systems, data and logic live in separate worlds:
| System | Data | Logic |
|---|---|---|
| SQL database | Rows in tables | SQL queries / stored procedures |
| OOP application | Objects in memory | Methods / functions |
| Rule engine | Facts | Rules (separate DSL) |
In the Reasoning Layer, there is no separation. A rule is just a Psi-term with specific features (a head and antecedents). A fact is a Psi-term. A goal is a Psi-term. They all live in the same knowledge base and can be queried, modified, and reasoned about using the same tools.
Traditional system:

```
Data layer:  { name: "Alice", salary: 120000 }        ← stored as data
Rule layer:  IF salary > 100000 THEN high_earner      ← separate language
```

Reasoning Layer:

```
Psi-term:  employee(name: "Alice", salary: 120000)               ← a term
Psi-term:  high_earner(?Name) :- employee(?Name, ?S > 100000)    ← also a term!
```

## Why does this matter?
### 1. You can query your rules
Since rules are terms, you can search for them, filter them, and inspect them just like any other data:
```typescript
// "Show me all rules that derive 'high_earner'"
const rules = await client.query.findUnifiable({
  query: TermInput.byName('high_earner', {
    name: FeatureInput.variable('?Name'),
  }),
});
```

This is impossible in systems where rules are stored in a separate format (like a Drools `.drl` file or SQL stored procedures).
### 2. You can reason about reasoning
An agent or rule can inspect the rules that produced a conclusion. This enables:
- Explainability: “This person was flagged as high-risk because rule #42 fired, which requires blood pressure > 140 and age > 60”
- Meta-reasoning: “Which rules have low certainty? Should we trust this conclusion?”
- Self-modification: An agent can learn new rules from experience and add them to its own knowledge base
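The meta-reasoning case can be sketched in a few lines. This is an illustrative in-memory model only: the `Rule` shape and the rule data below are assumptions made up for this example, not the SDK's real types. The point is that once rules are data, "which rules have low certainty?" is just a filter.

```typescript
// Hypothetical, simplified shape of a stored rule (not the real SDK type)
interface Rule {
  termId: string;
  head: string;
  certainty: number;
}

// Example rules, invented for illustration
const rules: Rule[] = [
  { termId: 'rule-42', head: 'high_risk', certainty: 0.6 },
  { termId: 'rule-7', head: 'high_earner', certainty: 0.95 },
];

// Meta-reasoning as a plain data operation: find the rules we should be wary of
const lowCertainty = rules.filter((r) => r.certainty < 0.8);
console.log(lowCertainty.map((r) => r.termId)); // → [ 'rule-42' ]
```

In a system where rules live in a separate DSL file, this question requires parsing that file; here it is an ordinary query over terms.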
### 3. Rules compose naturally
Because rules and facts share the same representation, a rule’s conclusion can be another rule’s input. There’s no impedance mismatch between layers.
```typescript
// Rule 1: high_earner(Name)    :- employee(Name, Salary > 100000)
// Rule 2: bonus_eligible(Name) :- high_earner(Name), tenure(Name, Years > 5)
// Rule 3: exec_track(Name)     :- bonus_eligible(Name), leadership_score(Name, Score > 8)

// These chain naturally — no separate "rule chaining engine" needed
```

### 4. Everything has provenance
Since rules are terms with IDs, every derived fact can point back to the rule that created it. Proof trees trace back through the exact chain of rules and facts that produced a conclusion.
```typescript
const result = await client.inference.backwardChain({
  goal: TermInput.byName('high_earner', {
    name: FeatureInput.variable('?Name'),
  }),
});

// Each solution has a proof tree showing exactly which rules fired
if (result.solutions[0]?.proof) {
  console.log('Derived via rule:', result.solutions[0].proof.rule_term_id);
}
```

## Homoiconicity in practice
### Adding a rule (it’s just a term)
```typescript
// This adds a rule — but under the hood, it's creating a Psi-term
const rule = await client.inference.addRule({
  term: TermInput.byName('senior_employee', {
    name: FeatureInput.variable('?Name'),
  }),
  antecedents: [
    TermInput.byName('employee', {
      name: FeatureInput.variable('?Name'),
      years: FeatureInput.constrainedVar('?Years', guard('gte', 10)),
    }),
  ],
  certainty: 0.95,
});

// The rule has a term ID — it's a first-class citizen in the knowledge base
console.log('Rule term ID:', rule.term.term_id);
```

### Querying rules as data
```typescript
// Get all stored facts (which includes derived facts from forward chaining)
const facts = await client.inference.getFacts();

// Each fact is a PsiTermDto with sort_name, features, and display
for (const fact of facts) {
  console.log(`${fact.sort_name}: ${fact.display}`);
}
```

### Forward chaining materializes new terms
When forward chaining runs, it applies rules to facts and produces new terms (derived facts). These derived facts are just like any other terms — they can be queried, unified, and used as input to further reasoning.
```typescript
const result = await client.inference.forwardChain({
  max_iterations: 100,
  persist_derived: true, // Save derived facts as terms
  enable_provenance_tags: true, // Track which rule produced each fact
});

// Derived facts are terms
for (const fact of result.derived_facts) {
  console.log(`${fact.sort_name}: ${fact.display}`);
}
```

## The analogy to Lisp
The term “homoiconic” comes from Lisp, where code and data are both S-expressions (lists). In Lisp, `(+ 1 2)` is both a computation and a data structure you can manipulate.
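The Lisp idea can be sketched directly in TypeScript. This is a minimal toy evaluator, not anything from the SDK: the expression is plain data (nested arrays) that we can inspect as a list and evaluate as code, with only `+` and `*` supported for brevity.

```typescript
// An S-expression is a number, a symbol, or a list of S-expressions
type SExpr = number | string | SExpr[];

// Toy evaluator: the "code" it runs is an ordinary data structure
function evalSexp(expr: SExpr): number {
  if (typeof expr === 'number') return expr;
  if (!Array.isArray(expr)) throw new Error(`unbound symbol: ${expr}`);
  const [op, ...args] = expr;
  const values = args.map(evalSexp);
  if (op === '+') return values.reduce((a, b) => a + b, 0);
  if (op === '*') return values.reduce((a, b) => a * b, 1);
  throw new Error(`unknown operator: ${String(op)}`);
}

const program: SExpr[] = ['+', 1, 2]; // (+ 1 2) as a data structure

console.log(evalSexp(program)); // → 3: treated as code
console.log(program.length);    // → 3: treated as a list we can inspect
```

The same duality is what the Reasoning Layer gives rules: `addRule` stores a term, and the inference engine later runs it.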
The Reasoning Layer applies the same idea to knowledge representation:
| Lisp | Reasoning Layer |
|---|---|
| S-expression | Psi-term |
| Code is data (lists) | Rules are data (terms) |
| `eval` runs code | Inference engine reasons over terms |
| Macros generate code | Forward chaining generates new terms |
The key benefit is the same in both: there is no boundary between what the system knows and what it can reason about. Everything is accessible, inspectable, and composable using a single set of operations.
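That single set of operations can be sketched with a toy knowledge base. The types below are assumptions invented for this example (the real store is the SDK's, not this): the point is that because facts and rules share one representation, one store holds both and one lookup function serves both.

```typescript
// Hypothetical, simplified shapes for illustration only
type Term = { sort: string; args: Record<string, unknown> };
type Entry =
  | { kind: 'fact'; term: Term }
  | { kind: 'rule'; head: Term; antecedents: Term[] };

// One store for everything: a fact and a rule sit side by side
const kb: Entry[] = [
  { kind: 'fact', term: { sort: 'employee', args: { name: 'Alice', salary: 120000 } } },
  { kind: 'rule', head: { sort: 'high_earner', args: {} }, antecedents: [{ sort: 'employee', args: {} }] },
];

// One operation for everything: look up entries by the sort of their (head) term
function findBySort(sort: string): Entry[] {
  return kb.filter((e) => (e.kind === 'fact' ? e.term.sort : e.head.sort) === sort);
}

console.log(findBySort('employee').length);    // → 1: finds the fact
console.log(findBySort('high_earner').length); // → 1: finds the rule, same query path
```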