Workshop Instructor Guide

This is your guide for running the Murder at the Tech Mansion workshop. Students solve a murder mystery by building an AI detective, learning sorts, terms, inference, fuzzy logic, causal reasoning, and cognitive agents through one continuous investigation.

The workshop code is available at gitlab.com/kortexya-pub/workshop. Students run each file in order, building on the previous step’s data.

Before the Workshop

1 week before

  • Generate 20 tenant IDs + API keys (one per student, or one shared tenant)
  • Test the platform URL: https://platform.ovh.reasoninglayer.ai/api/v1
  • Prepare a shared document (Google Doc, HackMD) with credentials for each student
  • Send pre-workshop email to students:
    • Install Node.js 18+ (node --version to check)
    • Have a code editor ready (VS Code recommended)
    • Basic TypeScript knowledge required
    • No prior knowledge of logic programming, Prolog, or AI needed

Day of workshop — 30 min before

  • Verify the platform is up: curl https://platform.ovh.reasoninglayer.ai/api/v1/health
  • Open the docs site with the Workshop pages ready
  • Prepare your screen for live coding (font size 16+, dark theme)
  • Have the credentials document ready to share

Tenant strategy decision

Option A: One tenant per student (recommended)

  • Each student works in isolation — no conflicts
  • You can see what each student created
  • Requires 20 tenant IDs + API keys

Option B: One shared tenant

  • Students see each other’s sorts/terms — can be fun or chaotic
  • Sort name collisions will happen (e.g., everyone creates “person”)
  • Workaround: prefix sort names with student initials (e.g., alice_person)
  • Only 1 tenant ID + API key needed

Workshop Agenda (4 hours)

Overview

0:00 ─ 0:15 Intro & Setup
0:15 ─ 0:35 Concept: What is the Reasoning Layer?
0:35 ─ 1:05 Part 1: Sorts (guided + challenge)
1:05 ─ 1:25 Part 2: Terms (guided + challenge)
1:25 ─ 1:35 ☕ Break
1:35 ─ 2:15 Part 3: Inference — rules & backward chaining (guided + challenge)
2:15 ─ 2:30 Part 4: Forward chaining (guided)
2:30 ─ 2:50 Part 5: Negation as failure (guided)
2:50 ─ 3:00 ☕ Break
3:00 ─ 3:25 Part 6: Fuzzy logic (guided + challenge)
3:25 ─ 3:45 Part 7: Causal reasoning (guided)
3:45 ─ 4:00 Wrap-up, Q&A, next steps
─────────────
Bonus (if time or fast students): Part 8 — Cognitive Agents

Detailed Plan

0:00 – 0:15 | Intro & Setup

Goal: Everyone has a running project with a verified connection.

What to do:

  1. Welcome students. Ask: “Who has used Prolog? Logic programming? Rule engines? Knowledge graphs?” — gauge the room.
  2. Share the credentials document.
  3. Walk through the Workshop Setup page live on your screen.
  4. Have everyone run npm start and confirm the output: Connected! Found 0 sorts.

Talking points:

“Today we’re going to build a knowledge system that can reason — not just store data and query it, but actually derive new conclusions, handle uncertainty, and answer ‘what if?’ questions. This is NOT machine learning — it’s symbolic AI, logic-based. Everything is deterministic and explainable.”

Common issues:

| Problem | Fix |
| --- | --- |
| node: command not found | Student needs to install Node.js 18+ |
| ERR_MODULE_NOT_FOUND | Run npm install again |
| 401 Unauthorized | Wrong API key — check for extra spaces when copy-pasting |
| fetch failed / ECONNREFUSED | Wrong URL — must be https://platform.ovh.reasoninglayer.ai/api/v1 (no trailing slash) |
| TypeError: fetch is not a function | Node version < 18 — needs upgrade |

If a student is stuck: Pair them with a neighbor who’s working. Don’t let setup eat more than 15 minutes.


0:15 – 0:35 | Concept: What is the Reasoning Layer?

Goal: Students understand WHY this exists before writing code.

What to do:

  1. Live-explain using the What is the Reasoning Layer? concept page.
  2. Don’t read it — tell it. Use the whiteboard or slides.

Key talking points (10 min):

“In most systems, you have data in a database, logic in your code, types in a schema, and uncertainty handled by ML models. The Reasoning Layer unifies all of this. Let me show you how.”

Draw this on the whiteboard:

Traditional:
  SQL table  →  App code (if/else)  →  ML model
  (data)        (logic)                (uncertainty)

Reasoning Layer:
  Sort lattice  →  Inference engine  →  Fuzzy logic
  (types+data)     (rules = data)       (uncertainty = data)

Everything is a Psi-term.

Key analogies for CS students:

| They know | Reasoning Layer equivalent | Key difference |
| --- | --- | --- |
| SQL table | Sort | Multiple inheritance, feature constraints |
| SQL row | Psi-term | Typed, validated, can be incomplete |
| if/else in code | Inference rule | Rules are data — you can query them! |
| WHERE salary > 100000 | guard('gt', 100000) | Constraints propagate through rule chains |
| NULL | Uninstantiated / Residuation | System says “I don’t know yet” instead of “doesn’t exist” |
| Probability | Fuzzy logic | Degrees of truth, not chance |

Pause and ask (5 min):

“Questions so far? The key idea is: data, logic, and types are all the same thing — Psi-terms. We call this homoiconicity. If you remember one thing from today, remember that.”


0:35 – 1:05 | Part 1: Sorts (30 min)

Goal: Students create a type hierarchy with multiple inheritance and explore it.

What to do:

  1. (5 min) Live-code Exercise 1.1 on your screen. Explain as you type:

    “A sort is like a class, but with multiple inheritance that actually works. No diamond problem — the system computes the most specific common type automatically.”

  2. (5 min) Live-code Exercise 1.2. Show isSubtype and getDescendants.
  3. (15 min) Students do Challenge 1 solo. Walk around the room. This is the first “your turn” moment.
  4. (5 min) Review the solution together. Ask a student to share their screen.

What to emphasize:

  • Sorts form a lattice, not a tree. Multiple parents are normal and useful.
  • bulkCreateSorts uses names for parents. createSort uses UUIDs. This catches people.
  • GLB = “most specific common type.” Draw it:

    person   organization
         \   /
       employee  ← GLB(person, organization)
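A subtype check over a lattice like this can be sketched in a few lines. This is purely illustrative: the platform computes subtyping server-side, and the `parents` map and `isSubtype` helper here are invented for the demo, not part of the SDK.

```typescript
// Toy sort lattice: each sort lists its parents. Multiple parents are fine.
const parents: Record<string, string[]> = {
  employee: ['person', 'organization'], // multiple inheritance
  person: ['top'],
  organization: ['top'],
  top: [],
};

// A sort is a subtype of another if any upward path reaches it.
function isSubtype(sub: string, sup: string): boolean {
  if (sub === sup) return true;
  return (parents[sub] ?? []).some((p) => isSubtype(p, sup));
}

console.log(isSubtype('employee', 'person'));       // true
console.log(isSubtype('employee', 'organization')); // true
console.log(isSubtype('person', 'employee'));       // false
```

Useful on the whiteboard too: the recursion makes it visible that "employee" sits below both parents at once.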

Common mistakes:

| Mistake | Symptom | Fix |
| --- | --- | --- |
| Using sort name instead of ID in createSort | 400 Bad Request | Use result.sort_ids['person'] |
| Running the script twice | Sort name conflict (409) | Clear sorts first or use new names |
| Forgetting await | Promise object printed instead of data | Add await |

1:05 – 1:25 | Part 2: Terms (20 min)

Goal: Students create data instances and understand residuation.

What to do:

  1. (5 min) Live-code Exercise 2.1. Emphasize Value.string(), Value.integer():

    “Terms use TAGGED values — Value.string('Alice') produces {type: 'String', value: 'Alice'}. This is different from inference, which uses untagged values. We’ll see that in Part 3.”

  2. (5 min) Live-code Exercise 2.2. This is the key moment for residuation:

    “In SQL, missing data is NULL — it means ‘nothing’. Here, Value.uninstantiated() means ‘I don’t know YET.’ The system doesn’t fail — it suspends and waits for more info. This is called residuation.”

  3. (10 min) Students do Challenge 2 solo.

What to emphasize:

  • TermResponse wraps TermDto — always access response.term.id, not response.id
  • The state field tells you if the term is complete, residuated, or no_witnesses
  • Partial updates with updateTerm only change the features you specify
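The tagged/untagged distinction comes back in Part 3, so it helps to show the two shapes side by side. A minimal sketch, assuming the value shapes described in the workshop text (the literals below stand in for what `Value.string()` and `FeatureInput.string()` would produce — check the SDK for the real builders):

```typescript
// Terms (storage) use TAGGED values:
const tagged = { type: 'String', value: 'Alice' }; // what Value.string('Alice') yields

// Inference uses UNTAGGED values — just the raw literal:
const untagged = 'Alice';                          // what FeatureInput.string('Alice') yields

console.log(JSON.stringify(tagged)); // {"type":"String","value":"Alice"}
console.log(untagged);               // Alice
```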

1:25 – 1:35 | ☕ Break

“We’ve covered the knowledge representation part — types and data. After the break, we start the reasoning part — rules and inference. That’s where the magic happens.”


1:35 – 2:15 | Part 3: Inference (40 min)

Goal: Students write rules, add facts, and run backward chaining. This is the core of the workshop.

What to do:

  1. (5 min) Explain the format switch before coding. This is the #1 mistake:

    “STOP. Important. From now on, we switch from Value.* to FeatureInput.*. Terms use the tagged format: Value.string('Alice') → {type: 'String', value: 'Alice'}. Inference uses the untagged format: FeatureInput.string('Alice') → just 'Alice'. If you mix them, the backend rejects your request.”

    Write on the whiteboard:

    TERMS (storage): Value.string("Alice") → {"type":"String","value":"Alice"}
    INFERENCE (reasoning): FeatureInput.string("Alice") → "Alice"
    NEVER mix them. ❌ Value.* in addFact/addRule
  2. (10 min) Live-code Exercise 3.1 (add facts + rule). Explain each line:

    • TermInput.byName('student', {...}) — reference a sort by name
    • FeatureInput.variable('?Name') — a variable that will be bound during unification
    • FeatureInput.constrainedVar('?GPA', guard('gte', 3.5)) — a variable with a constraint
    • antecedents — the “if” part of the rule
  3. (5 min) Live-code Exercise 3.2 (backward chaining). Show the query and results:

    “We asked ‘who are the honor students?’ and the engine worked backward: it found the honor_student rule, then searched for student facts with GPA >= 3.5. Alice and Diana matched.”

  4. (5 min) Live-code Exercise 3.3 (chaining rules):

    “Now the scholarship rule depends on the honor_student rule. The engine chains them automatically — it proves honor_student first, then checks the year constraint. Two levels of deduction.”

  5. (15 min) Students do Challenge 3 solo. This is the most important solo exercise — they write 3 rules from scratch.

What to emphasize:

  • Variables starting with ? get bound during unification. Same variable name in head and body means “must be the same value.”
  • guard is a constraint, not a filter. It’s checked during unification, not after.
  • certainty on rules defaults to 1.0. When rules chain, certainties multiply.
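The "certainties multiply" point is easy to demo with plain arithmetic. A sketch, with made-up numbers and a helper name (`chainCertainty`) that is not part of the SDK:

```typescript
// When rules chain, the certainty of the conclusion is the product of the
// certainties along the chain (1.0 is the default for each rule).
function chainCertainty(...ruleCertainties: number[]): number {
  return ruleCertainties.reduce((acc, c) => acc * c, 1.0);
}

// honor_student rule at 0.9 feeding a scholarship rule at 0.8:
console.log(chainCertainty(0.9, 0.8)); // ≈ 0.72
```

This also explains why long rule chains end up with low certainty: every extra hop can only keep it the same (1.0) or shrink it.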

Common mistakes:

| Mistake | Symptom | Fix |
| --- | --- | --- |
| Using Value.* in addFact | 400 or silent data corruption | Switch to FeatureInput.* |
| Forgetting a variable in the head | Variable bindings missing from result | Add the variable to the rule head |
| Variable name typo (?name vs ?Name) | Variable not bound | Consistent casing |
| Not adding facts before querying | 0 solutions | Make sure facts are added first |

2:15 – 2:30 | Part 4: Forward Chaining (15 min)

Goal: Students understand the difference between backward and forward chaining.

What to do:

  1. (5 min) Explain the concept:

    “Backward chaining: you ask a question, the engine searches backward. Forward chaining: you say ‘apply ALL rules to ALL facts and tell me everything you derive.’ It’s materialization — like a SQL materialized view but with logic.”

  2. (10 min) Live-code Exercise 4.1. Show the derived facts:

    “Look — the engine automatically derived honor_student facts for Alice and Diana, scholarship_eligible for both, and any other rules you added in Challenge 3. All in one call.”

What to emphasize:

  • persist_derived: true saves derived facts permanently. false (default) just shows you what would be derived.
  • Forward chaining runs in iterations until no new facts can be derived (fixpoint).
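The fixpoint iteration can be shown as a toy loop. This is a sketch of the idea only: rules here are plain functions over string facts, whereas the real engine unifies Psi-terms; all names and facts below are invented for the demo.

```typescript
type Fact = string;
type Rule = (facts: Set<Fact>) => Fact[];

const rules: Rule[] = [
  // honor(X) :- student(X) with high GPA — hard-coded for the demo
  (facts) => (facts.has('student:alice:3.9') ? ['honor:alice'] : []),
  // scholarship(X) :- honor(X)
  (facts) => (facts.has('honor:alice') ? ['scholarship:alice'] : []),
];

function forwardChain(initial: Fact[]): Set<Fact> {
  const facts = new Set(initial);
  let changed = true;
  while (changed) {            // iterate until a pass derives nothing new (fixpoint)
    changed = false;
    for (const rule of rules) {
      for (const derived of rule(facts)) {
        if (!facts.has(derived)) {
          facts.add(derived);
          changed = true;
        }
      }
    }
  }
  return facts;
}

console.log([...forwardChain(['student:alice:3.9'])]);
// ['student:alice:3.9', 'honor:alice', 'scholarship:alice']
```

The point to draw out: nobody asked a question — the engine just saturated the fact base, exactly like materializing a view.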

2:30 – 2:50 | Part 5: Negation as Failure (20 min)

Goal: Students understand closed-world assumption and NAF.

What to do:

  1. (5 min) Explain the concept:

    “NAF means: if I can’t prove something, I assume it’s false. This is the ‘closed-world assumption.’ It’s how SQL works too — if a row doesn’t exist, the query returns nothing. But here, you can use it inside rules: ‘find students who are NOT honor students.’”

  2. (10 min) Live-code Exercise 5.1:

    “We asked for students where the positive literal (IS a student) matches, but the negative literal (is NOT an honor student) also holds. Bob and Charlie match — they are students but not honor students.”

  3. (5 min) Connect to the open-world concept:

    “There’s also open-world reasoning, where ‘can’t prove it’ means ‘I don’t know’ instead of ‘it’s false.’ We have a concept page on this if you want to dig deeper after the workshop.”
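NAF under the closed-world assumption fits in four lines of plain TypeScript, which can help students see there is no magic. The data below is made up to match the workshop's running example:

```typescript
// Closed world: the honorStudents set is everything the rules can PROVE.
const students = new Set(['alice', 'bob', 'charlie', 'diana']);
const honorStudents = new Set(['alice', 'diana']);

// NAF query: students for whom honor_student is NOT provable — treated as false.
const notHonor = [...students].filter((s) => !honorStudents.has(s));
console.log(notHonor); // ['bob', 'charlie']
```

Under an open-world reading, the same query would instead answer "unknown" for bob and charlie.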


2:50 – 3:00 | ☕ Break

“We’ve covered the core: types, data, rules, backward/forward chaining, and negation. After the break, we move to the advanced stuff: fuzzy logic and causal reasoning.”


3:00 – 3:25 | Part 6: Fuzzy Logic (25 min)

Goal: Students understand fuzzy values and similarity-based reasoning.

What to do:

  1. (5 min) Explain the concept before coding:

    “Is 37.2°C a fever? Classical logic says ‘no’ (cutoff is 37.5). Fuzzy logic says ‘sort of — degree 0.3.’ Real-world data is imprecise. Fuzzy logic handles that.”

    Draw a triangular membership function on the whiteboard:

    1.0 │      /\
        │     /  \
    0.5 │    /    \
        │   /      \
    0.0 │──/────────\───
          20   22   24
        "comfortable temperature"

    “The triangular shape says: 22°C is fully comfortable (degree 1.0), 20°C and 24°C are the boundaries (degree 0.0), and anything in between gets a proportional degree.”

  2. (5 min) Live-code Exercise 6.1 (fuzzy values on terms):

    “FuzzyShape.triangular(20, 22, 24) creates this shape. We attach it to a term feature with Value.fuzzyNumber(). The term now has an imprecise temperature.”

  3. (5 min) Live-code Exercise 6.2 (fuzzy unification):

    “Fuzzy unification asks: how similar are these two readings? It returns a degree from 0.0 to 1.0. Unlike regular unification which is pass/fail, this gives you a gradient.”

  4. (10 min) Students do Challenge 6 solo (fuzzy proving).

What to emphasize:

  • Four shapes: triangular (simple peak), trapezoidal (flat top), Gaussian (bell curve), cyclic Gaussian (wraps around)
  • FuzzyShape uses "kind" not "type" — this catches even experienced developers
  • T-norms control how truth degrees combine: min (conservative), product (probabilistic), lukasiewicz (strict)
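Both the triangular shape and the three T-norms are small enough to compute by hand. A local sketch for intuition only — `FuzzyShape.triangular(20, 22, 24)` presumably encodes the same (left, peak, right) parameters server-side, and the `triangular`/`tnorm` helpers below are not SDK functions:

```typescript
// Membership degree for a triangular shape: 0 outside [a, b], 1 at the peak,
// linear on both slopes.
const triangular = (a: number, peak: number, b: number) => (x: number) => {
  if (x <= a || x >= b) return 0;
  return x < peak ? (x - a) / (peak - a) : (b - x) / (b - peak);
};

const comfortable = triangular(20, 22, 24);
console.log(comfortable(22)); // 1   (fully comfortable)
console.log(comfortable(21)); // 0.5 (halfway up the slope)
console.log(comfortable(25)); // 0   (outside the support)

// T-norms: three ways to combine truth degrees in a conjunction.
const tnorm = {
  min: (a: number, b: number) => Math.min(a, b),                 // conservative
  product: (a: number, b: number) => a * b,                      // probabilistic
  lukasiewicz: (a: number, b: number) => Math.max(0, a + b - 1), // strict
};
console.log(tnorm.min(0.7, 0.5), tnorm.product(0.7, 0.5), tnorm.lukasiewicz(0.7, 0.5));
```

Worth pointing out that Łukasiewicz drops to 0 fastest: two half-truths (0.5 and 0.5) conjoin to 0.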

3:25 – 3:45 | Part 7: Causal Reasoning (20 min)

Goal: Students build a causal model and run counterfactuals.

What to do:

  1. (5 min) Explain Pearl’s ladder:

    “Level 1: correlation — ‘people who smoke get cancer more.’ Level 2: intervention — ‘if I MAKE someone stop smoking, will they avoid cancer?’ Level 3: counterfactual — ‘WOULD this patient have avoided cancer if they hadn’t smoked?’ Each level is strictly more powerful than the last.”

  2. (5 min) Live-code Exercise 7.1 (build a causal model). Draw the graph:

    high_traffic ──→ high_cpu ──→ slow_response ──→ user_complaints
    memory_leak ──↗
  3. (5 min) Live-code Exercise 7.2 (root cause analysis):

    “We ask ‘why are users complaining?’ and the engine traces backward through the causal graph. It finds two root causes: high_traffic and memory_leak, with paths showing how they lead to complaints.”

  4. (5 min) Live-code Exercise 7.3 (counterfactual):

    “We ask ‘would users still complain if there was no high traffic?’ The engine runs Pearl’s three-step algorithm: abduction (what caused the evidence?), action (remove high_traffic), prediction (what follows?). If memory_leak is still there, the answer might be yes.”

What to emphasize:

  • Causal != correlational. This is a hard concept. Use the smoking example.
  • The evidence field in counterfactuals says “here’s what actually happened.”
  • Root cause analysis traces backward; interventions trace forward.
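The backward trace is easy to demystify with a toy graph walk over the model drawn above. A sketch only: the real engine does structural causal inference, while this just follows edges; the `causes` map and `rootCauses` helper are invented for the demo.

```typescript
// Each effect lists its direct causes (the causal graph from Exercise 7.1,
// edges reversed).
const causes: Record<string, string[]> = {
  user_complaints: ['slow_response'],
  slow_response: ['high_cpu'],
  high_cpu: ['high_traffic', 'memory_leak'],
  high_traffic: [],
  memory_leak: [],
};

// Walk backward from an effect; nodes with no incoming causes are roots.
function rootCauses(effect: string, seen = new Set<string>()): string[] {
  if (seen.has(effect)) return [];
  seen.add(effect);
  const direct = causes[effect] ?? [];
  if (direct.length === 0) return [effect];
  return direct.flatMap((c) => rootCauses(c, seen));
}

console.log(rootCauses('user_complaints')); // ['high_traffic', 'memory_leak']
```

An intervention is the forward direction: delete `high_traffic` from the map and re-derive what still follows — which is exactly why the counterfactual answer can remain "yes" while memory_leak persists.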

3:45 – 4:00 | Wrap-up

What to do:

  1. (5 min) Review the summary table from the exercises page:

    “In 4 hours, you’ve used sorts, terms, inference rules, backward chaining, forward chaining, negation, fuzzy logic, and causal reasoning. That’s the full stack of symbolic AI reasoning.”

  2. (5 min) Show what we didn’t cover but exists:

    • Cognitive agents (Part 8) — BDI architecture, autonomous agents
    • Optimization — LP solver integrated with the knowledge base
    • Natural language queries — client.query.nlQuery()
    • WebSocket events — real-time agent monitoring
  3. (5 min) Q&A. Point to resources:

    • The concept pages for deeper understanding
    • The three advanced tutorials (Clinical Trial, Loan Approval, Incident Response)
    • The API reference for method-level docs

Closing:

“The key takeaway: in the Reasoning Layer, everything is a Psi-term — data, rules, constraints, even agents’ beliefs. This homoiconicity means the system can reason about its own reasoning. That’s what makes it fundamentally different from a database with application logic bolted on top.”


Bonus: Part 8 — Cognitive Agents

Use this if:

  • You have a 5th hour
  • Some students finish early and need advanced content
  • You want to do a live demo while students watch

Time needed: 15-20 min guided, 10 min challenge

Approach: Live-code Exercise 8.1 as a demo. The BDI cycle concept is the hardest to grasp — walk slowly through: belief → goal → cycle → result.


Troubleshooting Reference

Common errors across all parts

| Error | Meaning | Fix |
| --- | --- | --- |
| 400 Bad Request | Wrong request format | Check tagged vs untagged format |
| 401 Unauthorized | Bad API key | Check credentials |
| 404 Not Found | Sort/term/agent doesn’t exist | Check UUID, or sort wasn’t created yet |
| 409 Conflict | Sort name already exists | Use a different name, or delete first |
| 429 Too Many Requests | Rate limited | Wait a few seconds and retry |
| 500 Internal Server Error | Backend bug | Report to platform team |

“My backward chaining returns 0 solutions”

Checklist:

  1. Did you add facts? (client.inference.addFact(...))
  2. Did you add the rule? (client.inference.addRule(...))
  3. Are sort names consistent between facts, rules, and the goal?
  4. Are variable names consistent? (?Name in head must match ?Name in body)
  5. Are you using FeatureInput.* (NOT Value.*) for inference?
  6. Is the guard constraint correct? (guard('gte', 3.5) not guard('gte', '3.5'))

“Sort name conflict” when running script twice

Students will run their script multiple times. Sorts with the same name in the same tenant will conflict. Options:

  • Add a cleanup section at the top of their script
  • Give each student a unique prefix
  • Use different sort names on each run

Quick cleanup snippet for students:

// Add at the top of your script to start fresh
const existingSorts = await client.sorts.listSorts();
for (const sort of existingSorts) {
  // Ignore errors for sorts that are already gone
  try { await client.sorts.deleteSort(sort.id); } catch {}
}
await client.inference.clearFacts();
console.log('Cleaned up!');

Pacing Adjustment Guide

If running slow (behind by 15+ min)

  • Skip Challenge 2 (terms) — it’s the least critical challenge
  • Make Part 4 (forward chaining) a quick 3-min demo instead of 15 min
  • Make Part 5 (NAF) a 5-min demo
  • Skip Challenge 6 (fuzzy proving)

If running fast (ahead by 15+ min)

  • Let students attempt Part 8 (cognitive agents)
  • Deep-dive on open-world reasoning (concept page)
  • Show the psi() shorthand builder as a convenience
  • Let students explore the Clinical Trial tutorial

If some students are stuck while others are done

  • Fast students: point them to Challenge sections or advanced tutorials
  • Stuck students: pair with a neighbor, or give them the solution and have them modify it