Workshop Instructor Guide
This is your guide for running the Murder at the Tech Mansion workshop. Students solve a murder mystery by building an AI detective, learning sorts, terms, inference, fuzzy logic, causal reasoning, and cognitive agents through one continuous investigation.
The workshop code is available at gitlab.com/kortexya-pub/workshop. Students run each file in order, building on the previous step’s data.
Before the Workshop
1 week before
- Generate 20 tenant IDs + API keys (one per student, or one shared tenant)
- Test the platform URL: `https://platform.ovh.reasoninglayer.ai/api/v1`
- Prepare a shared document (Google Doc, HackMD) with credentials for each student
- Send pre-workshop email to students:
  - Install Node.js 18+ (`node --version` to check)
  - Have a code editor ready (VS Code recommended)
- Basic TypeScript knowledge required
- No prior knowledge of logic programming, Prolog, or AI needed
Day of workshop — 30 min before
- Verify the platform is up: `curl https://platform.ovh.reasoninglayer.ai/api/v1/health`
- Open the docs site with the Workshop pages ready
- Prepare your screen for live coding (font size 16+, dark theme)
- Have the credentials document ready to share
Tenant strategy decision
Option A: One tenant per student (recommended)
- Each student works in isolation — no conflicts
- You can see what each student created
- Requires 20 tenant IDs + API keys
Option B: One shared tenant
- Students see each other’s sorts/terms — can be fun or chaotic
- Sort name collisions will happen (e.g., everyone creates “person”)
- Workaround: prefix sort names with student initials (e.g., `alice_person`)
- Only 1 tenant ID + API key needed
Workshop Agenda (4 hours)
Overview
```
0:00 ─ 0:15  Intro & Setup
0:15 ─ 0:35  Concept: What is the Reasoning Layer?
0:35 ─ 1:05  Part 1: Sorts (guided + challenge)
1:05 ─ 1:25  Part 2: Terms (guided + challenge)
1:25 ─ 1:35  ☕ Break
1:35 ─ 2:15  Part 3: Inference — rules & backward chaining (guided + challenge)
2:15 ─ 2:30  Part 4: Forward chaining (guided)
2:30 ─ 2:50  Part 5: Negation as failure (guided)
2:50 ─ 3:00  ☕ Break
3:00 ─ 3:25  Part 6: Fuzzy logic (guided + challenge)
3:25 ─ 3:45  Part 7: Causal reasoning (guided)
3:45 ─ 4:00  Wrap-up, Q&A, next steps
─────────────
Bonus (if time or fast students): Part 8 — Cognitive Agents
```
Detailed Plan
0:00 – 0:15 | Intro & Setup
Goal: Everyone has a running project with a verified connection.
What to do:
- Welcome students. Ask: “Who has used Prolog? Logic programming? Rule engines? Knowledge graphs?” — gauge the room.
- Share the credentials document.
- Walk through the Workshop Setup page live on your screen.
- Have everyone run `npm start` and confirm `Connected! Found 0 sorts.`
Talking points:
“Today we’re going to build a knowledge system that can reason — not just store data and query it, but actually derive new conclusions, handle uncertainty, and answer ‘what if?’ questions. This is NOT machine learning — it’s symbolic AI, logic-based. Everything is deterministic and explainable.”
Common issues:
| Problem | Fix |
|---|---|
| `node: command not found` | Student needs to install Node.js 18+ |
| `ERR_MODULE_NOT_FOUND` | Run `npm install` again |
| `401 Unauthorized` | Wrong API key — check for extra spaces when copy-pasting |
| `fetch failed` / `ECONNREFUSED` | Wrong URL — must be `https://platform.ovh.reasoninglayer.ai/api/v1` (no trailing slash) |
| `TypeError: fetch is not a function` | Node version < 18 — needs upgrade |
If a student is stuck: Pair them with a neighbor who’s working. Don’t let setup eat more than 15 minutes.
0:15 – 0:35 | Concept: What is the Reasoning Layer?
Goal: Students understand WHY this exists before writing code.
What to do:
- Live-explain using the What is the Reasoning Layer? concept page.
- Don’t read it — tell it. Use the whiteboard or slides.
Key talking points (10 min):
“In most systems, you have data in a database, logic in your code, types in a schema, and uncertainty handled by ML models. The Reasoning Layer unifies all of this. Let me show you how.”
Draw this on the whiteboard:
```
Traditional:      SQL table  →  App code (if/else)  →  ML model
                  (data)        (logic)                 (uncertainty)

Reasoning Layer:  Sort lattice  →  Inference engine  →  Fuzzy logic
                  (types+data)     (rules=data)          (uncertainty=data)

Everything is a Psi-term.
```

Key analogies for CS students:
| They know | Reasoning Layer equivalent | Key difference |
|---|---|---|
| SQL table | Sort | Multiple inheritance, feature constraints |
| SQL row | Psi-term | Typed, validated, can be incomplete |
| `if/else` in code | Inference rule | Rules are data — you can query them! |
| `WHERE salary > 100000` | `guard('gt', 100000)` | Constraints propagate through rule chains |
| `NULL` | Uninstantiated / Residuation | System says “I don’t know yet” instead of “doesn’t exist” |
| Probability | Fuzzy logic | Degrees of truth, not chance |
Pause and ask (5 min):
“Questions so far? The key idea is: data, logic, and types are all the same thing — Psi-terms. We call this homoiconicity. If you remember one thing from today, remember that.”
0:35 – 1:05 | Part 1: Sorts (30 min)
Goal: Students create a type hierarchy with multiple inheritance and explore it.
What to do:
- (5 min) Live-code Exercise 1.1 on your screen. Explain as you type:
“A sort is like a class, but with multiple inheritance that actually works. No diamond problem — the system computes the most specific common type automatically.”
- (5 min) Live-code Exercise 1.2. Show `isSubtype` and `getDescendants`.
- (15 min) Students do Challenge 1 solo. Walk around the room. This is the first “your turn” moment.
- (5 min) Review the solution together. Ask a student to share their screen.
What to emphasize:
- Sorts form a lattice, not a tree. Multiple parents are normal and useful.
- `bulkCreateSorts` uses names for parents. `createSort` uses UUIDs. This catches people.
- GLB = “most specific common type.” Draw it:

```
person   organization
     \   /
   employee  ← GLB(person, organization)
```

Common mistakes:
| Mistake | Symptom | Fix |
|---|---|---|
| Using sort name instead of ID in `createSort` | `400 Bad Request` | Use `result.sort_ids['person']` |
| Running the script twice | Sort name conflict (409) | Clear sorts first or use new names |
| Forgetting `await` | Promise object printed instead of data | Add `await` |
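If students want to see what “most specific common type” means mechanically, the idea can be demoed without the platform. Below is a tiny self-contained sketch in plain TypeScript — not the SDK — using the sort names from the whiteboard drawing:

```typescript
// Illustrative toy lattice with multiple inheritance. A sort maps to its
// direct parents; GLB candidates are sorts that inherit from both inputs.
type Lattice = Record<string, string[]>;

const parents: Lattice = {
  person: [],
  organization: [],
  employee: ['person', 'organization'], // multiple parents are normal here
};

// All ancestors of a sort, including itself.
function ancestors(s: string): Set<string> {
  const seen = new Set<string>([s]);
  for (const p of parents[s] ?? []) for (const a of ancestors(p)) seen.add(a);
  return seen;
}

// Collect sorts below both a and b; the most specific one is the GLB.
function glb(a: string, b: string): string[] {
  return Object.keys(parents).filter(
    (s) => ancestors(s).has(a) && ancestors(s).has(b),
  );
}

console.log(glb('person', 'organization')); // [ 'employee' ]
```

The real engine computes this over the full lattice automatically; the point of the toy is that `employee`, inheriting from both parents, is exactly the GLB from the drawing.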
1:05 – 1:25 | Part 2: Terms (20 min)
Goal: Students create data instances and understand residuation.
What to do:
- (5 min) Live-code Exercise 2.1. Emphasize `Value.string()`, `Value.integer()`:

  “Terms use TAGGED values — `Value.string('Alice')` produces `{type: 'String', value: 'Alice'}`. This is different from inference, which uses untagged values. We’ll see that in Part 3.”

- (5 min) Live-code Exercise 2.2. This is the key moment for residuation:

  “In SQL, missing data is NULL — it means ‘nothing’. Here, `Value.uninstantiated()` means ‘I don’t know YET.’ The system doesn’t fail — it suspends and waits for more info. This is called residuation.”

- (10 min) Students do Challenge 2 solo.
What to emphasize:
- `TermResponse` wraps `TermDto` — always access `response.term.id`, not `response.id`
- The `state` field tells you if the term is `complete`, `residuated`, or `no_witnesses`
- Partial updates with `updateTerm` only change the features you specify
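To make the tagged format and the `state` field concrete, here is a standalone sketch. The helpers are re-implemented for illustration and are not the real SDK, and the real `state` can also be `no_witnesses`, which this toy omits:

```typescript
// Tagged values + term state — re-implemented helpers, illustration only.
type TaggedValue =
  | { type: 'String'; value: string }
  | { type: 'Integer'; value: number }
  | { type: 'Uninstantiated' };

const Value = {
  string: (v: string): TaggedValue => ({ type: 'String', value: v }),
  integer: (v: number): TaggedValue => ({ type: 'Integer', value: v }),
  uninstantiated: (): TaggedValue => ({ type: 'Uninstantiated' }),
};

// A term is complete when every feature has a value; otherwise it residuates:
// it suspends and waits for the missing information instead of failing.
function termState(features: Record<string, TaggedValue>): 'complete' | 'residuated' {
  return Object.values(features).some((v) => v.type === 'Uninstantiated')
    ? 'residuated'
    : 'complete';
}

console.log(termState({ name: Value.string('Alice'), age: Value.integer(20) }));    // complete
console.log(termState({ name: Value.string('Bob'), age: Value.uninstantiated() })); // residuated
```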
1:25 – 1:35 | ☕ Break
“We’ve covered the knowledge representation part — types and data. After the break, we start the reasoning part — rules and inference. That’s where the magic happens.”
1:35 – 2:15 | Part 3: Inference (40 min)
Goal: Students write rules, add facts, and run backward chaining. This is the core of the workshop.
What to do:
- (5 min) Explain the format switch before coding. This is the #1 mistake:

  “STOP. Important. From now on, we switch from `Value.*` to `FeatureInput.*`. Terms use tagged format: `Value.string('Alice')` → `{type: 'String', value: 'Alice'}`. Inference uses untagged format: `FeatureInput.string('Alice')` → just `'Alice'`. If you mix them, the backend rejects your request.”

  Write on the whiteboard:

  ```
  TERMS (storage):       Value.string("Alice")        → {"type":"String","value":"Alice"}
  INFERENCE (reasoning): FeatureInput.string("Alice") → "Alice"

  NEVER mix them. ❌ Value.* in addFact/addRule
  ```

- (10 min) Live-code Exercise 3.1 (add facts + rule). Explain each line:
  - `TermInput.byName('student', {...})` — reference a sort by name
  - `FeatureInput.variable('?Name')` — a variable that will be bound during unification
  - `FeatureInput.constrainedVar('?GPA', guard('gte', 3.5))` — a variable with a constraint
  - `antecedents` — the “if” part of the rule
- (5 min) Live-code Exercise 3.2 (backward chaining). Show the query and results:

  “We asked ‘who are the honor students?’ and the engine worked backward: it found the honor_student rule, then searched for student facts with GPA >= 3.5. Alice and Diana matched.”
- (5 min) Live-code Exercise 3.3 (chaining rules):

  “Now the scholarship rule depends on the honor_student rule. The engine chains them automatically — it proves honor_student first, then checks the year constraint. Two levels of deduction.”
- (15 min) Students do Challenge 3 solo. This is the most important solo exercise — they write 3 rules from scratch.
What to emphasize:
- Variables starting with `?` get bound during unification. Same variable name in head and body means “must be the same value.”
- `guard` is a constraint, not a filter. It’s checked during unification, not after.
- `certainty` on rules defaults to 1.0. When rules chain, certainties multiply.
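The certainty-multiplication behavior is easy to demo live with a one-liner; the 0.9 and 0.8 certainties below are hypothetical, not taken from the exercises:

```typescript
// Certainty multiplication along a rule chain — the numbers are made up.
function chainedCertainty(ruleCertainties: number[]): number {
  return ruleCertainties.reduce((acc, c) => acc * c, 1.0);
}

console.log(chainedCertainty([]));         // 1 — the default certainty, no chaining
console.log(chainedCertainty([0.9, 0.8])); // ≈ 0.72 — confidence decays as chains get deeper
```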
Common mistakes:
| Mistake | Symptom | Fix |
|---|---|---|
| Using `Value.*` in `addFact` | 400 or silent data corruption | Switch to `FeatureInput.*` |
| Forgetting a variable in the head | Variable bindings missing from result | Add the variable to the rule head |
| Variable name typo (`?name` vs `?Name`) | Variable not bound | Consistent casing |
| Not adding facts before querying | 0 solutions | Make sure facts are added first |
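A self-contained mock makes the format switch visible in a few lines. The shapes are copied from the whiteboard; the real SDK’s builders may return richer types:

```typescript
// Mock of the two builder families — NOT the real SDK, just the whiteboard shapes.
const Value = {
  string: (v: string) => ({ type: 'String', value: v }), // TERMS: tagged
};
const FeatureInput = {
  string: (v: string) => v, // INFERENCE: untagged — just the raw value
};

console.log(JSON.stringify(Value.string('Alice')));        // {"type":"String","value":"Alice"}
console.log(JSON.stringify(FeatureInput.string('Alice'))); // "Alice"
```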
2:15 – 2:30 | Part 4: Forward Chaining (15 min)
Goal: Students understand the difference between backward and forward chaining.
What to do:
- (5 min) Explain the concept:

  “Backward chaining: you ask a question, the engine searches backward. Forward chaining: you say ‘apply ALL rules to ALL facts and tell me everything you derive.’ It’s materialization — like a SQL materialized view but with logic.”
- (10 min) Live-code Exercise 4.1. Show the derived facts:

  “Look — the engine automatically derived honor_student facts for Alice and Diana, scholarship_eligible for both, and any other rules you added in Challenge 3. All in one call.”
What to emphasize:
- `persist_derived: true` saves derived facts permanently. `false` (default) just shows you what would be derived.
- Forward chaining runs in iterations until no new facts can be derived (fixpoint).
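The iterate-to-fixpoint idea can be sketched in plain TypeScript. This is a naive propositional toy (single-antecedent rules, made-up fact strings), not the platform engine:

```typescript
// Naive forward chaining to a fixpoint.
type Fact = string;
type Rule = { if: Fact; then: Fact }; // single-antecedent rules for simplicity

function forwardChain(initial: Set<Fact>, rules: Rule[]): Set<Fact> {
  const facts = new Set(initial);
  let changed = true;
  while (changed) {          // one iteration per pass; stop at fixpoint
    changed = false;
    for (const r of rules) {
      if (facts.has(r.if) && !facts.has(r.then)) {
        facts.add(r.then);   // a newly derived fact
        changed = true;
      }
    }
  }
  return facts;
}

// Hypothetical propositional stand-ins for the workshop's rules:
const derived = forwardChain(new Set(['gpa_gte_3_5(alice)']), [
  { if: 'gpa_gte_3_5(alice)', then: 'honor_student(alice)' },
  { if: 'honor_student(alice)', then: 'scholarship_eligible(alice)' },
]);
console.log([...derived]); // two facts derived in a chain, then the loop stops
```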
2:30 – 2:50 | Part 5: Negation as Failure (20 min)
Goal: Students understand closed-world assumption and NAF.
What to do:
- (5 min) Explain the concept:

  “NAF means: if I can’t prove something, I assume it’s false. This is the ‘closed-world assumption.’ It’s how SQL works too — if a row doesn’t exist, the query returns nothing. But here, you can use it inside rules: ‘find students who are NOT honor students.’”
- (10 min) Live-code Exercise 5.1:

  “We asked for students where the positive literal (IS a student) matches, but the negative literal (is NOT an honor student) also holds. Bob and Charlie match — they are students but not honor students.”
- (5 min) Connect to the open-world concept:

  “There’s also open-world reasoning, where ‘can’t prove it’ means ‘I don’t know’ instead of ‘it’s false.’ We have a concept page on this if you want to dig deeper after the workshop.”
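The same query — “students who are NOT honor students” — can be sketched under the closed-world assumption in a few lines of plain TypeScript (toy prover, not the platform engine):

```typescript
// Closed-world NAF toy: "not G" succeeds exactly when G cannot be proven.
const facts = new Set([
  'student(alice)', 'student(bob)', 'student(charlie)',
  'honor_student(alice)',
]);

const provable = (goal: string) => facts.has(goal); // trivial stand-in "prover"
const naf = (goal: string) => !provable(goal);      // can't prove ⇒ assume false

// "students who are NOT honor students":
const notHonor = ['alice', 'bob', 'charlie'].filter(
  (x) => provable(`student(${x})`) && naf(`honor_student(${x})`),
);
console.log(notHonor); // [ 'bob', 'charlie' ]
```

Under open-world reasoning, `naf('honor_student(bob)')` would instead answer “unknown” — that one-line difference is the whole distinction.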
2:50 – 3:00 | ☕ Break
“We’ve covered the core: types, data, rules, backward/forward chaining, and negation. After the break, we move to the advanced stuff: fuzzy logic and causal reasoning.”
3:00 – 3:25 | Part 6: Fuzzy Logic (25 min)
Goal: Students understand fuzzy values and similarity-based reasoning.
What to do:
- (5 min) Explain the concept before coding:

  “Is 37.2°C a fever? Classical logic says ‘no’ (cutoff is 37.5). Fuzzy logic says ‘sort of — degree 0.3.’ Real-world data is imprecise. Fuzzy logic handles that.”

  Draw a triangular membership function on the whiteboard:

  ```
  1.0 │      /\
      │     /  \
  0.5 │    /    \
      │   /      \
  0.0 │──/────────\──
        20   22   24
     "comfortable temperature"
  ```

  “The triangular shape says: 22°C is fully comfortable (degree 1.0), 20°C and 24°C are the boundaries (degree 0.0), and anything in between gets a proportional degree.”
- (5 min) Live-code Exercise 6.1 (fuzzy values on terms):

  “`FuzzyShape.triangular(20, 22, 24)` creates this shape. We attach it to a term feature with `Value.fuzzyNumber()`. The term now has an imprecise temperature.”

- (5 min) Live-code Exercise 6.2 (fuzzy unification):

  “Fuzzy unification asks: how similar are these two readings? It returns a degree from 0.0 to 1.0. Unlike regular unification, which is pass/fail, this gives you a gradient.”
- (10 min) Students do Challenge 6 solo (fuzzy proving).
What to emphasize:
- Four shapes: triangular (simple peak), trapezoidal (flat top), Gaussian (bell curve), cyclic Gaussian (wraps around)
- `FuzzyShape` uses `"kind"`, not `"type"` — this catches even experienced developers
- T-norms control how truth degrees combine: `min` (conservative), `product` (probabilistic), `lukasiewicz` (strict)
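Both the triangular shape and the t-norms are easy to reimplement for a live demo. This is an illustrative sketch, not the SDK’s `FuzzyShape`:

```typescript
// Triangular membership + the three t-norms — illustration only.
function triangular(lo: number, peak: number, hi: number) {
  return (x: number): number => {
    if (x <= lo || x >= hi) return 0;
    return x <= peak ? (x - lo) / (peak - lo) : (hi - x) / (hi - peak);
  };
}

const comfortable = triangular(20, 22, 24); // "comfortable temperature"
console.log(comfortable(22)); // 1   — fully comfortable
console.log(comfortable(21)); // 0.5 — halfway up the left slope
console.log(comfortable(25)); // 0   — outside the support

// T-norms: how two truth degrees combine in a fuzzy conjunction.
const tnorm = {
  min: (a: number, b: number) => Math.min(a, b),                 // conservative
  product: (a: number, b: number) => a * b,                      // probabilistic
  lukasiewicz: (a: number, b: number) => Math.max(0, a + b - 1), // strict
};
console.log(tnorm.min(0.6, 0.3));         // 0.3
console.log(tnorm.lukasiewicz(0.6, 0.3)); // 0 — strict norms punish weak evidence
```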
3:25 – 3:45 | Part 7: Causal Reasoning (20 min)
Goal: Students build a causal model and run counterfactuals.
What to do:
- (5 min) Explain Pearl’s ladder:

  “Level 1: correlation — ‘people who smoke get cancer more.’ Level 2: intervention — ‘if I MAKE someone stop smoking, will they avoid cancer?’ Level 3: counterfactual — ‘WOULD this patient have avoided cancer if they hadn’t smoked?’ Each level is strictly more powerful than the last.”
- (5 min) Live-code Exercise 7.1 (build a causal model). Draw the graph:

  ```
  high_traffic ──→ high_cpu ──→ slow_response ──→ user_complaints
                   memory_leak ──↗
  ```

- (5 min) Live-code Exercise 7.2 (root cause analysis):

  “We ask ‘why are users complaining?’ and the engine traces backward through the causal graph. It finds two root causes: high_traffic and memory_leak, with paths showing how they lead to complaints.”
- (5 min) Live-code Exercise 7.3 (counterfactual):

  “We ask ‘would users still complain if there was no high traffic?’ The engine runs Pearl’s three-step algorithm: abduction (what caused the evidence?), action (remove high_traffic), prediction (what follows?). If memory_leak is still there, the answer might be yes.”
What to emphasize:
- Causal != correlational. This is a hard concept. Use the smoking example.
- The `evidence` field in counterfactuals says “here’s what actually happened.”
- Root cause analysis traces backward; interventions trace forward.
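A toy version of the counterfactual can be shown as boolean structural equations in plain TypeScript. This assumes memory_leak feeds slow_response (matching the drawn graph), and abduction is trivial here because both causes are observed directly — the real engine does far more:

```typescript
// Boolean structural-equation toy of the incident model. Interventions
// (Pearl's do-operator) are implemented by forcing a variable's value.
type World = { high_traffic: boolean; memory_leak: boolean };

function simulate(w: World, doOverride: Partial<World> = {}) {
  const world = { ...w, ...doOverride };                // action: force values
  const high_cpu = world.high_traffic;
  const slow_response = high_cpu || world.memory_leak;  // assumed edge: leak → slow
  const user_complaints = slow_response;                // prediction step
  return { high_cpu, slow_response, user_complaints };
}

const observed: World = { high_traffic: true, memory_leak: true };
const actual = simulate(observed);
// "Would users still complain if there was no high traffic?"
const counterfactual = simulate(observed, { high_traffic: false });

console.log(actual.user_complaints);         // true — what actually happened
console.log(counterfactual.user_complaints); // true — memory_leak alone still causes complaints
```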
3:45 – 4:00 | Wrap-up
What to do:
- (5 min) Review the summary table from the exercises page:

  “In 4 hours, you’ve used sorts, terms, inference rules, backward chaining, forward chaining, negation, fuzzy logic, and causal reasoning. That’s the full stack of symbolic AI reasoning.”
- (5 min) Show what we didn’t cover but exists:
  - Cognitive agents (Part 8) — BDI architecture, autonomous agents
  - Optimization — LP solver integrated with the knowledge base
  - Natural language queries — `client.query.nlQuery()`
  - WebSocket events — real-time agent monitoring
- (5 min) Q&A. Point to resources:
  - The concept pages for deeper understanding
  - The three advanced tutorials (Clinical Trial, Loan Approval, Incident Response)
  - The API reference for method-level docs
Closing:
“The key takeaway: in the Reasoning Layer, everything is a Psi-term — data, rules, constraints, even agents’ beliefs. This homoiconicity means the system can reason about its own reasoning. That’s what makes it fundamentally different from a database with application logic bolted on top.”
Bonus: Part 8 — Cognitive Agents
Use this if:
- You have a 5th hour
- Some students finish early and need advanced content
- You want to do a live demo while students watch
Time needed: 15-20 min guided, 10 min challenge
Approach: Live-code Exercise 8.1 as a demo. The BDI cycle concept is the hardest to grasp — walk slowly through: belief → goal → cycle → result.
Troubleshooting Reference
Common errors across all parts
| Error | Meaning | Fix |
|---|---|---|
| `400 Bad Request` | Wrong request format | Check tagged vs untagged format |
| `401 Unauthorized` | Bad API key | Check credentials |
| `404 Not Found` | Sort/term/agent doesn’t exist | Check UUID, or sort wasn’t created yet |
| `409 Conflict` | Sort name already exists | Use a different name, or delete first |
| `429 Too Many Requests` | Rate limited | Wait a few seconds and retry |
| `500 Internal Server Error` | Backend bug | Report to platform team |
“My backward chaining returns 0 solutions”
Checklist:
- Did you add facts? (`client.inference.addFact(...)`)
- Did you add the rule? (`client.inference.addRule(...)`)
- Are sort names consistent between facts, rules, and the goal?
- Are variable names consistent? (`?Name` in head must match `?Name` in body)
- Are you using `FeatureInput.*` (NOT `Value.*`) for inference?
- Is the guard constraint correct? (`guard('gte', 3.5)`, not `guard('gte', '3.5')`)
“Sort name conflict” when running script twice
Students will run their script multiple times. Sorts with the same name in the same tenant will conflict. Options:
- Add a cleanup section at the top of their script
- Give each student a unique prefix
- Use different sort names on each run
Quick cleanup snippet for students:
```typescript
// Add at the top of your script to start fresh
const existingSorts = await client.sorts.listSorts();
for (const sort of existingSorts) {
  try { await client.sorts.deleteSort(sort.id); } catch {}
}
await client.inference.clearFacts();
console.log('Cleaned up!');
```

Pacing Adjustment Guide
If running slow (behind by 15+ min)
- Skip Challenge 2 (terms) — it’s the least critical challenge
- Make Part 4 (forward chaining) a quick 3-min demo instead of 15 min
- Make Part 5 (NAF) a 5-min demo
- Skip Challenge 6 (fuzzy proving)
If running fast (ahead by 15+ min)
- Let students attempt Part 8 (cognitive agents)
- Deep-dive on open-world reasoning (concept page)
- Show the `psi()` shorthand builder as a convenience
- Let students explore the Clinical Trial tutorial
If some students are stuck while others are done
- Fast students: point them to Challenge sections or advanced tutorials
- Stuck students: pair with a neighbor, or give them the solution and have them modify it