Workshop — Murder at the Tech Mansion
The workshop code is available at gitlab.com/kortexya-pub/workshop.
The Case
Last night, billionaire tech CEO Marcus Chen was found dead in the study of his mansion during a private dinner party. Cause of death: cyanide poisoning from his wine glass. Six guests were present. No one has left the property.
Your mission: build an AI detective engine to analyze the evidence, deduce who had opportunity, weigh uncertain testimonies, reconstruct the motive chain, and identify the killer.
The Suspects
| Name | Occupation | Motive | Study key? | Alibi |
|---|---|---|---|---|
| Dr. Sarah Park | Chief Scientist | $50M patent dispute | Yes | Library (unconfirmed) |
| James Chen | Victim’s nephew | $200M inheritance | Yes | Garden (claims Elena was there) |
| Elena Vasquez | Personal chef | About to be fired | Yes | Kitchen (contradicts James) |
| Prof. David Okafor | Professor | Stolen research | No | Dining room (confirmed by Tom) |
| Tom Reeves | Lawyer | Secret will change | No | Dining room (confirmed by David) |
| Nina Torres | Security consultant | Bitter breakup | Yes | Bathroom (no witness) |
The Evidence
- Poison vial found in study (wiped clean — no fingerprints)
- Victim’s wine glass contains cyanide traces
- Muddy footprints from garden to study
- Hair strand on victim’s collar: female DNA, 94% match confidence
- A staff member saw a “dark-clothed figure near study around 8:40pm”
- James and Elena give contradicting alibis
- Study door was locked — only key holders could enter
Step 1: Setting Up the Case (Sorts)
File: step-01-sorts.ts | Time: 25 min | Concept: types with multiple inheritance
You define the TYPES of things in your investigation. The key insight: a person_of_interest inherits from BOTH suspect and witness — something impossible with single inheritance.
```
        person
       /      \
  suspect    witness
       \      /
 person_of_interest   ← multiple inheritance!
```

What you learn:
- Sorts form a lattice, not a tree
- `bulkCreateSorts` uses names as parent references
- `isSubtype` checks the hierarchy
- `computeGlb` finds the most specific common type
Run: npx tsx step-01-sorts.ts
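The workshop library implements these operations for you; as a rough standalone sketch of the underlying idea (the names `parents`, `isSubtype`, and `computeGlb` here are illustrative re-implementations, not the library's actual API):

```typescript
// Minimal sketch of a sort lattice with multiple inheritance.
// Each sort lists its parents; a sort may have SEVERAL parents,
// which is exactly what a tree-shaped hierarchy cannot express.
const parents: Record<string, string[]> = {
  person: [],
  suspect: ["person"],
  witness: ["person"],
  person_of_interest: ["suspect", "witness"], // inherits from BOTH
};

// isSubtype: walk the parent links upward from `sub` looking for `sup`.
function isSubtype(sub: string, sup: string): boolean {
  if (sub === sup) return true;
  return (parents[sub] ?? []).some((p) => isSubtype(p, sup));
}

// computeGlb: among sorts below both `a` and `b`, pick the most
// general one (every other common lower bound is a subtype of it).
function computeGlb(a: string, b: string): string | undefined {
  const lower = Object.keys(parents).filter(
    (s) => isSubtype(s, a) && isSubtype(s, b),
  );
  return lower.find((c) => lower.every((o) => isSubtype(o, c)));
}
```

With these definitions, `computeGlb("suspect", "witness")` yields `person_of_interest` — the lattice, unlike a tree, has a most specific common subtype for the two branches.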
Step 2: Building the Case File (Terms)
File: step-02-terms.ts | Time: 20 min | Concept: data instances, tagged values, residuation
You enter all 6 suspects, physical evidence, forensic evidence, testimonies, and locations into the system. Each piece of data is a term — a typed instance with named features.
Key moment — Residuation:
When Nina’s alibi witness is Value.uninstantiated(), the system doesn’t crash or store NULL. It says “I don’t know yet” and suspends judgment. This is fundamentally different from SQL NULL.
What you learn:
- `Value.string()`, `Value.integer()`, `Value.boolean()` — tagged format for storage
- `Value.uninstantiated()` — represents unknown data
- `updateTerm` — for partial updates
Run: npx tsx step-02-terms.ts
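To make the residuation idea concrete, here is a standalone sketch of tagged values and a query that suspends instead of failing (all types and helpers below are illustrative, not the workshop library's actual API):

```typescript
// A tagged value: the tag says what kind of data is stored,
// and "uninstantiated" means "I don't know yet" — not SQL NULL.
type Value =
  | { tag: "string"; value: string }
  | { tag: "integer"; value: number }
  | { tag: "boolean"; value: boolean }
  | { tag: "uninstantiated" };

const str = (value: string): Value => ({ tag: "string", value });
const unknown = (): Value => ({ tag: "uninstantiated" });

// A term: a typed instance with named features.
interface Term {
  sort: string;
  features: Record<string, Value>;
}

const nina: Term = {
  sort: "suspect",
  features: { name: str("Nina Torres"), alibi_witness: unknown() },
};

// A query over an uninstantiated feature doesn't crash and doesn't
// answer "no" — it SUSPENDS judgment until the data arrives.
type Answer = "yes" | "no" | "suspended";
function hasAlibiWitness(t: Term): Answer {
  const v = t.features["alibi_witness"];
  if (v.tag === "uninstantiated") return "suspended";
  return v.tag === "string" ? "yes" : "no";
}
```

Asking `hasAlibiWitness(nina)` returns `"suspended"`; once the feature is filled in with a string, the same query answers `"yes"` — the three-valued outcome is what separates residuation from NULL handling.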
Step 3: The Deduction (Inference)
File: step-03-inference.ts | Time: 40 min | Concept: rules, backward/forward chaining
This is where the magic happens. You write logical rules and the engine deduces who could have done it.
The rules you write:
- `had_opportunity` — has key to the study + unconfirmed alibi
- `matches_dna` — female + had opportunity
- `prime_suspect` — matches DNA + has motive
The engine chains them: prime_suspect → matches_dna → had_opportunity → suspect facts
When you ask “who is a prime suspect?”, the engine works backward through the chain — exactly like a detective following leads from conclusion to evidence.
What you learn:
- `FeatureInput.variable("?Name")` — variables that get bound during unification
- `guard("gte", 3.5)` — constraints on variables
- `backwardChain` — goal-directed queries
- `forwardChain` — derive everything at once
Run: npx tsx step-03-inference.ts
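The rule chain above can be sketched as a tiny backward chainer over ground facts. This is a deliberately simplified stand-in for the engine (no variables or unification, one suspect per query; every name below is illustrative, not the library's API):

```typescript
// A fact is predicate(who); a rule derives its head for a suspect
// when every body predicate can be proven for that same suspect.
type Fact = { pred: string; who: string };
type Rule = { head: string; body: string[] };

const facts: Fact[] = [
  { pred: "has_key", who: "nina" },
  { pred: "unconfirmed_alibi", who: "nina" },
  { pred: "female", who: "nina" },
  { pred: "has_motive", who: "nina" },
  { pred: "has_key", who: "james" },
];

const rules: Rule[] = [
  { head: "had_opportunity", body: ["has_key", "unconfirmed_alibi"] },
  { head: "matches_dna", body: ["female", "had_opportunity"] },
  { head: "prime_suspect", body: ["matches_dna", "has_motive"] },
];

// Backward chaining: to prove a goal, either find it as a fact,
// or find a rule whose head matches and prove every body goal.
// This works from the conclusion back toward the evidence.
function prove(pred: string, who: string): boolean {
  if (facts.some((f) => f.pred === pred && f.who === who)) return true;
  return rules.some(
    (r) => r.head === pred && r.body.every((g) => prove(g, who)),
  );
}
```

Asking `prove("prime_suspect", "nina")` recursively expands `matches_dna`, then `had_opportunity`, then bottoms out in the facts — the lead-following behavior the step describes.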
Step 4: Eliminating the Innocent (Negation)
File: step-04-negation.ts | Time: 15 min | Concept: negation as failure, closed-world assumption
A detective doesn’t just find the guilty — they eliminate the innocent. NAF (Negation as Failure) finds suspects who CANNOT be proven to match the evidence.
“Find all suspects who are NOT prime suspects” → these people are cleared.
What you learn:
- `nafProve` with positive and negative literals
- Closed-world assumption: “can’t prove it” = “it’s false”
- Elimination by contradiction
Run: npx tsx step-04-negation.ts
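A minimal sketch of negation as failure, assuming a tiny fact base (the `nafProve` signature here is an illustrative guess at the shape of the idea, not the workshop library's actual function):

```typescript
// Closed-world assumption: the fact base is ALL we know,
// so anything we cannot prove is treated as false.
const facts = new Set<string>([
  "suspect(tom)",
  "suspect(nina)",
  "prime_suspect(nina)", // derived in step 3
]);

const provable = (goal: string): boolean => facts.has(goal);

// nafProve: positive literals must succeed, negated literals
// must FAIL to be proven (negation as failure).
function nafProve(positives: string[], negatives: string[]): boolean {
  return (
    positives.every(provable) && negatives.every((g) => !provable(g))
  );
}

// "suspect who is NOT a prime suspect" → cleared by elimination.
const tomCleared = nafProve(["suspect(tom)"], ["prime_suspect(tom)"]);
const ninaCleared = nafProve(["suspect(nina)"], ["prime_suspect(nina)"]);
```

Tom is cleared because `prime_suspect(tom)` cannot be proven; Nina is not, because it can. Note the caveat baked into NAF: it clears people relative to what the case file currently contains.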
Step 5: Weighing Uncertain Evidence (Fuzzy Logic)
File: step-05-fuzzy.ts | Time: 25 min | Concept: fuzzy values, similarity, truth degrees
Not all evidence is equally reliable. DNA at 94% confidence is different from an eyewitness who “maybe saw someone in the dark.”
You model evidence reliability as fuzzy membership functions:
- DNA match: `Triangular(0.90, 0.94, 0.97)` — high confidence
- Footprints: `Triangular(0.4, 0.6, 0.75)` — medium
- Eyewitness: `Triangular(0.1, 0.3, 0.5)` — low
Then compare evidence using fuzzy unification (how similar are two evidence profiles?) and search with Top-K (which evidence most supports this suspect?).
What you learn:
- `FuzzyShape.triangular()`, `FuzzyShape.gaussian()` — membership functions
- `fuzzyUnify` — similarity degree between terms
- `searchTopK` — find most similar evidence
- `fuzzyProve` — inference with truth degree propagation
Run: npx tsx step-05-fuzzy.ts
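A triangular membership function is simple enough to sketch directly (this is the standard textbook definition, written standalone rather than via the library's `FuzzyShape` API):

```typescript
// Triangular membership: 0 outside (a, c), rising linearly to 1
// at the peak b, then falling linearly back to 0.
function triangular(a: number, b: number, c: number) {
  return (x: number): number => {
    if (x <= a || x >= c) return 0;
    return x <= b ? (x - a) / (b - a) : (c - x) / (c - b);
  };
}

const dnaMatch = triangular(0.9, 0.94, 0.97); // high confidence
const eyewitness = triangular(0.1, 0.3, 0.5); // low confidence

// Truth degrees combine with min (fuzzy AND):
// a conclusion is only as strong as its weakest supporting evidence.
const combined = Math.min(dnaMatch(0.94), eyewitness(0.4));
```

Here `dnaMatch(0.94)` is 1 (the peak), `eyewitness(0.4)` is 0.5, so the combined degree is 0.5 — the shaky eyewitness drags the conclusion down even though the DNA is strong.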
Step 6: The Motive Chain (Causal Reasoning)
File: step-06-causal.ts | Time: 20 min | Concept: causal graphs, root cause, counterfactuals
WHY did it happen? You reconstruct the chain of events:
```
bitter_breakup → revenge    → murder_motive → planned_murder → victim_dead
patent_dispute → resentment → murder_motive ↗
access_to_poison ────────────────────────────↗
```

Then ask the most powerful question: “What if the breakup hadn’t happened? Would Marcus Chen still be dead?”
This is Pearl’s counterfactual reasoning — the highest level of causal inference.
What you learn:
- `addRelation` — build a causal graph
- `rootCause` — trace backward to find root causes
- `counterfactual` — “what if?” with Pearl’s 3-step algorithm
- `intervene` — “what if we had acted differently?”
- `checkDSeparated` — test conditional independence
Run: npx tsx step-06-causal.ts
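The backward walk through the motive chain can be sketched over a plain adjacency map. Two big simplifications to keep it short: converging causes are treated as alternatives (OR), and the "what if?" check is a deterministic skeleton, not Pearl's full 3-step counterfactual algorithm with probabilities. All names here are illustrative, not the library's API:

```typescript
// effect → its listed causes (edges of the motive chain).
const causes: Record<string, string[]> = {
  victim_dead: ["planned_murder"],
  planned_murder: ["murder_motive", "access_to_poison"],
  murder_motive: ["revenge", "resentment"],
  revenge: ["bitter_breakup"],
  resentment: ["patent_dispute"],
};

// rootCauses: walk backward until nodes with no listed causes.
function rootCauses(event: string): string[] {
  const parentsOf = causes[event];
  if (!parentsOf || parentsOf.length === 0) return [event];
  return [...new Set(parentsOf.flatMap(rootCauses))];
}

// Crude counterfactual: is the event still reachable from some
// intact cause if we delete one node from the graph?
function stillCaused(event: string, removed: string): boolean {
  if (event === removed) return false;
  const parentsOf = causes[event] ?? [];
  if (parentsOf.length === 0) return true; // intact root cause
  return parentsOf.some((p) => stillCaused(p, removed));
}
```

`rootCauses("victim_dead")` surfaces all three roots (the breakup, the patent dispute, and the poison access), and `stillCaused("victim_dead", "bitter_breakup")` comes back `true` — with the patent-dispute chain intact, removing the breakup alone does not save Marcus Chen in this toy model.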
Step 7: Deploy Your AI Detective (Cognitive Agents)
File: step-07-agents.ts | Time: 20 min | Concept: BDI architecture, autonomous agents
Your detective becomes an autonomous agent with:
- Beliefs — evidence and deductions gathered (with confidence scores)
- Goals — solve the murder, verify alibis, resolve contradictions
- Intentions — committed investigation plans
You create two detectives (Holmes and Watson) with different theories, have them exchange leads, and run BDI cycles where they autonomously decide what to investigate next.
What you learn:
- `createAgent` — create a cognitive agent
- `addBelief` / `addGoal` — feed the agent information
- `runCycle` — one perceive→believe→desire→intend→act→learn cycle
- `sendMessage` — inter-agent communication
- `provideFeedback` — human-in-the-loop learning
Run: npx tsx step-07-agents.ts
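The belief→goal→intention pipeline can be sketched in a few lines. This toy agent commits to whichever goal its confident beliefs support; it is a structural illustration of the BDI loop only, and every type and function here is illustrative rather than the workshop library's API:

```typescript
// Beliefs carry confidence; goals are desires; intentions are
// the goals the agent has actually committed to pursuing.
interface Belief {
  fact: string;
  confidence: number;
}
interface Agent {
  name: string;
  beliefs: Belief[];
  goals: string[];
  intentions: string[];
}

function createAgent(name: string): Agent {
  return { name, beliefs: [], goals: [], intentions: [] };
}
function addBelief(a: Agent, fact: string, confidence: number): void {
  a.beliefs.push({ fact, confidence });
}
function addGoal(a: Agent, goal: string): void {
  a.goals.push(goal);
}

// One deliberation cycle: commit to the first goal supported by
// a high-confidence belief (a stand-in for deliberate→intend→act).
function runCycle(a: Agent): string | undefined {
  const chosen = a.goals.find((g) =>
    a.beliefs.some((b) => b.confidence >= 0.8 && g.includes(b.fact)),
  );
  if (chosen) a.intentions.push(chosen);
  return chosen;
}

const holmes = createAgent("Holmes");
addBelief(holmes, "nina", 0.9); // strong DNA-backed lead
addGoal(holmes, "investigate nina");
addGoal(holmes, "verify all alibis");
const intention = runCycle(holmes);
```

After one cycle, `intention` is `"investigate nina"`: the high-confidence belief about Nina turned the matching desire into a committed intention, while the unsupported goal stayed a mere desire.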
The Reveal
After completing all 7 steps, your AI detective has:
| Capability | How it helps |
|---|---|
| Type hierarchy (sorts) | Models people, evidence, locations with multiple inheritance |
| Case file (terms) | Stores all suspects, evidence, and testimonies as structured data |
| Logical deduction (inference) | Narrows 6 suspects down to 3 prime suspects |
| Elimination (NAF) | Clears David, Tom, and James from prime suspect list |
| Evidence weighing (fuzzy) | Ranks evidence by reliability, finds strongest leads |
| Motive reconstruction (causal) | Traces why the murder happened, tests “what if?” scenarios |
| Autonomous investigation (agents) | Detectives with beliefs, goals, and the ability to learn |
Who killed Marcus Chen? The evidence points to Nina Torres — bitter breakup motive, key to the study, no alibi, DNA match, and a security background that could explain the wiped fingerprints. But can you prove it beyond reasonable doubt? The kitchen camera footage might change everything…
What you learned
Every concept in this workshop maps to a real AI/reasoning capability:
| Murder mystery concept | Reasoning Layer concept | Real-world application |
|---|---|---|
| Suspect types | Sorts (multiple inheritance lattice) | Knowledge graph schemas |
| Case file entries | Psi-terms (typed feature structures) | Structured data with validation |
| “Who had opportunity?” | Backward chaining inference | Goal-directed expert systems |
| “Derive all conclusions” | Forward chaining | Materialized views, alerting |
| “Eliminate the innocent” | Negation as failure | Compliance checking, filtering |
| “How reliable is this?” | Fuzzy logic / similarity | Anomaly detection, matching |
| “Why did it happen?” | Causal reasoning / counterfactuals | Root cause analysis, auditing |
| ”Investigate autonomously” | BDI cognitive agents | Autonomous AI systems |