Workshop — Murder at the Tech Mansion

The workshop code is available at gitlab.com/kortexya-pub/workshop.

The Case

Last night, billionaire tech CEO Marcus Chen was found dead in the study of his mansion during a private dinner party. Cause of death: cyanide poisoning, administered via his wine glass. Six guests were present. No one has left the property.

Your mission: build an AI detective engine to analyze the evidence, deduce who had opportunity, weigh uncertain testimonies, reconstruct the motive chain, and identify the killer.

The Suspects

| Name | Occupation | Motive | Study key? | Alibi |
| --- | --- | --- | --- | --- |
| Dr. Sarah Park | Chief Scientist | $50M patent dispute | Yes | Library (unconfirmed) |
| James Chen | Victim’s nephew | $200M inheritance | Yes | Garden (claims Elena was there) |
| Elena Vasquez | Personal chef | About to be fired | Yes | Kitchen (contradicts James) |
| Prof. David Okafor | Professor | Stolen research | No | Dining room (confirmed by Tom) |
| Tom Reeves | Lawyer | Secret will change | No | Dining room (confirmed by David) |
| Nina Torres | Security consultant | Bitter breakup | Yes | Bathroom (no witness) |

The Evidence

  • Poison vial found in study (wiped clean — no fingerprints)
  • Victim’s wine glass contains cyanide traces
  • Muddy footprints from garden to study
  • Hair strand on victim’s collar: female DNA, 94% match confidence
  • Staff member saw “dark-clothed figure near study around 8:40pm”
  • James and Elena give contradicting alibis
  • Study door was locked — only key holders could enter

Step 1: Setting Up the Case (Sorts)

File: step-01-sorts.ts | Time: 25 min | Concept: types with multiple inheritance

You define the TYPES of things in your investigation. The key insight: a person_of_interest inherits from BOTH suspect and witness — something impossible with single inheritance.

```
        person
       /      \
  suspect    witness
       \      /
  person_of_interest   ← multiple inheritance!
```

What you learn:

  • Sorts form a lattice, not a tree
  • bulkCreateSorts uses names as parent references
  • isSubtype checks the hierarchy
  • computeGlb finds the most specific common type
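
A minimal sketch of how this might look in code. Only the function names come from the workshop text; the import path and exact argument shapes are assumptions:

```ts
// Hypothetical import path; the real module lives in the workshop repo.
import { bulkCreateSorts, isSubtype, computeGlb } from "./engine";

// Parents are referenced by name, so the whole lattice is declared at once.
bulkCreateSorts([
  { name: "person" },
  { name: "suspect", parents: ["person"] },
  { name: "witness", parents: ["person"] },
  { name: "person_of_interest", parents: ["suspect", "witness"] }, // two parents
]);

isSubtype("person_of_interest", "witness"); // true: reachable via the second parent
computeGlb("suspect", "witness");           // "person_of_interest", the most specific common subtype
```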

Run: npx tsx step-01-sorts.ts


Step 2: Building the Case File (Terms)

File: step-02-terms.ts | Time: 20 min | Concept: data instances, tagged values, residuation

You enter all 6 suspects, physical evidence, forensic evidence, testimonies, and locations into the system. Each piece of data is a term — a typed instance with named features.

Key moment — Residuation: When Nina’s alibi witness is Value.uninstantiated(), the system doesn’t crash or store NULL. It says “I don’t know yet” and suspends judgment, resuming automatically once the value becomes known. This is fundamentally different from SQL NULL, where an unknown silently turns every comparison into UNKNOWN instead of waiting for an answer.

What you learn:

  • Value.string(), Value.integer(), Value.boolean() — tagged format for storage
  • Value.uninstantiated() — represents unknown data
  • updateTerm for partial updates
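
A sketch of entering Nina’s record. updateTerm and the Value constructors are named above; createTerm and the import path are placeholders:

```ts
// Hypothetical import path; createTerm and the feature layout are assumptions.
import { createTerm, updateTerm, Value } from "./engine";

// Nina's alibi witness is unknown, not NULL: the value is uninstantiated,
// so rules touching it suspend (residuate) instead of failing.
const nina = createTerm("suspect", {
  name: Value.string("Nina Torres"),
  has_study_key: Value.boolean(true),
  alibi_witness: Value.uninstantiated(), // "I don't know yet"
});

// New evidence can fill the gap later with a partial update.
updateTerm(nina, { alibi_witness: Value.string("kitchen camera") });
```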

Run: npx tsx step-02-terms.ts


Step 3: The Deduction (Inference)

File: step-03-inference.ts | Time: 40 min | Concept: rules, backward/forward chaining

This is where the magic happens. You write logical rules and the engine deduces who could have done it.

The rules you write:

  1. had_opportunity — has key to the study + unconfirmed alibi
  2. matches_dna — female + had opportunity
  3. prime_suspect — matches DNA + has motive

The engine chains them: prime_suspect → matches_dna → had_opportunity → suspect facts

When you ask “who is a prime suspect?”, the engine works backward through the chain — exactly like a detective following leads from conclusion to evidence.

What you learn:

  • FeatureInput.variable("?Name") — variables that get bound during unification
  • guard("gte", 3.5) — constraints on variables
  • backwardChain — goal-directed queries
  • forwardChain — derive everything at once

Run: npx tsx step-03-inference.ts


Step 4: Eliminating the Innocent (Negation)

File: step-04-negation.ts | Time: 15 min | Concept: negation as failure, closed-world assumption

A detective doesn’t just find the guilty — they eliminate the innocent. NAF (Negation as Failure) finds suspects who CANNOT be proven to match the evidence.

“Find all suspects who are NOT prime suspects” → these people are cleared.

What you learn:

  • nafProve with positive and negative literals
  • Closed-world assumption: “can’t prove it” = “it’s false”
  • Elimination by contradiction
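
A sketch of the elimination query; the literal format is an assumption built around the nafProve name above:

```ts
// Hypothetical import path; the literal shape is an assumption.
import { nafProve, FeatureInput } from "./engine";

// "?S is a suspect AND prime_suspect(?S) cannot be proven" → ?S is cleared.
// Under the closed-world assumption, failure to prove counts as false.
const cleared = nafProve([
  { predicate: "suspect", args: [FeatureInput.variable("?S")], negated: false },
  { predicate: "prime_suspect", args: [FeatureInput.variable("?S")], negated: true },
]);
```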

Run: npx tsx step-04-negation.ts


Step 5: Weighing Uncertain Evidence (Fuzzy Logic)

File: step-05-fuzzy.ts | Time: 25 min | Concept: fuzzy values, similarity, truth degrees

Not all evidence is equally reliable. DNA at 94% confidence is different from an eyewitness who “maybe saw someone in the dark.”

You model evidence reliability as fuzzy membership functions:

  • DNA match: Triangular(0.90, 0.94, 0.97) — high confidence
  • Footprints: Triangular(0.4, 0.6, 0.75) — medium
  • Eyewitness: Triangular(0.1, 0.3, 0.5) — low

Then compare evidence using fuzzy unification (how similar are two evidence profiles?) and search with Top-K (which evidence most supports this suspect?).

What you learn:

  • FuzzyShape.triangular(), FuzzyShape.gaussian() — membership functions
  • fuzzyUnify — similarity degree between terms
  • searchTopK — find most similar evidence
  • fuzzyProve — inference with truth degree propagation
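
A sketch of the fuzzy toolkit in action, with assumed signatures and placeholder terms:

```ts
// Hypothetical import path; signatures are assumptions.
import { FuzzyShape, fuzzyUnify, searchTopK } from "./engine";

// Reliability as triangular membership functions: (low, peak, high).
const dnaMatch = FuzzyShape.triangular(0.9, 0.94, 0.97); // high confidence
const eyewitness = FuzzyShape.triangular(0.1, 0.3, 0.5); // low confidence

declare const profileA: unknown; // evidence terms built in step 2
declare const profileB: unknown;

// Similarity degree in [0, 1] between two evidence profiles.
const degree = fuzzyUnify(profileA, profileB);

// The 3 stored pieces of evidence most similar to profileA.
const leads = searchTopK(profileA, 3);
```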

Run: npx tsx step-05-fuzzy.ts


Step 6: The Motive Chain (Causal Reasoning)

File: step-06-causal.ts | Time: 20 min | Concept: causal graphs, root cause, counterfactuals

WHY did it happen? You reconstruct the chain of events:

```
bitter_breakup   → revenge    → murder_motive → planned_murder → victim_dead
patent_dispute   → resentment → murder_motive ↗
access_to_poison ─────────────────────────────↗
```

Then ask the most powerful question: “What if the breakup hadn’t happened? Would Marcus Chen still be dead?”

This is Pearl’s counterfactual reasoning — the highest level of causal inference.

What you learn:

  • addRelation — build a causal graph
  • rootCause — trace backward to find root causes
  • counterfactual — “what if?” with Pearl’s 3-step algorithm
  • intervene — “what if we had acted differently?”
  • checkDSeparated — test conditional independence
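
A sketch of building and querying the graph. The function names are from the list above; the argument shapes are assumptions:

```ts
// Hypothetical import path; signatures are assumptions.
import { addRelation, rootCause, counterfactual } from "./engine";

// Build the causal graph one edge at a time.
addRelation("bitter_breakup", "revenge");
addRelation("revenge", "murder_motive");
addRelation("patent_dispute", "resentment");
addRelation("resentment", "murder_motive");
addRelation("murder_motive", "planned_murder");
addRelation("access_to_poison", "planned_murder");
addRelation("planned_murder", "victim_dead");

// Trace backward: which root causes explain the death?
const roots = rootCause("victim_dead");

// Pearl's three steps (abduction, action, prediction):
// had the breakup not happened, would the victim still be dead?
const outcome = counterfactual("victim_dead", { bitter_breakup: false });
```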

Run: npx tsx step-06-causal.ts


Step 7: Deploy Your AI Detective (Cognitive Agents)

File: step-07-agents.ts | Time: 20 min | Concept: BDI architecture, autonomous agents

Your detective becomes an autonomous agent with:

  • Beliefs — evidence and deductions gathered (with confidence scores)
  • Goals — solve the murder, verify alibis, resolve contradictions
  • Intentions — committed investigation plans

You create two detectives (Holmes and Watson) with different theories, have them exchange leads, and run BDI cycles where they autonomously decide what to investigate next.

What you learn:

  • createAgent — create a cognitive agent
  • addBelief / addGoal — feed the agent information
  • runCycle — one perceive→believe→desire→intend→act→learn cycle
  • sendMessage — inter-agent communication
  • provideFeedback — human-in-the-loop learning
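
A sketch of wiring up Holmes and Watson, with assumed signatures:

```ts
// Hypothetical import path; signatures are assumptions.
import { createAgent, addBelief, addGoal, runCycle, sendMessage } from "./engine";

const holmes = createAgent("Holmes");
const watson = createAgent("Watson");

// Beliefs carry a confidence score; goals drive what gets investigated.
addBelief(holmes, "dna_matches(nina)", 0.94);
addGoal(holmes, "identify_killer");

// One perceive → believe → desire → intend → act → learn cycle.
runCycle(holmes);

// Holmes shares a lead with Watson.
sendMessage(holmes, watson, "check_alibi(elena)");
```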

Run: npx tsx step-07-agents.ts


The Reveal

After completing all 7 steps, your AI detective has:

| Capability | How it helps |
| --- | --- |
| Type hierarchy (sorts) | Models people, evidence, locations with multiple inheritance |
| Case file (terms) | Stores all suspects, evidence, and testimonies as structured data |
| Logical deduction (inference) | Narrows 6 suspects down to 3 prime suspects |
| Elimination (NAF) | Clears David, Tom, and James from the prime-suspect list |
| Evidence weighing (fuzzy) | Ranks evidence by reliability, finds the strongest leads |
| Motive reconstruction (causal) | Traces why the murder happened, tests “what if?” scenarios |
| Autonomous investigation (agents) | Detectives with beliefs, goals, and the ability to learn |

Who killed Marcus Chen? The evidence points to Nina Torres — bitter breakup motive, key to the study, no alibi, DNA match, and a security background that could explain the wiped fingerprints. But can you prove it beyond reasonable doubt? The kitchen camera footage might change everything…


What you learned

Every concept in this workshop maps to a real AI/reasoning capability:

| Murder mystery concept | Reasoning Layer concept | Real-world application |
| --- | --- | --- |
| Suspect types | Sorts (multiple inheritance lattice) | Knowledge graph schemas |
| Case file entries | Psi-terms (typed feature structures) | Structured data with validation |
| “Who had opportunity?” | Backward chaining inference | Goal-directed expert systems |
| “Derive all conclusions” | Forward chaining | Materialized views, alerting |
| “Eliminate the innocent” | Negation as failure | Compliance checking, filtering |
| “How reliable is this?” | Fuzzy logic / similarity | Anomaly detection, matching |
| “Why did it happen?” | Causal reasoning / counterfactuals | Root cause analysis, auditing |
| “Investigate autonomously” | BDI cognitive agents | Autonomous AI systems |