Cognitive Agents and BDI
Cognitive agents are autonomous entities that perceive their environment, maintain beliefs about the world, pursue goals, and take actions. The Reasoning Layer implements agents using the BDI (Belief-Desire-Intention) architecture — a well-established model from AI research for building rational agents.
What is a cognitive agent?
A cognitive agent is not a chatbot or a script. It is an entity that:
- Perceives changes in its environment
- Believes things about the world (which may be uncertain or incomplete)
- Desires outcomes (goals it wants to achieve)
- Intends to act (commits to specific plans to achieve goals)
- Learns from experience (adjusts behavior based on outcomes)
Think of it as a software entity that acts more like a human decision-maker than a function call. It doesn’t just respond to inputs — it maintains state, pursues objectives, adapts to changing conditions, and can explain its reasoning.
The BDI architecture
BDI stands for Beliefs, Desires, and Intentions. These three mental attitudes drive all agent behavior:
Beliefs — “what do I think is true?”
Beliefs represent the agent’s understanding of the world. Each belief has:
- A sort and features (structured data, just like any Psi-term)
- A confidence score (0.0 to 1.0)
- A source (where the belief came from)
Beliefs can be uncertain, conflicting, or outdated. The agent revises them as new information arrives.
await client.cognitive.addBelief({
  agent_id: agentId,
  belief: {
    sort: 'server_status',
    features: { host: 'prod-1', cpu_usage: 85, status: 'degraded' },
  },
  confidence: 0.9,
  source: 'monitoring-api',
});

Key insight: Beliefs are not “facts” in the database sense. They are the agent’s interpretation of the world, which may be wrong or incomplete.
Desires (Goals) — “what do I want to achieve?”
Goals represent desired states of the world. Each goal has:
- A sort and features (the desired outcome)
- A priority (which goals matter most)
- A status (pending, active, achieved, failed)
Goals can conflict with each other. The agent must prioritize and sometimes abandon lower-priority goals.
await client.cognitive.addGoal({
  agent_id: agentId,
  goal: {
    sort: 'reduce_cpu_usage',
    features: { host: 'prod-1', target_usage: 50 },
  },
  priority: 8,
  status: 'pending',
});

Intentions — “what am I committed to doing?”
Intentions are the bridge between desires and actions. When an agent selects a goal to pursue, it forms an intention — a committed plan of action. Intentions have:
- A link to the goal they serve
- A plan (sequence of actions)
- A current step (progress tracker)
- A commitment strength (how strongly the agent is committed)
The agent doesn’t reconsider its intentions on every cycle — it commits and follows through unless conditions change significantly (this is known as a commitment strategy).
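As a minimal sketch of what an intention record might look like, the type below mirrors the list above; the field names are illustrative assumptions, not the SDK’s actual type definitions.

// Illustrative sketch only — field names mirror the list above and are
// assumptions, not the SDK's real intention type.
interface IntentionSketch {
  goal_id: string;              // the goal this intention serves
  plan: Array<{ sort: string; features: Record<string, unknown> }>; // ordered actions
  current_step: number;         // index of the next action to execute
  commitment_strength: number;  // 0.0–1.0: how strongly the agent is committed
}

const intention: IntentionSketch = {
  goal_id: 'goal-reduce-cpu-prod-1',
  plan: [
    { sort: 'scale_horizontally', features: { host: 'prod-1', replicas: 2 } },
    { sort: 'verify_cpu_usage', features: { host: 'prod-1', target_usage: 50 } },
  ],
  current_step: 0,
  commitment_strength: 0.8,
};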
The BDI cycle
The agent runs in discrete cycles. Each cycle follows this sequence:
┌─────────────────────────────────────────────────┐
│                    BDI Cycle                    │
│                                                 │
│  1. PERCEIVE  →  Gather new information         │
│        ↓                                        │
│  2. BELIEVE   →  Update beliefs based on        │
│        ↓         new perceptions                │
│        ↓                                        │
│  3. DESIRE    →  Generate/update goals based    │
│        ↓         on current beliefs             │
│        ↓                                        │
│  4. INTEND    →  Select goals, form plans       │
│        ↓                                        │
│  5. ACT       →  Execute the next action in     │
│        ↓         the current plan               │
│        ↓                                        │
│  6. LEARN     →  Record outcome, update         │
│                  strategies for next time       │
└─────────────────────────────────────────────────┘

const result = await client.cognitive.runCycle({
  agent_id: agentId,
});
console.log(`Goals pursued: ${result.outcome.goals_pursued.length}`);
console.log(`Goals achieved: ${result.outcome.goals_achieved.length}`);
console.log(`Actions executed: ${result.outcome.actions_executed.length}`);
console.log(`New beliefs: ${result.outcome.new_beliefs.length}`);
console.log(`Success rate: ${result.outcome.success_rate}`);

Why BDI instead of simpler approaches?
vs. state machines
State machines have fixed transitions. BDI agents dynamically generate goals and plans based on beliefs. When the world changes unexpectedly, a state machine may get stuck — a BDI agent can replan.
vs. reactive agents (stimulus-response)
Reactive agents just respond to inputs without internal state. BDI agents maintain beliefs and pursue goals across multiple cycles. A reactive agent forgets what happened last cycle; a BDI agent remembers and adapts.
vs. planning-only agents
Pure planners make a plan and execute it. BDI agents continuously interleave planning and execution — they can react to changes mid-plan and replan if conditions change.
vs. LLM-based agents
LLM agents use natural language reasoning. BDI agents use structured logical reasoning over a knowledge base. The advantages:
- Deterministic: Same beliefs + goals → same actions (reproducible)
- Inspectable: You can see exactly what the agent believes and why it acted
- Efficient: No LLM inference cost per decision
- Composable: Multiple agents can share and reason about the same knowledge base
Additional capabilities
Episodic memory
Agents remember past experiences and can recall them when facing similar situations. This enables learning from history.
const episodes = await client.cognitive.recallEpisodes({
  agent_id: agentId,
  goal_id: goalId,
  context: ['high-cpu', 'production'],
  max_results: 5,
});
// "Last time I faced high CPU on production, I scaled horizontally
// and it worked with reward 0.8"

Intrinsic motivation
Agents have internal drives (curiosity, efficiency, etc.) that generate goals even without external stimuli. This enables proactive behavior.
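As a purely illustrative sketch, a curiosity-style drive might surface as a self-generated goal. The drive name, goal sort, and features below are assumptions, but the call reuses the addGoal API shown earlier:

// Illustrative only: a goal an internal "curiosity" drive might generate.
// The sort, features, and priority are assumptions, not SDK constants.
await client.cognitive.addGoal({
  agent_id: agentId,
  goal: {
    sort: 'explore_unmonitored_hosts',
    features: { drive: 'curiosity', coverage_target: 1.0 },
  },
  priority: 3, // intrinsic goals typically rank below externally assigned ones
  status: 'pending',
});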
HTN planning
Hierarchical Task Network planning decomposes complex goals into subtask sequences. A goal like “deploy new version” might decompose into “run tests → build image → update service → verify health.”
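As a rough sketch of the idea, the decomposition could be written as a compound task with ordered subtasks. The structure below is illustrative data only, not the SDK’s actual HTN method format:

// Illustrative HTN-style method: a compound task and the ordered
// subtasks it decomposes into. Not the SDK's actual planning format.
const deployMethod = {
  task: 'deploy_new_version',
  subtasks: [
    { sort: 'run_tests', features: { suite: 'full' } },
    { sort: 'build_image', features: { tag: 'v2.0.0' } },
    { sort: 'update_service', features: { strategy: 'rolling' } },
    { sort: 'verify_health', features: { timeout_seconds: 300 } },
  ],
};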
Inter-agent messaging
Agents can communicate with each other — sending direct messages, broadcasting to all agents, and coordinating on shared goals.
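The sketch below only illustrates the distinction between a direct message and a broadcast; sendMessage and broadcast are assumed method names and may differ from the actual API:

// Hedged sketch: sendMessage and broadcast are assumed method names.
await client.cognitive.sendMessage({
  from_agent_id: agentId,
  to_agent_id: peerAgentId,
  content: { sort: 'cpu_alert', features: { host: 'prod-1', cpu_usage: 85 } },
});

await client.cognitive.broadcast({
  from_agent_id: agentId,
  content: { sort: 'maintenance_window', features: { starts_at: '02:00Z' } },
});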
Human-in-the-loop (HITL)
Agents can request human review when they’re uncertain. The hitl_request WebSocket event signals that the agent wants a human to approve an action before proceeding.
Real-time events
Subscribe to an agent’s WebSocket to observe its behavior in real time:
const subscription = client.cognitive.subscribeToEvents(agentId, {
  onEvent(event) {
    switch (event.type) {
      case 'cycle_complete':
        console.log(`Cycle done: ${event.actions_executed} actions`);
        break;
      case 'goal_completed':
        console.log(`Goal ${event.goal_id}: ${event.success ? 'achieved' : 'failed'}`);
        break;
      case 'hitl_request':
        console.log(`Agent requests human review: ${event.action_sort}`);
        break;
      case 'impasse_detected':
        console.log(`Agent is stuck: ${event.impasse_type}`);
        break;
    }
  },
});

Key takeaways
- BDI agents are goal-directed — they pursue objectives, not just respond to events
- Beliefs model uncertainty — the agent’s worldview can be incomplete or wrong, and it handles that gracefully
- Intentions provide commitment — agents follow through on plans rather than reconsidering every cycle
- The BDI cycle is the heartbeat — perceive → believe → desire → intend → act → learn
- Everything is inspectable — you can see an agent’s beliefs, goals, intentions, and motivations at any time
- Agents use the same knowledge base — beliefs and goals are Psi-terms, rules are Psi-terms, everything is connected
For practical usage with code examples, see the Cognitive Agents guide.