# Flow Networks
The Flow Networks client exposes a generic capacitated directed graph as a first-class resource. You create a network once, optionally clone it for what-if branches, commit structural mutations (capacity tweaks and edge appends), and solve it under whichever algorithm fits the question — without rebuilding the graph each time.
The engine itself is domain-agnostic. It speaks nodes, edges, capacities, and (optionally) per-edge costs. Domain shapes — staff scheduling, transportation, supply chains, network reliability — sit on top: write a translator that turns your domain into a `CreateFlowNetworkRequest`, and let the engine handle the algorithmic core.
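As a sketch of what such a translator might look like, here is a toy transportation domain mapped onto the request shape used by the create example below. The `Route` type and `toFlowNetworkRequest` helper are hypothetical illustrations, and the `CreateFlowNetworkRequest` fields are inferred from the create example, not a verbatim SDK type:

```typescript
// Hypothetical domain type: a shipping lane between two locations.
interface Route {
  origin: string;
  destination: string;
  trucksPerDay: number; // becomes the edge capacity
}

// Assumed request shape, mirroring the fields used by flowNetworks.create()
// (nodes, source, sink, edges, description) — check the API reference.
interface CreateFlowNetworkRequest {
  nodes: string[];
  source: string;
  sink: string;
  edges: { from: string; to: string; capacity: number; label?: string }[];
  description?: string;
}

// Translator: domain objects in, engine-ready request out.
function toFlowNetworkRequest(
  routes: Route[],
  source: string,
  sink: string,
): CreateFlowNetworkRequest {
  // Collect every endpoint once, preserving first-seen order.
  const nodes = [...new Set(routes.flatMap((r) => [r.origin, r.destination]))];
  return {
    nodes,
    source,
    sink,
    edges: routes.map((r) => ({
      from: r.origin,
      to: r.destination,
      capacity: r.trucksPerDay,
      label: `${r.origin}-${r.destination}`, // unique labels enable later commits
    })),
    description: 'transportation demo',
  };
}

const request = toFlowNetworkRequest(
  [
    { origin: 'depot', destination: 'hub', trucksPerDay: 10 },
    { origin: 'hub', destination: 'store', trucksPerDay: 6 },
  ],
  'depot',
  'store',
);
console.log(request.nodes); // ['depot', 'hub', 'store']
```

The point of the pattern: all domain vocabulary stays in the translator, so the rest of your code only ever handles network ids and solver output.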
## When to reach for it
The dedicated Scheduling client wraps a flow network for one specific shape (capacitated bipartite b-matching with role stratification). Reach for `flowNetworks` when:
- The graph is not a scheduling shape — transport flow, escape-network capacity, project network, etc.
- You need to run different algorithms over the same graph (max-flow now, min-cut later, edge classification for sensitivity).
- You want to mutate the network between solves (raise a capacity, add a back edge) without re-uploading the whole structure.
- You need the what-if clone-and-mutate pattern that mirrors the `/spaces` resource.
If your problem fits the scheduling shape exactly, prefer `client.scheduling.feasibility()` / `optimize()` — same engine internally, but the SDK does the agent/day/shift translation for you.
## Lifecycle
```
create → (clone) → (commit) → solve
                                ↘ solve again with a different algorithm
```

`create` returns a stable id. Every other endpoint takes that id and returns metadata or solver output. There’s no per-call payload bloat — the network lives on the server until evicted.
## Create a network
```typescript
import { ReasoningLayerClient } from '@kortexya/reasoninglayer';

const client = new ReasoningLayerClient({
  baseUrl: 'https://platform.ovh.reasoninglayer.ai',
  tenantId: '...',
  auth: { mode: 'cookie' },
});

const network = await client.flowNetworks.create({
  nodes: ['s', 'a', 'b', 't'],
  source: 's',
  sink: 't',
  edges: [
    { from: 's', to: 'a', capacity: 3, label: 's-a' },
    { from: 's', to: 'b', capacity: 2, label: 's-b' },
    { from: 'a', to: 't', capacity: 2, label: 'a-t' },
    { from: 'b', to: 't', capacity: 3, label: 'b-t' },
  ],
  description: 'demo network',
});

console.log(network.id);        // server-assigned UUID
console.log(network.nodeCount); // 4
console.log(network.edgeCount); // 4
```

### Validation rules
- `source` and `sink` must appear in `nodes` and must be distinct.
- Every edge’s `from` and `to` must reference a declared node.
- `capacity` must be `≥ 0`.
- `label` is optional but, when supplied, must be unique across the network. Labelled edges can be referenced later by `commit` for capacity updates.
- `cost` is optional; pure max-flow algorithms ignore it. Min-cost max-flow uses it as the per-unit-flow cost.
Malformed input → HTTP 400. The engine fails loudly rather than silently.
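The server is the source of truth for these rules, but a client-side pre-check can fail faster and with friendlier messages before any network round-trip. A minimal sketch — the `validateNetworkInput` helper and its error strings are illustrative, not part of the SDK:

```typescript
interface EdgeInput {
  from: string;
  to: string;
  capacity: number;
  label?: string;
  cost?: number;
}

// Mirrors the validation rules listed above (illustrative helper, not SDK code).
function validateNetworkInput(
  nodes: string[],
  source: string,
  sink: string,
  edges: EdgeInput[],
): string[] {
  const errors: string[] = [];
  const nodeSet = new Set(nodes);
  if (!nodeSet.has(source)) errors.push(`source '${source}' not declared in nodes`);
  if (!nodeSet.has(sink)) errors.push(`sink '${sink}' not declared in nodes`);
  if (source === sink) errors.push('source and sink must be distinct');
  const seenLabels = new Set<string>();
  for (const e of edges) {
    if (!nodeSet.has(e.from) || !nodeSet.has(e.to)) {
      errors.push(`edge ${e.from}->${e.to} references an undeclared node`);
    }
    if (e.capacity < 0) errors.push(`edge ${e.from}->${e.to} has negative capacity`);
    if (e.label !== undefined) {
      if (seenLabels.has(e.label)) errors.push(`duplicate label '${e.label}'`);
      seenLabels.add(e.label);
    }
  }
  return errors;
}

const ok = validateNetworkInput(['s', 'a', 't'], 's', 't', [
  { from: 's', to: 'a', capacity: 2 },
  { from: 'a', to: 't', capacity: 2 },
]);
console.log(ok); // [] — nothing to report

const bad = validateNetworkInput(['s', 't'], 's', 't', [
  { from: 's', to: 'x', capacity: -1 }, // undeclared node AND negative capacity
]);
console.log(bad.length); // 2
```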
## Solve
```typescript
const maxFlow = await client.flowNetworks.solve(network.id, {
  algorithm: { kind: 'max_flow' },
});

console.log(maxFlow.totalFlow); // 4
for (const edge of maxFlow.edgeFlows) {
  console.log(`${edge.from} → ${edge.to}: ${edge.flow}/${edge.capacity}`);
}
```

The same network can be solved under any algorithm. The response narrows the relevant fields per algorithm; everything irrelevant is `undefined`.
| Algorithm | Returns |
|---|---|
| `{ kind: 'max_flow' }` | `totalFlow`, `edgeFlows` |
| `{ kind: 'min_cost_max_flow' }` | `totalFlow`, `totalCost`, `edgeFlows` (uses edge `cost`) |
| `{ kind: 'classify_edges' }` | `classifications` (per edge: `always_used` / `never_used` / `sometimes_used`) |
| `{ kind: 'classify_edges_optimal' }` | `classifications` over optimal (min-cost) max-flow solutions |
| `{ kind: 'min_cut' }` | `totalFlow`, `edgeFlows`, `minCut` (source-/sink-side partition + cut edges) |
## Edge classification
`classify_edges` and `classify_edges_optimal` partition every forward edge into three buckets via Dulmage–Mendelsohn residual analysis:
- `always_used` — every max-flow solution saturates this edge. Removing it strictly reduces the max flow.
- `never_used` — no max-flow solution routes any flow through this edge. It can be removed at zero cost.
- `sometimes_used` — flow may or may not pass through; the engine has slack here.
This is the same trichotomy the scheduling engine returns over (agent, day, shift) cells, generalised to arbitrary edges. Use it for capacity-planning (“which links are bottlenecks?”) and for what-if pruning (“which edges are dead weight?”).
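A typical consumption pattern is to bucket the classified edges before acting on them. A sketch, assuming each classification entry carries `from`, `to`, and `classification` fields (the exact response shape should be confirmed against the API reference, and the sample data below is made up):

```typescript
type EdgeClass = 'always_used' | 'never_used' | 'sometimes_used';

// Assumed per-edge entry shape — verify against the FlowNetworks reference.
interface EdgeClassification {
  from: string;
  to: string;
  classification: EdgeClass;
}

// Group edges by bucket: bottlenecks, dead weight, and slack.
function bucketClassifications(
  entries: EdgeClassification[],
): Record<EdgeClass, string[]> {
  const out: Record<EdgeClass, string[]> = {
    always_used: [],
    never_used: [],
    sometimes_used: [],
  };
  for (const e of entries) out[e.classification].push(`${e.from}->${e.to}`);
  return out;
}

// Illustrative data only, not engine output for the demo network.
const buckets = bucketClassifications([
  { from: 'x', to: 'y', classification: 'sometimes_used' },
  { from: 'y', to: 'z', classification: 'always_used' },
  { from: 'x', to: 'z', classification: 'never_used' },
]);
console.log(buckets.always_used); // ['y->z'] — the bottleneck candidates
```

`always_used` edges are your capacity-planning shortlist; `never_used` edges are candidates for pruning in a what-if clone.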
## Min-cut
```typescript
const cut = await client.flowNetworks.solve(network.id, {
  algorithm: { kind: 'min_cut' },
});

console.log(cut.minCut!.sourceSide); // ['s', 'a']
console.log(cut.minCut!.sinkSide);   // ['b', 't']
console.log(cut.minCut!.cutEdges);   // saturated forward edges crossing the cut
```

The summed capacity of `cutEdges` equals `totalFlow` — the max-flow / min-cut duality on the wire.
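That duality is cheap to sanity-check client-side: sum the cut-edge capacities and compare against `totalFlow`. A sketch, assuming cut edges carry a `capacity` field as in the solve output above; the demo values are the cut the partition `['s','a']` / `['b','t']` implies:

```typescript
interface CutEdge {
  from: string;
  to: string;
  capacity: number;
}

// Max-flow / min-cut duality check: cut capacity must equal total flow.
function cutCapacity(cutEdges: CutEdge[]): number {
  return cutEdges.reduce((sum, e) => sum + e.capacity, 0);
}

// Demo network: the edges crossing from {s, a} to {b, t} are s->b and a->t.
const demoCut: CutEdge[] = [
  { from: 's', to: 'b', capacity: 2 },
  { from: 'a', to: 't', capacity: 2 },
];
console.log(cutCapacity(demoCut)); // 4 — equal to totalFlow for the demo network
```

If this ever disagrees with `totalFlow` in your own consumption code, suspect a stale snapshot (e.g. a `commit` between the two solves) rather than the solver.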
## Mutate without rebuilding
`commit` lets you change the network in place. Both fields on the request are optional and may be supplied together; capacity updates apply before new edges are appended.
```typescript
await client.flowNetworks.commit(network.id, {
  updateCapacities: [
    { label: 'a-t', capacity: 5 }, // bump the existing edge
  ],
  addEdges: [
    { from: 'a', to: 'b', capacity: 1 }, // append a new edge
  ],
});

// Re-solve under the same algorithm — no need to re-upload.
const replayed = await client.flowNetworks.solve(network.id, {
  algorithm: { kind: 'max_flow' },
});
```

`updateCapacities` references edges by their caller-supplied `label` — the unique identifier you assigned at create time (or via a previous `commit` that appended a labelled edge). New edges added via `addEdges` may also carry labels for later commits.
## What-if branches with `clone`
`clone` returns a fresh network id whose graph is an independent deep copy of the source. The original is untouched; mutations on the clone don’t propagate.
```typescript
const branchA = await client.flowNetworks.clone(network.id);
const branchB = await client.flowNetworks.clone(network.id);

await client.flowNetworks.commit(branchA.id, {
  updateCapacities: [{ label: 's-a', capacity: 5 }],
});
await client.flowNetworks.commit(branchB.id, {
  updateCapacities: [{ label: 's-b', capacity: 5 }],
});

const [resA, resB] = await Promise.all([
  client.flowNetworks.solve(branchA.id, { algorithm: { kind: 'max_flow' } }),
  client.flowNetworks.solve(branchB.id, { algorithm: { kind: 'max_flow' } }),
]);

// Compare resA.totalFlow vs resB.totalFlow — which capacity bump matters?
```

This is the same pattern as the `/spaces` resource for computation spaces, applied to flow networks.
## Inspect a network
```typescript
const snapshot = await client.flowNetworks.get(network.id);
// { id, source, sink, nodeCount, edgeCount, description? }
```

`get` returns metadata only — it doesn’t enumerate every edge. Use it to confirm a network exists, check its size after a `commit`, or surface the description in a UI list.
## Beyond max-flow
Anything that fits “directed capacitated graph with a designated source and sink” maps onto these endpoints:
- Bipartite assignment — wire one side as source-edges, the other as sink-edges; capacities encode availability.
- Project networks — find the bottleneck path under capacity constraints.
- Network reliability — `classify_edges` reveals which links are always vs. never carrying flow.
- Sensitivity analysis — clone, bump a capacity, re-solve, compare.
- Min-cost transportation — supply at source, demand at sink, edge cost = transport cost per unit; solve with `min_cost_max_flow`.
The engine speaks edges and capacities; your domain decides what they mean.
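To make one of those mappings concrete, the bipartite-assignment bullet can be sketched as a translator in the same style as the create example. The `Worker` type and `assignmentNetwork` helper are hypothetical; the request fields mirror the create example above:

```typescript
// Hypothetical domain: workers with a task budget, tasks needing one assignee.
interface Worker {
  name: string;
  maxTasks: number;  // how many tasks this worker can absorb
  canDo: string[];   // which tasks this worker is qualified for
}

function assignmentNetwork(workers: Worker[], tasks: string[]) {
  return {
    nodes: ['source', ...workers.map((w) => w.name), ...tasks, 'sink'],
    source: 'source',
    sink: 'sink',
    edges: [
      // source -> worker: capacity = the worker's availability
      ...workers.map((w) => ({ from: 'source', to: w.name, capacity: w.maxTasks })),
      // worker -> task: capacity 1 = this worker may take this task once
      ...workers.flatMap((w) =>
        w.canDo.map((t) => ({ from: w.name, to: t, capacity: 1 })),
      ),
      // task -> sink: capacity 1 = each task needs exactly one assignee
      ...tasks.map((t) => ({ from: t, to: 'sink', capacity: 1 })),
    ],
  };
}

const net = assignmentNetwork(
  [{ name: 'ada', maxTasks: 2, canDo: ['t1', 't2'] }],
  ['t1', 't2'],
);
console.log(net.edges.length); // 5 (1 source edge + 2 skill edges + 2 sink edges)
```

Solving such a network with `max_flow` and comparing `totalFlow` to `tasks.length` tells you whether every task can be covered.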
## Next steps
- See the FlowNetworks namespace API reference for the complete type-level documentation of every request and response field.
- The Scheduling guide shows the same engine wrapped for the scheduling shape — useful contrast for understanding when to use the generic resource vs. the specialised client.