Interface: TokenUsageDto
Defined in: src/types/ingestion.ts:545
Token usage from LLM API calls during ingestion.
Properties
cacheCreationTokens: number
Defined in: src/types/ingestion.ts:547
Tokens written to prompt cache.
cacheReadTokens: number
Defined in: src/types/ingestion.ts:549
Tokens served from prompt cache (reduces cost).
inputTokens: number
Defined in: src/types/ingestion.ts:551
Input tokens sent to the LLM.
outputTokens: number
Defined in: src/types/ingestion.ts:553
Output tokens received from the LLM.
totalTokens: number
Defined in: src/types/ingestion.ts:555
Total tokens (input + output).
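For reference, the properties above can be written out as a TypeScript interface. The field names, types, and the `totalTokens = inputTokens + outputTokens` invariant come from this page; the `sumTokenUsage` helper is a hypothetical illustration of aggregating usage across calls, not part of @kortexya/reasoninglayer:

```typescript
// Shape implied by the documented properties.
interface TokenUsageDto {
  /** Tokens written to prompt cache. */
  cacheCreationTokens: number;
  /** Tokens served from prompt cache (reduces cost). */
  cacheReadTokens: number;
  /** Input tokens sent to the LLM. */
  inputTokens: number;
  /** Output tokens received from the LLM. */
  outputTokens: number;
  /** Total tokens (input + output). */
  totalTokens: number;
}

// Hypothetical helper (not part of the library): field-wise sum of
// usage records, e.g. to total token consumption over an ingestion run.
function sumTokenUsage(usages: TokenUsageDto[]): TokenUsageDto {
  return usages.reduce(
    (acc, u) => ({
      cacheCreationTokens: acc.cacheCreationTokens + u.cacheCreationTokens,
      cacheReadTokens: acc.cacheReadTokens + u.cacheReadTokens,
      inputTokens: acc.inputTokens + u.inputTokens,
      outputTokens: acc.outputTokens + u.outputTokens,
      totalTokens: acc.totalTokens + u.totalTokens,
    }),
    { cacheCreationTokens: 0, cacheReadTokens: 0, inputTokens: 0, outputTokens: 0, totalTokens: 0 },
  );
}

// Example record: totalTokens reflects input + output only;
// cache tokens are reported separately.
const usage: TokenUsageDto = {
  cacheCreationTokens: 200,
  cacheReadTokens: 1800,
  inputTokens: 2500,
  outputTokens: 400,
  totalTokens: 2900, // inputTokens + outputTokens
};
```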