context
Module: vibex/context
Context management for LLM conversations. Handles token counting, message compression, and context-window management.
Interfaces
ContextBudget
Properties:
| Name | Type | Description |
|---|---|---|
| totalLimit | number | Total token limit of the model's context window |
| systemPrompt | number | Tokens used by the system prompt |
| tools | number | Tokens used by tool definitions |
| completion | number | Tokens reserved for the model's completion |
| overhead | number | Fixed token allowance for message framing |
| availableForMessages | number | Tokens remaining for conversation messages |
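The interface above can be sketched in TypeScript as follows. The field comments and the example values are inferences from the field names and from `calculateContextBudget` below, not taken from the module source:

```typescript
// Sketch of the ContextBudget shape from the table above; the comments
// are inferences from the field names, not from the module source.
interface ContextBudget {
  totalLimit: number;           // total token window of the target model
  systemPrompt: number;         // tokens used by the system prompt
  tools: number;                // tokens used by tool definitions
  completion: number;           // tokens reserved for the model's reply
  overhead: number;             // fixed allowance for message framing
  availableForMessages: number; // what remains for conversation history
}

// Example budget for a hypothetical 128k-token model.
const exampleBudget: ContextBudget = {
  totalLimit: 128_000,
  systemPrompt: 100,
  tools: 0,
  completion: 4_000,
  overhead: 500,
  availableForMessages: 123_400,
};
```

The invariant is that `availableForMessages` equals `totalLimit` minus every reserved component.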
Functions
estimateTokenCount
Simple token estimation (4 chars ≈ 1 token). For production, use a proper tokenizer such as tiktoken.
function estimateTokenCount(text: string): number

Parameters:
| Name | Type | Description |
|---|---|---|
| text | string | The text to estimate a token count for |
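The 4-chars-per-token heuristic described above can be sketched in one line, rounding partial tokens up:

```typescript
// 4-characters-per-token heuristic; a rough estimate only.
// Production code should use a real tokenizer such as tiktoken.
function estimateTokenCount(text: string): number {
  return Math.ceil(text.length / 4);
}

// Usage: a 12-character string estimates to 3 tokens.
const estimate = estimateTokenCount("Hello, world"); // → 3
```

Rounding up with `Math.ceil` keeps the estimate conservative, so short strings are never counted as zero tokens.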
calculateContextBudget
Calculate the token budget for messages.
function calculateContextBudget(totalLimit: number, systemPrompt: string, toolsText: string = "", completionTokens: number = 4000): ContextBudget

Parameters:
| Name | Type | Description |
|---|---|---|
| totalLimit | number | Total context-window limit in tokens |
| systemPrompt | string | The system prompt text |
| toolsText | string | Tool definitions text (default: "") |
| completionTokens | number | Tokens reserved for the completion (default: 4000) |
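A sketch of how the budget might be assembled from these parameters. The 500-token overhead value is an assumption for illustration, not read from the module source:

```typescript
interface ContextBudget {
  totalLimit: number;
  systemPrompt: number;
  tools: number;
  completion: number;
  overhead: number;
  availableForMessages: number;
}

// 4-chars-per-token heuristic from estimateTokenCount above.
function estimateTokenCount(text: string): number {
  return Math.ceil(text.length / 4);
}

// Sketch of the budget calculation; the 500-token overhead is an
// assumed fixed allowance for message framing, not the module's value.
function calculateContextBudget(
  totalLimit: number,
  systemPrompt: string,
  toolsText: string = "",
  completionTokens: number = 4000,
): ContextBudget {
  const systemTokens = estimateTokenCount(systemPrompt);
  const toolTokens = estimateTokenCount(toolsText);
  const overhead = 500; // assumption
  return {
    totalLimit,
    systemPrompt: systemTokens,
    tools: toolTokens,
    completion: completionTokens,
    overhead,
    // Clamp at zero so an oversized prompt cannot yield a negative budget.
    availableForMessages: Math.max(
      0,
      totalLimit - systemTokens - toolTokens - completionTokens - overhead,
    ),
  };
}
```

Every reserved component is subtracted from the total, and whatever remains is what the conversation history may consume.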
compressMessages
Compress messages to fit within the context window.
function compressMessages(messages: ModelMessage[], budget: ContextBudget): Promise<ModelMessage[]>

Parameters:
| Name | Type | Description |
|---|---|---|
| messages | ModelMessage[] | The conversation messages to compress |
| budget | ContextBudget | The token budget the messages must fit within |
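One plausible compression strategy, sketched below, is to drop the oldest messages until the rest fit the budget. This strategy, and the minimal `ModelMessage` shape, are assumptions for illustration; the module's actual algorithm may differ (e.g. it might summarize dropped messages instead):

```typescript
// Only the budget field this sketch needs; the full interface has more.
interface ContextBudget {
  availableForMessages: number;
}

// Assumed minimal shape of ModelMessage for this sketch.
interface ModelMessage {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
}

// 4-chars-per-token heuristic from estimateTokenCount above.
function estimateTokenCount(text: string): number {
  return Math.ceil(text.length / 4);
}

// Assumed strategy: walk from newest to oldest, keeping messages
// while they fit, so the most recent context survives.
async function compressMessages(
  messages: ModelMessage[],
  budget: ContextBudget,
): Promise<ModelMessage[]> {
  const kept: ModelMessage[] = [];
  let used = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokenCount(messages[i].content);
    if (used + cost > budget.availableForMessages) break;
    kept.unshift(messages[i]); // preserve original ordering
    used += cost;
  }
  return kept;
}
```

The function is async to match the documented `Promise<ModelMessage[]>` return type, which would also accommodate strategies that call out to a summarizing model.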
validateContext
Validate the final context before sending it to the LLM.
function validateContext(messages: ModelMessage[], budget: ContextBudget): { valid: boolean; reason?: string }

Parameters:
| Name | Type | Description |
|---|---|---|
| messages | ModelMessage[] | The final messages to validate |
| budget | ContextBudget | The token budget to validate against |
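A sketch of what this final guard might look like, assuming validation means comparing the summed message-token estimate against the remaining budget (the check and the minimal types are assumptions, not read from the module source):

```typescript
// Only the budget field this sketch needs; the full interface has more.
interface ContextBudget {
  availableForMessages: number;
}

// Assumed minimal shape of ModelMessage for this sketch.
interface ModelMessage {
  role: string;
  content: string;
}

// 4-chars-per-token heuristic from estimateTokenCount above.
function estimateTokenCount(text: string): number {
  return Math.ceil(text.length / 4);
}

// Last-line check before the LLM call: sum the estimated message
// tokens and compare against the remaining budget.
function validateContext(
  messages: ModelMessage[],
  budget: ContextBudget,
): { valid: boolean; reason?: string } {
  const used = messages.reduce(
    (sum, m) => sum + estimateTokenCount(m.content),
    0,
  );
  if (used > budget.availableForMessages) {
    return {
      valid: false,
      reason: `messages need ~${used} tokens but only ${budget.availableForMessages} are available`,
    };
  }
  return { valid: true };
}
```

Returning a `reason` alongside the boolean lets the caller log why a request was rejected instead of silently failing.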