# LLM Processing

## Model Configuration

The system uses Anthropic's Claude as the primary AI engine and supports per-agent model selection.
| Setting | Value |
|---|---|
| Model | `claude-sonnet-4-20250514` |
| Temperature | 0.1 (for consistent responses) |
| Streaming | Enabled |
| Framework | LangGraph + `@langchain/langgraph-supervisor` |
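The settings above can be wired together roughly as follows. This is a minimal sketch using the documented packages; the subagent list and `SUPERVISOR_PROMPT` are placeholders, not the actual implementation:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { createSupervisor } from "@langchain/langgraph-supervisor";

// Model settings from the table above.
const llm = new ChatAnthropic({
  model: "claude-sonnet-4-20250514",
  temperature: 0.1, // low temperature keeps routing and answers consistent
  streaming: true,
});

// Placeholders: the real subagents and prompt are described in
// "System Prompts" below.
declare const subagents: any[];
declare const SUPERVISOR_PROMPT: string;

const supervisor = createSupervisor({
  agents: subagents,
  llm,
  prompt: SUPERVISOR_PROMPT,
}).compile();
```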
## Message Processing Flow
```mermaid
sequenceDiagram
    participant User
    participant MessagesService
    participant Supervisor
    participant Subagent
    participant MCP as MCP Tools

    User->>MessagesService: Send message
    MessagesService->>MessagesService: Build context (history, memory)
    MessagesService->>MessagesService: Load MCP tools, partition by @agents
    MessagesService->>Supervisor: Stream (messages + tools + updates)
    Supervisor->>Supervisor: Decide which agent handles request
    Supervisor->>Subagent: transfer_to_intelligence_agent
    Subagent->>MCP: Call tools (SearchCompanies, FindRegions...)
    MCP-->>Subagent: Tool results
    Subagent-->>Supervisor: Response + tool history
    Supervisor-->>User: Stream response text
    MessagesService->>MessagesService: Save conversation, token usage
```
## Cross-Domain Queries
For requests that span multiple domains (e.g., "Find SaaS companies and create a list"):
- Supervisor calls `intelligence_agent` first → gets search results
- Supervisor calls `lists_agent` with the results → creates the list
- Supervisor composes the final response
## Streaming Architecture
Three stream modes run simultaneously:
| Mode | What it captures | When it fires |
|---|---|---|
| `messages` | LLM token-by-token text output | Real-time as LLM generates |
| `tools` | Tool lifecycle events (`on_tool_start`, `on_tool_end`) | Real-time as tools execute |
| `updates` | Node-level state updates (token counts, conversation tracking) | After each graph node completes |
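One way to handle all three event classes in a single consumer is LangChain's `streamEvents` v2 API. This is a sketch, not the actual implementation: the event names are real LangChain event names, but the handler functions are placeholders, and mapping `on_chain_end` to the `updates` mode is an assumption:

```typescript
// `supervisor` is a compiled LangGraph graph; `emitToken`,
// `recordToolEvent`, and `trackUpdate` are placeholder handlers.
for await (const ev of supervisor.streamEvents(input, { version: "v2" })) {
  switch (ev.event) {
    case "on_chat_model_stream": // "messages": token-by-token LLM output
      emitToken(ev.data.chunk);
      break;
    case "on_tool_start": // "tools": tool lifecycle events
    case "on_tool_end":
      recordToolEvent(ev);
      break;
    case "on_chain_end": // "updates": a graph node completed
      trackUpdate(ev.name, ev.data.output);
      break;
  }
}
```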
## Handoff Tool Filtering

The supervisor uses internal `transfer_to_*` and `transfer_back_to_*` tools for routing. These are filtered from:

- Streaming events — `isHandoffToolName()` in `stream-handler.util.ts`
- Conversation history — filtered in `DatabaseConversationHistoryAdapter` to prevent Anthropic's `tool_use`/`tool_result` pairing errors on subsequent turns
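A plausible sketch of the name predicate (the actual implementation lives in `stream-handler.util.ts` and may differ):

```typescript
// Matches the supervisor's internal routing tools, e.g.
// "transfer_to_intelligence_agent" or "transfer_back_to_supervisor".
function isHandoffToolName(name: string): boolean {
  return name.startsWith("transfer_to_") || name.startsWith("transfer_back_to_");
}
```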
## System Prompts

### Supervisor Prompt

Pure routing logic — the supervisor never answers questions directly:
- Lists available agents and their capabilities
- Defines routing rules for common query types
- Dynamically includes `customer_tools_agent` when customer servers are connected
- Memory block and conversation history are appended at runtime
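The runtime assembly described above might look like this. This is a sketch under stated assumptions: the helper name, option names, and the exact `customer_tools_agent` line are illustrative, not the actual code:

```typescript
interface PromptParts {
  basePrompt: string;          // routing rules + agent list
  customerServersConnected: boolean;
  memoryBlock: string;         // appended at runtime
  conversationHistory: string; // appended at runtime
}

// Hypothetical helper: assembles the final supervisor prompt.
function buildSupervisorPrompt(p: PromptParts): string {
  const sections = [p.basePrompt];
  if (p.customerServersConnected) {
    // Only advertise customer_tools_agent when customer servers exist.
    sections.push("- customer_tools_agent: routes to customer-connected tools");
  }
  sections.push(p.memoryBlock, p.conversationHistory);
  return sections.filter(Boolean).join("\n\n");
}
```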
### Domain Prompts

Each subagent has a focused prompt in `src/connectors/prompts/subagent-prompts/`:
| Prompt | Content |
|---|---|
| `intelligence.ts` | Entity search rules, filter syntax, financial data formatting |
| `lists.ts` | 4-step list creation workflow, campaign management |
| `analytics.ts` | Dashboard generation, ClickHouse queries, scoring |
| `workflows.ts` | Workflow creation sequence, trigger types |
| `data-ops.ts` | Import templates, source management, matching review |
| `reports.ts` | Report builder workflow, template instructions |
| `data-science.ts` | Pipeline operations |
| `shared-formatting.ts` | Citation format, number formatting, work page links (shared across agents) |
## Debug Modes

| Mode | Description |
|---|---|
| `[debug]` | Full MCP request/response debug with tool call details |
| `[info]` | Basic tool call notifications |
| Default | Standard processing without debug output |
## File Processing

### Supported File Types

- **Text:** PDF, TXT, HTML
- **Structured data:** CSV, Excel (XLSX/XLS), JSON
- **Media:** Images (JPG, PNG, GIF, WebP), Audio (MP3, WAV, M4A, OPUS, with transcription)
- **Documents:** DOC/DOCX, PPT/PPTX
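For illustration, the categories above can be expressed as an extension lookup. The extension lists here are inferred from the supported types named above, not taken from the actual code:

```typescript
const FILE_CATEGORIES: Record<string, string[]> = {
  text: ["pdf", "txt", "html"],
  structured: ["csv", "xlsx", "xls", "json"],
  media: ["jpg", "png", "gif", "webp", "mp3", "wav", "m4a", "opus"],
  documents: ["doc", "docx", "ppt", "pptx"],
};

// Returns the category for a filename, or undefined if unsupported.
function categorizeFile(filename: string): string | undefined {
  const ext = filename.split(".").pop()?.toLowerCase() ?? "";
  return Object.keys(FILE_CATEGORIES).find((cat) =>
    FILE_CATEGORIES[cat].includes(ext),
  );
}
```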
### Document Export

The `export_document` tool (on the supervisor) generates downloadable PDF or DOCX files from text content.