Stargazer v3 — Technical Documentation
Architectural Overview
Stargazer v3 is a multi-platform conversational AI bot that bridges Discord and Matrix with a shared LLM inference pipeline, extensible tool-calling framework, and a neurochemical simulation engine for affective computing.
System Architecture
The system is organized into the following layers:
Entry Point & Lifecycle — main boots the BotRunner, which
manages multi-platform adapter lifecycles, message routing, and the web management GUI.
Platform Adapters — The platforms package provides a
PlatformAdapter interface implemented by
platforms.discord and platforms.matrix. Adapters normalize incoming
messages into IncomingMessage objects and expose a uniform
reply API.
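The adapter contract described above can be sketched as an abstract base class. Everything below is illustrative: `PlatformAdapter` and `IncomingMessage` come from the source, but the method names, fields, and the `EchoAdapter` test double are assumptions, not the actual interface.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


# Hypothetical normalized message shape; field names are illustrative.
@dataclass
class IncomingMessage:
    platform: str        # e.g. "discord" or "matrix"
    channel_id: str
    author_id: str
    text: str
    attachments: list = field(default_factory=list)


class PlatformAdapter(ABC):
    """Uniform surface each platform adapter implements (sketch)."""

    @abstractmethod
    async def connect(self) -> None: ...

    @abstractmethod
    async def reply(self, message: IncomingMessage, text: str) -> None: ...


# Minimal in-memory adapter, useful for unit tests.
class EchoAdapter(PlatformAdapter):
    def __init__(self) -> None:
        self.sent: list[str] = []

    async def connect(self) -> None:
        pass

    async def reply(self, message: IncomingMessage, text: str) -> None:
        # Record instead of sending over the wire.
        self.sent.append(f"[{message.channel_id}] {text}")
```

Keeping the adapter surface this narrow is what lets `message_processor` stay platform-agnostic: it only ever sees `IncomingMessage` objects and the `reply` call.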
Message Processing Pipeline — message_processor is the core orchestrator.
For each incoming message it:
1. Preprocesses (URL extraction, attachment handling, multimodal parts)
2. Resolves conversation history via conversation and message_cache
3. Classifies and selects relevant tools via classifiers.vector_classifier
4. Gathers RAG context from rag_system
5. Builds the LLM prompt with prompt_context and prompt_renderer
6. Calls the LLM via openrouter_client
7. Postprocesses the response with response_postprocessor
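The staged flow above can be sketched as a linear async pipeline over a shared context dict. This is a minimal sketch of the orchestration pattern only; the stage bodies are stubs, and the real stages (classification, RAG, `openrouter_client`, etc.) would do real work and I/O.

```python
import asyncio

# Each stage is an async callable that takes and returns a context dict.
# Stage names mirror the pipeline steps; bodies are illustrative stubs.

async def preprocess(ctx: dict) -> dict:
    ctx["parts"] = [ctx["message"].strip()]
    return ctx

async def build_prompt(ctx: dict) -> dict:
    ctx["prompt"] = " ".join(ctx["parts"])
    return ctx

async def call_llm(ctx: dict) -> dict:
    # Stand-in for openrouter_client; a real implementation awaits an API call.
    ctx["response"] = f"echo: {ctx['prompt']}"
    return ctx

async def run_pipeline(message: str) -> str:
    ctx = {"message": message}
    for stage in (preprocess, build_prompt, call_llm):
        ctx = await stage(ctx)
    return ctx["response"]
```

Threading one mutable context through ordered async stages keeps each step independently testable while letting later stages (prompt building, postprocessing) read anything earlier stages produced.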
Tool Framework — tools provides the ToolRegistry decorator
system. Each tool file in the tools/ directory is auto-loaded by tool_loader.
Tools are async callables that receive optional ToolContext for
bot-internal access. The classifiers.vector_classifier selects tools
per-message via semantic embedding similarity.
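A decorator-based registry like the one described might look as follows. This is a sketch under stated assumptions: the real `ToolRegistry` and `ToolContext` APIs may differ, and here `tool_loader`'s auto-import step is simply "importing the module runs the decorators".

```python
import asyncio
import inspect


class ToolRegistry:
    """Decorator-based tool registry (illustrative sketch)."""

    def __init__(self) -> None:
        self._tools: dict = {}

    def register(self, name: str):
        def deco(fn):
            # Tools are required to be async callables.
            if not inspect.iscoroutinefunction(fn):
                raise TypeError(f"tool {name!r} must be an async function")
            self._tools[name] = fn
            return fn
        return deco

    async def call(self, name: str, *args, **kwargs):
        return await self._tools[name](*args, **kwargs)


registry = ToolRegistry()

@registry.register("add")
async def add(a: int, b: int, context=None):
    # `context` stands in for the optional ToolContext the docs mention.
    return a + b
```

Because registration happens at import time, dropping a new file into `tools/` is enough for the loader to pick it up; no central list needs editing.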
Neurochemical Model (NCM) — An affective computing layer that simulates neurochemical states:
- limbic_system — Core neurochemical vector state machine
- cascade_engine — Multi-turn event cascades
- ncm_engine — Top-level NCM orchestrator
- ncm_desire_engine — Desire/motivation modeling
- ncm_semantic_triggers — Semantic trigger evaluation
- cadence_refiner — Response cadence and style adjustment
- user_limbic_mirror — Per-user affective modeling
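One common way to model a "neurochemical vector state machine" is a bounded vector that event deltas push around and that decays back toward baseline each tick. The channel names, clamping, and decay rule below are assumptions for illustration, not the actual `limbic_system` model.

```python
from dataclasses import dataclass, field


@dataclass
class LimbicState:
    # Channel names are illustrative, not the real model's.
    levels: dict = field(default_factory=lambda: {
        "dopamine": 0.5, "serotonin": 0.5, "cortisol": 0.5,
    })
    baseline: float = 0.5
    decay: float = 0.9  # fraction of deviation from baseline kept per tick

    def apply_event(self, deltas: dict) -> None:
        # Events nudge channels; values stay clamped to [0, 1].
        for name, delta in deltas.items():
            self.levels[name] = min(1.0, max(0.0, self.levels[name] + delta))

    def tick(self) -> None:
        # Each tick pulls every channel back toward baseline.
        for name, value in self.levels.items():
            self.levels[name] = self.baseline + (value - self.baseline) * self.decay
```

A state like this gives downstream consumers (cadence_refiner, prompt building) a compact affective signal that reacts to events but forgets them over time.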
Knowledge & Memory — Long-term memory and knowledge management:
- knowledge_graph — FalkorDB-backed knowledge graph
- rag_system — RAG pipeline with embedding search
- message_cache — Redis-backed message history
- threadweave — Multi-thread conversation management
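The message-history role of `message_cache` can be sketched with an in-memory stand-in. The real module is Redis-backed; the bounded-deque structure and method names below are illustrative assumptions.

```python
from collections import defaultdict, deque


class MessageCache:
    """In-memory stand-in for the Redis-backed message_cache (sketch)."""

    def __init__(self, max_per_channel: int = 100) -> None:
        # One bounded history per channel; old messages fall off the front.
        self._hist = defaultdict(lambda: deque(maxlen=max_per_channel))

    def append(self, channel_id: str, author: str, text: str) -> None:
        self._hist[channel_id].append((author, text))

    def recent(self, channel_id: str, n: int = 10) -> list:
        return list(self._hist[channel_id])[-n:]
```

A bounded per-channel window like this is what the pipeline's "resolves conversation history" step would read when assembling prompt context.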
Background Processing — Async agents in background_agents:
- background_agents.channel_summarizer — Automatic channel summaries
- background_agents.research_agent — Background research tasks
- background_agents.deep_think_agent — Extended reasoning
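The common shape of these agents is a periodic async loop running alongside the message pipeline. The scheduling scheme below is a minimal sketch; the real agents' intervals, triggers, and shutdown handling are assumptions.

```python
import asyncio


async def periodic_agent(name: str, interval: float, work, ticks: int) -> list:
    """Run `work` every `interval` seconds for `ticks` iterations (sketch).

    A production agent would loop until cancelled rather than for a
    fixed tick count; `ticks` just makes the sketch testable.
    """
    results = []
    for _ in range(ticks):
        results.append(await work())
        await asyncio.sleep(interval)
    return results


async def summarize_channel():
    # Stand-in for channel_summarizer's real work.
    return "summary"
```

In a real deployment each agent would be spawned with `asyncio.create_task(...)` at startup so it shares the event loop with the platform adapters without blocking them.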
Configuration — config loads config.yaml with environment variable
overrides, supporting per-platform settings.
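The environment-variable override behavior can be sketched as a post-processing pass over the parsed config. The `STARGAZER_` prefix and flat-key scheme are assumptions for illustration; the real `config` module parses `config.yaml` first, which the plain dict stands in for here.

```python
import os


def apply_env_overrides(cfg: dict, prefix: str = "STARGAZER_") -> dict:
    """Return a copy of `cfg` with matching environment variables applied.

    A key like "model" is overridden by the env var STARGAZER_MODEL
    (prefix + upper-cased key). Keys without a matching env var keep
    their file-derived value.
    """
    out = dict(cfg)
    for key in cfg:
        env_key = prefix + key.upper()
        if env_key in os.environ:
            out[key] = os.environ[env_key]
    return out
```

This layering (file defaults, then environment) is what makes per-platform and per-deployment tweaks possible without editing `config.yaml`.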
API Reference
Core Modules
- anamnesis_engine
- api_key_encryption
- background_tasks
- btc_networks
- btc_wallet_manager
- build_kg
- cadence_refiner
- callbacks
- cascade_engine
- chroma_registry
- config
- conversation
- embedding_queue
- eth_networks
- feature_toggles
- flavor_engine
- flavor_memory
- game_assets
- game_memory
- game_ncm
- game_renderer
- game_session
- gemini_embed_pool
- gemini_kg_bulk_client
- init_redis_indexes
- kg_agentic_extraction
- kg_bulk_runner
- kg_consolidation
- kg_extraction
- latex_converter
- limbic_system
- log_rag_ingest
- log_redaction
- main
- media
- media_cache
- message_cache
- message_queue
- message_utils
- migrate_kg_overhaul
- migrate_kg_uuids
- ncm_delta_parser
- ncm_desire_engine
- ncm_engine
- ncm_local_embeddings
- ncm_semantic_triggers
- ncm_variant_cache
- oauth_manager
- observability
- openrouter_client
- proactive_triage
- prompt_context
- prompt_renderer
- response_postprocessor
- run_tool_test
- scrape_leafly
- search_query_generator
- server_stats
- star_avatar
- star_self_mirror
- status_manager
- task_manager
- terpene_engine
- test_anamnesis
- test_write
- threadweave
- tool_context
- tool_loader
- url_content_extractor
- user_limbic_mirror
- wallet_key_utils
- wallet_manager
- web_search_context
Tools
Classifiers
Platform Adapters
RAG System
Background Agents