proactive_triage
Proactive triage AI – lightweight LLM filter for interjection decisions.
Before the bot generates a full response to an unaddressed message, this
module runs a cheap, fast LLM call to decide whether Stargazer should
interject at all. The model outputs a single digit (1 = INTERJECT,
0 = SILENCE) based on the recent conversation context.
Uses the OpenAI-compatible chat-completions endpoint exposed by the LLM
proxy (config.llm_base_url), keeping the call entirely tool-free.
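The 1/0 protocol above can be sketched as follows. This is an illustrative sketch only, not the module's actual implementation: the helper names (`build_triage_payload`, `parse_triage_reply`) and the exact system prompt are assumptions; only the model name, the tool-free chat-completions call, and the 1 = INTERJECT / 0 = SILENCE contract come from the description above.

```python
def build_triage_payload(context: str, model: str = "gemini-2.0-flash-lite") -> dict:
    """Build a tool-free chat-completions request body for the triage call.

    Hypothetical sketch: the real prompt wording is not shown in this doc.
    The tiny max_tokens budget keeps the call cheap and forces a bare digit.
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": (
                    "Decide whether the bot should interject in the chat below. "
                    "Reply with a single digit: 1 to interject, 0 to stay silent."
                ),
            },
            {"role": "user", "content": context},
        ],
        "max_tokens": 1,
        "temperature": 0.0,
    }


def parse_triage_reply(text: str) -> bool:
    """Map the model's raw reply to a decision (True = interject).

    Anything other than a leading '1' defaults to silence, so a malformed
    reply fails safe: the bot stays quiet rather than interjecting.
    """
    return text.strip()[:1] == "1"
```

The fail-safe default matters here: a triage model that returns an unexpected token should never cause a spurious interjection.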
- class proactive_triage.ProactiveTriageAI(http_client, base_url, api_key, model='gemini-2.0-flash-lite')[source]
  Bases: object
  Lightweight triage deciding whether Stargazer should interject.
  Makes a single OpenAI-compatible chat-completions call to a cheap, fast model (e.g. gemini-2.0-flash-lite) and parses a binary 1/0 response.
  - __init__(http_client, base_url, api_key, model='gemini-2.0-flash-lite')[source]
    Initialize the instance.
Initialize the instance.
- static format_cached_message(msg)[source]
  Format a CachedMessage for the triage prompt.
  - Return type:
  - Parameters:
    msg (CachedMessage)
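A minimal sketch of what such a formatter might look like. The real CachedMessage type and its fields are not shown in this doc, so the `author`/`content` attributes and the output layout below are assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class CachedMessage:
    """Stand-in for the real cached-message type (fields are assumed)."""
    author: str
    content: str


def format_cached_message(msg: CachedMessage) -> str:
    """Render one cached message as a single line of triage context.

    One line per message keeps the prompt compact, which suits a cheap,
    fast triage model with a small token budget.
    """
    return f"{msg.author}: {msg.content}"
```

Usage: joining the formatted lines of the recent message cache with newlines yields the conversation context passed to the triage call.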