proactive_triage

Proactive triage AI – lightweight LLM filter for interjection decisions.

Before the bot generates a full response to an unaddressed message, this module runs a cheap, fast LLM call to decide whether Stargazer should interject at all. The model outputs a single digit (1 = INTERJECT, 0 = SILENCE) based on the recent conversation context.

Uses the OpenAI-compatible chat-completions endpoint exposed by the LLM proxy (config.llm_base_url), keeping the call entirely tool-free.
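A minimal sketch of what that tool-free chat-completions call might look like. The prompt wording, `build_triage_payload`, and `parse_decision` are illustrative names and assumptions, not taken from the module; only the endpoint shape (OpenAI-compatible, single-digit 1/0 reply) comes from the description above.

```python
# Hypothetical sketch of the triage request/response handling.
# The real module would POST this payload to f"{base_url}/chat/completions"
# via its shared httpx.AsyncClient, with api_key as a bearer token.

TRIAGE_SYSTEM_PROMPT = (  # assumed wording, not the module's actual prompt
    "You are a triage filter. Reply with a single digit: "
    "1 if the bot should interject, 0 if it should stay silent."
)

def build_triage_payload(context_lines, model="gemini-2.0-flash-lite"):
    """Build an OpenAI-compatible chat-completions request body."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": TRIAGE_SYSTEM_PROMPT},
            {"role": "user", "content": "\n".join(context_lines)},
        ],
        "max_tokens": 1,     # a single digit is all that is needed
        "temperature": 0.0,  # deterministic yes/no decision
    }

def parse_decision(raw: str) -> bool:
    """Map the model's raw text to a boolean (1 = INTERJECT, 0 = SILENCE)."""
    return raw.strip().startswith("1")
```

Keeping the call tool-free and capped at one output token is what makes the filter cheap enough to run before every potential response.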

class proactive_triage.ProactiveTriageAI(http_client, base_url, api_key, model='gemini-2.0-flash-lite')[source]

Bases: object

Lightweight triage layer that decides whether Stargazer should interject.

Makes a single OpenAI-compatible chat-completions call to a cheap, fast model (e.g. gemini-2.0-flash-lite) and parses a binary 1 / 0 response.

Parameters:
  • http_client (httpx.AsyncClient)

  • base_url (str)

  • api_key (str)

  • model (str)

__init__(http_client, base_url, api_key, model='gemini-2.0-flash-lite')[source]

Initialize the instance.

Parameters:
  • http_client (AsyncClient) – Shared httpx.AsyncClient used to make the chat-completions request.

  • base_url (str) – Base URL of the OpenAI-compatible LLM proxy (config.llm_base_url).

  • api_key (str) – API key sent with each request to the proxy.

  • model (str) – Name of the cheap triage model; defaults to 'gemini-2.0-flash-lite'.

Return type:

None

static format_cached_message(msg)[source]

Format a CachedMessage for the triage prompt.

Parameters:

msg (CachedMessage)

Return type:

str
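A sketch of what this formatting step might look like. The `CachedMessage` fields shown here (`author`, `content`) are assumptions for illustration; the real class may carry different attributes and the real formatter may emit a richer line.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the real CachedMessage (actual fields may differ).
@dataclass
class CachedMessage:
    author: str
    content: str

def format_cached_message(msg: CachedMessage) -> str:
    # One compact "author: content" line per message keeps the
    # triage prompt short, which is what keeps the call cheap.
    return f"{msg.author}: {msg.content}"
```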

async should_interject(recent_messages, max_retries=3)[source]

Decide whether Stargazer should interject.

Returns a tuple (should_interject, raw_decision_text). On any unrecoverable error it defaults to (False, ...), i.e. SILENCE.

Return type:

tuple[bool, str]

Parameters:
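The retry-then-fail-silent behavior described above can be sketched as follows. `call_model` stands in for the actual chat-completions call, and the immediate retry (a real implementation might back off between attempts) and error text are assumptions, not the module's actual logic.

```python
import asyncio

async def should_interject_sketch(call_model, max_retries=3):
    """Retry the triage call; default to SILENCE on unrecoverable error."""
    for attempt in range(max_retries):
        try:
            raw = await call_model()
            # 1 = INTERJECT, anything else = SILENCE
            return raw.strip().startswith("1"), raw
        except Exception:
            if attempt == max_retries - 1:
                # Failing silent is the safe default: a missed interjection
                # is cheaper than a spurious one.
                return False, "error: triage call failed"
    return False, ""
```

Defaulting to `False` on failure matches the documented contract: the bot stays quiet unless the triage call affirmatively says to interject.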