gabm.io.llm.genai module

Send prompts to Google Generative AI (genai), receive responses, and manage the model list and response cache.

Features:

- Send prompts to GenAI and cache responses for reproducibility.
- List available models from the GenAI API and save them as both JSON and TXT for validation and reference.
- Validate selected model names against the cached JSON model list.
- Unified workflow for model management, matching the other LLM modules in the project.

class gabm.io.llm.genai.GenAIService(logger=None)

Bases: LLMService

Service class for Google Generative AI LLM integration. Handles prompt sending, response caching, logging, and model listing.

SERVICE_NAME = 'genai'
extract_text_from_response(response)

Extract the text content from a GenAI response object for logging. Recursively searches for the first ‘text’ value in any nested structure. Logs a warning if no text is found.
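The recursive search described above can be sketched as follows. This is an illustrative sketch of the strategy, not the service's actual code; the helper name `find_first_text` is hypothetical.

```python
def find_first_text(obj):
    """Recursively search a nested structure for the first 'text' value.

    Walks dicts and lists depth-first and returns the first string found
    under a 'text' key, or None if no text is present (sketch).
    """
    if isinstance(obj, dict):
        # Prefer a direct 'text' entry before descending further.
        if isinstance(obj.get("text"), str):
            return obj["text"]
        for value in obj.values():
            found = find_first_text(value)
            if found is not None:
                return found
    elif isinstance(obj, (list, tuple)):
        for item in obj:
            found = find_first_text(item)
            if found is not None:
                return found
    return None
```

A depth-first walk like this is robust to changes in the response schema, since it does not hard-code the `candidates/content/parts` nesting.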

list_available_models(api_key)

Dynamically fetches the list of available Gemini models using the OpenAI-compatible API. Requires the openai package and a valid Gemini API key.

send(api_key, message, model='models/gemini-2.5-flash')

Send a prompt to Google Generative AI and return the response object. Caches and logs the response for reproducibility.

Args:

    api_key (str): Google API key.
    message (str): Prompt to send.
    model (str): Model name (default: "models/gemini-2.5-flash").

Returns:

Response object (dict) or None on error.

static simple_extract_text(response)