gabm.io.llm.publicai module
For sending prompts to PublicAI (Apertus), receiving responses, and managing model lists and cache.
Features:
- Send prompts to PublicAI and cache responses for reproducibility.
- List available models from the PublicAI API and save them as both JSON and TXT for validation and reference.
- Validate selected model names against the cached JSON model list.
- Unified workflow for model management, matching the other LLM modules in the project.
- class gabm.io.llm.publicai.PublicAIService(logger=None)
Bases: LLMService

Service class for PublicAI LLM integration. Handles prompt sending, response caching, logging, and model listing.
- SERVICE_NAME = 'publicai'
- list_available_models(api_key)
List available PublicAI models and write them to JSON and TXT files. Returns the list.
- Args:
api_key (str): PublicAI API key.
- Returns:
list: List of model objects.
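The JSON/TXT caching and validation described above can be sketched with the standard library alone. This is a minimal illustration, not the module's actual implementation; the file names `publicai_models.json`/`publicai_models.txt` and the assumption that each model object carries an `"id"` field are hypothetical.

```python
import json
from pathlib import Path

def save_model_list(models, out_dir="."):
    """Write the model list to JSON (for validation) and TXT (for reference)."""
    out = Path(out_dir)
    (out / "publicai_models.json").write_text(json.dumps(models, indent=2))
    (out / "publicai_models.txt").write_text("\n".join(m["id"] for m in models))
    return models

def validate_model(name, json_path="publicai_models.json"):
    """Check a selected model name against the cached JSON model list."""
    cached = json.loads(Path(json_path).read_text())
    return any(m["id"] == name for m in cached)
```

Validating against the cached JSON (rather than hitting the API each run) keeps model-name checks fast and reproducible.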
- send(api_key, message, model='swiss-ai/apertus-8b-instruct')
Send a prompt to PublicAI and return the response object. Caches and logs the response for reproducibility.
- Args:
api_key (str): PublicAI API key.
message (str): Prompt to send.
model (str): Model name (default: "swiss-ai/apertus-8b-instruct").
- Returns:
Response object (dict) or None on error.
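Caching responses "for reproducibility" typically means keying the cache on the prompt and model so an identical request returns the stored response instead of re-querying the API. A minimal sketch of that pattern, with a hypothetical `cache_key`/`cached_send` shape (not the module's real internals):

```python
import hashlib
import json
from pathlib import Path

def cache_key(model, message):
    """Deterministic key: identical (model, prompt) pairs map to one cache entry."""
    return hashlib.sha256(f"{model}\n{message}".encode()).hexdigest()

def cached_send(api_key, message, model, cache_dir, send_fn):
    """Return the cached response if present; otherwise call send_fn and cache it."""
    path = Path(cache_dir) / f"{cache_key(model, message)}.json"
    if path.exists():
        return json.loads(path.read_text())
    response = send_fn(api_key, message, model)  # real API call happens only on a miss
    if response is not None:  # don't cache errors, so they can be retried
        path.write_text(json.dumps(response))
    return response
```

Skipping the cache write on `None` matches the documented behavior of returning `None` on error: failed calls are retried rather than replayed from cache.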
- static simple_extract_text(response)
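The static method's docstring is missing here. If the PublicAI response follows the common OpenAI-style chat-completion shape (an assumption, not confirmed by this page), a text extractor of this kind might look like:

```python
def simple_extract_text(response):
    """Pull the assistant's text out of an OpenAI-style chat response dict.

    Returns None if the response is missing or not in the expected shape.
    """
    try:
        return response["choices"][0]["message"]["content"]
    except (KeyError, IndexError, TypeError):
        return None
```

Returning `None` for malformed input mirrors `send()`, which also yields `None` on error, so callers can check a single sentinel.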