gabm.io.llm.deepseek module

Sends prompts to DeepSeek, receives responses, and manages model lists and the response cache.

Features:

- Send prompts to DeepSeek and cache responses for reproducibility.
- List available models from the DeepSeek API and save them as both JSON and TXT for validation and reference.
- Validate selected model names against the cached JSON model list.
- Unified workflow for model management, matching the other LLM modules in the project.
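The validation step above can be sketched as follows. This is a minimal illustration, not the module's actual implementation: the cache file name and the OpenAI-style `{"id": ...}` entry schema are assumptions.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical cached model list, mirroring the JSON file the module writes.
# The entry schema ({"id": ...}) is an assumption.
cached_models = [{"id": "deepseek-chat"}, {"id": "deepseek-reasoner"}]

cache_path = Path(tempfile.mkdtemp()) / "deepseek_models.json"
cache_path.write_text(json.dumps(cached_models))

def is_valid_model(name: str, cache_file: Path) -> bool:
    """Check a selected model name against the cached JSON model list."""
    models = json.loads(cache_file.read_text())
    return any(m.get("id") == name for m in models)

print(is_valid_model("deepseek-chat", cache_path))   # True
print(is_valid_model("no-such-model", cache_path))   # False
```

Validating against the cached list rather than a hard-coded set keeps the check in sync with whatever the API actually reported on the last model listing.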

class gabm.io.llm.deepseek.DeepSeekService(logger=None)

Bases: LLMService

Service class for DeepSeek LLM integration. Handles prompt sending, response caching, logging, and model listing.

SERVICE_NAME = 'deepseek'
list_available_models(api_key)

List available DeepSeek models and write them to JSON and TXT files. Returns the list.

Args:

api_key (str): DeepSeek API key.

Returns:

list: List of model objects.
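The dual JSON/TXT output described above might look like the following sketch. The file names and the shape of the model objects are assumptions for illustration; the real list comes from the DeepSeek API.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical model objects; real entries are returned by the DeepSeek API.
models = [{"id": "deepseek-chat"}, {"id": "deepseek-reasoner"}]

out_dir = Path(tempfile.mkdtemp())
json_path = out_dir / "deepseek_models.json"
txt_path = out_dir / "deepseek_models.txt"

# JSON copy: machine-readable, used later for model-name validation.
json_path.write_text(json.dumps(models, indent=2))

# TXT copy: one id per line, for quick human reference.
txt_path.write_text("\n".join(m["id"] for m in models) + "\n")

print(txt_path.read_text())
```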

send(api_key, message, model='deepseek-model-1')

Send a prompt to DeepSeek and return the response object. Caches and logs the response for reproducibility.

Args:

api_key (str): DeepSeek API key.

message (str): Prompt to send.

model (str): Model name (default: "deepseek-model-1").

Returns:

Response object (dict) or None on error.
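The response caching for reproducibility can be sketched as below. This is an illustration only: the cache key scheme (a hash of model plus prompt) and file layout are assumptions, and a stand-in function replaces the real network call, which needs an api_key.

```python
import hashlib
import json
import tempfile
from pathlib import Path

cache_dir = Path(tempfile.mkdtemp())

def cache_key(message: str, model: str) -> str:
    # Stable key from model + prompt, so the same prompt is replayed
    # from cache on later runs (reproducibility). Scheme is hypothetical.
    return hashlib.sha256(f"{model}\x00{message}".encode()).hexdigest()

def send_cached(message: str, model: str, do_send) -> dict:
    """Return a cached response if present; otherwise call do_send and cache it."""
    path = cache_dir / f"{cache_key(message, model)}.json"
    if path.exists():
        return json.loads(path.read_text())
    response = do_send(message, model)
    path.write_text(json.dumps(response))
    return response

# Stand-in for the real API call.
calls = []
def fake_send(message, model):
    calls.append(message)
    return {"choices": [{"message": {"content": "hello"}}]}

first = send_cached("Hi", "deepseek-chat", fake_send)
second = send_cached("Hi", "deepseek-chat", fake_send)  # served from cache
print(len(calls))  # the stand-in was called only once
```

Because the cache is keyed on both model and prompt, changing either triggers a fresh call while identical reruns are fully deterministic.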

static simple_extract_text(response)
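No docstring is given for simple_extract_text here. Assuming DeepSeek returns OpenAI-style chat-completion objects, a minimal sketch of such an extractor (hypothetical, not the module's actual code) could be:

```python
def simple_extract_text(response):
    # Assumption: OpenAI-style response shape, with the reply text at
    # choices[0].message.content. Returns None for malformed input,
    # matching send()'s "None on error" convention.
    try:
        return response["choices"][0]["message"]["content"]
    except (KeyError, IndexError, TypeError):
        return None

sample = {"choices": [{"message": {"content": "Hello from DeepSeek"}}]}
print(simple_extract_text(sample))   # Hello from DeepSeek
print(simple_extract_text(None))     # None
```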