gabm.io.llm.utils module

Utility functions for LLM API error handling, model listing, caching, and validation.

  • Provides a decorator for safe API calls.

  • Provides utilities to write model lists as both JSON and TXT for all LLMs.

  • Provides a loader for model lists from JSON for validation and selection.

This supports a unified workflow for model management across all LLM providers in the project.

gabm.io.llm.utils.cache_and_log(cache: Dict[Any, Any], cache_key: Any, response: Any, cache_path: Path | str, jsonl_path: Path | str, prompt: str | None = None, model: str | None = None, extra: Dict[str, Any] | None = None, logger: Any | None = None, extract_text_from_response: Callable[[Any], str] | None = None) → None

Cache and log the prompt/response pair to a JSONL file.

Args:

cache: The in-memory cache dictionary to update.
cache_key: The key to use for caching the response.
response: The response object to cache and log.
cache_path (Path or str): Path to the pickle file for caching.
jsonl_path (Path or str): Path to the JSONL file for logging.
prompt (str, optional): The prompt that was sent (for logging).
model (str, optional): The model used for the request (for logging).
extra (dict, optional): Any extra information to include in the log entry.
logger: Logger for info/error messages (optional).
extract_text_from_response (callable, optional): Function to extract text from the response for logging. Defaults to the module-level extract_text_from_response.
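The behavior described above can be sketched as follows. This is a minimal illustrative reimplementation, not the actual gabm.io.llm.utils code; the JSONL field names (`prompt`, `model`, `response`) are assumptions.

```python
import json
import pickle
from pathlib import Path
from typing import Any, Dict, Optional

def cache_and_log(cache: Dict[Any, Any], cache_key: Any, response: Any,
                  cache_path: Path, jsonl_path: Path,
                  prompt: Optional[str] = None, model: Optional[str] = None,
                  extra: Optional[Dict[str, Any]] = None) -> None:
    # Update the in-memory cache and persist the whole dict as a pickle.
    cache[cache_key] = response
    with open(cache_path, "wb") as fh:
        pickle.dump(cache, fh)
    # Append one JSON object per line to the JSONL log.
    entry = {"prompt": prompt, "model": model, "response": str(response)}
    if extra:
        entry.update(extra)
    with open(jsonl_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")
```

Appending to a JSONL file keeps a complete audit trail of prompt/response pairs, while the pickle file lets a later run restore the cache in one read.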

gabm.io.llm.utils.call_and_cache_response(api_call: Callable[[], Any], cache_and_log_func: Callable, cache: Dict[Any, Any], cache_key: Any, cache_path: Any, jsonl_path: Any, prompt: Any, model: Any, api_key: str, logger: Any, service_name: str, list_available_models_func: Callable[[str], Any], extract_text_from_response: Callable[[Any], str] | None = None) → Any | None

Generic try/except, error logging, model listing, and caching for LLM send methods.

Args:

api_call (callable): Function that performs the LLM API call and returns the response.
cache_and_log_func (callable): Function to cache and log the response.
cache (dict): The in-memory cache dictionary.
cache_key: The cache key for this request.
cache_path: Path to the pickle file for caching.
jsonl_path: Path to the JSONL file for logging.
prompt: The prompt/message sent.
model: The model used.
api_key: The API key (for model listing on error).
logger: Logger for info/error messages.
service_name (str): Name of the LLM service (for error messages).
list_available_models_func (callable): Function to list available models.
extract_text_from_response (callable, optional): Function to extract text from the response for logging. Passed to cache_and_log_func.

Returns:

The response object or None on error.
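The try/except-plus-caching flow might look like the sketch below (an illustrative reimplementation; the real function's error handling and argument passing may differ):

```python
from typing import Any, Callable, Dict, Optional

def call_and_cache_response(api_call: Callable[[], Any],
                            cache_and_log_func: Callable,
                            cache: Dict[Any, Any], cache_key: Any,
                            cache_path: Any, jsonl_path: Any,
                            prompt: Any, model: Any, api_key: str,
                            logger: Any, service_name: str,
                            list_available_models_func: Callable[[str], Any],
                            extract_text_from_response: Optional[Callable[[Any], str]] = None):
    try:
        response = api_call()
    except Exception as exc:
        if logger:
            logger.error("%s API call failed: %s", service_name, exc)
        # Listing available models on failure helps diagnose bad model names.
        list_available_models_func(api_key)
        return None
    # On success, delegate persistence and logging to the supplied helper.
    cache_and_log_func(cache, cache_key, response, cache_path, jsonl_path,
                       prompt=prompt, model=model,
                       extract_text_from_response=extract_text_from_response)
    return response
```

Because the API call is passed in as a zero-argument callable, the same error-handling and caching logic works unchanged for every provider.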

gabm.io.llm.utils.extract_text_from_response(response: Any) → str

Extract the text content from the LLM response object for logging. For local LLMs, this is just str(response); for remote LLMs, it can be overridden.

gabm.io.llm.utils.get_llm_cache_paths(service_name: str) → Tuple[Path, Path]

Return the cache (pickle) and JSONL log paths for a given LLM service.

Args:

service_name (str): The name of the LLM service (e.g., ‘openai’).

Returns:

tuple: (cache_path, jsonl_path) as Path objects
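A plausible sketch of this helper is shown below; the cache directory and file-name pattern are assumptions, since the source does not state the actual layout.

```python
from pathlib import Path
from typing import Tuple

def get_llm_cache_paths(service_name: str,
                        cache_dir: Path = Path("cache")) -> Tuple[Path, Path]:
    # Directory layout and file names here are illustrative assumptions.
    return (cache_dir / f"{service_name}_cache.pkl",
            cache_dir / f"{service_name}_log.jsonl")
```

Deriving both paths from one service name keeps every provider's cache and log colocated and consistently named.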

gabm.io.llm.utils.list_models_to_txt(models: Iterable[Any], models_path: Path, formatter: Callable[[Any], str], header: str | None = None) → None

Write a list of models to a text file with a custom formatter.

Args:

models (iterable): List of model objects.
models_path (Path): Path to the output file.
formatter (callable): Function that takes a model and returns a string.
header (str, optional): Header string for the file.
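A minimal sketch of this writer, assuming one formatted model per line with an optional header line first:

```python
from pathlib import Path
from typing import Any, Callable, Iterable, Optional

def list_models_to_txt(models: Iterable[Any], models_path: Path,
                       formatter: Callable[[Any], str],
                       header: Optional[str] = None) -> None:
    # Optional header line, then one formatted entry per model.
    lines = [header] if header else []
    lines.extend(formatter(m) for m in models)
    models_path.write_text("\n".join(lines) + "\n", encoding="utf-8")
```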

gabm.io.llm.utils.load_llm_cache(cache_path: Path, logger: Any | None = None) → Dict[Any, Any]

Load a pickle cache from the given path. Returns an empty dict if not found or on error.

Args:

cache_path (Path): Path to the pickle file.
logger: Logger for warnings (optional).

Returns:

dict: The loaded cache or an empty dict.
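The fall-back-to-empty-dict behavior described above might be implemented like this (an illustrative sketch; the real function may catch a different set of exceptions):

```python
import pickle
from pathlib import Path
from typing import Any, Dict

def load_llm_cache(cache_path: Path, logger: Any = None) -> Dict[Any, Any]:
    try:
        with open(cache_path, "rb") as fh:
            return pickle.load(fh)
    except (OSError, pickle.UnpicklingError, EOFError) as exc:
        # A missing or corrupt cache is not fatal: warn and start fresh.
        if logger:
            logger.warning("Could not load cache %s: %s", cache_path, exc)
        return {}
```

Returning an empty dict instead of raising means a first run (no cache file yet) and a corrupted cache both degrade gracefully to "no cache hits."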

gabm.io.llm.utils.load_models_from_json(models_json_path: Path) → List[Any]

Load a list of models from a JSON file for validation and selection.

Args:

models_json_path (Path): Path to the JSON file.

Returns:

list: List of model dicts.
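A minimal sketch of the loader, plus the validation use case the summary mentions. The `"id"` key used for validation is an assumption about the model-dict shape, not confirmed by the source.

```python
import json
from pathlib import Path
from typing import Any, List

def load_models_from_json(models_json_path: Path) -> List[Any]:
    with open(models_json_path, encoding="utf-8") as fh:
        return json.load(fh)

def is_known_model(name: str, models: List[dict]) -> bool:
    # Validate a requested model name against the saved list
    # (assumes each model dict carries an "id" field).
    return name in {m["id"] for m in models}
```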

gabm.io.llm.utils.pre_send_check_and_cache(api_key: str, message: str, model: str, cache: Dict[Any, Any], logger: Any, service_name: str, api_key_env_var: str) → Any | None

Generic pre-send checks: API key presence, cache hit, and env var setup.

Args:

api_key (str): The API key for the LLM service.
message (str): The message being sent.
model (str): The model being used.
cache (dict): The in-memory cache dictionary.
logger: Logger for info/error messages.
service_name (str): Name of the LLM service (for error messages).
api_key_env_var (str): The environment variable name for the API key.

Returns:

The cached response if a cache hit occurs, or None if there is no cache hit or an error occurs.
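The pre-send checks might be sketched as follows. The `(model, message)` cache-key shape is an assumption for illustration; the real code may key the cache differently.

```python
import os
from typing import Any, Dict, Optional

def pre_send_check_and_cache(api_key: str, message: str, model: str,
                             cache: Dict[Any, Any], logger: Any,
                             service_name: str,
                             api_key_env_var: str) -> Optional[Any]:
    if not api_key:
        if logger:
            logger.error("No API key configured for %s", service_name)
        return None
    # Expose the key to client libraries that read it from the environment.
    os.environ[api_key_env_var] = api_key
    cache_key = (model, message)  # cache-key shape is an assumption
    if cache_key in cache:
        if logger:
            logger.info("%s cache hit", service_name)
        return cache[cache_key]
    return None
```

A None return is deliberately ambiguous between "no key" and "no cache hit"; callers are expected to have logged the error case and proceed to the API call otherwise.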

gabm.io.llm.utils.safe_api_call(api_name: str) → Callable

Decorator to handle exceptions for LLM API calls and log errors gracefully.

Args:

api_name (str): Name of the API for logging purposes.

Returns:

function: Decorator that wraps the target function, returning None on error.
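A decorator with this contract can be sketched as below (an illustrative reimplementation; the real version may log through a project logger rather than the stdlib root logger):

```python
import functools
import logging
from typing import Any, Callable, Optional

def safe_api_call(api_name: str) -> Callable:
    """Wrap an LLM API call so exceptions are logged and None is returned."""
    def decorator(func: Callable) -> Callable:
        @functools.wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> Optional[Any]:
            try:
                return func(*args, **kwargs)
            except Exception as exc:
                logging.error("%s API call failed: %s", api_name, exc)
                return None
        return wrapper
    return decorator

@safe_api_call("demo")
def flaky_call() -> str:
    raise RuntimeError("simulated outage")
```

`flaky_call()` then returns None instead of propagating the RuntimeError, so a single failed provider call cannot crash a long batch run.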

gabm.io.llm.utils.write_models_json_and_txt(models: Iterable[Any], models_json_path: Path, models_txt_path: Path, formatter: Callable[[Any], str], header: str | None = None) → None

Write a list of models to both JSON and TXT files for LLM model management.

Args:

models (iterable): List of model objects or dicts.
models_json_path (Path): Path to the output JSON file.
models_txt_path (Path): Path to the output TXT file.
formatter (callable): Function that takes a model and returns a string for TXT output.
header (str, optional): Header string for the TXT file.

This enables both human-readable and machine-readable model lists for all LLMs.
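The dual-output workflow might look like the sketch below, which assumes the models are JSON-serializable (e.g. dicts); the real function may serialize objects differently.

```python
import json
from pathlib import Path
from typing import Any, Callable, Iterable, Optional

def write_models_json_and_txt(models: Iterable[Any], models_json_path: Path,
                              models_txt_path: Path,
                              formatter: Callable[[Any], str],
                              header: Optional[str] = None) -> None:
    models = list(models)  # materialize: the iterable is consumed twice
    # Machine-readable list for validation and selection.
    models_json_path.write_text(json.dumps(models, indent=2), encoding="utf-8")
    # Human-readable list, one formatted model per line.
    lines = [header] if header else []
    lines.extend(formatter(m) for m in models)
    models_txt_path.write_text("\n".join(lines) + "\n", encoding="utf-8")
```

Writing both files from one call guarantees the human-readable and machine-readable lists never drift apart.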