agent_inspect.clients package

Submodules

agent_inspect.clients.azure_openai_client module

class agent_inspect.clients.azure_openai_client.AzureOpenAIClient(model, max_tokens, temperature=0)[source]

Bases: LLMClient

Client class providing a connection to the Azure OpenAI Service. Requires the following environment variables to be set: AZURE_API_VERSION, AZURE_API_BASE, AZURE_API_KEY.

Parameters:
  • model (str) – the Azure OpenAI model that will receive the prompt. This is the deployment name in Azure.

  • max_tokens (int) – the maximum number of tokens allowed for the LLM to generate.

  • temperature (float) – the temperature setting for the LLM. Defaults to 0.
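
A minimal usage sketch. It assumes the package is installed; the deployment name, endpoint, and key below are placeholders to replace with your own, and the import is deferred so the sketch stays inert until actually run:

```python
import asyncio
import os

# Placeholder credentials -- replace with your real Azure OpenAI values.
os.environ.setdefault("AZURE_API_VERSION", "2024-02-01")
os.environ.setdefault("AZURE_API_BASE", "https://my-resource.openai.azure.com")
os.environ.setdefault("AZURE_API_KEY", "<your-api-key>")

async def main():
    # Imported inside the coroutine so the sketch does nothing until awaited.
    from agent_inspect.clients import AzureOpenAIClient

    # "my-gpt4-deployment" is a hypothetical Azure deployment name.
    client = AzureOpenAIClient(model="my-gpt4-deployment", max_tokens=512)
    response = await client.make_llm_request("Summarize the incident report.")
    print(response)

# Requires valid credentials to actually run:
# asyncio.run(main())
```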

convert_payload_to_raw_request(payload)[source]
Parameters:

payload (LLMPayload)

Return type:

Dict[str, Any]

async make_llm_request(prompt)[source]

Returns an LLM completion after sending a prompt to the selected model. Uses an exponential backoff retry mechanism for transient failures.

Parameters:

prompt (str) – the provided prompt to send to the model.

Return type:

LLMResponse

Returns:

LLMResponse object containing status code, completion and error message.
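
The response carries a status code, completion, and error message, but the exact attribute names are not documented here. The handling pattern below therefore uses a local stand-in dataclass with assumed field names (status_code, completion, error) purely to illustrate the branching; the real LLMResponse lives in the package:

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in with assumed field names; not the real agent_inspect type.
@dataclass
class LLMResponseStandIn:
    status_code: int
    completion: Optional[str]
    error: Optional[str]

def handle(response: LLMResponseStandIn) -> str:
    # A 2xx status with no error means the completion is usable.
    if response.error is None and 200 <= response.status_code < 300:
        return response.completion or ""
    # Otherwise surface the error for logging or retries upstream.
    return f"request failed ({response.status_code}): {response.error}"

print(handle(LLMResponseStandIn(200, "Looks correct.", None)))
# → Looks correct.
```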

async make_llm_request_with_retry(prompt)[source]
Parameters:

prompt (str)

async make_llm_requests(prompts)[source]

Returns LLM completions after sending a batch of prompts to the selected model.

Parameters:

prompts (list[str]) – a list of provided prompts to send to the model.

Return type:

list[LLMResponse]

Returns:

a list of LLMResponse objects containing status codes, completions, and error messages.
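
A batch sketch under the same assumptions as the construction example above (package installed, credentials exported, hypothetical deployment name). The whole prompt list goes into a single make_llm_requests call:

```python
import asyncio

async def grade_batch(prompts):
    # Deferred import keeps the sketch inert until awaited with valid credentials.
    from agent_inspect.clients import AzureOpenAIClient

    client = AzureOpenAIClient(model="my-gpt4-deployment", max_tokens=256)
    # One call sends every prompt and returns a list of LLMResponse objects.
    return await client.make_llm_requests(prompts)

prompts = [f"Score answer {i} from 1 to 5." for i in range(3)]
# responses = asyncio.run(grade_batch(prompts))  # requires valid credentials
```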

async make_request_with_payload(payload)[source]

Returns an LLM completion after sending a payload to the selected model.

Parameters:

payload (LLMPayload) – the provided payload to send to the model.

Return type:

LLMResponse

Returns:

LLMResponse object containing status code, completion and error message.

async make_request_with_payload_using_retry(payload)[source]
Parameters:

payload (LLMPayload)

agent_inspect.clients.azure_openai_client.backoff_handler(details)[source]

Custom handler for backoff events to log messages with extra attributes.

agent_inspect.clients.azure_openai_client.give_up_handler(details)[source]

Custom handler for when max retries are reached.

agent_inspect.clients.litellm_client module

class agent_inspect.clients.litellm_client.LiteLLMClient(model, max_tokens, temperature=0, extra_params=None)[source]

Bases: LLMClient

Client class providing a connection to the LiteLLM Service. Requires the following environment variables to be set: AZURE_API_VERSION, AZURE_API_BASE, AZURE_API_KEY.

Parameters:
  • model (str) – The LiteLLM model that will receive the prompt.

  • max_tokens (int) – The maximum number of tokens allowed for the LLM to generate.

  • temperature (float) – The temperature setting for the LLM. Defaults to 0.

  • extra_params (Optional[Dict[str, Any]]) – Additional parameters to pass to the LiteLLM API calls.
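
A construction sketch. The model string and extra_params keys below are illustrative assumptions (LiteLLM routes by model-string prefix, and extra keys are forwarded to the underlying completion call); the import is deferred so the sketch is inert until run with valid credentials:

```python
import asyncio

async def ask(prompt: str):
    from agent_inspect.clients import LiteLLMClient

    # "azure/my-gpt4-deployment" is a hypothetical LiteLLM model string;
    # extra_params entries are passed through to the underlying API call.
    client = LiteLLMClient(
        model="azure/my-gpt4-deployment",
        max_tokens=512,
        temperature=0,
        extra_params={"top_p": 0.9},
    )
    return await client.make_llm_request(prompt)

# response = asyncio.run(ask("Rate this answer from 1 to 5."))  # needs credentials
```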

convert_payload_to_raw_request(payload)[source]
Parameters:

payload (LLMPayload)

Return type:

Dict[str, Any]

async make_llm_request(prompt)[source]

Returns an LLM completion after sending a prompt to the selected model. Uses an exponential backoff retry mechanism for transient failures.

Parameters:

prompt (str) – The provided prompt to send to the model.

Return type:

LLMResponse

Returns:

LLMResponse object containing status code, completion, and error message.

async make_llm_request_with_retry(prompt)[source]
Parameters:

prompt (str)

async make_llm_requests(prompts)[source]

Returns LLM completions after sending a batch of prompts to the selected model.

Parameters:

prompts (list[str]) – A list of provided prompts to send to the model.

Return type:

list[LLMResponse]

Returns:

A list of LLMResponse objects containing status codes, completions, and error messages.

async make_request_with_payload(payload)[source]

Returns an LLM completion after sending a payload to the selected model.

Parameters:

payload (LLMPayload) – the provided payload to send to the model.

Return type:

LLMResponse

Returns:

LLMResponse object containing status code, completion and error message.

async make_request_with_payload_using_retry(payload)[source]
Parameters:

payload (LLMPayload)

agent_inspect.clients.litellm_client.backoff_handler(details)[source]

Custom handler for backoff events to log messages with extra attributes.

agent_inspect.clients.litellm_client.give_up_handler(details)[source]

Custom handler for when max retries are reached.

agent_inspect.clients.llm_client module

class agent_inspect.clients.llm_client.LLMClient[source]

Bases: ABC

Abstract base class that concrete implementations extend to connect to an LLM-as-a-judge model.

abstract async make_llm_request(prompt)[source]

Abstract method that concrete subclasses must implement to send a request to the LLM.

Parameters:

prompt (str) – the user provided prompt to send to the model.

Return type:

LLMResponse

Returns:

LLMResponse object containing status code, completion and error message.

abstract async make_llm_requests(prompts)[source]

Abstract method that concrete subclasses must implement to send multiple requests to the LLM.

Parameters:

prompts (list[str]) – the user provided prompts to send to the model.

Return type:

list[LLMResponse]

Returns:

a list of LLMResponse objects containing status codes, completions, and error messages.

abstract async make_request_with_payload(payload)[source]

Abstract method that concrete subclasses must implement to send a request to the LLM with an LLMPayload.

Parameters:

payload (LLMPayload) – the user provided LLMPayload to send to the model.

Return type:

LLMResponse

Returns:

LLMResponse object containing status code, completion and error message.
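
To illustrate the subclassing contract without importing the package, the sketch below mirrors part of the abstract interface with local stand-ins. EchoClient, ClientBase, and the simplified Response type are all hypothetical stand-ins, not part of agent_inspect:

```python
import asyncio
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional

# Simplified stand-ins mirroring LLMResponse / LLMClient from the package.
@dataclass
class Response:
    status_code: int
    completion: Optional[str]
    error: Optional[str]

class ClientBase(ABC):
    @abstractmethod
    async def make_llm_request(self, prompt: str) -> Response: ...

    @abstractmethod
    async def make_llm_requests(self, prompts: list[str]) -> list[Response]: ...

class EchoClient(ClientBase):
    """Toy implementation that echoes each prompt back as its completion."""

    async def make_llm_request(self, prompt: str) -> Response:
        return Response(200, f"echo: {prompt}", None)

    async def make_llm_requests(self, prompts: list[str]) -> list[Response]:
        # Fan the single-request method out concurrently.
        return list(await asyncio.gather(
            *(self.make_llm_request(p) for p in prompts)
        ))

results = asyncio.run(EchoClient().make_llm_requests(["a", "b"]))
print([r.completion for r in results])
# → ['echo: a', 'echo: b']
```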

Module contents

class agent_inspect.clients.AzureOpenAIClient(model, max_tokens, temperature=0)[source]

Bases: LLMClient

Client class providing a connection to the Azure OpenAI Service. Requires the following environment variables to be set: AZURE_API_VERSION, AZURE_API_BASE, AZURE_API_KEY.

Parameters:
  • model (str) – the Azure OpenAI model that will receive the prompt. This is the deployment name in Azure.

  • max_tokens (int) – the maximum number of tokens allowed for the LLM to generate.

  • temperature (float) – the temperature setting for the LLM. Defaults to 0.

convert_payload_to_raw_request(payload)[source]
Parameters:

payload (LLMPayload)

Return type:

Dict[str, Any]

async make_llm_request(prompt)[source]

Returns an LLM completion after sending a prompt to the selected model. Uses an exponential backoff retry mechanism for transient failures.

Parameters:

prompt (str) – the provided prompt to send to the model.

Return type:

LLMResponse

Returns:

LLMResponse object containing status code, completion and error message.

async make_llm_request_with_retry(prompt)[source]
Parameters:

prompt (str)

async make_llm_requests(prompts)[source]

Returns LLM completions after sending a batch of prompts to the selected model.

Parameters:

prompts (list[str]) – a list of provided prompts to send to the model.

Return type:

list[LLMResponse]

Returns:

a list of LLMResponse objects containing status codes, completions, and error messages.

async make_request_with_payload(payload)[source]

Returns an LLM completion after sending a payload to the selected model.

Parameters:

payload (LLMPayload) – the provided payload to send to the model.

Return type:

LLMResponse

Returns:

LLMResponse object containing status code, completion and error message.

async make_request_with_payload_using_retry(payload)[source]
Parameters:

payload (LLMPayload)

class agent_inspect.clients.LLMClient[source]

Bases: ABC

Abstract base class that concrete implementations extend to connect to an LLM-as-a-judge model.

abstract async make_llm_request(prompt)[source]

Abstract method that concrete subclasses must implement to send a request to the LLM.

Parameters:

prompt (str) – the user provided prompt to send to the model.

Return type:

LLMResponse

Returns:

LLMResponse object containing status code, completion and error message.

abstract async make_llm_requests(prompts)[source]

Abstract method that concrete subclasses must implement to send multiple requests to the LLM.

Parameters:

prompts (list[str]) – the user provided prompts to send to the model.

Return type:

list[LLMResponse]

Returns:

a list of LLMResponse objects containing status codes, completions, and error messages.

abstract async make_request_with_payload(payload)[source]

Abstract method that concrete subclasses must implement to send a request to the LLM with an LLMPayload.

Parameters:

payload (LLMPayload) – the user provided LLMPayload to send to the model.

Return type:

LLMResponse

Returns:

LLMResponse object containing status code, completion and error message.

class agent_inspect.clients.LiteLLMClient(model, max_tokens, temperature=0, extra_params=None)[source]

Bases: LLMClient

Client class providing a connection to the LiteLLM Service. Requires the following environment variables to be set: AZURE_API_VERSION, AZURE_API_BASE, AZURE_API_KEY.

Parameters:
  • model (str) – The LiteLLM model that will receive the prompt.

  • max_tokens (int) – The maximum number of tokens allowed for the LLM to generate.

  • temperature (float) – The temperature setting for the LLM. Defaults to 0.

  • extra_params (Optional[Dict[str, Any]]) – Additional parameters to pass to the LiteLLM API calls.

convert_payload_to_raw_request(payload)[source]
Parameters:

payload (LLMPayload)

Return type:

Dict[str, Any]

async make_llm_request(prompt)[source]

Returns an LLM completion after sending a prompt to the selected model. Uses an exponential backoff retry mechanism for transient failures.

Parameters:

prompt (str) – The provided prompt to send to the model.

Return type:

LLMResponse

Returns:

LLMResponse object containing status code, completion, and error message.

async make_llm_request_with_retry(prompt)[source]
Parameters:

prompt (str)

async make_llm_requests(prompts)[source]

Returns LLM completions after sending a batch of prompts to the selected model.

Parameters:

prompts (list[str]) – A list of provided prompts to send to the model.

Return type:

list[LLMResponse]

Returns:

A list of LLMResponse objects containing status codes, completions, and error messages.

async make_request_with_payload(payload)[source]

Returns an LLM completion after sending a payload to the selected model.

Parameters:

payload (LLMPayload) – the provided payload to send to the model.

Return type:

LLMResponse

Returns:

LLMResponse object containing status code, completion and error message.

async make_request_with_payload_using_retry(payload)[source]
Parameters:

payload (LLMPayload)