agent_inspect.user_proxy package

Submodules

agent_inspect.user_proxy.llm_proxy_agent module

class agent_inspect.user_proxy.llm_proxy_agent.LLMProxyAgent(llm_client, config=None)[source]

Bases: ABC

Abstract base class which should be extended by concrete LLM agent implementations.

Parameters:
  • llm_client (LLMClient) – the LLM client connection used for response generation.

  • config (Optional[Dict[str, Any]]) – configuration for LLM agent initialization. Defaults to None.

abstract async generate_message_from_chat_history(chat_history)[source]

This is an abstract method and should be implemented in a concrete class.

Parameters:

chat_history (ChatHistory) – a ChatHistory object containing the conversation history.

Return type:

UserProxyMessage

Returns:

a UserProxyMessage object containing the LLM agent response.
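A concrete subclass only needs to implement generate_message_from_chat_history. The sketch below illustrates the shape of such an implementation; note that ChatHistory, UserProxyMessage, and LLMClient here are simplified local stand-ins for illustration, not the real agent_inspect classes.

```python
# Minimal sketch of a concrete LLMProxyAgent subclass.
# ChatHistory, UserProxyMessage, and LLMClient below are stand-in stubs;
# the real classes live in agent_inspect.models and agent_inspect.clients.
import asyncio
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional


@dataclass
class ChatHistory:  # stand-in for agent_inspect.models.user_proxy.ChatHistory
    id: str
    conversations: List[Dict[str, str]] = field(default_factory=list)


@dataclass
class UserProxyMessage:  # stand-in for the real UserProxyMessage
    message_str: str


class LLMClient:  # stand-in: a real client would call an LLM endpoint
    async def generate(self, prompt: str) -> str:
        return f"(echo) {prompt}"


class LLMProxyAgent(ABC):
    def __init__(self, llm_client: LLMClient, config: Optional[Dict[str, Any]] = None):
        self.llm_client = llm_client
        self.config = config or {}

    @abstractmethod
    async def generate_message_from_chat_history(
        self, chat_history: ChatHistory
    ) -> UserProxyMessage:
        ...


class EchoProxyAgent(LLMProxyAgent):
    """Toy subclass: prompts the client with the flattened conversation history."""

    async def generate_message_from_chat_history(
        self, chat_history: ChatHistory
    ) -> UserProxyMessage:
        prompt = "\n".join(t.get("content", "") for t in chat_history.conversations)
        reply = await self.llm_client.generate(prompt)
        return UserProxyMessage(message_str=reply)


history = ChatHistory(
    id="demo", conversations=[{"role": "agent", "content": "Hello, how can I help?"}]
)
agent = EchoProxyAgent(llm_client=LLMClient())
msg = asyncio.run(agent.generate_message_from_chat_history(history))
print(msg.message_str)
```

The real UserProxyAgent below follows this pattern, replacing the toy prompt construction with reflection and response-generation prompt templates.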

agent_inspect.user_proxy.templates module

agent_inspect.user_proxy.user_proxy_agent module

class agent_inspect.user_proxy.user_proxy_agent.UserProxyAgent(llm_client, task_summary, terminating_conditions, agent_description='', initial_message='', config=None)[source]

Bases: LLMProxyAgent

User proxy (a.k.a. simulated user) class that generates user utterances during a dynamic conversation with the AI agent. The dynamic user utterances are generated from the user task instruction and a user persona (e.g., expert or non-expert) prompt template via a two-step process: reflection followed by response generation.

Parameters:
  • llm_client (LLMClient) – the LLM client connection used to generate user utterances.

  • task_summary (str) – a user task instruction summarizing the task the user wants the AI agent to complete.

  • terminating_conditions (List[TerminatingCondition]) – a list of TerminatingCondition objects, where each element is a condition under which the user proxy exits the user-agent conversation early. Currently only one terminating condition is supported.

  • agent_description (str) – a description of the AI agent that will interact with the user proxy, provided as additional context for the user proxy. Defaults to an empty string.

  • initial_message (str) – a static message used as the user proxy's initial message (if available) to the AI agent. Defaults to an empty string.

  • config (Optional[Dict[str, Any]]) –

    Defaults to None. Configuration options:

    • use_expert_agent: a bool flag indicating whether the user proxy should use an expert persona; otherwise it uses a non-expert persona. Defaults to True (expert persona).

async generate_message_from_chat_history(chat_history)[source]

Generates the next user utterance given a ChatHistory object containing the user-agent conversation history as input.

Parameters:

chat_history (Optional[ChatHistory]) – a ChatHistory object containing the user-agent conversation history.

Return type:

UserProxyMessage

Returns:

a UserProxyMessage object containing the next user utterance. For the first user utterance, if chat_history is None and self.initial_message is not an empty string, the method returns self.initial_message as the initial utterance; otherwise it generates the utterance from the conversation history. All subsequent user utterances are generated from the conversation history.

Example:

>>> from agent_inspect.user_proxy import UserProxyAgent
>>> from agent_inspect.models.user_proxy import ChatHistory, TerminatingCondition
>>> from agent_inspect.metrics.constants import USE_EXPERT_AGENT
>>> from agent_inspect.clients import AzureOpenAIClient
>>> from uuid import uuid4
>>> import asyncio
>>>
>>> user_instruct, term_condition = load_user_instruct_term(sample_path)  # load user instruction and terminating condition (placeholder helper)
>>> client = AzureOpenAIClient(model="gpt-4.1", max_tokens=4096)  # create LLM client for the user proxy
>>> user = UserProxyAgent(
...     llm_client=client,
...     task_summary=user_instruct,
...     terminating_conditions=[
...         TerminatingCondition(check=term_condition)
...     ],
...     config={USE_EXPERT_AGENT: True}
... )
>>> chat_history = ChatHistory(id=str(uuid4()), conversations=[]) # start from an empty conversation
>>> user_response = asyncio.run(user.generate_message_from_chat_history(chat_history))
>>> print(user_response.message_str)

agent_inspect.user_proxy.utils module

agent_inspect.user_proxy.utils.ensure_full_stop(text)[source]

Parameters:

text (str)

Return type:

str
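The body of ensure_full_stop is not shown in these docs. A plausible minimal sketch, which is an assumption based on the function's name rather than the actual implementation, is:

```python
def ensure_full_stop(text: str) -> str:
    """Hypothetical sketch: append a period when text lacks terminal punctuation.

    The real agent_inspect.user_proxy.utils.ensure_full_stop may differ.
    """
    stripped = text.rstrip()
    if stripped and stripped[-1] not in ".!?":
        return stripped + "."
    return stripped


print(ensure_full_stop("Book the flight"))  # -> Book the flight.
print(ensure_full_stop("Done!"))            # -> Done!
```

A helper like this is useful for normalizing LLM-generated utterances before they are handed to the AI agent.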

Module contents

class agent_inspect.user_proxy.UserProxyAgent(llm_client, task_summary, terminating_conditions, agent_description='', initial_message='', config=None)[source]

Bases: LLMProxyAgent

User proxy (a.k.a. simulated user) class that generates user utterances during a dynamic conversation with the AI agent. The dynamic user utterances are generated from the user task instruction and a user persona (e.g., expert or non-expert) prompt template via a two-step process: reflection followed by response generation.

Parameters:
  • llm_client (LLMClient) – the LLM client connection used to generate user utterances.

  • task_summary (str) – a user task instruction summarizing the task the user wants the AI agent to complete.

  • terminating_conditions (List[TerminatingCondition]) – a list of TerminatingCondition objects, where each element is a condition under which the user proxy exits the user-agent conversation early. Currently only one terminating condition is supported.

  • agent_description (str) – a description of the AI agent that will interact with the user proxy, provided as additional context for the user proxy. Defaults to an empty string.

  • initial_message (str) – a static message used as the user proxy's initial message (if available) to the AI agent. Defaults to an empty string.

  • config (Optional[Dict[str, Any]]) –

    Defaults to None. Configuration options:

    • use_expert_agent: a bool flag indicating whether the user proxy should use an expert persona; otherwise it uses a non-expert persona. Defaults to True (expert persona).

async generate_message_from_chat_history(chat_history)[source]

Generates the next user utterance given a ChatHistory object containing the user-agent conversation history as input.

Parameters:

chat_history (Optional[ChatHistory]) – a ChatHistory object containing the user-agent conversation history.

Return type:

UserProxyMessage

Returns:

a UserProxyMessage object containing the next user utterance. For the first user utterance, if chat_history is None and self.initial_message is not an empty string, the method returns self.initial_message as the initial utterance; otherwise it generates the utterance from the conversation history. All subsequent user utterances are generated from the conversation history.

Example:

>>> from agent_inspect.user_proxy import UserProxyAgent
>>> from agent_inspect.models.user_proxy import ChatHistory, TerminatingCondition
>>> from agent_inspect.metrics.constants import USE_EXPERT_AGENT
>>> from agent_inspect.clients import AzureOpenAIClient
>>> from uuid import uuid4
>>> import asyncio
>>>
>>> user_instruct, term_condition = load_user_instruct_term(sample_path)  # load user instruction and terminating condition (placeholder helper)
>>> client = AzureOpenAIClient(model="gpt-4.1", max_tokens=4096)  # create LLM client for the user proxy
>>> user = UserProxyAgent(
...     llm_client=client,
...     task_summary=user_instruct,
...     terminating_conditions=[
...         TerminatingCondition(check=term_condition)
...     ],
...     config={USE_EXPERT_AGENT: True}
... )
>>> chat_history = ChatHistory(id=str(uuid4()), conversations=[]) # start from an empty conversation
>>> user_response = asyncio.run(user.generate_message_from_chat_history(chat_history))
>>> print(user_response.message_str)