agent_inspect.models package
Submodules
agent_inspect.models.llm_payload module
- class agent_inspect.models.llm_payload.LLMPayload(user_prompt, model=None, system_prompt=None, temperature=None, max_tokens=None, structured_output=None)[source]
Bases: object
Represents the payload to be sent to a Large Language Model (LLM) for processing.
- Parameters:
user_prompt (str)
model (str | None)
system_prompt (str | None)
temperature (float | None)
max_tokens (int | None)
structured_output (Any | None)
- max_tokens: Optional[int] = None
  The maximum number of tokens to be generated in the LLM’s response.
- model: Optional[str] = None
  The specific LLM model to be used for processing the prompt.
- structured_output: Optional[Any] = None
  An optional structured format for the LLM’s output, if applicable.
- system_prompt: Optional[str] = None
  The system-level prompt that provides context or instructions to the LLM.
- temperature: Optional[float] = None
  The temperature setting for the LLM, influencing the randomness of its output.
- user_prompt: str
  The raw text prompt provided by the user to the LLM for processing.
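A minimal construction sketch, assuming LLMPayload accepts the keyword arguments documented in the Parameters list above; the model name and sampling values are illustrative assumptions, not library defaults:

    from agent_inspect.models.llm_payload import LLMPayload

    # Build a payload for a single completion request. The model
    # identifier and sampling settings below are placeholders.
    payload = LLMPayload(
        user_prompt="Summarize the incident report in three bullet points.",
        model="gpt-4o-mini",              # hypothetical model name
        system_prompt="You are a concise technical summarizer.",
        temperature=0.2,                  # lower values reduce randomness
        max_tokens=256,                   # cap the length of the response
    )

Only user_prompt is required; every other field defaults to None and can be omitted.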
agent_inspect.models.llm_response module
- class agent_inspect.models.llm_response.LLMResponse(status, completion=None, error_message=None)[source]
Bases: object
Represents a response from an LLM.
- Parameters:
status (int)
completion (str | None)
error_message (str | None)
- completion: Optional[str] = None
  The text output generated by the LLM in response to an input prompt. May be None if the LLM produced no output, for example because an error occurred.
- error_message: Optional[str] = None
  Error-related information returned by the LLM in response to an input prompt. May be None if no error occurred.
- status: int
  The HTTP status code returned by the LLM.
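A minimal handling sketch, assuming LLMResponse is constructed with the keyword arguments documented above; treating a 2xx status with a non-None completion as success follows from the field descriptions and is an assumption, not guaranteed library behavior:

    from agent_inspect.models.llm_response import LLMResponse

    def handle_response(response: LLMResponse) -> str:
        # Success: a 2xx HTTP status and a completion to return.
        if 200 <= response.status < 300 and response.completion is not None:
            return response.completion
        # Failure: surface the error message, if any, alongside the status.
        raise RuntimeError(
            f"LLM call failed (HTTP {response.status}): "
            f"{response.error_message or 'no error message provided'}"
        )

    ok = LLMResponse(status=200, completion="All systems nominal.")
    print(handle_response(ok))  # -> All systems nominal.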
Module contents
- class agent_inspect.models.LLMPayload(user_prompt, model=None, system_prompt=None, temperature=None, max_tokens=None, structured_output=None)[source]
Bases: object
Represents the payload to be sent to a Large Language Model (LLM) for processing.
- Parameters:
user_prompt (str)
model (str | None)
system_prompt (str | None)
temperature (float | None)
max_tokens (int | None)
structured_output (Any | None)
- max_tokens: Optional[int] = None
  The maximum number of tokens to be generated in the LLM’s response.
- model: Optional[str] = None
  The specific LLM model to be used for processing the prompt.
- structured_output: Optional[Any] = None
  An optional structured format for the LLM’s output, if applicable.
- system_prompt: Optional[str] = None
  The system-level prompt that provides context or instructions to the LLM.
- temperature: Optional[float] = None
  The temperature setting for the LLM, influencing the randomness of its output.
- user_prompt: str
  The raw text prompt provided by the user to the LLM for processing.
- class agent_inspect.models.LLMResponse(status, completion=None, error_message=None)[source]
Bases: object
Represents a response from an LLM.
- Parameters:
status (int)
completion (str | None)
error_message (str | None)
- completion: Optional[str] = None
  The text output generated by the LLM in response to an input prompt. May be None if the LLM produced no output, for example because an error occurred.
- error_message: Optional[str] = None
  Error-related information returned by the LLM in response to an input prompt. May be None if no error occurred.
- status: int
  The HTTP status code returned by the LLM.
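Because both classes are re-exported here, they can be imported directly from agent_inspect.models. The round-trip sketch below pairs them; call_llm is a hypothetical stand-in for a real transport layer and is not part of this package:

    from agent_inspect.models import LLMPayload, LLMResponse

    def call_llm(payload: LLMPayload) -> LLMResponse:
        # Hypothetical transport: a real implementation would send the
        # payload to an LLM provider and map the reply onto LLMResponse.
        if not payload.user_prompt:
            return LLMResponse(status=400, error_message="user_prompt is empty")
        return LLMResponse(status=200, completion=f"Echo: {payload.user_prompt}")

    response = call_llm(LLMPayload(user_prompt="Hello"))
    print(response.status, response.completion)  # -> 200 Echo: Hello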