Call Agent
Call an Agent. The Agent will run on the Humanloop runtime and return a completed Agent Log.
If the Agent requires a tool call that cannot be run by Humanloop, execution will halt. To continue, pass the ID of the incomplete Log and the result of the required tool call to the /agents/continue endpoint.
The agent will run for the maximum number of iterations, or until it encounters a stop condition, according to its configuration.
You can use the query parameters version_id or environment to target an existing version of the Agent. Otherwise, the default deployed version will be chosen.
Instead of targeting an existing version explicitly, you can pass Agent details in the request body. A new version is created if these do not match any existing version. This is helpful when you are storing or deriving your Agent details in code.
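For instance, a call that targets the version deployed to a given Environment might look like the sketch below. It uses the plain HTTP API via the requests library; the base URL, endpoint path, auth header, and Agent path are assumptions for illustration, so prefer the official SDKs or the canonical reference for exact names.

```python
import requests

HUMANLOOP_API_URL = "https://api.humanloop.com/v5"  # assumed base URL
HEADERS = {"X-API-KEY": "YOUR_HUMANLOOP_API_KEY"}    # assumed auth header

response = requests.post(
    f"{HUMANLOOP_API_URL}/agents/call",              # assumed path for this endpoint
    headers=HEADERS,
    # Target the version deployed to an Environment; use version_id to pin a version instead.
    params={"environment": "production"},
    json={
        # Locate the Agent by its path in the Humanloop filesystem.
        "path": "support/triage-agent",              # hypothetical Agent path
        "messages": [
            {"role": "user", "content": "My order hasn't arrived yet."},
        ],
    },
)
response.raise_for_status()
agent_log = response.json()
print(agent_log.get("log_status"), agent_log.get("output"))
```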
Headers
Query parameters
A specific Version ID of the Agent to log to.
Name of the Environment identifying a deployed version to log to.
Request
If true, Agent events and tokens will be sent as data-only server-sent events.
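When streaming is enabled, the response body can be read as data-only server-sent events. A minimal sketch, assuming the same endpoint and headers as above and that each event arrives on its own data: line (the event payload schema is not shown here):

```python
import json
import requests

with requests.post(
    "https://api.humanloop.com/v5/agents/call",       # assumed endpoint path
    headers={"X-API-KEY": "YOUR_HUMANLOOP_API_KEY"},  # assumed auth header
    json={
        "path": "support/triage-agent",               # hypothetical Agent path
        "stream": True,                                # the streaming flag described above
        "messages": [{"role": "user", "content": "Hello"}],
    },
    stream=True,
) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        # Data-only SSE: each event is delivered as a single "data: {...}" line.
        if line and line.startswith("data: "):
            event = json.loads(line[len("data: "):])
            print(event)
```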
Path of the Agent, including the name. This locates the Agent in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
ID for an existing Agent.
The messages passed to the provider chat endpoint.
Controls how the model uses tools. The following options are supported:
- 'none' means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
- 'auto' means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
- 'required' means the model must call one or more of the provided tools.
- {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
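For example, a request body that forces the model to call one particular tool could include a fragment like the following; the tool_choice field name and the search_orders tool are assumptions for illustration.

```python
payload = {
    "path": "support/triage-agent",  # hypothetical Agent path
    "messages": [{"role": "user", "content": "Where is order #123?"}],
    # Force the model to call the named function rather than letting it decide.
    "tool_choice": {"type": "function", "function": {"name": "search_orders"}},
}
```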
The Agent configuration to use. Two formats are supported:
- An object representing the details of the Agent configuration
- A string representing the raw contents of a .agent file
A new Agent version will be created if the provided details do not match any existing version.
The inputs passed to the prompt template.
Identifies where the model was called from.
Any additional metadata to record.
When the logged event started.
When the logged event ended.
Status of a Log. Set to incomplete if you intend to update and eventually complete the Log and want the File’s monitoring Evaluators to wait until you mark it as complete. If log_status is not provided, observability will pick up the Log as soon as possible. Updating this from specified to unspecified is undefined behavior.
Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
The ID of the parent Log to nest this Log under in a Trace.
End-user ID related to the Log.
The name of the Environment the Log is associated with.
Whether the request/response payloads will be stored on Humanloop.
This will identify a Log. If you don’t provide a Log ID, Humanloop will generate one for you.
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
If true, populate trace_children for the returned Agent Log. Only applies when not streaming. Defaults to false.
Response
Agent that generated the Log.
Unique identifier for the Log.
List of Evaluator Logs associated with the Log. These contain Evaluator judgments on the Log.
The message returned by the provider.
Number of tokens in the prompt used to generate the output.
Number of reasoning tokens used to generate the output.
Number of tokens in the output generated by the model.
Cost in dollars associated with the tokens in the prompt.
Cost in dollars associated with the tokens in the output.
Reason the generation finished.
The messages passed to the provider chat endpoint.
Controls how the model uses tools. The following options are supported:
- 'none' means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
- 'auto' means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
- 'required' means the model must call one or more of the provided tools.
- {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
When the logged event started.
When the logged event ended.
Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.
User defined timestamp for when the log was created.
Error message if the log is an error.
Duration of the logged event in seconds.
Captured log and debug statements.
Raw request sent to provider.
Raw response received from the provider.
The inputs passed to the prompt template.
Identifies where the model was called from.
Any additional metadata to record.
Status of the Agent Log. If incomplete, the Agent turn was suspended due to a tool call and can be continued by calling /agents/continue with responses to the Agent’s last message (which should contain tool calls). See the previous_agent_message field for easy access to the Agent’s last message.
Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
The ID of the parent Log to nest this Log under in a Trace.
Array of Batch IDs that this Log is part of. Batches are used to group Logs together for offline Evaluations.
End-user ID related to the Log.
The name of the Environment the Log is associated with.
Whether the request/response payloads will be stored on Humanloop.
This will identify a Log. If you don’t provide a Log ID, Humanloop will generate one for you.
Identifier for the Flow that the Trace belongs to.
Identifier for the Trace that the Log belongs to.
Logs nested under this Log in the Trace.
The Agent’s last message, which should contain tool calls. Only populated if the Log is incomplete due to a suspended Agent turn with tool calls. This is useful for continuing the Agent call by calling /agents/continue.
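Putting the suspension flow together: when the returned Log is incomplete, the tool calls on previous_agent_message can be executed locally and their results passed back to resume the turn. The sketch below continues from the agent_log returned in the earlier example; the /agents/continue request shape, the tool-result message format, and run_my_tool are assumptions for illustration.

```python
import json
import requests

def run_my_tool(name: str, arguments: dict) -> dict:
    # Hypothetical local executor for the tool call Humanloop could not run itself.
    return {"status": "ok"}

# agent_log is the JSON body returned by the /agents/call sketch above.
if agent_log["log_status"] == "incomplete":
    tool_messages = []
    # The Agent's last message, including its pending tool calls, is surfaced on the Log.
    for call in agent_log["previous_agent_message"]["tool_calls"]:
        result = run_my_tool(call["function"]["name"],
                             json.loads(call["function"]["arguments"]))
        tool_messages.append({
            "role": "tool",                 # tool-result message shape is an assumption
            "tool_call_id": call["id"],
            "content": json.dumps(result),
        })

    continued = requests.post(
        "https://api.humanloop.com/v5/agents/continue",   # assumed base URL
        headers={"X-API-KEY": "YOUR_HUMANLOOP_API_KEY"},  # assumed auth header
        json={
            "log_id": agent_log["id"],      # ID of the incomplete Log (field name assumed)
            "messages": tool_messages,      # responses to the Agent's tool calls
        },
    )
    continued.raise_for_status()
    print(continued.json().get("log_status"))
```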