Update Agent Log

Update the details of an Agent Log with the given ID.

Path parameters

id (string, Required)
Unique identifier for the Agent.
log_id (string, Required)
Unique identifier for the Log.

Headers

X-API-KEY (string, Required)
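
For illustration, here is a minimal sketch of assembling the path parameters and the API key header in Python. The base URL, the /agents/{id}/log/{log_id} path, and the PATCH method used later are assumptions inferred from the parameters above rather than confirmed by this page, and the IDs are placeholders.

    import os

    # Placeholder identifiers for the Agent and the Log being updated.
    agent_id = "ag_1234567890"   # hypothetical Agent ID
    log_id = "log_1234567890"    # hypothetical Log ID

    # Assumed base URL and path layout; confirm against the API reference.
    url = f"https://api.humanloop.com/v5/agents/{agent_id}/log/{log_id}"

    # The X-API-KEY header carries your Humanloop API key.
    headers = {"X-API-KEY": os.environ["HUMANLOOP_API_KEY"]}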

Request

This endpoint expects an object.
messages (list of objects, Optional)
List of chat messages that were used as an input to the Agent.
output_message (object, Optional)
The output message returned by this Agent.
inputs (map from strings to any, Optional)
The inputs passed to the Agent Log.
output (string, Optional)
The output of the Agent Log. Provide None to unset the existing output value. Provide either this, output_message, or error.
error (string, Optional)
The error message of the Agent Log. Provide None to unset the existing error value. Provide either this, output_message, or output.
log_status (enum, Optional)
Status of the Agent Log. Once an Agent Log is updated to complete, no more Logs can be added to it. You cannot update an Agent Log's status from complete to incomplete.

Allowed values: incomplete, complete
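
Continuing the earlier sketch (reusing url and headers), here is a hedged example of the update request itself using the requests library. The field names follow the Request schema above; the output value and the use of PATCH are assumptions for illustration.

    import requests

    # Example payload: record the final output and mark the Log as complete.
    payload = {
        "output": "The capital of France is Paris.",  # example output value
        "log_status": "complete",
    }

    # PATCH is assumed for this update endpoint; url and headers come from the
    # earlier sketch.
    response = requests.patch(url, headers=headers, json=payload)
    response.raise_for_status()
    updated_log = response.json()  # parsed response body, used below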

Response

Successful Response
agent (object)
Agent that generated the Log.
id (string)
Unique identifier for the Log.
evaluator_logs (list of objects)
List of Evaluator Logs associated with the Log. These contain Evaluator judgments on the Log.
output_message (object, Optional)
The message returned by the provider.
prompt_tokens (integer, Optional)
Number of tokens in the prompt used to generate the output.
reasoning_tokens (integer, Optional)
Number of reasoning tokens used to generate the output.
output_tokens (integer, Optional)
Number of tokens in the output generated by the model.
prompt_cost (double, Optional)
Cost in dollars associated with the tokens in the prompt.
output_cost (double, Optional)
Cost in dollars associated with the tokens in the output.
finish_reason (string, Optional)
Reason the generation finished.
messages (list of objects, Optional)
The messages passed to the provider chat endpoint.
tool_choice ("none" or "auto" or "required" or object, Optional)

Controls how the model uses tools. The following options are supported:

  • 'none' means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
  • 'auto' means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
  • 'required' means the model must call one or more of the provided tools.
  • {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function (see the example after this list).
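
For example, the forced-function form from the last bullet, written as a plain object. The tool name get_weather is a hypothetical placeholder:

    # Force the model to call one specific tool; the tool name is hypothetical.
    tool_choice = {
        "type": "function",
        "function": {"name": "get_weather"},
    }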
start_time (datetime, Optional)
When the logged event started.
end_time (datetime, Optional)
When the logged event ended.
output (string, Optional)
Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.
created_at (datetime, Optional)
User-defined timestamp for when the log was created.
error (string, Optional)
Error message if the log is an error.
provider_latency (double, Optional)
Duration of the logged event in seconds.
stdout (string, Optional)
Captured log and debug statements.
provider_request (map from strings to any, Optional)
Raw request sent to the provider.
provider_response (map from strings to any, Optional)
Raw response received from the provider.
inputs (map from strings to any, Optional)
The inputs passed to the prompt template.
source (string, Optional)
Identifies where the model was called from.
metadata (map from strings to any, Optional)
Any additional metadata to record.
source_datapoint_id (string, Optional)
Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
trace_parent_id (string, Optional)
The ID of the parent Log to nest this Log under in a Trace.
batches (list of strings, Optional)
Array of Batch IDs that this Log is part of. Batches are used to group Logs together for offline Evaluations.
user (string, Optional)
End-user ID related to the Log.
environment (string, Optional)
The name of the Environment the Log is associated with.
save (boolean, Optional, defaults to true)
Whether the request/response payloads will be stored on Humanloop.
log_id (string, Optional)
This will identify a Log. If you don't provide a Log ID, Humanloop will generate one for you.
trace_flow_id (string, Optional)
Identifier for the Flow that the Trace belongs to.
trace_id (string, Optional)
Identifier for the Trace that the Log belongs to.
trace_children (list of objects, Optional)
Logs nested under this Log in the Trace.
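
As a rough sketch of reading the response, the snippet below continues the earlier example and pulls out a few of the fields described above. It assumes the documented field names map directly onto JSON keys and treats the Optional cost fields as possibly absent.

    # updated_log is the parsed JSON body from the earlier PATCH sketch.
    print("Log ID:", updated_log["id"])
    print("Output:", updated_log.get("output"))

    # prompt_cost and output_cost are Optional, so guard against missing values.
    prompt_cost = updated_log.get("prompt_cost") or 0.0
    output_cost = updated_log.get("output_cost") or 0.0
    print(f"Total cost: ${prompt_cost + output_cost:.6f}")

    # Evaluator judgments attached to this Log.
    for evaluator_log in updated_log.get("evaluator_logs", []):
        print("Evaluator Log:", evaluator_log.get("id"))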

Errors