Update By Reference

PATCH

Update a logged datapoint by its reference ID.

The reference_id query parameter must be provided and must match the reference_id of a previously-logged datapoint.

Query parameters

reference_id (string, Required)

A unique string to reference the datapoint. Identifies the logged datapoint created with the same reference_id.
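
As a rough sketch only: the base URL, path, and auth header below are assumptions, not taken from this page, so adjust them to your own setup. The point shown is that reference_id travels in the query string rather than in the request body.

import os
import requests

# Assumed base URL, path, and auth header; verify against your API reference.
resp = requests.patch(
    "https://api.humanloop.com/v4/logs",
    params={"reference_id": "my-internal-id-123"},  # identifies the previously-logged datapoint
    headers={"X-API-KEY": os.environ["HUMANLOOP_API_KEY"]},
    json={},  # update fields go here; see the Request section below
)
resp.raise_for_status()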

Request

This endpoint expects an object.
output (string, Optional)
Generated output from your model for the provided inputs.
error (string, Optional)
Error message if the log is an error.
duration (double, Optional)
Duration of the logged event in seconds.
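
A minimal sketch of a request body, reusing the assumed URL and auth header from the example above. All three fields are optional; include only the ones you want to set on the datapoint, and use error in place of output when recording a failure.

payload = {
    "output": "The capital of France is Paris.",  # generated output for the logged inputs
    "duration": 1.42,                             # seconds
    # "error": "Provider timed out",              # set this instead of output when logging an error
}
resp = requests.patch(
    "https://api.humanloop.com/v4/logs",            # assumed path, as noted above
    params={"reference_id": "my-internal-id-123"},
    headers={"X-API-KEY": os.environ["HUMANLOOP_API_KEY"]},
    json=payload,
)
updated = resp.json()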

Response

This endpoint returns an object.
id (string)

String ID of logged datapoint. Starts with data_.

config (object)
evaluation_results (list of objects)
observability_status (enum)
Allowed values: pending, running, completed, failed

Status of a Log for observability.

Observability is implemented by running monitoring Evaluators on Logs.

updated_at (datetime)
project (string, Optional)
The name of the project associated with this log.
project_id (string, Optional)
The unique ID of the project associated with this log.
session_id (string, Optional)
ID of the session to associate the datapoint with.
session_reference_id (string, Optional)

A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.

parent_id (string, Optional)
ID associated with the parent datapoint in a session.
parent_reference_id (string, Optional)

A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.

inputs (map from strings to any, Optional)
The inputs passed to the prompt template.
source (string, Optional)
Identifies where the model was called from.
metadata (map from strings to any, Optional)
Any additional metadata to record.
save (boolean, Optional)
Whether the request/response payloads will be stored on Humanloop.
source_datapoint_id (string, Optional)
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
reference_id (string, Optional)
Unique user-provided string identifying the datapoint.
messages (list of objects, Optional)
The messages passed to the provider chat endpoint.
output (string, Optional)

Generated output from your model for the provided inputs. Can be None if logging an error, or if logging a parent datapoint with the intention to populate it later.

judgment (boolean or double or list of strings or string, Optional)
config_id (string, Optional)
Unique ID of a config to associate with the log.
environment (string, Optional)
The environment name used to create the log.
feedback (list of objects, Optional)
created_at (datetime, Optional)
User-defined timestamp for when the log was created.
error (string, Optional)
Error message if the log is an error.
stdout (string, Optional)
Captured log and debug statements.
duration (double, Optional)
Duration of the logged event in seconds.
output_message (object, Optional)
The message returned by the provider.
prompt_tokens (integer, Optional)
Number of tokens in the prompt used to generate the output.
output_tokens (integer, Optional)
Number of tokens in the output generated by the model.
prompt_cost (double, Optional)
Cost in dollars associated with the tokens in the prompt.
output_cost (double, Optional)
Cost in dollars associated with the tokens in the output.
provider_request (map from strings to any, Optional)
Raw request sent to provider.
provider_response (map from strings to any, Optional)
Raw response received from the provider.
user (string, Optional)
User email address provided when creating the datapoint.
provider_latency (double, Optional)
Latency of provider response.
tokens (integer, Optional)
Total number of tokens in the prompt and output.
raw_output (string, Optional)
Raw output from the provider.
finish_reason (string, Optional)
Reason the generation finished.
tools (list of objects, Optional)
A result from a tool used to populate the prompt template.
tool_choice ("none" or "auto" or "required" or object, Optional)
Controls how the model uses tools. The following options are supported: 'none' forces the model not to call a tool (the default when no tools are provided as part of the model config); 'auto' lets the model decide whether to call one of the provided tools (the default when tools are provided as part of the model config); providing {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function, as shown in the sketch after this list.
batch_ids (list of strings, Optional)
List of batch IDs the log belongs to.
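
A short sketch of reading a few of the fields above from the response of the request example earlier, plus the forced-function form of tool_choice. The field names come from this list; "get_weather" is a hypothetical tool name.

log = resp.json()  # resp from the request sketch above
print(log["id"])                    # string ID starting with data_
print(log["observability_status"])  # pending, running, completed, or failed

# Forced-function form of tool_choice, per the description above;
# "get_weather" stands in for one of your own tool names.
forced_tool_choice = {"type": "function", "function": {"name": "get_weather"}}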

Errors