Update a logged datapoint in your Humanloop project.
String ID of the logged datapoint to update. Starts with data_.
Generated output from your model for the provided inputs.
Error message if the log is an error.
Duration of the logged event in seconds.
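For illustration, a minimal sketch of an update request over HTTP. The PATCH /v4/logs/{id} path, base URL and X-API-KEY header are assumptions based on typical Humanloop v4 usage; check the API reference for the exact endpoint and auth scheme.

```python
# Minimal sketch of updating a logged datapoint over HTTP.
# The base URL, path and auth header below are assumptions; verify against the API reference.
import requests

HUMANLOOP_API_KEY = "hl_..."   # your API key
log_id = "data_abc123"         # ID of the logged datapoint, starts with data_

response = requests.patch(
    f"https://api.humanloop.com/v4/logs/{log_id}",
    headers={"X-API-KEY": HUMANLOOP_API_KEY},
    json={
        "output": "The generated completion text",  # generated output for the provided inputs
        "duration": 1.27,                           # duration of the logged event in seconds
        # "error": "Provider timed out",            # set instead of output when logging an error
    },
)
response.raise_for_status()
print(response.json()["id"])  # updated datapoint ID, starts with data_
```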
Successful Response
String ID of the logged datapoint. Starts with data_.
Status of a Log for observability.
Observability is implemented by running monitoring Evaluators on Logs.
The name of the project associated with this log.
The unique ID of the project associated with this log.
ID of the session to associate the datapoint to.
A unique string identifying the session to associate the datapoint to. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.
ID associated to the parent datapoint in a session.
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.
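As a sketch of how the reference-ID fields fit together, the example below logs a parent datapoint and then a nested child using IDs kept by an internal system. The POST /v4/logs path, auth header, project field and the parent_reference_id field name are assumptions; the field semantics follow the descriptions above.

```python
# Sketch of linking two log requests with your own IDs rather than Humanloop IDs.
# Endpoint path, auth header and some field names are assumptions; check the API reference.
import requests

headers = {"X-API-KEY": "hl_..."}
BASE = "https://api.humanloop.com/v4"

# First request: log the parent datapoint, tagging it with our own reference ID
# and grouping it into a session identified by an ID from our internal systems.
requests.post(f"{BASE}/logs", headers=headers, json={
    "project": "my-project",
    "session_reference_id": "checkout-session-42",  # our session ID, reused across requests
    "reference_id": "step-1",                       # our ID for this datapoint
    "inputs": {"question": "What is the refund policy?"},
    "output": None,  # can be populated later via the update endpoint
})

# Second request: log a nested child by pointing at the parent's reference ID.
# The parent must have been logged in a prior request, not in the same one.
requests.post(f"{BASE}/logs", headers=headers, json={
    "project": "my-project",
    "session_reference_id": "checkout-session-42",
    "parent_reference_id": "step-1",  # refers to the previously-logged parent datapoint
    "inputs": {"question": "Summarise the policy"},
    "output": "Refunds are available within 30 days.",
})
```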
The inputs passed to the prompt template.
Identifies where the model was called from.
Any additional metadata to record.
Whether the request/response payloads will be stored on Humanloop.
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
Unique user-provided string identifying the datapoint.
The messages passed to the provider chat endpoint.
Generated output from your model for the provided inputs. Can be None if logging an error, or if logging a parent datapoint with the intention to populate it later.
Unique ID of a config to associate to the log.
The environment name used to create the log.
User defined timestamp for when the log was created.
Error message if the log is an error.
Captured log and debug statements.
Duration of the logged event in seconds.
The message returned by the provider.
Number of tokens in the prompt used to generate the output.
Number of tokens in the output generated by the model.
Cost in dollars associated to the tokens in the prompt.
Cost in dollars associated to the tokens in the output.
Raw request sent to the provider.
Raw response received from the provider.
User email address provided when creating the datapoint.
Latency of provider response.
Total number of tokens in the prompt and output.
Raw output from the provider.
Reason the generation finished.
Controls how the model uses tools. The following options are supported: 'none' forces the model to not call a tool; this is the default when no tools are provided as part of the model config. 'auto' lets the model decide whether to call one of the provided tools; this is the default when tools are provided as part of the model config. Providing {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
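For clarity, the three supported tool_choice forms are shown below as literal values; the get_weather function name is purely hypothetical.

```python
# Illustrative sketch of the tool_choice values described above.
tool_choice_none = "none"  # never call a tool (default when no tools are configured)
tool_choice_auto = "auto"  # model decides whether to call a provided tool (default with tools)

# Force a specific named function; "get_weather" is a hypothetical tool name.
tool_choice_forced = {
    "type": "function",
    "function": {"name": "get_weather"},
}
```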
List of batch IDs the log belongs to.