Log to a Prompt

POST

Log to a Prompt.

You can use the query parameters version_id or environment to target an existing version of the Prompt. Otherwise, the default deployed version will be chosen.

Instead of targeting an existing version explicitly, you can pass Prompt details in the request body. In this case, we will check whether the details correspond to an existing version of the Prompt; if they do not, we will create a new version. This is helpful if you are storing or deriving your Prompt details in code.
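As an illustrative sketch only: the snippet below posts a Log with inline Prompt details using Python's requests library. The https://api.humanloop.com/v5/prompts/log URL and the X-API-KEY header are assumptions about the API's conventions, and the Prompt details shown (model, template) are hypothetical; all body fields used are documented below.

```python
import requests

# Assumed endpoint URL and auth header; check your Humanloop API reference.
url = "https://api.humanloop.com/v5/prompts/log"
headers = {"X-API-KEY": "YOUR_API_KEY"}

# Alternatively, target an existing version explicitly with query parameters,
# e.g. params={"environment": "production"} or params={"version_id": "..."},
# and omit the "prompt" field from the body.
body = {
    "path": "folder/name",
    # Hypothetical Prompt details; a new version is created if they are new.
    "prompt": {
        "model": "gpt-4o",
        "template": [
            {"role": "system", "content": "You are a helpful assistant."}
        ],
    },
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
    "output": "The capital of France is Paris.",
}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
print(response.json()["id"])  # String ID of the created Log
```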

Query parameters

version_id string Optional

A specific Version ID of the Prompt to log to.

environment string Optional

Name of the Environment identifying a deployed version to log to.

Request

This endpoint expects an object.
evaluation_id string Optional

Unique identifier for the Evaluation Report to associate the Log to.

path string Optional

Path of the Prompt, including the name. This locates the Prompt in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.

id string Optional

ID for an existing Prompt.

output_message object Optional

The message returned by the provider.

prompt_tokens integer Optional

Number of tokens in the prompt used to generate the output.

output_tokens integer Optional

Number of tokens in the output generated by the model.

prompt_cost double Optional

Cost in dollars associated with the tokens in the prompt.

output_cost double Optional

Cost in dollars associated with the tokens in the output.

finish_reason string Optional

Reason the generation finished.

messages list of objects Optional

The messages passed to the provider chat endpoint.

tool_choice"none" or "auto" or "required" or objectOptional

Controls how the model uses tools. The following options are supported:

  • 'none' means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
  • 'auto' means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
  • 'required' means the model can decide to call one or more of the provided tools.
  • {'type': 'function', 'function': {name': <TOOL_NAME>}} forces the model to use the named function.
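Assuming the request body dictionary from the sketch above, the object form of tool_choice might be set like this (the get_weather tool name is a hypothetical example):

```python
# Force the model to call a specific (hypothetical) function by name.
body["tool_choice"] = {
    "type": "function",
    "function": {"name": "get_weather"},
}
```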
prompt object Optional

Details of your Prompt. A new Prompt version will be created if the provided details are new.

start_time datetime Optional

When the logged event started.

end_time datetime Optional

When the logged event ended.

output string Optional

Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.

created_at datetime Optional

User defined timestamp for when the log was created.

error string Optional

Error message if the log is an error.

provider_latency double Optional

Duration of the logged event in seconds.

stdout string Optional

Captured log and debug statements.

provider_request map from strings to any Optional

Raw request sent to provider.

provider_response map from strings to any Optional

Raw response received from the provider.

inputs map from strings to any Optional

The inputs passed to the prompt template.

source string Optional

Identifies where the model was called from.

metadata map from strings to any Optional

Any additional metadata to record.

source_datapoint_id string Optional

Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.

trace_parent_id string Optional

The ID of the parent Log to nest this Log under in a Trace.

batch_id string Optional

Unique identifier for the Batch to add this Log to. Batches are used to group Logs together for Evaluations. A Batch will be created if one with the given ID does not exist.
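For instance, continuing the sketch above, the evaluation-related fields could be added to the request body like this (the angle-bracket values are placeholders, not real identifiers):

```python
# Placeholders only: associate this Log with a Datapoint and a Batch
# so Humanloop can link it to the relevant Evaluations.
body["source_datapoint_id"] = "<DATAPOINT_ID>"
body["batch_id"] = "<BATCH_ID>"
```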

user string Optional

End-user ID related to the Log.

environment string Optional

The name of the Environment the Log is associated with.

save boolean Optional

Whether the request/response payloads will be stored on Humanloop.

Response

This endpoint returns an object.
id string

String ID of the Log.

prompt_id string

ID of the Prompt the log belongs to.

version_id string

ID of the specific version of the Prompt.

session_id string Optional

String ID of the Session the Log belongs to.
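Continuing the earlier sketch, the documented response fields could be read as follows (the parsing code is an assumption; the field names are as listed above):

```python
log = response.json()
log_id = log["id"]                   # String ID of the Log
prompt_id = log["prompt_id"]         # ID of the Prompt the Log belongs to
version_id = log["version_id"]       # ID of the specific Prompt version
session_id = log.get("session_id")   # Optional: ID of the session, if any
```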

Errors