Call Prompt

POST

Call a Prompt.

Calling a Prompt calls the model provider before logging the request, responses, and metadata to Humanloop.

You can use the query parameters version_id or environment to target an existing version of the Prompt. Otherwise, the default deployed version will be chosen.

Instead of targeting an existing version explicitly, you can pass Prompt details in the request body. In this case, we will check whether the details correspond to an existing version of the Prompt; if they do not, we will create a new version. This is helpful when you are storing or deriving your Prompt details in code.
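For example, a minimal streamed call over raw HTTP might look like the sketch below; the base URL, auth header, and Prompt fields shown are assumptions to adapt to your setup:

```python
import requests

API_KEY = "<YOUR_HUMANLOOP_API_KEY>"  # placeholder

# Pass Prompt details inline; Humanloop creates a new version if they
# don't match an existing one. To target a deployed version instead,
# send query params such as {"environment": "production"}.
resp = requests.post(
    "https://api.humanloop.com/v5/prompts/call",  # assumed base URL
    headers={"X-API-KEY": API_KEY},  # assumed auth header
    json={
        "path": "folder/name",
        "stream": True,
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "prompt": {"model": "gpt-4o", "temperature": 0.7},  # assumed Prompt fields
    },
    stream=True,
)
```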

Query parameters

version_id string Optional

A specific Version ID of the Prompt to log to.

environment string Optional

Name of the Environment identifying a deployed version to log to.

Request

This endpoint expects an object.
stream true Required

If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
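Continuing the request sketch above, the stream could be consumed like this, assuming each event arrives as a single data: <json> line (resp is the streaming response from the earlier example):

```python
import json

# Parse each data-only server-sent event into a response chunk and
# print the incremental output as it arrives.
for line in resp.iter_lines():
    if line.startswith(b"data: "):
        chunk = json.loads(line[len(b"data: "):])
        if chunk.get("output"):
            print(chunk["output"], end="", flush=True)
```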

path string Optional

Path of the Prompt, including the name. This locates the Prompt in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.

id string Optional

ID for an existing Prompt.

messages list of objects Optional

The messages passed to the provider chat endpoint.

tool_choice"none" or "auto" or "required" or objectOptional

Controls how the model uses tools. The following options are supported:

  • 'none' means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
  • 'auto' means the model can decide whether to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
  • 'required' means the model must call one or more of the provided tools.
  • {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to call the named tool (sketched below).
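For example, forcing a call to a hypothetical tool named get_weather would set tool_choice like this (the tool name and payload values are illustrative):

```python
# Hypothetical tool name; this forces the model to call "get_weather".
payload = {
    "path": "folder/name",
    "stream": True,
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
}
```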
prompt object Optional

Details of your Prompt. A new Prompt version will be created if the provided details are new.

inputs map from strings to any Optional

The inputs passed to the prompt template.
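For example, assuming the Prompt template references variables with double curly braces (the variable name here is illustrative), the matching inputs entry fills it in:

```python
# Illustrative: a Prompt template containing "{{topic}}" is filled from
# the matching key in `inputs`.
payload = {
    "path": "folder/name",
    "stream": True,
    "inputs": {"topic": "server-sent events"},
}
```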

source string Optional

Identifies where the model was called from.

metadata map from strings to any Optional

Any additional metadata to record.

start_time datetime Optional

When the logged event started.

end_time datetime Optional

When the logged event ended.

source_datapoint_id string Optional

Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.

trace_parent_id string Optional

The ID of the parent Log to nest this Log under in a Trace.

user string Optional

End-user ID related to the Log.

environment string Optional

The name of the Environment the Log is associated to.

save boolean Optional

Whether the request/response payloads will be stored on Humanloop.

log_id string Optional

This will identify a Log. If you don’t provide a Log ID, Humanloop will generate one for you.

provider_api_keys object Optional

API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
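As a sketch, this object maps provider names to keys; the openai field name here is an assumption:

```python
# Assumed provider field name; keys are used for the call but are not
# stored by Humanloop.
payload = {
    "path": "folder/name",
    "stream": True,
    "messages": [{"role": "user", "content": "Hi"}],
    "provider_api_keys": {"openai": "<OPENAI_API_KEY>"},
}
```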

num_samples integer Optional (defaults to 1)

The number of generations.

return_inputs boolean Optional

Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.

logprobs integer Optional

Include the log probabilities of the top n tokens in the provider_response.

suffix string Optional

The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.

Response

This endpoint returns a stream of objects.
index integer

The index of the sample in the batch.

id string

ID of the Log.

prompt_id string

ID of the Prompt the Log belongs to.

version_id string

ID of the specific version of the Prompt.

output string Optional

Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.

created_at datetime Optional

User-defined timestamp for when the Log was created.

error string Optional

Error message if the Log is an error.

provider_latency double Optional

Duration of the logged event in seconds.

stdout string Optional

Captured log and debug statements.

output_message object Optional

The message returned by the provider.

prompt_tokens integer Optional

Number of tokens in the prompt used to generate the output.

output_tokens integer Optional

Number of tokens in the output generated by the model.

prompt_cost double Optional

Cost in dollars associated with the tokens in the prompt.

output_cost double Optional

Cost in dollars associated with the tokens in the output.

finish_reason string Optional

Reason the generation finished.
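As an illustrative sketch, when num_samples > 1 the chunks for different samples interleave and can be collated by index, assuming each chunk's output carries an incremental piece of text:

```python
from collections import defaultdict

# Illustrative chunks as they might arrive with num_samples=2.
chunks = [
    {"index": 0, "output": "Paris."},
    {"index": 1, "output": "The capital of France "},
    {"index": 1, "output": "is Paris."},
]

# Accumulate each sample's text under its batch index.
outputs = defaultdict(str)
for chunk in chunks:
    if chunk.get("output"):
        outputs[chunk["index"]] += chunk["output"]

for i, text in sorted(outputs.items()):
    print(f"sample {i}: {text}")
```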

Errors