Call Prompt
Call a Prompt.
Calling a Prompt calls the model provider before logging the request, responses and metadata to Humanloop.
You can use the query parameters version_id or environment to target an existing version of the Prompt. Otherwise, the default deployed version will be chosen.
Instead of targeting an existing version explicitly, you can pass Prompt details in the request body. In this case, we will check whether the details correspond to an existing version of the Prompt; if they do not, we will create a new version. This is helpful when you are storing or deriving your Prompt details in code.
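As a minimal sketch of the two approaches, assuming the endpoint is POST https://api.humanloop.com/v5/prompts/call, that authentication uses an X-API-KEY header, and that the body fields match the parameters described below:

```python
import requests

BASE_URL = "https://api.humanloop.com/v5"  # assumed base URL
HEADERS = {"X-API-KEY": "YOUR_HUMANLOOP_API_KEY"}  # assumed auth header

# 1. Target an existing version explicitly via query parameters.
response = requests.post(
    f"{BASE_URL}/prompts/call",
    headers=HEADERS,
    params={"environment": "production"},  # or {"version_id": "prv_..."}
    json={
        "path": "folder/name",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)

# 2. Pass Prompt details in the request body instead. Humanloop matches them
#    to an existing version, or creates a new version if none matches.
response = requests.post(
    f"{BASE_URL}/prompts/call",
    headers=HEADERS,
    json={
        "path": "folder/name",
        "prompt": {
            "model": "gpt-4o",  # illustrative Prompt details
            "template": [
                {"role": "system", "content": "You are a helpful assistant."}
            ],
        },
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(response.json())
```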
Headers
Query parameters
Request
If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
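A hedged sketch of consuming the data-only server-sent events with the requests library; the endpoint URL and payload fields are the same assumptions as above:

```python
import json
import requests

with requests.post(
    "https://api.humanloop.com/v5/prompts/call",  # assumed endpoint
    headers={"X-API-KEY": "YOUR_HUMANLOOP_API_KEY"},  # assumed auth header
    json={
        "path": "folder/name",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    },
    stream=True,
) as response:
    for line in response.iter_lines():
        # Data-only SSE: each event is a line of the form "data: {...}".
        if line.startswith(b"data:"):
            chunk = json.loads(line[len(b"data:"):].strip())
            print(chunk)
```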
Path of the Prompt, including the name. This locates the Prompt in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
Controls how the model uses tools. The following options are supported:
- 'none' means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
- 'auto' means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
- 'required' means the model must call one or more of the provided tools.
- {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
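For illustration, the four tool_choice forms could be sent as follows. This is a sketch: the tool definition (get_weather) is hypothetical, and its exact schema is an assumption rather than something confirmed on this page.

```python
payload = {
    "path": "folder/name",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "prompt": {
        "model": "gpt-4o",
        "tools": [
            {
                # Hypothetical tool, included only to make the example complete.
                "name": "get_weather",
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ],
    },
    # Any one of the following values is accepted:
    "tool_choice": "none",        # never call a tool; generate a message
    # "tool_choice": "auto",      # model decides whether to call tools
    # "tool_choice": "required",  # model must call one or more tools
    # "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
}
```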
The Prompt configuration to use. Two formats are supported:
- An object representing the details of the Prompt configuration
- A string representing the raw contents of a .prompt file
A new Prompt version will be created if the provided details do not match any existing version.
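A sketch of the two formats; the specific Prompt fields and the .prompt file serialization shown here are assumptions for illustration, not a specification:

```python
# Format 1: an object describing the Prompt configuration.
prompt_as_object = {
    "model": "gpt-4o",
    "temperature": 0.7,
    "template": [
        {"role": "system", "content": "You are a helpful assistant."}
    ],
}

# Format 2: the raw contents of a .prompt file as a single string.
prompt_as_file = """\
---
model: gpt-4o
temperature: 0.7
---
<system>
You are a helpful assistant.
</system>
"""
```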
Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
End-user ID related to the Log.
Whether the request/response payloads will be stored on Humanloop.
Include the log probabilities of the top n tokens in the provider_response.
Response
Controls how the model uses tools. The following options are supported:
- 'none' means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
- 'auto' means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
- 'required' means the model must call one or more of the provided tools.
- {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
End-user ID related to the Log.
Whether the request/response payloads will be stored on Humanloop.