Get a chat response by providing details of the model configuration in the request.
If true, tokens will be sent as data-only server-sent events. If num_samples > 1, samples are streamed back independently.
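For example, a minimal consumer of the resulting stream, assuming a POST to the chat endpoint (the URL, auth header name, and payload shape here are illustrative assumptions, not documented values):

    import json
    import requests

    # Hypothetical endpoint and API key, for illustration only.
    resp = requests.post(
        "https://api.humanloop.com/chat",
        headers={"X-API-KEY": "<YOUR_API_KEY>"},
        json={
            "messages": [{"role": "user", "content": "Hello"}],
            "stream": True,
        },
        stream=True,
    )
    for line in resp.iter_lines():
        # Data-only server-sent events arrive as lines prefixed with "data: ".
        if line.startswith(b"data: "):
            print(json.loads(line[len(b"data: "):]))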
The messages passed to the provider chat endpoint.
The model configuration used to create a chat response.
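A sketch of a request body combining the two fields above (the model_config field name and its contents are assumptions for illustration):

    # Hypothetical payload shape, for illustration only.
    payload = {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarise this ticket."},
        ],
        "model_config": {"model": "gpt-4", "temperature": 0.7},
    }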
Unique project name. If no project exists with this name, a new project will be created.
Unique ID of a project to associate with the log. Either this or project must be provided.
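A sketch of the two mutually exclusive ways to reference a project (values are placeholders):

    # Either by name (a new project is created if none exists with this name)...
    payload = {"project": "my-chatbot"}
    # ...or by unique ID, but never both in the same request.
    payload = {"project_id": "<PROJECT_ID>"}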
ID of the session to associate the datapoint with.
A unique string identifying the session to associate the datapoint with. Allows you to log multiple datapoints to a session (using an ID kept by your internal systems) by passing the same session_reference_id in subsequent log requests. Specify at most one of this or session_id.
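For instance, two log payloads tied to the same session via an ID kept by your internal systems (a sketch; field values are placeholders):

    session_ref = "my-system-session-42"

    # Both datapoints end up in the same session on Humanloop.
    first_log = {"session_reference_id": session_ref, "inputs": {"query": "step 1"}}
    second_log = {"session_reference_id": session_ref, "inputs": {"query": "step 2"}}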
ID of the parent datapoint in a session.
A unique string identifying the previously-logged parent datapoint in a session. Allows you to log nested datapoints with your internal system IDs by passing the same reference ID as parent_id in a prior log request. Specify at most one of this or parent_id. Note that this cannot refer to a datapoint being logged in the same request.
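A nesting sketch under the assumption that the parent was tagged with your reference ID in a prior log request (the reference_id field name in the first payload is an assumption):

    # Prior request: log the parent datapoint under your own ID.
    parent_log = {"reference_id": "my-system-dp-1"}  # hypothetical field name
    # Later, separate request: nest a child under that parent.
    child_log = {"parent_reference_id": "my-system-dp-1"}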
The inputs passed to the prompt template.
Identifies where the model was called from.
Any additional metadata to record.
Whether the request/response payloads will be stored on Humanloop.
ID of the source datapoint if this is a log derived from a datapoint in a dataset.
API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
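A sketch of the expected shape (the provider key names are assumptions; the values shown are placeholders and, per the above, are not stored by Humanloop):

    payload = {
        "provider_api_keys": {
            "openai": "<OPENAI_API_KEY>",        # hypothetical provider names
            "anthropic": "<ANTHROPIC_API_KEY>",
        },
    }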
The number of generations.
End-user ID passed through to provider call.
Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
Controls how the model uses tools. The following options are supported: 'none' forces the model not to call a tool; this is the default when no tools are provided as part of the model config. 'auto' lets the model decide whether to call one of the provided tools; this is the default when tools are provided as part of the model config. Providing {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to call the named function.
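For example, forcing a call to a specific function (the function name is a placeholder):

    # "none" and "auto" are passed as plain strings instead.
    payload = {
        "tool_choice": {
            "type": "function",
            "function": {"name": "get_weather"},  # <TOOL_NAME>
        },
    }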
The format of the response. Only type json_object is currently supported for chat.
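For example, assuming the OpenAI-style shape for this field:

    payload = {"response_format": {"type": "json_object"}}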
Deprecated field: the seed is instead set as part of the request.config object.
NB: Deprecated in favour of the new tool_choice. Controls how the model uses tools. The following options are supported: 'none' forces the model not to call a tool; this is the default when no tools are provided as part of the model config. 'auto' lets the model decide whether to call one of the provided tools; this is the default when tools are provided as part of the model config. Providing {'name': <TOOL_NAME>} forces the model to call the provided tool of the same name.
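A migration sketch from the deprecated shape to the tool_choice shape (the function name is a placeholder):

    # Deprecated shape described above:
    old_choice = {"name": "get_weather"}
    # Equivalent tool_choice value:
    new_choice = {"type": "function", "function": {"name": "get_weather"}}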
Array containing the chat responses.
The raw responses returned by the model provider.
Unique identifier of the parent project. Will not be provided if the request was made without providing a project name or ID.
The number of chat responses.
Include the log probabilities of the top n tokens in the provider_response.
The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.
End-user ID passed through to provider call.
Counts of the number of tokens used and related stats.
Any additional metadata to record.
The raw request sent to the model provider.
ID of the session if it belongs to one.
Controls how the model uses tools. The following options are supported: 'none' forces the model not to call a tool; this is the default when no tools are provided as part of the model config. 'auto' lets the model decide whether to call one of the provided tools; this is the default when tools are provided as part of the model config. Providing {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to call the named function.