Deserialize

POST
Deserialize a model config from the .prompt file format.

Request

This endpoint expects an object.
config (string, Required)
The serialized model config, in .prompt file format, to deserialize.
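
As a sketch, the call below shows the request shape in Python. The base URL, path, and API-key header are placeholders, not documented values; the .prompt contents are likewise illustrative.

    import requests

    # Illustrative .prompt file contents to deserialize.
    prompt_file = "---\nmodel: gpt-4\ntemperature: 0.7\n---\nSummarize {{topic}} in two sentences.\n"

    # NOTE: the URL and auth header below are assumed placeholders; use the
    # values from your own API reference.
    response = requests.post(
        "https://api.example.com/model-configs/deserialize",
        headers={"X-API-KEY": "YOUR_API_KEY"},
        json={"config": prompt_file},  # the single required field
    )
    response.raise_for_status()
    config = response.json()
    print(config["id"], config["model"])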

Response

This endpoint returns an object.

id (string)
String ID of the config. Starts with config_.

model (string)
The model instance used, e.g. text-davinci-002.

other (map from strings to any, Optional)
Other parameter values to be passed to the provider call.

name (string, Optional)
A friendly display name for the model config. If not provided, a name will be generated.

description (string, Optional)
A description of the model config.

provider (enum, Optional)
The company providing the underlying model service.
max_tokens (integer, Optional, defaults to -1)
The maximum number of tokens to generate. Provide max_tokens=-1 to have the maximum calculated dynamically from the length of the prompt.

temperature (double, Optional, defaults to 1)
What sampling temperature to use when making a generation. Higher values mean the model will be more creative.

top_p (double, Optional, defaults to 1)
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.

stop (string or list of strings, Optional)
The string (or list of strings) after which the model will stop generating. The returned text will not contain the stop sequence.

presence_penalty (double, Optional, defaults to 0)
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the generation so far.

frequency_penalty (double, Optional, defaults to 0)
Number between -2.0 and 2.0. Positive values penalize new tokens based on how frequently they appear in the generation so far.

seed (integer, Optional)
If specified, the model will make a best effort to sample deterministically, but determinism is not guaranteed.
response_format (object, Optional)
The format of the response. Only type json_object is currently supported for chat.
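
Per the note above, the only documented shape is an object whose type is json_object; as a Python sketch:

    # Request JSON output from a chat model (the only documented value).
    response_format = {"type": "json_object"}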

prompt_template (string, Optional)
Prompt template that will take your specified inputs to form your final request to the model. NB: Input variables within the prompt template should be specified with the syntax {{INPUT_NAME}}.
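
For illustration only, the snippet below shows the {{INPUT_NAME}} syntax with a client-side substitution; in practice the substitution is performed for you when the final request is formed. The template and inputs are hypothetical.

    import re

    prompt_template = "Translate the following text to {{language}}:\n\n{{text}}"

    # Hypothetical inputs; each {{NAME}} placeholder is replaced by inputs[NAME].
    inputs = {"language": "French", "text": "Good morning"}
    final_prompt = re.sub(r"\{\{(\w+)\}\}", lambda m: inputs[m.group(1)], prompt_template)

    print(final_prompt)
    # Translate the following text to French:
    #
    # Good morning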

chat_template (list of objects, Optional)
Messages prepended to the list of messages sent to the provider. These messages will take your specified inputs to form your final request to the provider model. NB: Input variables within the template should be specified with the syntax {{INPUT_NAME}}.
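
A sketch of a possible chat_template value; the role/content message shape follows the common chat-completions convention and is an assumption here, as the exact object schema is not shown above.

    # Assumed message shape: {"role": ..., "content": ...}.
    chat_template = [
        {
            "role": "system",
            "content": "You are a helpful assistant. Always answer in {{language}}.",
        },
    ]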

tools (list of objects, Optional)
Tools shown to the model.

endpoint (enum, Optional)
Allowed values: complete, chat, edit
The provider model endpoint used.

tool_configs (list of objects, Optional, Deprecated)
NB: Deprecated in favour of the tools field. Definition of tools shown to the model.
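
Putting the response fields together, an illustrative body is sketched below; every value is a placeholder, not a documented example.

    {
      "id": "config_abc123",
      "model": "gpt-4",
      "provider": "openai",
      "endpoint": "chat",
      "name": "Summarizer v1",
      "max_tokens": -1,
      "temperature": 0.7,
      "top_p": 1,
      "presence_penalty": 0,
      "frequency_penalty": 0,
      "chat_template": [
        {"role": "system", "content": "Summarize {{topic}} in two sentences."}
      ]
    }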

Errors