Deserialize a model config from a .prompt file format.
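For orientation, here is a minimal sketch of calling this endpoint over HTTP. The base URL, endpoint path, authentication header, and request field name below are placeholders assumed for the example, not values documented in this reference.

```python
import requests

# Hypothetical values for illustration only.
API_BASE = "https://api.example.com/v4"          # assumed base URL
API_KEY = "YOUR_API_KEY"                         # placeholder credential

# Read a local .prompt file and send its contents for deserialization.
with open("my-config.prompt", "r", encoding="utf-8") as f:
    prompt_file_contents = f.read()

resp = requests.post(
    f"{API_BASE}/model-configs/deserialize",     # assumed endpoint path
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"config": prompt_file_contents},       # assumed request field name
)
resp.raise_for_status()

model_config = resp.json()
# The response fields are described below, e.g. the config_-prefixed ID and the model name.
print(model_config.get("id"), model_config.get("model"))
```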
Successful Response
String ID of config. Starts with config_.
The model instance used. E.g. text-davinci-002.
Other parameter values to be passed to the provider call.
A friendly display name for the model config. If not provided, a name will be generated.
A description of the model config.
The company providing the underlying model service.
The maximum number of tokens to generate. Provide max_tokens=-1 to dynamically calculate the maximum number of tokens to generate given the length of the prompt (see the sketch after this field list).
What sampling temperature to use when making a generation. Higher values mean the model will be more creative.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
The string (or list of strings) after which the model will stop generating. The returned text will not contain the stop sequence.
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the generation so far.
Number between -2.0 and 2.0. Positive values penalize new tokens based on how frequently they appear in the generation so far.
If specified, the model will make a best effort to sample deterministically, but it is not guaranteed.
The format of the response. Only type json_object is currently supported for chat.
Prompt template that will take your specified inputs to form your final request to the model. NB: Input variables within the prompt template should be specified with the syntax {{input_name}} (see the interpolation sketch after this field list).
Messages prepended to the list of messages sent to the provider. These messages will take your specified inputs to form your final request to the provider model. NB: Input variables within the template should be specified with the syntax {{input_name}}.
Tools shown to the model.
The provider model endpoint used.
NB: Deprecated in favor of the tools field. Definition of tools shown to the model.
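As noted for the maximum-token field above, max_tokens=-1 requests a dynamic limit based on prompt length. The sketch below illustrates that convention; the context-window size and token counts are assumptions made up for the example, not values defined by this API.

```python
def resolve_max_tokens(max_tokens: int, prompt_tokens: int, context_window: int = 4096) -> int:
    """Illustrative resolution of the max_tokens=-1 convention."""
    if max_tokens == -1:
        # Assumed behaviour: generate up to whatever room the prompt leaves in the window.
        return max(context_window - prompt_tokens, 0)
    return max_tokens

# A 1,000-token prompt in an assumed 4,096-token window leaves 3,096 tokens to generate.
print(resolve_max_tokens(-1, prompt_tokens=1000))    # 3096
print(resolve_max_tokens(256, prompt_tokens=1000))   # 256 (explicit value passes through)
```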
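The prompt template and chat template fields both use the {{input_name}} syntax for input variables. The helper below is a rough sketch of how such a template could be filled in client-side with a simple regex substitution; it is not part of the API.

```python
import re

def fill_template(template: str, inputs: dict) -> str:
    """Replace {{input_name}} placeholders with the corresponding input values."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in inputs:
            raise KeyError(f"Missing input variable: {name}")
        return str(inputs[name])

    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)

template = "Summarize the following text in {{style}} style:\n\n{{text}}"
print(fill_template(template, {"style": "bullet-point", "text": "..."}))
```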
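The tools field and the deprecated tool definitions both describe tools shown to the model. The dictionary below sketches a common provider-style function-tool definition with JSON-Schema parameters; the exact schema expected here is not specified in this reference, so treat the shape and names as assumptions for illustration.

```python
# Illustrative only: a provider-style tool definition with JSON-Schema parameters.
get_weather_tool = {
    "name": "get_weather",                      # hypothetical tool name
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Paris'."},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}
```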