---
subtitle: Prompts define a task for a Large Language Model.
description: >-
  Discover how Humanloop manages prompts, with version control and rigorous
  evaluation for better performance.
image:
  type: url
  value: >-
    https://app.buildwithfern.com/_next/image?url=https%3A%2F%2Ffdr-prod-docs-files-public.s3.amazonaws.com%2Fhttps%3A%2F%2Fhumanloop.docs.buildwithfern.com%2F2024-05-30T22%3A44%3A51.828Z%2Fassets%2Fimages%2F1763dbe-Prompts.png&w=1920&q=75
---

<img src="file:065d2c1e-90ab-40ec-b24e-c256a4d34e34" />

A Prompt on Humanloop defines the instructions and configuration for guiding a Large Language Model (LLM) to perform a specific task. Each change in any of the following properties creates a new Version of the Prompt:

- the **template**, such as `Write a song about {{topic}}`.<br />For chat models, the template contains an array of messages
- the **model**, e.g. `gpt-4o`
- the **parameters** to the model, such as `temperature`, `max_tokens`, `top_p`
- any **tools** available to the model

A Prompt is callable: if you supply the necessary inputs, it will return a response from the model. Inputs are defined in the template through the double-curly bracket syntax, e.g. `{{topic}}`, and a value for each variable must be supplied when you call the Prompt to create a generation.

This separation of concerns, keeping configuration separate from the query-time data, is crucial for enabling you to experiment with different configurations and evaluate any changes. The Prompt stores the configuration and the query-time data in [Logs](./logs), which can then be used to create Datasets for evaluation purposes.

<Callout>
  Note that we use a capitalized "[Prompt](/docs/explanation/prompts)" to refer to the entity in Humanloop, and a lowercase "prompt" to refer to the general concept of input to the model.
</Callout>

<Frame caption="An example Prompt, serialized in the .prompt file format">
```jsx
---
model: gpt-4o
temperature: 1.0
max_tokens: -1
provider: openai
endpoint: chat
---
<system>
  Write a song about {{topic}}
</system>
```
</Frame>

## Versioning

Versioning your Prompts enables you to track how adjustments to the template or parameters influence the model's responses. This is crucial for iterative development, as you can pinpoint which configuration produces the most relevant or accurate outputs for your use cases.

A Prompt File will have multiple Versions as you iterate on different models, templates, or parameters, but each Version should perform the same task and generally be interchangeable with the others.

### When to create a new Prompt File

You should create a new Prompt File for each different 'task to be done' with an LLM. Each of these tasks can have its own separate Prompt File: _Writing Copilot_, _Personal Assistant_, _Summarizer_, etc.

Many users find value in creating a 'playground' Prompt where they can freely experiment without risking damage to their other Prompts or creating disorder.

## Using Prompts

Prompts are callable as an API, allowing you to provide query-time data such as input values or user messages, and receive the model's text output in response.

<EndpointRequestSnippet endpoint="POST /prompts/call" />
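For illustration, here is a minimal sketch of calling this endpoint from Python with the `requests` library. The endpoint path comes from the reference above, but the base URL, auth header, Prompt name, and request fields shown here are assumptions made for the example; treat the request snippet above as the authoritative schema.

```python
# Minimal sketch: call a Prompt with query-time inputs and read the generation.
# The base URL, auth header, and field names below are illustrative assumptions;
# consult the request reference above for the exact schema.
import requests

resp = requests.post(
    "https://api.humanloop.com/v5/prompts/call",  # assumed base URL for the documented endpoint
    headers={"X-API-KEY": "YOUR_HUMANLOOP_API_KEY"},
    json={
        "path": "Song Writer",           # which Prompt to call (hypothetical name)
        "inputs": {"topic": "mangoes"},  # fills the {{topic}} template variable
    },
)
resp.raise_for_status()
print(resp.json())  # the model's output, recorded as a Log against the Prompt
```

Because the template and parameters live in the Prompt Version, the caller supplies only the query-time data, which is exactly the separation of concerns described above.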
Prompts can also be used without proxying through Humanloop to the model provider. Instead, you can call the model directly and explicitly log the results to your Prompt.

<EndpointRequestSnippet endpoint="POST /prompts/log" />
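As a sketch of this pattern, the snippet below calls OpenAI directly with the standard Python SDK and then records the result against a Prompt. The OpenAI client usage is standard; the Humanloop payload fields are illustrative assumptions, so defer to the log endpoint reference above for the real schema.

```python
# Sketch: call the model provider directly, then log the generation to Humanloop.
# Payload field names are illustrative assumptions; see the /prompts/log reference above.
import requests
from openai import OpenAI

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = openai_client.chat.completions.create(
    model="gpt-4o",
    temperature=1.0,
    messages=[{"role": "user", "content": "Write a song about mangoes"}],
)
output = completion.choices[0].message.content

log_resp = requests.post(
    "https://api.humanloop.com/v5/prompts/log",  # assumed base URL for the documented endpoint
    headers={"X-API-KEY": "YOUR_HUMANLOOP_API_KEY"},
    json={
        "path": "Song Writer",                  # the Prompt this Log belongs to (hypothetical)
        "prompt": {                             # the configuration that produced the output
            "model": "gpt-4o",
            "temperature": 1.0,
            "template": [{"role": "user", "content": "Write a song about {{topic}}"}],
        },
        "inputs": {"topic": "mangoes"},         # query-time data
        "output": output,                       # the model's response
    },
)
log_resp.raise_for_status()
```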
## Serialization

The [.prompt file format](../reference/prompt-file-format) is a serialized representation of a Prompt Version, designed to be human-readable and suitable for integration into version control systems alongside code. The format is heavily inspired by [MDX](https://mdxjs.com/), with the model and parameters specified in a YAML header alongside a JSX-inspired syntax for chat templates.

<CodeBlocks>
```jsx Chat
---
model: gpt-4o
temperature: 1.0
max_tokens: -1
provider: openai
endpoint: chat
---
<system>
  You are a friendly assistant.
</system>
```

```jsx Completion
---
model: claude-2
temperature: 0.7
max_tokens: 256
top_p: 1.0
provider: anthropic
endpoint: complete
---
Autocomplete the sentence.

Context:
{{context}}

{{sentence}}
```
</CodeBlocks>
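Because the format is plain text with a YAML header, it is straightforward to inspect programmatically. Below is a rough sketch of splitting a .prompt file into its header and template in Python; this is an illustrative parser, not an official one, and it assumes the PyYAML package is installed. The linked file format reference remains the definitive specification.

```python
# Rough sketch: split a .prompt file into its YAML header and chat template.
# Illustrative only; the official format is defined in the .prompt file format
# reference linked above. Assumes `pip install pyyaml`.
import yaml

def parse_prompt_file(text: str) -> tuple[dict, str]:
    """Return (config, template) from a serialized Prompt Version."""
    # The header sits between the first two `---` lines, as in MDX frontmatter.
    _, header, template = text.split("---", 2)
    config = yaml.safe_load(header)  # model, temperature, provider, ...
    return config, template.strip()  # template keeps the <system> tags

example = """---
model: gpt-4o
temperature: 1.0
provider: openai
endpoint: chat
---
<system>
  Write a song about {{topic}}
</system>
"""

config, template = parse_prompt_file(example)
print(config["model"])  # -> gpt-4o
print(template)         # -> <system> ... </system>
```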