# Humanloop is the LLM Evals Platform for Enterprises > Learn how to use Humanloop for prompt engineering, evaluation and monitoring. Comprehensive guides and tutorials for LLMOps. Humanloop enables product teams to build robust AI features with LLMs, using best-in-class tooling for **Evaluation**, **Prompt Management**, and **Observability**. The most successful AI teams focus on two best practices: **Evals-driven development**
They put evals at the heart of product development, continuously refining and enhancing AI features through feedback and iteration. **Collaborative development**
They enable non-technical domain experts and PMs to work seamlessly with engineers on prompt engineering and evaluation. ### Get started with Humanloop Humanloop enables you to adopt these best practices. Our evals, prompt engineering and observability are designed to work together in a fast feedback loop. Humanloop works both UI-first and code-first so that the experience is great for developers and subject matter experts (SMEs). Get started with evals in code Get started with prompt engineering in our UI Get started with the guides above or learn more about Humanloop's [key concepts](/docs/explanation/files) and [customer stories](https://humanloop.com/customers).
**Or, if you don't want to touch code**, get started by creating a Prompt or an Eval in the UI: Create a Prompt in the UI Create an Eval in the UI # Evals in code > Getting up and running with Humanloop is quick and easy. This guide will explain how to set up evaluations on Humanloop and use them to iteratively improve your applications. ## Prerequisites First you need to install and initialize the SDK. If you have already done this, skip to the next section. Open up your terminal and follow these steps: 1. Install the Humanloop SDK: ```python pip install humanloop ``` ```typescript npm install humanloop ``` 2. Initialize the SDK with your Humanloop API key (you can get it from the [Organization Settings page](https://app.humanloop.com/account/api-keys)). ```python from humanloop import Humanloop humanloop = Humanloop(api_key="") # Check that the authentication was successful print(humanloop.prompts.list()) ``` ```typescript import { HumanloopClient } from "humanloop"; const humanloop = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); // Check that the authentication was successful console.log(await humanloop.prompts.list()); ``` This tutorial will take you through running your first Eval with Humanloop. You'll learn how to trigger an evaluation from code, interpret an eval report on Humanloop and use it to improve your AI features. ## Create an evals script Add the following code to a file: ```python maxLines=100 from humanloop import Humanloop humanloop = Humanloop(api_key="") checks = humanloop.evaluations.run( name="Initial Test", file={ "path": "Scifi/App", # Replace with your AI model "callable": lambda messages: ( "I'm sorry, Dave. I'm afraid I can't do that." if messages[-1]["content"].lower() == "hal" else "Beep boop!" ) }, # Replace with your test dataset dataset={ "path": "Scifi/Tests", "datapoints": [ { "messages": [ { "role": "system", "content": "You are an AI that responds like famous sci-fi AIs." 
}, { "role": "user", "content": "HAL" } ], "target": { "output": "I'm sorry, Dave. I'm afraid I can't do that." } }, { "messages": [ { "role": "system", "content": "You are an AI that responds like famous sci-fi AIs." }, { "role": "user", "content": "R2D2" } ], "target": { "output": "Beep boop beep!" } } ] }, evaluators=[ {"path": "Example Evaluators/Code/Exact match"}, {"path": "Example Evaluators/Code/Latency"}, {"path": "Example Evaluators/AI/Semantic similarity"}, ], ) ``` ```typescript maxLines=100 import { HumanloopClient } from "humanloop"; const humanloop = new HumanloopClient({ apiKey: "", }); const checks = await humanloop.evaluations.run({ name: "Initial Test", file: { path: "Scifi/App", // Replace with your AI model callable: (inputs, messages) => messages && messages[messages.length - 1].content.toLowerCase() === "hal" ? "I'm sorry, Dave. I'm afraid I can't do that." : "Beep boop!", }, // Replace with your test dataset dataset: { path: "Scifi/Tests", datapoints: [ { messages: [ { role: "system", content: "You are an AI that responds like famous sci-fi AIs.", }, { role: "user", content: "HAL", }, ], target: { output: "I'm sorry, Dave. I'm afraid I can't do that." }, }, { messages: [ { role: "system", content: "You are an AI that responds like famous sci-fi AIs.", }, { role: "user", content: "R2D2", }, ], target: { output: "Beep boop beep!" }, }, ], }, evaluators: [ { path: "Example Evaluators/Code/Exact match" }, { path: "Example Evaluators/Code/Latency" }, { path: "Example Evaluators/AI/Semantic similarity" }, ], }); ``` This sets up the basic structure of an [Evaluation](/docs/guides/evals/overview): 1. A **callable** function that you want to evaluate. The callable should take your inputs and/or messages and return a string. The `file` argument defines the callable as well as the location where the evaluation results will appear on Humanloop. 2. 
A test [Dataset](/docs/explanation/datasets) of inputs and/or messages to run your function over and optional expected targets to evaluate against. 3. A set of [Evaluators](/docs/explanation/evaluators) to provide judgments on the output of your function. This example uses default Evaluators that come with every Humanloop workspace. Evaluators can also be defined locally and pushed to the Humanloop runtime. It returns a `checks` object that contains the results of the eval per Evaluator. ## Run your script Run your script with the following command: ```python python main.py ``` ```typescript npx tsx index.ts ``` You will see a URL to view your evals on Humanloop. A summary of progress and the final results will be displayed directly in your terminal: Eval progress and url in terminal Eval results in terminal ## View the results Navigate to the URL provided in your terminal to see the result of running your script on Humanloop. This `Stats` view will show you the live progress of your local eval runs as well as summary statistics of the final results. Each new run will add a column to your `Stats` view, allowing you to compare the performance of your LLM app over time. The `Logs` and `Review` tabs allow you to drill into individual datapoints and view the outputs of different runs side-by-side to understand how to improve your LLM app. Eval results in terminal ## Make a change and re-run Your first run resulted in a `Semantic similarity` score of 3 (out of 5) and an `Exact match` score of 0. Try making a change to your `callable` to improve the output and re-run your script. A second run will be added to your `Stats` view and the difference in performance will be displayed.
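As a sketch of one such change (the `KNOWN_RESPONSES` and `improved_callable` names are illustrative, not part of the Humanloop SDK), you could back the toy callable with a lookup of known sci-fi AIs and sanity-check it against your datapoints locally before triggering a new Run:

```python
# Hypothetical improvement to the toy callable from the script above:
# look up known sci-fi AIs instead of always falling back to "Beep boop!".
KNOWN_RESPONSES = {
    "hal": "I'm sorry, Dave. I'm afraid I can't do that.",
    "r2d2": "Beep boop beep!",
}

def improved_callable(messages: list[dict]) -> str:
    query = messages[-1]["content"].lower()
    return KNOWN_RESPONSES.get(query, "Beep boop!")

# Local sanity check mirroring the "Exact match" Evaluator, so you know
# what to expect before re-running the hosted eval.
datapoints = [
    ({"role": "user", "content": "HAL"}, "I'm sorry, Dave. I'm afraid I can't do that."),
    ({"role": "user", "content": "R2D2"}, "Beep boop beep!"),
]
for message, target in datapoints:
    output = improved_callable([message])
    print(message["content"], "exact match:", output == target)
```

Passing `improved_callable` as the `callable` in your evals script should make `Exact match` pass on both datapoints; the hosted run remains the source of truth for AI Evaluators like `Semantic similarity`.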
Re-run eval results in terminal Re-run eval results in UI ## Next steps Now that you've run your first eval on Humanloop, you can: * Explore our detailed [tutorial](/docs/tutorials/rag-evaluation) on evaluating a real RAG app where you'll learn about versioning your app, customizing logging, adding Evaluator thresholds and more. * Create your own [Dataset](/docs/explanation/datasets) of test cases to evaluate your LLM app against. * Create your own [Evaluators](/docs/explanation/evaluators) to provide judgments on the output of your LLM app. # Evals in the UI > Getting up and running with Humanloop is quick and easy. This guide will explain how to set up evaluations through the Humanloop UI and use them to iteratively improve your applications. This tutorial will take you through running your first Eval. You'll learn how to assess multiple Prompt versions to improve the quality of your AI products.

Create a Humanloop Account

If you haven't already, [create an account](https://app.humanloop.com/signup) or [log in](https://app.humanloop.com/login) to Humanloop.

Add an OpenAI API Key

If you're the first person in your organization, you'll need to add an API key to a model provider. 1. Go to OpenAI and [grab an API key](https://platform.openai.com/api-keys). 2. In Humanloop [Organization Settings](https://app.humanloop.com/account/api-keys), set up OpenAI as a model provider. Using the Prompt Editor will use your OpenAI credits in the same way that the OpenAI playground does. Keep your API keys for Humanloop and the model providers private.
## Running Evals For this tutorial, we're going to evaluate the performance of a simple Support Agent that responds to user queries. ### Create a Prompt File When you first open Humanloop you'll see your File navigation on the left. Click **+New** and create a **Prompt**. In the Prompt Editor, add the following Prompt instructions: ```text You are a support agent. Answer questions. ``` Commit this version. ### Create a Dataset The Dataset contains datapoints describing the inputs and, optionally, the expected results for a given task. For this tutorial, we [created](https://github.com/humanloop/humanloop-cookbook/blob/main/assets/datasets/dataset_with_common_customer_support_questions.csv) a CSV file with 100 common customer support questions. Create a new Dataset file, then click on the **Upload CSV** button to upload the file. ### Run your first Evaluation Navigate to the Prompt you've just created and click on the **Evaluation** tab. Click on the **Evaluate** button to create a new Evaluation. Create a new Run by clicking on the **+Run** button. Select the Dataset and Prompt version we created earlier. Add Evaluators by clicking on the **+Evaluator** button. For this tutorial, we selected the *Semantic similarity*, *Cost* and *Latency* Evaluators. You can find these Evaluators in the **Example Evaluators** folder. The *Semantic similarity* Evaluator measures the degree of similarity between the model's response and the expected output, on a scale from 1 to 5 (1 is very dissimilar and 5 is very similar). Click **Save**. Humanloop will start generating Logs for the Evaluation. ### Review results After the Run completes, you can review the Logs produced and the corresponding judgments in the **Review** tab. The summary of all Evaluators is displayed in the **Stats** tab. ### Make changes to your Prompt After reviewing the results, we now have a better understanding of the Prompt's behavior. We can improve its performance. 
Navigate back to the Prompt Editor and change the instructions to: ``` You are a helpful assistant. Your job is to respond to users' questions in a {{tone}} tone. Be polite and succinct. ``` Commit this new version. ### Run another Evaluation We can now create a new Run with the new Prompt version. Click on the **+Run** button and select the newly created Prompt version. We can now see from the Stats view that the updated version performs better across the board. To get a detailed view, navigate to the **Logs** or **Stats** tabs. ## Next steps Now that you've successfully run your first Eval, you can explore how to customize it for your use case: * Explore how to set up [Human Evaluators](/docs/evaluation/guides/human-evaluators) to get human feedback on your Prompt Logs * Learn how internal [subject-matter experts can evaluate model outputs](/docs/evaluation/guides/manage-multiple-reviewers) and improve your AI product # Create a Prompt in the UI > This guide will show you how you can use Humanloop to quickly create a new prompt and experiment with it. In this tutorial you'll create a prompt in the UI, experiment with it and then see logs from your experimentation. {/* This is just a taste of what you can do through the UI in Humanloop. Once you've got started, check out our guides on how to use production logs and evaluators to [systematically improve your products](../tutorials/systematically-improve.mdx). */} #### Create a Humanloop Account If you haven’t already, create an account or log in to Humanloop. #### Add an OpenAI API Key If you’re the first person in your organization, you’ll need to add an API key to a model provider. 1. Go to OpenAI and [grab an API key](https://platform.openai.com/api-keys). 2. In Humanloop [Organization Settings](https://app.humanloop.com/account/api-keys), set up OpenAI as a model provider. Using the Prompt Editor will use your OpenAI credits in the same way that the OpenAI playground does. 
Keep your API keys for Humanloop and the model providers private. ## Get Started ### Create a Prompt File When you first open Humanloop you’ll see your File navigation on the left. Click ‘**+ New**’ and create a **Prompt**. In the sidebar, rename this file to "Comedian Bot" now or later. ### Create the Prompt template in the Editor The left-hand side of the screen defines your Prompt – the parameters such as model, temperature and template. The right-hand side is a single chat session with this Prompt. Click the “**+ Message**” button to add a system message to the chat template, then add the following templated message. ``` You are a funny comedian. Write a joke about {{topic}}. ``` This message forms the chat template. It has an input slot called `topic` (surrounded by two curly brackets) for an input value that is provided each time you call this Prompt. On the right-hand side of the page, you’ll now see a box in the **Inputs** section for `topic`. 1. Add a value for `topic`, e.g. music, jogging, whatever 2. Click **Run** in the bottom right of the page This will call OpenAI’s model and return the assistant response. Feel free to try other values; the model is *very* funny. You now have a first version of your prompt that you can use. ### Commit your first version of this Prompt 1. Click the **Commit** button 2. Put “initial version” in the commit message field 3. Click **Commit** ### View the logs Under the Prompt File, click ‘Logs’ to view all the generations from this Prompt. Click on a row to see the details of what version of the Prompt generated it. From here you can give feedback to that generation, see performance metrics, open up this example in the Editor, or add this log to a Dataset. ## Next Steps Well done! You've now created your first Prompt. # Set up logging > Use Humanloop to add logging to an AI project. 
This tutorial takes a chat agent and adds Humanloop logging to it so you can observe and reason about its behavior. ## Prerequisites

Create a Humanloop Account

If you haven't already, [create an account](https://app.humanloop.com/signup) or [log in](https://app.humanloop.com/login) to Humanloop.

Add an OpenAI API Key

If you're the first person in your organization, you'll need to add an API key to a model provider. 1. Go to OpenAI and [grab an API key](https://platform.openai.com/api-keys). 2. In Humanloop [Organization Settings](https://app.humanloop.com/account/api-keys), set up OpenAI as a model provider. Using the Prompt Editor will use your OpenAI credits in the same way that the OpenAI playground does. Keep your API keys for Humanloop and the model providers private.
```python pip install humanloop openai ``` ```typescript npm install humanloop openai ```
## Create the chat agent To demonstrate how to add logging, we will start with a chat agent that answers math and science questions. Create a script and add the following: ```python maxLines=30 import json from humanloop import Humanloop from openai import OpenAI openai = OpenAI(api_key="") humanloop = Humanloop(api_key="") def calculator(operation: str, num1: int, num2: int) -> str: """Do arithmetic operations on two numbers.""" if operation == "add": return str(num1 + num2) elif operation == "subtract": return str(num1 - num2) elif operation == "multiply": return str(num1 * num2) elif operation == "divide": return str(num1 / num2) else: return "Invalid operation" def call_model(messages: list[dict]) -> str: output = openai.chat.completions.create( messages=messages, model="gpt-4o-mini", tools=[{ "type": "function", "function": { "name": "calculator", "description": "Do arithmetic operations on two numbers.", "parameters": { "type": "object", "required": ["operation", "num1", "num2"], "properties": { "operation": {"type": "string"}, "num1": {"type": "integer"}, "num2": {"type": "integer"} }, "additionalProperties": False }, }, }], temperature=0.7, ) # Check if the model asked for a tool call if output.choices[0].message.tool_calls: for tool_call in output.choices[0].message.tool_calls: arguments = json.loads(tool_call.function.arguments) if tool_call.function.name == "calculator": result = calculator(**arguments) return f"[TOOL CALL] {result}" # Otherwise, return the LLM response return output.choices[0].message.content def conversation(): messages = [ { "role": "system", "content": "You are a groovy 80s surfer dude " "helping with math and science." 
}, ] while True: user_input = input("You: ") if user_input == "exit": break messages.append({"role": "user", "content": user_input}) response = call_model(messages=messages) messages.append({"role": "assistant", "content": response}) print(f"Agent: {response}") if __name__ == "__main__": conversation() ``` ```typescript maxLines=30 import * as readline from "readline/promises"; import { HumanloopClient } from "humanloop"; import OpenAI from "openai"; type MessageType = { content: string; role: "system" | "user" | "assistant"; }; const humanloop = new HumanloopClient({apiKey: ""}); const openAIClient = new OpenAI({apiKey: ""}); // Passed to the LLM to enable function calling const CALCULATOR_JSON_SCHEMA = { name: "calculator", description: "Perform arithmetic operations on two numbers", strict: true, parameters: { type: "object", properties: { operation: { type: "string", description: "The operation to perform", enum: ["add", "subtract", "multiply", "divide"], }, num1: { type: "number", description: "The first number", }, num2: { type: "number", description: "The second number", }, }, required: ["operation", "num1", "num2"], additionalProperties: false, }, }; const calculator = ({ operation, num1, num2 }: { operation: string; num1: number; num2: number; }) => { switch (operation) { case "add": return num1 + num2; case "subtract": return num1 - num2; case "multiply": return num1 * num2; case "divide": if (num2 === 0) { throw new Error("Cannot divide by zero"); } return num1 / num2; default: throw new Error("Invalid operation"); } }; const callModel = async (traceId: string, messages: MessageType[]) => { const output = await openAIClient.chat.completions.create({ messages: messages, model: "gpt-4o-mini", temperature: 0.8, tools: [ { type: "function", function: CALCULATOR_JSON_SCHEMA, } as OpenAI.ChatCompletionTool, ], }); let llmResponse = ""; // Check if the agent made a tool call if (output.choices[0].message.tool_calls) { for (const toolCall of 
output.choices[0].message.tool_calls) { const toolCallArgs = JSON.parse(toolCall.function.arguments); const result = calculator(toolCallArgs); // Log the tool call await humanloop.tools.log({ path: "Chat Agent/Calculator", inputs: toolCallArgs, output: JSON.stringify(result), traceParentId: traceId, }); llmResponse = `[${toolCall.function.name}] ${result}`; } } else { llmResponse = output.choices[0].message.content || ""; } // Log the model call await humanloop.prompts.log({ path: "Chat Agent/Call Model", prompt: { model: "gpt-4o-mini", temperature: 0.8, tools: [CALCULATOR_JSON_SCHEMA], }, traceParentId: traceId, messages: [...messages, { role: "assistant", content: llmResponse }], }); return llmResponse; }; const conversation = async () => { const messages: MessageType[] = [ { role: "system", content: "You are a groovy 80s surfer dude helping with math and science.", }, ]; const rl = readline.createInterface({ input: process.stdin, output: process.stdout, }); // Create the Flow trace // Each conversation will have a unique trace const traceId = ( await humanloop.flows.log({ path: "Chat Agent/Conversation", startTime: new Date(), }) ).id; while (true) { let userInput = await rl.question("You: "); if (userInput === "exit") { rl.close(); break; } messages.push({ role: "user", content: userInput }); const response = await callModel(traceId, messages); console.log("Assistant:", response); messages.push({ role: "assistant", content: response, }); } // Close the Flow trace when the conversation is done await humanloop.flows.updateLog(traceId, { traceStatus: "complete", output: JSON.stringify(messages), }); }; await conversation(); ``` ## Log to Humanloop If you use a programming language not supported by the SDK, or want more control, see our guide on [logging through the API](/docs/v5/guides/observability/logging-through-api) for an alternative to decorators. Use the SDK decorators to enable logging. 
At runtime, every call to a decorated function will create a [Log](/docs/v5/explanations/logs) on Humanloop. ```python maxLines=50 highlight={1,5,10} @humanloop.tool(path="Logging Quickstart/Calculator") def calculator(operation: str, num1: int, num2: int) -> str: ... @humanloop.prompt(path="Logging Quickstart/QA Prompt") def call_model(messages: list[dict]) -> str: ... @humanloop.flow(path="Logging Quickstart/QA Agent") def conversation(): ... if __name__ == "__main__": conversation() ``` ```typescript maxLines=50 highlight={1-12, 17-20, 25-28} const calculator = humanloop.tool({ path: "Chat Agent/Calculator", version: {function: CALCULATOR_JSON_SCHEMA}, callable: ({ operation, num1, num2, }: { operation: string; num1: number; num2: number; }) => { ... }, }); const callModel = (messages: MessageType[]) => humanloop.prompt({ path: "Chat Agent/Call Model", callable: async (inputs: any, messages: MessageType[]) => { ... }, })(undefined, messages); const conversation = () => humanloop.flow({ path: "Chat Agent/Conversation", callable: async () => { ... }, })(undefined, undefined); ``` ## Run the code Have a conversation with the agent. When you're done, type `exit` to close the program. ```python > python main.py You: Hi dude! Agent: Tubular! I am here to help with math and science, what is groovin? You: How does flying work? Agent: ... You: What is 5678 * 456? Agent: [TOOL CALL] 2589168 You: exit ``` ```typescript > npx tsx index.ts You: Hi dude! Agent: Tubular! I am here to help with math and science, what is groovin? You: How does flying work? Agent: ... You: What is 5678 * 456? Agent: [TOOL CALL] 2589168 You: exit ``` ## Check your workspace Navigate to [your workspace](https://app.humanloop.com) to see the logged conversation. Inside the **Logging Quickstart** directory on the left, click the **QA Agent** [Flow](/docs/v5/explanation/flows). Select the **Logs** tab from the top of the page and click the Log inside the table. 
You will see the conversation's trace, containing Logs corresponding to the [Tool](/docs/v5/explanation/tools) and the [Prompt](/docs/v5/explanation/prompts). ## Change the agent and rerun Modify the `call_model` function to use a different temperature. ```python maxLines=30 highlight={6-11} @humanloop.prompt(path="Logging Quickstart/QA Prompt") def call_model(messages: list[dict]) -> str: output = openai.chat.completions.create( messages=messages, model="gpt-4o-mini", tools=[ # The @tool utility adds a .json_schema attribute # to avoid redefining the schema calculator.json_schema ], temperature=0.2, ) # Check if the model asked for a tool call if output.choices[0].message.tool_calls: for tool_call in output.choices[0].message.tool_calls: arguments = json.loads(tool_call.function.arguments) if tool_call.function.name == "calculator": result = calculator(**arguments) return f"[TOOL CALL] {result}" # Otherwise, return the LLM response return output.choices[0].message.content ``` ```typescript maxLines=30 highlight={8-14} const callModel = (messages: MessageType[]) => humanloop.prompt({ path: "Chat Agent/Call Model", callable: async (inputs: any, messages: MessageType[]) => { const output = await openAIClient.chat.completions.create({ messages: messages, model: "gpt-4o-mini", tools: [ // The tool utility adds a .jsonSchema attribute // to avoid redefining the schema { type: "function", function: calculator.jsonSchema, } as OpenAI.ChatCompletionTool, ], temperature: 0.2, }); let llmResponse = ""; // Check if the agent made a tool call if (output.choices[0].message.tool_calls) { for (const toolCall of output.choices[0].message.tool_calls) { const toolCallArgs = JSON.parse(toolCall.function.arguments); const result = await calculator(toolCallArgs); llmResponse = `[${toolCall.function.name}] ${result}`; } } else { llmResponse = output.choices[0].message.content || ""; } return llmResponse; }, })(undefined, messages); ``` Run the agent again, then head back to your 
workspace. Click the **QA Prompt** [Prompt](/docs/v5/explanation/prompts), select the **Dashboard** tab from the top of the page and look at **Uncommitted** Versions. By changing the hyperparameters of the OpenAI call, you have tagged a new version of the Prompt. ## Next steps Logging is the first step to observing your AI product. Follow up with these guides on monitoring and evals: * Add [monitoring Evaluators](/docs/v5/guides/observability/monitoring) to evaluate Logs as they're made against a File. * Explore evals to improve the performance of your AI feature in our [guide on running an Evaluation](/docs/v5/guides/evals/run-evaluation). * See logging in action on a complex example in our [tutorial on evaluating an agent](/docs/v5/tutorials/agent-evaluation). # Vercel AI SDK > Add logging to your Vercel AI SDK application with Humanloop. The [Vercel AI SDK](https://sdk.vercel.ai/docs/introduction) is the TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js, and more. Humanloop supports receiving OpenTelemetry traces from the Vercel AI SDK, so you can see and evaluate your AI SDK application in your Humanloop Dashboard. Take a look at the following guides to learn how to integrate Humanloop with your Vercel AI SDK application. Instrument a Vercel AI SDK chat agent in Next.js Instrument a Vercel AI SDK chat agent in Node.js For a full reference, see our [Vercel AI SDK API reference](/docs/v5/reference/vercel-ai-sdk). # Observability in Next.js > Add logging to your Vercel AI SDK application with Humanloop. This guide adds Humanloop observability to a tool-calling chat agent built with the Vercel AI SDK. It builds on the AI SDK's [Next.js example](https://sdk.vercel.ai/docs/getting-started/nextjs-app-router). Looking for Node.js? See the [guide here](/docs/v5/integrations/vercel-ai-sdk/agent-nodejs-observability). ## Prerequisites

Create a Humanloop Account

1. [Create an account](https://app.humanloop.com/signup) or [log in](https://app.humanloop.com/login) to Humanloop. 2. Get a Humanloop API key from [Organization Settings](https://app.humanloop.com/account/api-keys).

Add an OpenAI API Key

If you're the first person in your organization, you'll need to add an API key to a model provider. 1. Go to OpenAI and [grab an API key](https://platform.openai.com/api-keys). 2. In Humanloop [Organization Settings](https://app.humanloop.com/account/api-keys), set up OpenAI as a model provider. Using the Prompt Editor will use your OpenAI credits in the same way that the OpenAI playground does. Keep your API keys for Humanloop and the model providers private.
Create a new Next.js project. This command will create a new directory named `my-ai-app` and set up a basic Next.js application inside it. ```bash title="npm" npx create-next-app@latest my-ai-app ``` ```bash title="pnpm" pnpm create next-app@latest my-ai-app ``` ```bash title="yarn" yarn create next-app@latest my-ai-app ``` Navigate to the newly created directory: ```bash cd my-ai-app ``` To follow this guide, you'll also need Node.js 18+ installed on your machine. Install `ai`, `@ai-sdk/react`, and `@ai-sdk/openai`, along with other necessary dependencies. ```bash title="npm" npm install ai @ai-sdk/react @ai-sdk/openai zod npm install -D @types/node tsx typescript ``` ```bash title="pnpm" pnpm add ai @ai-sdk/react @ai-sdk/openai zod pnpm add -D @types/node tsx typescript ``` ```bash title="yarn" yarn add ai @ai-sdk/react @ai-sdk/openai zod yarn add -D @types/node tsx typescript ``` Add a `.env.local` file to your project with your Humanloop and OpenAI API keys. ```bash touch .env.local ``` ``` HUMANLOOP_API_KEY= OPENAI_API_KEY= ```
## Full code If you'd like to immediately try out the full example, you can copy and paste the code below and run the app. ```bash title="npm" wordWrap npm install ai @ai-sdk/react @ai-sdk/openai zod npm install -D @types/node tsx typescript npm install @vercel/otel @opentelemetry/api @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation ``` ```bash title="pnpm" wordWrap pnpm add ai @ai-sdk/react @ai-sdk/openai zod pnpm add -D @types/node tsx typescript pnpm add @vercel/otel @opentelemetry/api @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation ``` ```bash title="yarn" wordWrap yarn add ai @ai-sdk/react @ai-sdk/openai zod yarn add -D @types/node tsx typescript yarn add @vercel/otel @opentelemetry/api @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation ``` ```plaintext title=".env.local" wordWrap highlight={4-6} HUMANLOOP_API_KEY= OPENAI_API_KEY= OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel OTEL_EXPORTER_OTLP_PROTOCOL=http/json OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=" # Humanloop API key ``` ```typescript title="instrumentation.ts" import { registerOTel } from "@vercel/otel"; export function register() { registerOTel({ serviceName: "humanloop-ai-sdk-agent", }); } ``` ```typescript title="app/api/chat/route.ts" maxLines=20 import { openai } from "@ai-sdk/openai"; import { streamText, tool } from "ai"; import { z } from "zod"; export const maxDuration = 30; export async function POST(req: Request) { const { messages } = await req.json(); const result = streamText({ model: openai("gpt-4o"), messages, experimental_telemetry: { isEnabled: true, metadata: { "humanloop.directoryPath": "path/to/directory", }, }, tools: { weather: tool({ description: "Get the weather in a location (fahrenheit)", parameters: z.object({ location: z.string().describe("The location to get the weather for"), }), execute: async ({ location }) => { const temperature = Math.round(Math.random() * (90 
- 32) + 32); return { location, temperature, }; }, }), }, }); return result.toDataStreamResponse(); } ```

```typescript title="app/page.tsx" maxLines=50
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    maxSteps: 5,
  });

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(m => (
        <div key={m.id} className="whitespace-pre-wrap">
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.parts.map((p, idx) =>
            p.type === "tool-invocation" || p.type === "source" ? (
              <pre key={idx}>{JSON.stringify(p, null, 2)}</pre>
            ) : p.type === "text" ? (
              <span key={idx}>{p.text}</span>
            ) : (
              <span key={idx}>{p.reasoning}</span>
            )
          )}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          className="fixed bottom-0 w-full max-w-md p-2 mb-8 border border-gray-300 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
```
```bash title="npm" npm run dev ``` ```bash title="pnpm" pnpm run dev ``` ```bash title="yarn" yarn dev ``` ## Create the agent ### Create a Route Handler We start with a backend route that handles a chat request and streams back a response from a model. This model can call a function to get the weather in a given location. ```typescript title="app/api/chat/route.ts" maxLines=50 import { openai } from "@ai-sdk/openai"; import { streamText, tool } from "ai"; import { z } from "zod"; export const maxDuration = 30; export async function POST(req: Request) { const { messages } = await req.json(); const result = streamText({ model: openai("gpt-4o"), messages, tools: { weather: tool({ description: "Get the weather in a location (fahrenheit)", parameters: z.object({ location: z.string().describe("The location to get the weather for"), }), execute: async ({ location }) => { const temperature = Math.round(Math.random() * (90 - 32) + 32); return { location, temperature, }; }, }), }, }); return result.toDataStreamResponse(); } ``` ### Wire up the UI Now that you have a Route Handler that can query an LLM, it's time to set up your frontend. The AI SDK's [UI](https://sdk.vercel.ai/docs/ai-sdk-ui) package abstracts the complexity of a chat interface into one hook, `useChat`. Update your root page to show a chat interface and provide a user message input. The `maxSteps` prop allows the model to take multiple "steps" for a given generation, using tool calls to refine its response. ```typescript title="app/page.tsx" maxLines=50 'use client'; import { useChat } from '@ai-sdk/react'; export default function Chat() { const { messages, input, handleInputChange, handleSubmit } = useChat({ maxSteps: 5, }); return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.parts.map((p, idx) =>
            p.type === "tool-invocation" || p.type === "source" ? (
              <pre key={idx}>{JSON.stringify(p, null, 2)}</pre>
            ) : p.type === "text" ? (
              <span key={idx}>{p.text}</span>
            ) : (
              <span key={idx}>{p.reasoning}</span>
            )
          )}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}
```

### Run the agent

Start your app and give the agent a try.

```bash title="npm"
npm run dev
```

```bash title="pnpm"
pnpm run dev
```

```bash title="yarn"
yarn dev
```
## Log to Humanloop The agent works and is capable of function calling. However, we rely on inputs and outputs to reason about the behavior. Humanloop logging allows you to observe the steps taken by the agent, which we will demonstrate below. We'll use Vercel AI SDK's built-in OpenTelemetry tracing to log to Humanloop. ### Set up OpenTelemetry Install dependencies. ```bash wordWrap title="npm" npm install @vercel/otel @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation ``` ```bash wordWrap title="pnpm" pnpm add @vercel/otel @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation ``` ```bash wordWrap title="yarn" yarn add @vercel/otel @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation ``` Create a file called `instrumentation.ts` in your root or /src directory and add the following code: ```typescript title="instrumentation.ts" import { registerOTel } from "@vercel/otel"; export function register() { registerOTel({ serviceName: "humanloop-ai-sdk-agent", }); } ``` Configure the [OpenTelemetry exporter](https://opentelemetry.io/docs/specs/otel/protocol/exporter/) to forward logs to Humanloop. ```plaintext highlight={4-6} title=".env.local" wordWrap HUMANLOOP_API_KEY= OPENAI_API_KEY= OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel OTEL_EXPORTER_OTLP_PROTOCOL=http/json OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=" # Humanloop API key ``` ### Trace AI SDK calls The telemetry metadata associates Logs with your Files on Humanloop. We will use a Humanloop [Prompt](/docs/explanation/prompts) to log LLM calls, and a Humanloop [Flow](/docs/explanation/flows) to group related generation calls into a trace. The `humanloop.directoryPath` specifies the path to a Directory where your Files and Logs will be located. 
```typescript title="app/api/chat/route.ts" highlight={13-19} maxLines=50 import { openai } from "@ai-sdk/openai"; import { streamText, tool } from "ai"; import { z } from "zod"; export const maxDuration = 30; export async function POST(req: Request) { const { messages } = await req.json(); const result = streamText({ model: openai("gpt-4o"), messages, experimental_telemetry: { isEnabled: true, metadata: { "humanloop.directoryPath": "path/to/directory", }, }, tools: { weather: tool({ description: "Get the weather in a location (fahrenheit)", parameters: z.object({ location: z.string().describe("The location to get the weather for"), }), execute: async ({ location }) => { const temperature = Math.round(Math.random() * (90 - 32) + 32); return { location, temperature, }; }, }), }, }); return result.toDataStreamResponse(); } ``` Restart your app, and have a conversation with the agent. ### Explore logs on Humanloop Now you can explore your logs on the [Humanloop platform](https://app.humanloop.com), and see the steps taken by the agent during your conversation. ## Debugging If you run into any issues, add OpenTelemetry debug logging to ensure the Exporter is working correctly. ```bash title="npm" npm install @opentelemetry/api ``` ```bash title="pnpm" pnpm add @opentelemetry/api ``` ```bash title="yarn" yarn add @opentelemetry/api ``` ```typescript title="instrumentation.ts" import { registerOTel } from "@vercel/otel"; import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api"; diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG); export function register() { registerOTel({ serviceName: "humanloop-ai-sdk-agent", }); } ``` ## Next steps Logging is the first step to observing your AI product. Read these guides to learn more about evals on Humanloop: * Add [monitoring Evaluators](/docs/v5/guides/observability/monitoring) to evaluate Logs as they're made against a File. 
* See evals in action in our [tutorial on evaluating an agent](/docs/v5/tutorials/agent-evaluation).

# Observability in Node.js

> Add logging to your Vercel AI SDK application with Humanloop.

Add Humanloop observability to a tool-calling chat agent built with the Vercel AI SDK. This guide builds on the AI SDK's [Node.js example](https://sdk.vercel.ai/docs/getting-started/nodejs). Looking for Next.js? See the [guide here](/docs/v5/integrations/vercel-ai-sdk/agent-nextjs-observability).

## Prerequisites

Create a Humanloop Account

1. [Create an account](https://app.humanloop.com/signup) or [log in](https://app.humanloop.com/login) to Humanloop 2. Get a Humanloop API key from [Organization Settings](https://app.humanloop.com/account/api-keys).

Add an OpenAI API Key

If you're the first person in your organization, you'll need to add an API key to a model provider. 1. Go to OpenAI and [grab an API key](https://platform.openai.com/api-keys). 2. In Humanloop [Organization Settings](https://app.humanloop.com/account/api-keys) set up OpenAI as a model provider. Using the Prompt Editor will use your OpenAI credits in the same way that the OpenAI playground does. Keep your API keys for Humanloop and the model providers private.
Start by creating a new directory for your project and initializing it: ```bash title="npm" mkdir my-ai-agent cd my-ai-agent npm init -y ``` ```bash title="pnpm" mkdir my-ai-agent cd my-ai-agent pnpm init ``` ```bash title="yarn" mkdir my-ai-agent cd my-ai-agent yarn init -y ``` To follow this guide, you'll also need Node.js 18+ installed on your machine. Install `humanloop`, `ai`, and `@ai-sdk/openai`, the AI SDK's OpenAI provider, along with other necessary dependencies. ```bash title="npm" npm install humanloop ai @ai-sdk/openai zod dotenv npm install -D @types/node tsx typescript ``` ```bash title="pnpm" pnpm add humanloop ai @ai-sdk/openai zod dotenv pnpm add -D @types/node tsx typescript ``` ```bash title="yarn" yarn add humanloop ai @ai-sdk/openai zod dotenv yarn add -D @types/node tsx typescript ``` Add a `.env` file to your project with your Humanloop and OpenAI API keys. ```bash touch .env ``` ``` HUMANLOOP_API_KEY= OPENAI_API_KEY= ```
## Full code If you'd like to immediately try out the full example, you can copy and paste the code below and run the file. ```bash title="npm" wordWrap npm install humanloop ai @ai-sdk/openai zod dotenv npm install -D @types/node tsx typescript npm install dotenv @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node ``` ```bash title="pnpm" wordWrap pnpm add humanloop ai @ai-sdk/openai zod dotenv pnpm add -D @types/node tsx typescript pnpm add dotenv @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node ``` ```bash title="yarn" wordWrap yarn add humanloop ai @ai-sdk/openai zod dotenv yarn add -D @types/node tsx typescript yarn add dotenv @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node ``` ```plaintext highlight={4-6} title=".env" wordWrap HUMANLOOP_API_KEY= OPENAI_API_KEY= OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel OTEL_EXPORTER_OTLP_PROTOCOL=http/json OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=" # Humanloop API key ``` ```typescript title="agent.ts" maxLines=20 import { openai } from "@ai-sdk/openai"; import { CoreMessage, streamText, tool } from "ai"; import { z } from "zod"; import * as readline from "node:readline/promises"; import { NodeSDK } from "@opentelemetry/sdk-node"; import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node"; import dotenv from "dotenv"; dotenv.config(); const sdk = new NodeSDK({ instrumentations: [getNodeAutoInstrumentations()], }); sdk.start(); async function exit() { console.log("Assistant: Shutting down..."); await sdk.shutdown(); process.exit(0); } const terminal = readline.createInterface({ input: process.stdin, output: process.stdout, }); const messages: CoreMessage[] = [ { role: "system", content: "You are a helpful assistant. 
If the user asks you to exit, you should exit the program.", }, ]; async function main() { while (true) { const userInput = await terminal.question("You: "); if (userInput === "exit") { break; } messages.push({ role: "user", content: userInput }); const result = streamText({ model: openai("gpt-4o"), messages, maxSteps: 5, experimental_telemetry: { isEnabled: true, metadata: { "humanloop.directoryPath": "Vercel AI/Weather", }, }, tools: { weather: tool({ description: "Get the weather in a location (in Celsius)", parameters: z.object({ location: z .string() .describe("The location to get the weather for"), }), execute: async ({ location }) => ({ location, temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C }), }), }, }); let fullResponse = ""; process.stdout.write("\nAssistant: "); for await (const delta of result.textStream) { fullResponse += delta; process.stdout.write(delta); } process.stdout.write("\n\n"); messages.push({ role: "assistant", content: fullResponse }); } await exit(); } main().catch(console.error); ``` ```bash title="npm" npx tsx agent.ts ``` ```bash title="pnpm" pnpm tsx agent.ts ``` ```bash title="yarn" yarn tsx agent.ts ``` ## Create the agent We start with a simple chat agent capable of function calling. ```typescript title="agent.ts" maxLines=50 import { openai } from "@ai-sdk/openai"; import { CoreMessage, streamText, tool } from "ai"; import { z } from "zod"; import * as readline from "node:readline/promises"; async function exit() { console.log("Assistant: Shutting down..."); process.exit(0); } const terminal = readline.createInterface({ input: process.stdin, output: process.stdout, }); const messages: CoreMessage[] = [ { role: "system", content: "You are a helpful assistant. 
If the user asks you to exit, you should exit the program.", }, ]; async function main() { while (true) { const userInput = await terminal.question("You: "); if (userInput === "exit") { break; } messages.push({ role: "user", content: userInput }); const result = streamText({ model: openai("gpt-4o"), messages, maxSteps: 5, tools: { weather: tool({ description: "Get the weather in a location (in Celsius)", parameters: z.object({ location: z .string() .describe("The location to get the weather for"), }), execute: async ({ location }) => ({ location, temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C }), }), }, }); let fullResponse = ""; process.stdout.write("\nAssistant: "); for await (const delta of result.textStream) { fullResponse += delta; process.stdout.write(delta); } process.stdout.write("\n\n"); messages.push({ role: "assistant", content: fullResponse }); } await exit(); } main().catch(console.error); ``` This agent can provide weather updates for a user-provided location. ```plaintext $ npx tsx agent.ts You: What's the weather like in London? Assistant: The current temperature in London is 20°C. You: exit Assistant: Shutting down... ``` ## Log to Humanloop The agent works and is capable of function calling. However, we rely on inputs and outputs to reason about the behavior. Humanloop logging allows you to observe the steps taken by the agent, which we will demonstrate below. We'll use Vercel AI SDK's built-in OpenTelemetry tracing to log to Humanloop. ### Set up OpenTelemetry Install dependencies. 
```bash title="npm" npm install dotenv @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node ``` ```bash title="pnpm" pnpm add dotenv @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node ``` ```bash title="yarn" yarn add dotenv @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node ``` Configure the [OpenTelemetry exporter](https://opentelemetry.io/docs/specs/otel/protocol/exporter/) to forward logs to Humanloop. ```plaintext highlight={4-6} title=".env" wordWrap HUMANLOOP_API_KEY= OPENAI_API_KEY= OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel OTEL_EXPORTER_OTLP_PROTOCOL=http/json OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=" # Humanloop API key ``` ### Trace AI SDK calls Vercel AI SDK will now forward OpenTelemetry logs to Humanloop. The telemetry metadata associates Logs with your Files on Humanloop. The `humanloop.directoryPath` specifies the path to a Directory where your Files and Logs will be located. ```typescript title="agent.ts" maxLines=50 highlight={6-16,20,51-57} import { openai } from "@ai-sdk/openai"; import { CoreMessage, streamText, tool } from "ai"; import { z } from "zod"; import * as readline from "node:readline/promises"; import { NodeSDK } from "@opentelemetry/sdk-node"; import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node"; import dotenv from "dotenv"; dotenv.config(); const sdk = new NodeSDK({ instrumentations: [getNodeAutoInstrumentations()], }); sdk.start(); async function exit() { console.log("Assistant: Shutting down..."); await sdk.shutdown(); process.exit(0); } const terminal = readline.createInterface({ input: process.stdin, output: process.stdout, }); const messages: CoreMessage[] = [ { role: "system", content: "You are a helpful assistant. 
If the user asks you to exit, you should exit the program.", }, ]; async function main() { while (true) { const userInput = await terminal.question("You: "); if (userInput === "exit") { break; } messages.push({ role: "user", content: userInput }); const result = streamText({ model: openai("gpt-4o"), messages, maxSteps: 5, experimental_telemetry: { isEnabled: true, metadata: { "humanloop.directoryPath": "Vercel AI/Weather", }, }, tools: { weather: tool({ description: "Get the weather in a location (in Celsius)", parameters: z.object({ location: z .string() .describe("The location to get the weather for"), }), execute: async ({ location }) => ({ location, temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C }), }), }, }); let fullResponse = ""; process.stdout.write("\nAssistant: "); for await (const delta of result.textStream) { fullResponse += delta; process.stdout.write(delta); } process.stdout.write("\n\n"); messages.push({ role: "assistant", content: fullResponse }); } await exit(); } main().catch(console.error); ``` ### Run the agent ```bash title="npm" npx tsx agent.ts ``` ```bash title="pnpm" pnpm tsx agent.ts ``` ```bash title="yarn" yarn tsx agent.ts ``` Have a conversation with the agent, and try asking about the weather in a city (in Celsius or Fahrenheit). When you're done, type `exit` to close the program. ### Explore logs on Humanloop Now you can explore your logs on the [Humanloop platform](https://app.humanloop.com), and see the steps taken by the agent during your conversation. You can see below the full trace of prompts and tool calls that were made. ## Debugging If you run into any issues, add OpenTelemetry debug logging to ensure your Exporter is working correctly. 
```bash title="npm"
npm install @opentelemetry/api
```

```bash title="pnpm"
pnpm add @opentelemetry/api
```

```bash title="yarn"
yarn add @opentelemetry/api
```

```typescript title="agent.ts"
import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";

diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);
```

## Next steps

Logging is the first step to observing your AI product. Read these guides to learn more about evals on Humanloop:

* Add [monitoring Evaluators](/docs/v5/guides/observability/monitoring) to evaluate Logs as they're made against a File.
* See evals in action in our [tutorial on evaluating an agent](/docs/v5/tutorials/agent-evaluation).

# Integrating Humanloop

> Best practices for integrating Humanloop into your software development life cycle

Humanloop is the central registry for your AI features. It versions the key artifacts — the Prompts, Tools, Flows, and Datasets — while capturing logs so you can run evals and monitor performance, helping you build reliable AI applications. Humanloop is flexible in how you integrate it into your applications. You can:

* Store the artifacts in your codebase or within Humanloop
* Execute AI logic in code or on Humanloop
* Run evals in your runtime or on Humanloop

This facilitates collaboration on AI features with non-technical subject matter experts while maintaining your software development practices.

## Choosing an integration approach

Your integration approach will depend on the complexity of your project, the structure of your team, and your deployment requirements.

### UI vs Code Prompt Management

You should choose one of two ways of managing prompts and AI artifacts: "UI first" or "code first". You'll likely want the UI-first approach if your domain experts are involved in prompt development.

With the UI-first approach, you create and iterate on your Prompts, Tools, Flows and Datasets in the Humanloop UI. Prompts can be synced back to your codebase or called directly via the API.
This is ideal for situations where you have subject matter experts or project managers who are primarily responsible for the "how it works" of your AI features.

**Benefits:**

* ✅ Faster iteration, which can be decoupled from code releases
* ✅ Easier collaboration with subject matter experts

**Drawbacks:**

* ❌ Does not support complex prompt chains or flows

With the code-first approach, you define and maintain prompts, tools and agents in code and then log to Humanloop.

**Benefits:**

* ✅ Full flexibility of code for developing your prompts or complex agents
* ✅ Maintains a point-in-time snapshot of both code and prompts
* ✅ Maintains your existing CI/CD practices

**Drawbacks:**

* ❌ Harder for domain experts to collaborate
* ❌ Less convenient for iterating on prompts and running evaluations
* ❌ Tightly coupled to code releases, which means slower iteration

### Proxy vs Direct Calls

You have two options for calling your Prompts: "Proxy via Humanloop" or "Direct Calls + Logging". You'll likely want the proxy approach if you want to centralize your logging and versioning, and the direct-call approach if you want to minimize latency.

With the proxy approach, your calls to large language models go via Humanloop, which forwards requests to model providers and logs the results automatically.

**Benefits:**

* ✅ Simplifies integration
* ✅ Centralized logging and versioning
* ✅ Easier hot-swapping of Prompts

**Drawbacks:**

* ❌ Adds a slight latency overhead
* ❌ Adds Humanloop as a dependency to your critical path

With direct calls, your app calls the model provider directly, then logs results to Humanloop.

**Benefits:**

* ✅ No extra latency
* ✅ Can call custom providers and services

**Drawbacks:**

* ❌ Requires additional logging logic

The rest of this guide gives more details on the different approaches.

## Prompt Management

Prompt management is the process of creating, updating, and versioning your Prompts, Tools, Flows and Datasets.
AI artifacts should be version-controlled just like application code. However, unlike application code, the people who best understand how these artifacts should work are often domain experts, not developers. Humanloop provides a UI-based environment for domain expert involvement, while still letting you sync everything back to your codebase and version control.

### Versioning

Humanloop automatically tracks all versions of your Prompts, Tools, Flows, Agents and Datasets, whether created in the UI or in code. Each version is uniquely identified by hashing its parameters: Humanloop computes this hash from the template, the temperature and other parameters, and associates Logs accordingly. Every time a Prompt is called or an LLM response is logged, the resulting [Log](/docs/v5/concepts/logs) is associated with a specific version. This versioning enables:

* **Evaluation:** comparing the performance of different versions
* **Auditability:** reviewing changes over time
* **Deployment control:** managing rollbacks and staged releases

## Create and manage your Prompts in code or in the UI

Humanloop supports both UI- and code-based prompt management, giving you flexibility for your team's needs. You can always mix both approaches — starting in the UI and syncing to code, or vice versa.

### Use code-first management if:

* **You are creating a complex agent or workflow.** If an LLM call is embedded within a structured workflow with dependencies on the schemas before and after the step, managing it in code is usually more practical.
* **You are creating your Prompts dynamically.** For example, if you are dynamically setting the template or available tools based on app state or user inputs, you'll find it easier to set this in code. Note that with template variables (`{{ variable }}`) and Jinja syntax, you should put the dynamic state as inputs into the template.
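The note above about template variables can be made concrete with a toy sketch. This is a hypothetical illustration of how double-curly placeholders resolve to query-time inputs (it is not Humanloop's actual renderer, which also supports richer Jinja syntax):

```typescript
// Toy interpolation of {{ variable }} placeholders.
// Hypothetical sketch only: not Humanloop's actual template renderer.
function renderTemplate(template: string, inputs: Record<string, string>): string {
  // Replace each {{ name }} (whitespace-tolerant) with the matching input value.
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_m: string, name: string) => inputs[name] ?? "");
}

// Dynamic state goes in as inputs; the template itself stays static and versionable.
console.log(renderTemplate("Write a song about {{topic}}", { topic: "space travel" }));
// → Write a song about space travel
```

Keeping the dynamic state in `inputs` rather than baked into the template string is what lets each template version be hashed, compared, and evaluated independently of the data flowing through it.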
### Use the UI for AI development if:

* **Your domain experts are responsible for prompt iteration.** The UI enables non-technical team members to contribute effectively.
* **You want an interactive environment for prompt engineering.** Your domain experts can load in examples, replay interesting datapoints, and run evaluations with ease.
* **You're iterating on existing Prompts.** Even if the Prompt was initially developed in code, the UI is a more convenient place to iterate on it.

## Syncing Humanloop to your codebase

If Humanloop is the **source of truth** and you want the state of your Prompts reflected in your codebase, you can sync the latest versions back. Each file can be serialized and stored in version control. The [`.prompt` format](/docs/v5/reference/prompt-file-format) is a human-readable representation of your Prompts that can be stored in git, and all other files can be serialized as JSON. You can use the [List Files](/docs/v5/api-reference/files/list-files) API to fetch the latest versions of your files and store them in your codebase.

{/* TODO: add some guides on how to sync */}
{/* ### How to Sync */}
{/* **Export from Humanloop:** You can manually download or programmatically fetch the latest `.prompt` files using the API. */}
{/* **Use the CLI (Coming Soon):** Humanloop will offer a CLI tool to fetch and sync the latest state of your files effortlessly. */}
{/* For detailed instructions, refer to Guide */}

## Decoupling AI updates from your software lifecycle

For many applications, you'll want to update your AI behavior faster than your traditional software release process allows. Humanloop lets you iterate on prompts separately while maintaining control over which versions are used in production.

### Environment labels

Humanloop supports **environment labels** (e.g., `"development"`, `"staging"`, `"production"`) that allow you to deploy specific prompt versions without modifying code. How it works: 1.
When calling/fetching your Prompts in code, specify the environment, e.g. `production`, instead of a fixed version ID. 2. In the UI (or via the API), promote a new version to the desired environment. 3. Your application now uses the updated Prompt without any code changes ✨

## Calling your Prompts, Agents and Flows

You have two options for calling LLMs with your prompts in production:

### 1. Use the Humanloop Proxy for simplicity

Proxying your LLM calls via Humanloop simplifies logging, versioning, and provider management. Humanloop provides a **unified API** that forwards requests to the LLM provider and automatically logs the results.

**How it works:**

1. You call the Humanloop API (e.g. `Prompts.call()`) instead of each model provider.
2. Humanloop fetches/creates the prompt, calls the provider, logs the result, and returns it.

✅ **Easier integration** – fetches the latest version, calls the provider, and logs the result in one call
✅ **Unified interface** – easy swapping between model providers
✅ **Automatic versioning** – ensures all interactions are versioned
⚠️ **Adds slight latency** – due to the extra network hop
⚠️ **Adds to the critical path** – for uptime-critical applications you may want an enterprise SLA

Using the Humanloop proxy can accelerate development, since you can swap out models or adjust prompt templates on the platform side without changing code.

### 2. Direct call and logging

With this approach, you call the model provider directly and log the results manually to Humanloop. This minimizes latency and supports custom models.

**How it works:**

1. You fetch the latest version of the prompt from Humanloop (or keep it in code).
2. You send the requests to OpenAI, Anthropic, etc.
3. You log responses separately to Humanloop.

{/* TODO: add a link to the logging docs */}
{/* You can use the decorators [TODO: link] in the SDK to simplify the logging */}
{/* TODO: I'd like here to be able to explain how we can make this convenient with fetching the latest version of the prompt */}

* ✅ **No extra network hop** – ideal for latency-sensitive use cases
* ✅ **Supports proprietary/custom models**
* ⚠️ **Requires manual logging** – you must track metadata yourself

### Choosing the Right Approach

For most teams, **proxying via Humanloop is the easiest choice**. If performance is critical, use **direct calls** but ensure proper logging.

| **Use Case**                         | **Best Approach**          |
| ------------------------------------ | -------------------------- |
| Fast iteration & centralized logging | **Proxy via Humanloop**    |
| Low-latency, mission-critical apps   | **Direct calls + logging** |
| Using a custom/proprietary model     | **Direct calls + logging** |

## Running Evals

Given the non-deterministic nature of LLMs and the challenging, subjective tasks they're applied to, evaluation is integral to the development of AI features. Evaluations are where [Evaluators](/docs/v5/concepts/evaluators) are run on the Logs of your Prompts, Flows and Agents. The Logs can be production data, or created by running a [Dataset](/docs/v5/concepts/datasets) through the application you're testing.

### Evaluators in the Humanloop runtime versus your runtime

Evaluators judge the performance of your Prompts, Flows and Agents. They can be defined on Humanloop as a Python function, an LLM call, or as "human" feedback. Your Evaluators and Datasets can be stored in code or on Humanloop, and Evaluations can be run in code or through the UI. Evaluators defined within the Humanloop UI can be executed in the Humanloop runtime, whereas Evaluators defined in your code can be executed in your runtime with the results posted back to Humanloop. This provides flexibility for supporting more complex evaluation workflows. Our recommendation is to store them on Humanloop where possible so that you can benefit from:

* ✅ Parallel execution
* ✅ Automatic logging of results
* ✅ Versioning of evaluators
* ✅ Integration with your existing CI/CD
* ✅ Easy triggering of evals from the UI or from your CI/CD
* ✅ Re-use of your evaluators as online monitoring evaluators

{/* TODO!
*/}
{/* ## CI/CD Automation */}
{/* Humanloop can be integrated into CI/CD to ensure **AI regressions are caught before deployment**. This helps teams continuously improve prompts and models. */}
{/* For a detailed guide, see [**Humanloop CI/CD Setup**](https://www.notion.so/Doc-Integrating-Humanloop-1a0b02063acd80efa2e3fae864e1aafb?pvs=21). */}

## What's Next?

You should now have an understanding of the flexibility of Humanloop's integration patterns, which let subject matter experts and developers collaborate on AI features.

{/* This guide covers: */}
{/* - **Prompt Management:** Where to store your AI artifacts and sync with git */}
{/* - **Runtime:** how to call models and log results */}
{/* - **Evaluations:** How to run evals on your prompts */}
{/* - **CI/CD automation**: how to automate your testing and deployment workflows */}

# Humanloop Files

> Discover how Humanloop manages your files, with version control and collaboration to enable you to evaluate and fine-tune your models.

Files represent the core building blocks of your AI features on Humanloop. They exist within a flexible filesystem in an Organization.

### Function Files

[**Prompts**](./prompts)
Prompts define a task for a Large Language Model. [**Evaluators**](./evaluators)
Evaluators judge the output of Prompts, Tools, Flows, or other Evaluators. [**Tools**](./tools)
Tools extend Prompts with access to external data sources and enable them to take action. [**Flows**](./flows)
Flows are orchestrations of Prompts, Tools, and other code, enabling evaluation and improvement of complete AI pipelines.

### Static Files

[**Datasets**](./datasets)
Datasets are collections of Datapoints used for evaluation and fine-tuning.

***

## File properties

### Files are managed in the UI or code

Files can be managed in the [Humanloop UI](https://app.humanloop.com/), or via the [API](/docs/api-reference/). Product teams and their Subject Matter Experts (SMEs) may prefer using the UI for convenience, whereas AI Teams and Engineers may prefer to use the API for greater control and customization.

### Files are version-controlled

Files have immutable Versions that are uniquely determined by the parameters that characterize the system's behavior. For example, a Prompt Version is determined by the prompt template, base model, and hyperparameters. Within the Humanloop Editor and via the API, you can commit new Versions of a File, view the history of changes, and revert to a previous version via deployments.

### Files have a serialized form

All Files can be exported and imported in a serialized form. For example, Prompts are serialized to our [.prompt file format](/docs/reference/prompt-file-format). This allows technical teams to maintain the source of truth within their existing version control systems, such as Git.

### Files support deployments

You can tag File Versions with specific Environments and target these Environments via the UI and API to facilitate robust deployment workflows.

### Function Files are callable

Function Files can be called when using the Humanloop runtime, or logged to when using your own runtime. *Example:* When a Prompt is called, Humanloop acts as a proxy to the [model provider](/docs/reference/models#providers), then logs and returns the output. If you manage the model calls yourself, the results can be logged to the Prompt File. Using the Humanloop runtime is easier and lets you run files directly in the UI, while managing your own runtime gives you more control.

### Function Files have Logs

Every time a Function File is called, a [Log](/docs/explanation/logs) is created.
Logs can also be posted via the API. *** | File | Versioned | Callable | Logs | | ------------------------------------------ | --------- | -------- | ---- | | [Prompts](/docs/explanation/prompts) | ✅ | ✅ | ✅ | | [Evaluators](/docs/explanation/evaluators) | ✅ | ✅ | ✅ | | [Tools](/docs/explanation/tools) | ✅ | ✅ | ✅ | | [Flows](/docs/explanation/flows) | ✅ | ✅ | ✅ | | [Datasets](/docs/explanation/datasets) | ✅ | ❌ | ❌ | # Prompts > Discover how Humanloop manages prompts, with version control and rigorous evaluation for better performance. A Prompt on Humanloop defines the instructions and configuration for guiding a Large Language Model (LLM) to perform a specific task. Each change in any of the following properties creates a new Version of the Prompt: * the **template** such as `Write a song about {{topic}}`.
For chat models, the template contains an array of messages.

* the **model**, e.g. `gpt-4o`
* the **parameters** to the model such as `temperature`, `max_tokens`, `top_p`
* any **tools** available to the model

A Prompt is callable: if you supply the necessary inputs, it will return a response from the model. Inputs are defined in the template through the double-curly bracket syntax, e.g. `{{topic}}`, and the value of each variable must be supplied when you call the Prompt to create a generation.

This separation of concerns, keeping configuration separate from the query-time data, is crucial for enabling you to experiment with different configurations and evaluate any changes. The Prompt stores the configuration and the query-time data in [Logs](./logs), which can then be used to create Datasets for evaluation purposes.

Note that we use a capitalized "[Prompt](/docs/explanation/prompts)" to refer to the entity in Humanloop, and a lowercase "prompt" to refer to the general concept of input to the model.

```jsx
---
model: gpt-4o
temperature: 1.0
max_tokens: -1
provider: openai
endpoint: chat
---
Write a song about {{topic}}
```

## Versioning

Versioning your Prompts enables you to track how adjustments to the template or parameters influence the model's responses. This is crucial for iterative development, as you can pinpoint which configuration produces the most relevant or accurate outputs for your use cases. A Prompt File will have multiple Versions as you iterate on different models, templates, or parameters, but each version should perform the same task and generally be interchangeable with one another.

### When to create a new Prompt File

You should create a new Prompt File for each different 'task to be done' with an LLM. Each of these tasks can have its own separate Prompt File: *Writing Copilot*, *Personal Assistant*, *Summarizer*, etc.
Many users find value in creating a 'playground' Prompt where they can freely experiment without risking damage to their other Prompts or cluttering their workspace.

## Using Prompts

Prompts are callable as an API, allowing you to provide query-time data such as input values or user messages, and receive the model's text output in response.

Prompts can also be used without proxying through Humanloop to the model provider. Instead, you can call the model directly and explicitly log the results to your Prompt.

## Serialization

The [.prompt file format](../reference/prompt-file-format) is a serialized representation of a Prompt Version, designed to be human-readable and suitable for integration into version control systems alongside code. The format is heavily inspired by [MDX](https://mdxjs.com/), with model and parameters specified in a YAML header alongside a JSX-inspired syntax for chat templates.

```jsx Chat
---
model: gpt-4o
temperature: 1.0
max_tokens: -1
provider: openai
endpoint: chat
---
You are a friendly assistant.
```

```jsx Completion
---
model: claude-2
temperature: 0.7
max_tokens: 256
top_p: 1.0
provider: anthropic
endpoint: complete
---
Autocomplete the sentence.

Context: {{context}}

{{sentence}}
```

# Evaluators

> Learn about LLM Evaluation using Evaluators. Evaluators are functions that can be used to judge the output of Prompts, Tools or other Evaluators.

The core entity in the Humanloop evaluation framework is an **[Evaluator](/docs/explanation/evaluators)** - a function you define which takes an LLM-generated log as an argument and returns a **judgment**. The judgment is typically either a boolean or a number, indicating how well the model performed according to criteria you determine based on your use case.

Evaluators can be leveraged for [Monitoring](../observability/overview) your live AI application, as well as for [Evaluations](../evaluation/overview) to benchmark different versions of your AI application against each other pre-deployment.
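At its simplest, an Evaluator is just a function over a single log. The sketch below assumes the log is represented as a dictionary with a `latency` field in seconds; that shape is illustrative only, not the Humanloop schema:

```python
def latency_check(log: dict) -> bool:
    """Judge a single log: pass if the generation completed within 5 seconds."""
    return log["latency"] < 5.0

print(latency_check({"latency": 1.2}))  # True
print(latency_check({"latency": 7.5}))  # False
```

The boolean returned here is the judgment; a numeric score (e.g. a 1-10 rating) works the same way.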
## Sources of Judgement

Currently, you can define three different Evaluator sources on Humanloop:

* **Code** - using simple, deterministic, rules-based judgments against attributes like cost, token usage, latency, regex rules on the output, etc. These are generally fast and cheap to run at scale.
* **AI** - using other foundation models to provide judgments on the output. This allows for more qualitative and nuanced judgments for a fraction of the cost of human judgments.
* **Human** - getting gold standard judgments from either end users of your application, or internal domain experts. This can be the most expensive and slowest option, but also the most reliable.

## Online Monitoring versus Offline Evaluation

Evaluators can be deployed on Humanloop to support both testing new versions of your Prompts and Tools during development and monitoring live apps that are already in production.

### Online Monitoring

Evaluators are run against the [Logs](./logs) generated by your AI applications. Typically, they are used to monitor deployed model performance over time and check for drift or degradation in performance. The Evaluator in this case only takes a single argument - the `log` generated by the model. The Evaluator is expected to return a judgment based on the Log, which can be used to trigger alerts or other actions in your monitoring system. See our [Monitoring guides](../observability/overview) for more details.

### Offline Evaluations

Offline Evaluators are combined with predefined [**Datasets**](./datasets) in order to evaluate your application as you iterate in your prompt engineering workflow, or to test for regressions in a CI environment. A test Dataset is a collection of **Datapoints**, which are roughly analogous to unit tests or test cases in traditional programming. Each datapoint specifies inputs to your model and (optionally) some target data.
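To make the datapoint idea concrete, here is a sketch of a target-based code Evaluator. The field layout of the `log` and `testcase` dictionaries below is an assumed shape for illustration, not the Humanloop schema:

```python
def exact_match(log: dict, testcase: dict) -> bool:
    """Compare the generated output against the datapoint's target answer."""
    return log["output"].strip() == testcase["target"]["output"].strip()

log = {"output": "Paris "}
testcase = {"inputs": {"question": "What is the capital of France?"},
            "target": {"output": "Paris"}}
print(exact_match(log, testcase))  # True
```

Unlike a monitoring Evaluator, this one receives both the generated log and the datapoint that produced it, so it can judge the output against the expected target.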
When you run an offline evaluation, a Log needs to be generated using the inputs of each Datapoint and the version of the application being evaluated. Evaluators then need to be run against each Log to provide judgements, which are then aggregated to provide an overall score for the application. Evaluators in this case take the generated `Log` and the `testcase` datapoint that gave rise to it as arguments. See our guides on [creating Datasets](/docs/evaluation/guides/create-dataset) and [running Evaluations](/v5/evaluation/guides/run-evaluation) for more details. ## Humanloop runtime versus your runtime Evaluations require the following to be generated: 1. Logs for the datapoints. 2. Evaluator results for those generated logs. Evaluators which are defined within the Humanloop UI can be executed in the Humanloop runtime, whereas Evaluators defined in your code can be executed in your runtime and the results posted back to Humanloop. This provides flexibility for supporting more complex evaluation workflows. ## Return types Evaluators apply judgment to Logs. This judgment can be of the following types: * **Boolean** - A true/false judgment. * **Number** - A numerical judgment, which can act as a rating or score. * **Select** - One of a predefined set of options. One option must be selected. * **Multi-select** - Any number of a predefined set of options. None, one, or many options can be selected. * **Text** - A free-form text judgment. Code and AI Evaluators can return either **Boolean** or **Number** judgments. Human Evaluators can return **Number**, **Select**, **Multi-select**, or **Text** judgments. # Tools > Discover how Humanloop manages tools for use with large language models (LLMs) with version control and rigorous evaluation for better performance. Tools on Humanloop are used to extend [Prompts](/docs/explanation/prompts) with access to external data sources and enable them to take action. 
## Function calling with LLMs The most capable Large Language Models (LLMs), including models from [OpenAI](https://platform.openai.com/docs/guides/function-calling) and [Anthropic](https://docs.anthropic.com/en/docs/build-with-claude/tool-use), support function calling. You can provide these models with definitions of the available tools. The model then decides whether to call a tool, and which parameters to use. Tools and their schemas can be easily managed and version controlled on Humanloop. This is especially valuable when iterating on tool definitions in the Humanloop Editor, as you can make changes to your schema and directly see how the changes impact the model's output. ```js { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "name": "Location", "description": "The city and state, e.g. San Francisco, CA" }, "unit": { "type": "string", "name": "Unit", "enum": ["celsius", "fahrenheit"] } }, "required": ["location"] } } ``` ## Integrations Humanloop also offers pre-built tools with integrations to popular services like Google Search and Pinecone. Tools using these integrations can run automatically on Humanloop, with their results appearing in both the UI and API responses. Some Tools can be called directly from within prompt templates. In that case, the tool's output is automatically inserted into the prompt before it's sent to the model. This makes it easy to include dynamic information, such as search results or database queries. # Flows > Humanloop Flows trace and evaluate complex AI workflows, from LLM agents to retrieval-augmented generation (RAG). By unifying all components, Flows provide the context needed to debug and iterate with confidence. ## Introduction LLM-powered systems are multi-step processes, leveraging information sources, delegating computation to tools, or iterating with the LLM to generate the final answer. 
Looking at the inputs and output of such a system is not enough to reason about its behavior. Flows address this by tracing all components of the feature, unifying Logs in a comprehensive view of the system. ## Basics To integrate Flows in your project, add the SDK flow decorator on the entrypoint of your AI feature. ```python maxLines=50 @humanloop.flow(path="QA Agent/Answer Question") def call_agent(question: str) -> str: # A simple question answering agent ... return answer ``` ```typescript maxLines=50 const callAgent = humanloop.flow({ path: "QA Agent/Answer Question", callable: async (question: string) => { // A simple question answering agent ... return answer } }) ``` The decorator will capture the inputs and output of the agent on Humanloop. You can then start evaluating the system's performance through [code](/docs/v5/guides/evals/run-evaluation-api) or through the [platform UI](/docs/v5/guides/evals/run-evaluation-ui). ## Tracing Additional Logs can be added to the trace to provide further insight into the system's behavior. On Humanloop, a trace is the collection of Logs associated with a Flow Log. ```python maxLines=50 title="Question answering agent" @humanloop.tool(path="QA Agent/Search Wikipedia") def search_wikipedia(query: str) -> dict: """LLM function calls this to search Wikipedia.""" ... @humanloop.prompt(path="QA Agent/Call Model") def call_model(messages: list[dict]) -> dict: """Interact with the LLM model.""" ... @humanloop.flow( path="QA Agent/Answer Question", attributes={"version": "v1", "wikipedia": True} ) def call_agent(question: str) -> str: """A simple question answering agent.""" ... ``` The agent makes multiple provider calls to refine the response to the question. It makes function calls to `search_wikipedia` to retrieve additional information from an external source. Calling the other functions inside `call_agent` creates Logs and adds them to the trace created by `call_agent`. 
```typescript maxLines=50 title="Question answering agent" const searchWikipedia = humanloop.tool({ path: "QA Agent/Search Wikipedia", callable: async (query: string) => { // LLM function calls this to search Wikipedia ... } }) const callModel = humanloop.prompt({ path: "QA Agent/Call Model", callable: async (messages: string[]) => { // Interact with the LLM model ... } }) const callAgent = humanloop.flow({ path: "QA Agent/Answer Question", attributes: { version: "v1", wikipedia: true }, callable: async (question: string) => { // A simple question answering agent ... } }) ``` The agent makes multiple provider calls to refine the response to the question. It makes function calls to `searchWikipedia` to retrieve additional information from an external source. Calling the other functions inside `callAgent` creates Logs and adds them to the trace created by `callAgent`. ### Manual Tracing If you don't want to use decorators, first create a Flow Log, then pass its `id` when creating Logs you want to add to the trace. ```python title="Tracing via API" maxLines=50 highlight={17,22-26} def call_agent(question: str) -> str: trace_id = humanloop.flows.log( name="QA Agent/Answer Question", flow={ "attributes": { "version": "v1", "wikipedia": True } }, inputs={"question": question} ).id llm_output = humanloop.prompts.call( name="QA Agent/Answer", prompt={...}, messages=[...], parent_trace_id=trace_id ) ... humanloop.flows.update_log( log_id=trace_id, output=answer, log_status="complete" ) ``` ```typescript title="Tracing via API" maxLines=50 highlight={17,22-25} async function callAgent(question: string) { const traceId = (await humanloop.flows.log({ name: "QA Agent/Answer Question", flow: { attributes: { version: "v1", wikipedia: true } }, inputs: { question } })).id; const llmOutput = await humanloop.prompts.call({ name: "QA Agent/Answer", prompt: {...}, messages: [...], parentTraceId: traceId }); ... 
await humanloop.flows.updateLog(traceId, { output: answer, logStatus: "complete" }); } ``` ## Versioning Any data you pass into `attributes` will contribute to the version of the Flow. If you pass in a new value, the version will be updated. ```python maxLines=50 title="Question answering agent" highlight={3} @humanloop.flow( path="QA Agent/Answer Question", attributes={"version": "v1", "wikipedia": True} ) def call_agent(question: str) -> str: """A simple question answering agent.""" ... ``` ```typescript maxLines=50 title="Question answering agent" highlight={3} const callAgent = humanloop.flow({ path: "QA Agent/Answer Question", attributes: { version: "v1", wikipedia: true }, callable: async (question: string) => { // A simple question answering agent ... } }) ``` ## Completing Flow Logs Traces must be marked as complete once all relevant Logs have been added. The `flow` decorator will mark a trace as complete when the function returns. [Monitoring Evaluators](/docs/explanation/evaluators) only run on Flow Logs once the log is completed. Completing the Flow Log signals its Evaluators that no other Logs will arrive. Unlike other Logs, Evaluators added to Flows can access all Logs inside a trace: ```python title="Monitoring Evaluator on Humanloop" def count_logs_evaluator(log): """Count the number of Logs in a trace.""" if log["children"]: # Use the `children` attribute to access all Logs in the trace return 1 + sum([count_logs_evaluator(child) for child in log["children"]]) return 1 ``` A Flow Log's metrics, such as cost, latency and tokens, are computed once the Log is completed. A Flow Log's `start_time` and `end_time` are computed automatically to span the earliest start and latest end of the Logs in its trace. If `start_time` and `end_time` already span the Logs' timestamps, they are kept. ### Manual Tracing If you don't want to use the decorator, you can complete the Flow Log via the SDK directly. 
```python humanloop.flows.update_log( log_id=trace_id, log_status="complete" ) ``` ```typescript humanloop.flows.updateLog(traceId, { logStatus: "complete" }) ``` ## Evaluation Unlike [Prompts](/docs/v5/explanation/prompts), which can be evaluated via the Humanloop UI, Flows must be evaluated through code: provide a `callable` argument to the `evaluations.run` SDK method. The Logs are generated in your runtime, but Evaluators added to the Evaluation will still run on Humanloop once each Flow Log is completed. ```python maxLines=50 title="Evaluating a Flow" highlight={5} humanloop.evaluations.run( name="Comprehensiveness Evaluation", file={ "path": "QA Agent/Answer Question", "callable": call_agent, }, evaluators=[ {"path": "QA Agent/Answer Comprehensiveness"}, ], dataset={"path": "QA Agent/Simple Answers"}, ) ``` ```typescript maxLines=50 title="Evaluating a Flow" highlight={5} await humanloop.evaluations.run({ name: "Comprehensiveness Evaluation", file: { path: "QA Agent/Answer Question", callable: callAgent, }, evaluators: [ { path: "QA Agent/Answer Comprehensiveness" }, ], dataset: { path: "QA Agent/Simple Answers" }, }) ``` ## Next steps You now understand the role of Flows in the Humanloop ecosystem. Explore the following resources to apply Flows to your AI project: * Check out our [logging quickstart](/docs/v5/quickstart/set-up-logging) for an example project instrumented with Flows. * Dive into the [evals guide](/docs/v5/tutorials/agent-evaluation) to learn how to evaluate your AI project. # Datasets > Discover how Humanloop manages datasets, with version control and collaboration to enable you to evaluate and fine-tune your models. Datasets on Humanloop are collections of Datapoints used for evaluation and fine-tuning. You can think of a Datapoint as a test case for your AI application, which contains the following fields: * **Inputs**: a collection of prompt variable values that replace the `{{variables}}` defined in your prompt template during generation. 
* **Messages**: for chat models, you can have a history of chat messages that are appended to the prompt during generation. * **Target**: a value that in its simplest form describes the desired output string for the given inputs and messages history. For more advanced use cases, you can define a JSON object containing whatever fields are necessary to evaluate the model's output. ## Versioning A Dataset will have multiple Versions as you iterate on the test cases for your task. This tends to be an evolving process as you learn how your [Prompts](./prompts) behave and how users interact with your AI application in the wild. Dataset Versions are immutable and are uniquely defined by the contents of the Datapoints. When you change, add, or remove Datapoints, this constitutes a new Version. Each [Evaluation](/docs/guides/evals/run-evaluation) is linked to a specific Dataset Version, ensuring that your evaluation results are always traceable to the exact set of test cases used. ## Creating a Dataset A Dataset can be created in the following ways: * [Upload a CSV in the UI](/docs/guides/evals/upload-dataset-csv) * [Create a Datapoint from an existing Log](/docs/guides/evals/create-dataset-from-logs) * [Create a Dataset via the API](/docs/guides/evals/create-dataset-api) ## Using Datasets for Evaluations Datasets are foundational for Evaluations on Humanloop. Evaluations are run by iterating over the Datapoints in a Dataset, generating output from different versions of your AI application for each one. The Datapoints provide the specific test cases to evaluate, with each containing the input variables and optionally a target output that defines the desired behavior. When a target is specified, Evaluators can compare the generated outputs to the targets to assess how well each version performed. Get started with using Datasets for Evaluation # Logs > Logs contain the inputs and output of each time a Function File is called. 
Logs in Humanloop are records of each execution of a [Function File](/docs/explanation/files#function-files) (*Prompt*, *Tool*, *Evaluator*, or *Flow*). They capture the complete context of the execution, including `inputs`, `output`, and metadata like which [Version](/docs/explanation/files#files-are-version-controlled) was used. ## External logging When you are not using Humanloop to run Prompts (or other Function Files), but use your own infrastructure — for instance in a production setting — you can still collect and store the Logs on Humanloop. This enables you to collect interesting cases and lets you leverage Humanloop's monitoring and evaluation capabilities. # Directories > Directories can be used to group together related Files. This is useful for organizing your work as part of prompt engineering and collaboration. Directories in Humanloop serve as an organizational tool, allowing users to group related [Files](/docs/explanation/files) and structure their work logically. They function similarly to folders in a traditional file system, providing a hierarchical structure for managing work. ### Paths Every File and Directory has a unique `path` property which you can use to refer to Files or Directories. When you move a File or Directory, its path and the paths of all contained files and subdirectories will update accordingly. We recommend always referring to Files by their `id` to avoid issues related to changing paths when Files or Directories are moved. For more information on how to create and manage directories, see our [Create a Directory](/docs/development/guides/create-directory) guide. # Environments > Deployment environments enable you to control the deployment lifecycle of your Prompts and other files between development and production environments. Environments enable you to deploy different versions of your files to specific environments, allowing you to separately manage the deployment workflow between testing and production. 
With environments, you have the control required to manage the full LLM deployment lifecycle. ### Managing your environments Every organisation automatically receives a default production environment. You can create additional environments with custom names by visiting your organisation's [environments page](https://app.humanloop.com/account/environments). Only Enterprise customers can create more than one environment The environments you define for your organisation will be available for each file and can be viewed in the file's dashboard once created. ![](file:983f7e1b-b0cb-4015-92cb-360fb7287652) #### The default environment By default, the production environment is marked as the Default environment. This means that all API calls that don't explicitly target a specific environment will use this environment. You can rename the default environment on the [organisation's environments](https://app.humanloop.com/account/environments) page. Renaming the environments will take immediate effect, so ensure that this change is planned and does not disrupt your production workflows. ### Using environments Once created on the environments page, environments can be used for each file and are visible in the respective dashboards. You can deploy directly to a specific environment by selecting it in the **Deployments** section. ![](file:6be2dda8-0a63-4c4f-925e-3c4ebcef3c12) Alternatively, you can deploy to multiple environments simultaneously by deploying a version from either the Editor or the Versions table. ### Using environments via API ![](file:885dbd15-b7be-4e21-91ac-6a424f8e1d8f) You can now call the version deployed in a specific environment by including an optional additional `environment` field. An example of this field can be seen in the v5 [Prompt Call](/v5/api-reference/prompts/call-stream#request.query.environment) documentation. # Evaluate a RAG app > Evaluate a RAG application with Humanloop. 
This tutorial demonstrates how to take an existing RAG pipeline and use Humanloop to evaluate it. At the end of the tutorial you'll understand how to: 1. Run an Eval on your RAG pipeline. 2. Set up detailed logging with SDK decorators. 3. Log to Humanloop manually The full code for this tutorial is available in the [Humanloop Cookbook](https://github.com/humanloop/humanloop-cookbook/blob/main/tutorials/rag/evaluate-rag-flow.ipynb). ## Example RAG Pipeline In this tutorial we'll first implement a simple RAG pipeline to do Q\&A over medical documents without Humanloop. Then we'll add Humanloop and use it for evals. Our RAG system will have three parts: * **Dataset**: A version of the [MedQA dataset](https://huggingface.co/datasets/bigbio/med_qa) from Hugging Face. * **Retriever**: [Chroma](https://docs.trychroma.com/getting-started) as a simple local vector DB. * **Prompt**: Managed in code, populated with the user's question and retrieved context. ## Set up RAG pipeline ```bash poetry install ``` Set up environment variables: ```python from dotenv import load_dotenv import os from chromadb import chromadb from openai import OpenAI import pandas as pd load_dotenv() chroma = chromadb.Client() openai = OpenAI(api_key=os.getenv("OPENAI_KEY")) ``` Set up the Vector DB: ```python collection = chroma.get_or_create_collection(name="MedQA") knowledge_base = pd.read_parquet("../../assets/sources/textbooks.parquet") knowledge_base = knowledge_base.sample(10, random_state=42) collection.add( documents=knowledge_base["contents"].to_list(), ids=knowledge_base["id"].to_list(), ) ``` Define the Prompt: ```python model = "gpt-4o-mini" temperature = 0 template = [ { "role": "system", "content": """Answer the following question factually. Question: {{question}} Options: - {{option_A}} - {{option_B}} - {{option_C}} - {{option_D}} - {{option_E}} --- Here is some retrieved information that might be helpful. 
Retrieved data: {{retrieved_data}} --- Give your answer in 3 sections using the following format. Do not include the quotes or the brackets. Do include the "---" separators. --- --- """, } ] def populate_template(template: list, inputs: dict[str, str]) -> list: """Populate a template with input variables.""" messages = [] for i, template_message in enumerate(template): content = template_message["content"] for key, value in inputs.items(): content = content.replace("{{" + key + "}}", value) message = {**template_message, "content": content} messages.append(message) return messages ``` Define the RAG Pipeline: ```python def retrieval_tool(question: str) -> str: """Retrieve most relevant document from the vector db (Chroma) for the question.""" response = collection.query(query_texts=[question], n_results=1) retrieved_doc = response["documents"][0][0] return retrieved_doc def call_llm(**inputs): # Populate the Prompt template messages = populate_template(template, inputs) # Call OpenAI to get response chat_completion = openai.chat.completions.create( model=model, temperature=temperature, messages=messages, ) return chat_completion.choices[0].message.content def ask_question(**inputs) -> str: """Ask a question and get an answer using a simple RAG pipeline""" # Retrieve context retrieved_data = retrieval_tool(inputs["question"]) inputs = {**inputs, "retrieved_data": retrieved_data} # Call LLM return call_llm(**inputs) ``` Run the pipeline: ```python output = ask_question( **{ "question": "A 34-year-old male suffers from inherited hemophilia A. He and his wife have three unaffected daughters. What is the probability that the second daughter is a carrier of the disease?", "option_A": "0%", "option_B": "25%", "option_C": "50%", "option_D": "75%", "option_E": "100%", } ) print(output) ``` ## Run an Evaluation Now we will integrate Humanloop into our RAG pipeline to evaluate it. We will use the Humanloop SDK to run an Eval on our RAG pipeline. 
Initialize the Humanloop SDK: ```python from humanloop import Humanloop load_dotenv() humanloop = Humanloop(api_key=os.getenv("HUMANLOOP_KEY")) ``` ### Set up Evaluators Our Dataset has ground truth answers we can compare against. It's very unlikely that the AI answers are *exactly* the same as the answers but we can measure how close they are by using the "Levenshtein distance" Evaluator. The code for this Evaluator is in the cookbook. We can run the Evaluator locally. However, if we upload it to Humanloop, we get the added benefit that Humanloop can run the Evaluation for us and this can be integrated into CI/CD. ```python def upload_evaluators(): """Uploads Evaluators to Humanloop. Uploads the "Exact match", "Levenshtein", and "Reasoning" Evaluators. The "Exact match" and "Levenshtein" Evaluators are slight modifications to the examples automatically created in the "Example Evaluators" folder in Humanloop when you signed up, with some additional parsing for the output of this RAG pipeline. """ # Upload Code Evaluators for evaluator_name, file_name, return_type in [ ("Exact match", "exact_match.py", "boolean"), ("Levenshtein", "levenshtein.py", "number"), ]: with open(f"../../assets/evaluators/{file_name}", "r") as f: code = f.read() humanloop.evaluators.upsert( path=f"Evals demo/{evaluator_name}", spec={ "evaluator_type": "python", "arguments_type": "target_required", "return_type": return_type, "code": code, }, commit_message=f"New version from {file_name}", ) # Upload an LLM Evaluator humanloop.evaluators.upsert( path="Evals demo/Reasoning", spec={ "evaluator_type": "llm", "arguments_type": "target_free", "return_type": "boolean", "prompt": { "model": "gpt-4o-mini", "endpoint": "complete", "temperature": 0, "template": "An answer is shown below. The answer contains 3 sections, separated by \"---\". The first section is the final answer. The second section is an explanation. 
The third section is a citation.\n\nEvaluate if the final answer follows from the citation and the reasoning in the explanation section. Give a brief explanation/discussion. Do not make your judgment based on factuality, but purely based on the logic presented.\nOn a new line, give a final verdict of \"True\" or \"False\".\n\nAnswer:\n{{log.output}}", }, }, commit_message="Initial reasoning evaluator.", ) upload_evaluators() ``` ### Create a Dataset We upload a test dataset to Humanloop: ```python def upload_dataset_to_humanloop(): df = pd.read_json("../../assets/datapoints.jsonl", lines=True) datapoints = [row.to_dict() for _i, row in df.iterrows()][0:20] return humanloop.datasets.upsert( path="Evals demo/MedQA test", datapoints=datapoints, commit_message=f"Added {len(datapoints)} datapoints from MedQA test dataset.", ) dataset = upload_dataset_to_humanloop() ``` ### Run Eval Now that we have our Flow, our Dataset and our Evaluators we can create and run an Evaluation. ```python checks = humanloop.evaluations.run( name="Demo cookbook", file={ "path": "Evals demo/MedQA pipeline", "callable": ask_question, }, dataset={ "path": "Evals demo/MedQA test", }, evaluators=[ {"path": "Evals demo/Exact match"}, {"path": "Evals demo/Levenshtein"}, {"path": "Evals demo/Reasoning"}, {"path": "Example Evaluators/Code/Latency"}, ], ) ``` ## Add detailed logging One limitation of our Evaluation so far is that we've measured the app end-to-end but we don't know how the different components contribute to performance. If we really want to improve our app, we'll need to log the full trace of events, including separate Tool and Prompt steps: We can do this by adding logging for the Prompt and Tool steps within the Flow using Humanloop's Python decorators. If you're using a different language, you can still log to Humanloop via the API. 
Skip to the "Logging with the API" section below or check out our [guide](https://humanloop.com/docs/v5/guides/observability/logging-through-api) for more details.

```python
@humanloop.tool(path="Evals demo/Retrieval tool")
def retrieval_tool_decorated(question: str) -> str:
    return retrieval_tool(question)

@humanloop.prompt(path="Evals demo/LLM call")
def call_llm_decorated(**inputs):
    return call_llm(**inputs)

@humanloop.flow(path="Evals demo/MedQA pipeline")
def ask_question_decorated(**inputs):
    retrieved_data = retrieval_tool_decorated(inputs["question"])
    inputs = {**inputs, "retrieved_data": retrieved_data}
    return call_llm_decorated(**inputs)
```

Note that the decorated functions are given new names so they wrap, rather than shadow, the implementations defined earlier. You can now run the pipeline as before and the full trace will be logged to Humanloop.

```python
output = ask_question_decorated(
    **{
        "question": "A 34-year-old male suffers from inherited hemophilia A. He and his wife have three unaffected daughters. What is the probability that the second daughter is a carrier of the disease?",
        "option_A": "0%",
        "option_B": "25%",
        "option_C": "50%",
        "option_D": "75%",
        "option_E": "100%",
    }
)
print(output)
```

## Re-run the Evaluation

These decorated functions can similarly be used to run an Eval on the pipeline. This will allow you to evaluate the pipeline and see the detailed logs for each step in the pipeline. Let's change from `gpt-4o-mini` to `gpt-4o` and re-run the Eval. By passing the same `name` to the `humanloop.evaluations.run(...)` call, we'll add another run to the previously-created Evaluation on Humanloop. This will allow us to compare the two Runs side-by-side. 
```python model = "gpt-4o" checks = humanloop.evaluations.run( name="Demo cookbook", file={ "path": "Evals demo/MedQA pipeline", "callable": ask_question_decorated, "type": "flow", }, dataset={ "path": "Evals demo/MedQA test", "datapoints": datapoints, }, evaluators=[ {"path": "Evals demo/Exact match"}, {"path": "Evals demo/Levenshtein"}, {"path": "Evals demo/Reasoning"}, {"path": "Example Evaluators/Code/Latency"}, ], ) ``` Viewing our Evaluation on Humanloop, we can see that our newly-added Run with `gpt-4o` has been added to the Evaluation. On the **Stats** tab, we can see that `gpt-4o` scores better for our "Exact match" (and "Levenshtein") metrics, but has higher latency. ![Eval runs](file:7f5fe889-e929-4dc5-9b74-cd2015d71e72) Perhaps surprisingly, `gpt-4o` performs worse according to our "Reasoning" Evaluator. ## Logging with the API Above, we've let the SDK handle logging and versioning for us. However, you can also log data to Humanloop using the API directly. This can be useful if you want to perform some post-processing on the data before logging it, or if you want to include additional metadata in the logs or versions. We'll now demonstrate how to extend your Humanloop logging with more fidelity; creating Tool, Prompt, and Flow Logs to give you full visibility. We add additional logging steps to our `ask_question` function to represent the full trace of events on Humanloop. (Note that the `run_id` and `source_datapoint_id` arguments are optional, and are included here for use in the Evaluation workflow demonstrated later.) 
```python
from datetime import datetime
import inspect


def ask_question_with_logging(
    run_id: str | None = None,
    source_datapoint_id: str | None = None,
    **inputs,
) -> str:
    """Ask a question and get an answer using a simple RAG pipeline."""
    trace = humanloop.flows.log(
        path="evals_demo/medqa-flow",
        flow={
            "attributes": {
                "prompt": {
                    "template": template,
                    "model": model,
                    "temperature": temperature,
                },
                "tool": {
                    "name": "retrieval_tool_v3",
                    "description": "Retrieval tool for MedQA.",
                    "source_code": inspect.getsource(retrieval_tool),
                },
            }
        },
        inputs=inputs,
        start_time=datetime.now(),
        run_id=run_id,
        source_datapoint_id=source_datapoint_id,
    )

    # Retrieve context
    start_time = datetime.now()
    retrieved_data = retrieval_tool(inputs["question"])
    inputs = {**inputs, "retrieved_data": retrieved_data}

    # Log the retriever information to Humanloop separately
    humanloop.tools.log(
        path="Evals demo/Retrieval tool",
        tool={
            "function": {
                "name": "retrieval_tool",
                "description": "Retrieval tool for MedQA.",
            },
            "source_code": inspect.getsource(retrieval_tool),
        },
        output=retrieved_data,
        trace_parent_id=trace.id,
        start_time=start_time,
        end_time=datetime.now(),
    )

    # Populate the Prompt template
    start_time = datetime.now()
    messages = populate_template(template, inputs)

    # Call OpenAI to get a response
    chat_completion = openai.chat.completions.create(
        model=model,
        temperature=temperature,
        messages=messages,
    )
    output = chat_completion.choices[0].message.content

    # Log the prompt information to Humanloop separately
    humanloop.prompts.log(
        path="evals_demo/medqa-answer",
        prompt={
            "model": model,
            "temperature": temperature,
            "template": template,
        },
        inputs=inputs,
        output=output,
        output_message=chat_completion.choices[0].message,
        trace_parent_id=trace.id,
        start_time=start_time,
        end_time=datetime.now(),
    )

    # Close the trace
    humanloop.flows.update_log(
        log_id=trace.id,
        output=output,
        trace_status="complete",
    )
    return output
```

The logging we've added here is similar to the SDK decorators we used earlier.
## Run an Evaluation using the API To orchestrate your own Evaluations, you can pass in `run_id` and `source_datapoint_id` to the `humanloop.flows.log(...)` call to associate Logs with a specific Run and Datapoint. The following is an example of how you can manually create an Evaluation and Run, and log data to Humanloop using the API, giving you full control over the Evaluation process. ```python from tqdm import tqdm # Create Evaluation evaluation = humanloop.evaluations.create( name="Manual logging demo", file={"path": "Evals demo/MedQA pipeline"}, evaluators=[ {"path": "Evals demo/Exact match"}, {"path": "Evals demo/Levenshtein"}, {"path": "Evals demo/Reasoning"}, {"path": "Example Evaluators/Code/Latency"}, ], ) # Create Run run = humanloop.evaluations.create_run(id=evaluation.id, dataset={"path": "Evals demo/MedQA test"}) # Run the pipeline over the Dataset for datapoint in tqdm(datapoints): ask_question_with_logging(run_id=run.id, source_datapoint_id=datapoint.id, **datapoint.inputs) ``` You can then similarly view results on the Humanloop UI. ![Eval Logs table](file:8a5c8199-ae67-49c1-b887-bf45be89b0c5) This concludes the Humanloop RAG Evaluation walkthrough. You've learned how to integrate Humanloop into your RAG pipeline, set up logging, create Datasets, configure Evaluators, run Evaluations, and log the full trace of events including Tool and Prompt steps. # Evaluate an agent > Evaluate and improve the performance of an LLM agent. Working with LLMs is daunting: you are dealing with a black box that outputs unpredictable results. Humanloop provides tools to make your development process systematic, bringing it closer to traditional software testing and quality assurance. In this tutorial, we will use Humanloop to evaluate the quality of a chat agent's answers and demonstrate how to use results to improve the agent's performance. ## Prerequisites

Create a Humanloop Account

If you haven't already, [create an account](https://app.humanloop.com/signup) or [log in](https://app.humanloop.com/login) to Humanloop

Add an OpenAI API Key

If you're the first person in your organization, you'll need to add an API key for a model provider:

1. Go to OpenAI and [grab an API key](https://platform.openai.com/api-keys).
2. In Humanloop [Organization Settings](https://app.humanloop.com/account/api-keys), set up OpenAI as a model provider.

Using the Prompt Editor will use your OpenAI credits in the same way that the OpenAI playground does. Keep your API keys for Humanloop and the model providers private.
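To avoid pasting keys into source files, you can read them from environment variables. The helper below is our own convention, not part of the Humanloop SDK; it simply fails fast when a key is missing:

```python
import os


def require_key(name: str) -> str:
    """Read an API key from the environment, failing fast if it's unset."""
    value = os.environ.get(name, "")
    if not value:
        raise RuntimeError(f"Set {name} before running the examples")
    return value


# Usage (assumes you've exported these variables in your shell):
# humanloop = Humanloop(api_key=require_key("HUMANLOOP_API_KEY"))
```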
Install the project's dependencies:

```python
pip install humanloop openai wikipedia
```

The Humanloop SDK requires Python 3.9 or higher. Optionally, create a virtual environment to keep dependencies tidy.

Install the project's dependencies:

```typescript
npm install humanloop openai wikipedia
```
## Create the agent We will build an agent that answers questions asked by children. The agent queries Wikipedia and replies with an easy-to-understand explanation. Let's create the initial version of our agent. Add the following in a new file: ```python title="main.py" maxLines=35 from humanloop import Humanloop from openai import OpenAI from openai.types.chat.chat_completion_message import ChatCompletionMessage as Message import wikipedia import json openai = OpenAI(api_key="ADD YOUR KEY HERE") humanloop = Humanloop(api_key="ADD YOUR KEY HERE") def search_wikipedia(query: str) -> dict: """Search Wikipedia to get up-to-date information for a query.""" try: page = wikipedia.page(query) return { "title": page.title, "content": page.content, "url": page.url, } except Exception as _: return { "title": "", "content": "No results found", "url": "", } def call_model(messages: list[Message]) -> Message: """Calls the model with the given messages""" system_message = { "role": "system", "content": ( "You are an assistant that helps to answer user questions. " "You should leverage wikipedia to answer questions so that " "the information is up to date. If the response from " "Wikipedia does not seem relevant, rephrase the question " "and call the tool again. Then finally respond to the user." 
        ),
    }
    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[system_message] + messages,
        tools=[
            {
                "type": "function",
                "function": {
                    "name": "search_wikipedia",
                    "description": "Search the internet to get up to date answers for a query.",
                    "parameters": {
                        "type": "object",
                        "required": ["query"],
                        "properties": {
                            "query": {"type": "string"},
                        },
                        "additionalProperties": False,
                    },
                },
            }
        ],
    )
    return response.choices[0].message.to_dict(exclude_unset=False)


def call_agent(question: str) -> str:
    """Calls the main agent loop and returns the final result"""
    messages = [{"role": "user", "content": question}]
    # Retry for a relevant response 3 times at most
    for _ in range(3):
        response = call_model(messages)
        messages.append(response)
        if response["tool_calls"]:
            # Call wikipedia to get up-to-date information
            for tool_call in response["tool_calls"]:
                source = search_wikipedia(
                    **json.loads(tool_call["function"]["arguments"])
                )
                messages.append(
                    {
                        "role": "tool",
                        "content": json.dumps(source),
                        "tool_call_id": tool_call["id"],
                    }
                )
        else:
            # Respond to the user
            return response["content"]
    return "Could not get a relevant response after multiple attempts."


if __name__ == "__main__":
    result = call_agent("Where does the sun go at night?")
    print(result)
```

```typescript title="main.ts" maxLines=35
import { HumanloopClient } from "humanloop";
import OpenAI from "openai";
import type { ChatCompletionMessageParam as Message } from "openai/resources";
import wikipedia from "wikipedia";
import fs from "fs";
import readline from "readline";

const openai = new OpenAI({ apiKey: "" });
const humanloop = new HumanloopClient({ apiKey: "" });

type WikiResult = {
  title: string;
  content: string;
  url: string;
};

const NO_RESULT_FOUND: WikiResult = {
  title: "",
  content: "No results found",
  url: "",
};

const searchWikipedia = async (query: string): Promise<WikiResult> => {
  try {
    const page = await wikipedia.page(query);
    if (page) {
      return {
        title: page?.title || "",
        content: (await page?.content()) || "",
        url: `https://en.wikipedia.org/wiki/${encodeURIComponent(
          page?.title || ""
        )}`,
      } as WikiResult;
    }
    return NO_RESULT_FOUND;
  } catch (error) {
    return NO_RESULT_FOUND;
  }
};

const callModel = async (messages: Array<Message>) => {
  const systemMessage: Message = {
    role: "system",
    content:
      "You are an assistant that helps to answer user questions. " +
      "You should leverage wikipedia to answer questions so that " +
      "the information is up to date. If the response from " +
      "Wikipedia does not seem relevant, rephrase the question " +
      "and call the tool again. Then finally respond to the user.",
  };
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [systemMessage, ...messages],
    tools: [
      {
        type: "function",
        function: {
          name: "search_wikipedia",
          description:
            "Search the internet to get up to date answers for a query.",
          parameters: {
            type: "object",
            required: ["query"],
            properties: {
              query: { type: "string" },
            },
            additionalProperties: false,
          },
        },
      },
    ],
  });
  return response.choices[0].message;
};

async function callAgent({ question }: { question: string }): Promise<string> {
  const messages: Message[] = [{ role: "user", content: question }];
  for (let _ = 0; _ < 3; _++) {
    const response = await callModel(messages);
    messages.push(response);
    if (response.tool_calls) {
      for (const toolCall of response.tool_calls) {
        const args = JSON.parse(toolCall.function.arguments);
        const source = await searchWikipedia(args.query);
        messages.push({
          role: "tool",
          content: JSON.stringify(source),
          tool_call_id: toolCall.id,
        });
      }
    } else {
      return response.content || "";
    }
  }
  return "Could not get a relevant response after multiple attempts.";
}

async function main() {
  const result = await callAgent({
    question: "Where does the sun go at night?",
  });
  console.log(result);
}

main();
```

Run the agent and check if it works:

```bash
python main.py
```

```plaintext
Okay! Imagine the Earth is like a big ball, and we live on it. The sun doesn't really “go” anywhere—it stays in the same spot, shining all the time. But our Earth is spinning like a top!
``` ```bash npx tsx main.ts ``` ```plaintext Okay! Imagine the Earth is like a big ball, and we live on it. The sun doesn't really “go” anywhere—it stays in the same spot, shining all the time. But our Earth is spinning like a top! ``` ## Evaluate the agent Evaluators are callables that take the Log's dictionary representation as input and return a judgment. The Evaluator's judgment should respect the `return_type` present in Evaluator's [specification](https://humanloop.com/docs/v5/api-reference/evaluators/upsert#request.body.spec). The Evaluator can take an additional `target` argument to compare the Log against. The target is provided in an Evaluation context by the validation [Dataset](/docs/v5/explanation/datasets). For more details, check out our [Evaluator explanation](/docs/v5/explanation/evaluators). Let's check if the agent respects the requirement of providing easy-to-understand answers. We will create an [Evaluation](/docs/v5/guides/evals/run-evaluation-ui) to benchmark the performance of the agent. An Evaluation requires a [Dataset](/docs/v5/guides/explanations/datasets) and at least one [Evaluator](/docs/v5/guides/explanations/evaluators). ### Create LLM judge We will use an LLM judge to automatically evaluate the agent's responses. We will define the Evaluator in code, but you can also [manage Evaluators in the UI](/docs/v5/guides/evals/llm-as-a-judge). Add this to your `main` function: ```python title="main.py" if __name__ == "__main__": humanloop.evaluators.upsert( path="QA Agent/Comprehension", spec={ "arguments_type": "target_free", "return_type": "number", "evaluator_type": "llm", "prompt": { "model": "gpt-4o", "endpoint": "complete", "template": ( "You must decide if an explanation is simple " "enough to be understood by a 5-year old. " "A better explanation is shorter and uses less jargon. 
" "Rate the answer from 1 to 10, where 10 is the best.\n" "\n\n{{log.inputs.question}}\n\n\n" "\n\n{{log.output}}\n\n" "First provide your rationale, then on a newline, " "output your judgment." ), "provider": "openai", "temperature": 0, }, }, ) ``` ```typescript title="main.ts" async function main() { await humanloop.evaluators.upsert({ path: "QA Agent/Comprehension", spec: { argumentsType: "target_free", returnType: "number", evaluatorType: "llm", prompt: { model: "gpt-4o", endpoint: "complete", template: "You must decide if an explanation is simple " + "enough to be understood by a 5-year old. " + "A better explanation is shorter and uses less jargon. " + "Rate the answer from 1 to 10, where 10 is the best.\n" + "\n\n{{log.inputs.question}}\n\n\n" + "\n\n{{log.output}}\n\n" + "First provide your rationale, then on a newline, " + "output your judgment.", provider: "openai", temperature: 0, }, }, }); ``` ### Add Dataset Create a file called `dataset.jsonl` and add the following: ```jsonl title="dataset.jsonl" maxLines=5 {"inputs": {"question": "Why is the sky blue?"}} {"inputs": {"question": "Where does the sun go at night?"}} {"inputs": {"question": "Why do birds fly?"}} {"inputs": {"question": "What makes rainbows?"}} {"inputs": {"question": "Why do we have to sleep?"}} {"inputs": {"question": "How do fish breathe underwater?"}} {"inputs": {"question": "Why do plants need water?"}} {"inputs": {"question": "How does the moon stay in the sky?"}} {"inputs": {"question": "What are stars made of?"}} {"inputs": {"question": "Why do we have seasons?"}} {"inputs": {"question": "How does the TV work?"}} {"inputs": {"question": "Why do dogs wag their tails?"}} {"inputs": {"question": "What makes cars go?"}} {"inputs": {"question": "Why do we need to brush our teeth?"}} {"inputs": {"question": "What do ants eat?"}} {"inputs": {"question": "Why does the wind blow?"}} {"inputs": {"question": "How do airplanes stay in the air?"}} {"inputs": {"question": "Why does the ocean 
look so big?"}} {"inputs": {"question": "What makes the grass green?"}} {"inputs": {"question": "Why do we have to eat vegetables?"}} {"inputs": {"question": "How do butterflies fly?"}} {"inputs": {"question": "Why do some animals live in the zoo?"}} {"inputs": {"question": "How do magnets stick to the fridge?"}} {"inputs": {"question": "What makes fire hot?"}} {"inputs": {"question": "Why do leaves change color?"}} {"inputs": {"question": "What happens when we flush the toilet?"}} {"inputs": {"question": "Why do we have belly buttons?"}} {"inputs": {"question": "What makes the clouds move?"}} {"inputs": {"question": "Why do we have eyebrows?"}} {"inputs": {"question": "How do seeds turn into plants?"}} {"inputs": {"question": "Why does the moon change shape?"}} {"inputs": {"question": "Why do bees make honey?"}} {"inputs": {"question": "What makes ice melt?"}} {"inputs": {"question": "Why do we sneeze?"}} {"inputs": {"question": "How do trains stay on the tracks?"}} {"inputs": {"question": "Why do stars twinkle?"}} {"inputs": {"question": "Why can't we see air?"}} {"inputs": {"question": "What makes the Earth spin?"}} {"inputs": {"question": "Why do frogs jump?"}} {"inputs": {"question": "Why do cats purr?"}} {"inputs": {"question": "How do phones let us talk to people far away?"}} {"inputs": {"question": "Why does the moon follow us?"}} {"inputs": {"question": "What makes lightning?"}} {"inputs": {"question": "Why does it snow?"}} {"inputs": {"question": "Why do we have shadows?"}} {"inputs": {"question": "Why do boats float?"}} {"inputs": {"question": "What makes our heart beat?"}} {"inputs": {"question": "Why do some animals sleep all winter?"}} {"inputs": {"question": "Why do we have to wear shoes?"}} {"inputs": {"question": "What makes music?"}} ``` ### Run an Evaluation Add this to your `main` function: ```python title="main.py" maxLines=100 highlight={4-24} if __name__ == "__main__": # ... 
    # Read the evaluation dataset
    with open("dataset.jsonl", "r") as fp:
        dataset = [json.loads(line) for line in fp]

    humanloop.evaluations.run(
        name="QA Agent Answer Check",
        file={
            "path": "QA Agent/Agent",
            "callable": call_agent,
        },
        evaluators=[{"path": "QA Agent/Comprehension"}],
        dataset={
            "path": "QA Agent/Dataset",
            "datapoints": dataset,
        },
        workers=8,
    )
```

```typescript title="main.ts" maxLines=100 highlight={4-29}
async function main() {
  // ...

  // Read the evaluation dataset
  const dataset: any[] = [];
  const fileStream = fs.createReadStream("dataset.jsonl");
  const rl = readline.createInterface({
    input: fileStream,
    crlfDelay: Infinity,
  });
  for await (const line of rl) {
    dataset.push(JSON.parse(line));
  }

  // Run the evaluation
  await humanloop.evaluations.run({
    name: "QA Agent Answer Check",
    file: {
      path: "QA Agent/Agent",
      callable: callAgent,
    },
    evaluators: [{ path: "QA Agent/Comprehension" }],
    dataset: {
      path: "QA Agent/Dataset",
      datapoints: dataset,
    },
    concurrency: 8,
  });
}
```

Run your file and let the Evaluation finish:

```bash title="Terminal"
python main.py
```

```bash title="Terminal" maxLines=50
Navigate to your Evaluation:
https://app.humanloop.com/project/fl_9CCIoTzySPfUFeIxfYE6g/evaluations/evr_67tEc2DiR83fy9iTaqyPA/stats

Flow Version ID: flv_9ECTrfeZYno2OIj9KAqlz
Run ID: rn_67tEcDYV6mqUS86hD8vrP

Running 'Agent' over the Dataset 'Children Questions' using 8 workers
[##############--------------------------] 15/50 (30.00%) | ETA: 14

...
📊 Evaluation Results for QA Agent/Agent +------------------------+---------------------+ | | Latest | +------------------------+---------------------+ | Run ID | 67tEc | +------------------------+---------------------+ | Version ID | 9ECTr | +------------------------+---------------------+ | Added | 2024-11-19 21:49:02 | +------------------------+---------------------+ | Evaluators | | +------------------------+---------------------+ | QA Agent/Comprehension | 3.24 | +------------------------+---------------------+ ``` ```bash title="Terminal" maxLines=50 npx tsx main.ts ``` ```bash title="Terminal" maxLines=50 Navigate to your Evaluation: https://app.humanloop.com/project/fl_9CCIoTzySPfUFeIxfYE6g/evaluations/evr_67tEc2DiR83fy9iTaqyPA/stats Flow Version ID: flv_9ECTrfeZYno2OIj9KAqlz Run ID: rn_67tEcDYV6mqUS86hD8vrP Running 'Agent' over the Dataset 'Children Questions' using 8 workers [##############--------------------------] 15/50 (30.00%) | ETA: 14 ... 📊 Evaluation Results for QA Agent/Agent +------------------------+---------------------+ | | Latest | +------------------------+---------------------+ | Run ID | 67tEc | +------------------------+---------------------+ | Version ID | 9ECTr | +------------------------+---------------------+ | Added | 2024-11-19 21:49:02 | +------------------------+---------------------+ | Evaluators | | +------------------------+---------------------+ | QA Agent/Comprehension | 3.24 | +------------------------+---------------------+ ``` ## Iterate and evaluate again The score of the initial setup is quite low. Click the Evaluation link from the terminal and switch to the Logs view. You will see that the model tends to provide elaborate answers. 
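Besides eyeballing the Logs, verbosity is easy to quantify with a code Evaluator, which, as described earlier, is just a callable over the Log's dictionary representation. The sketch below uses our own arbitrary scoring heuristic (not a Humanloop built-in), returning a `number` judgment from 1 to 10 where higher is better:

```python
def brevity_score(log: dict) -> float:
    """Toy code Evaluator: rewards shorter answers.

    Receives a Log's dictionary representation and returns a `number`
    judgment between 1 and 10. Purely illustrative; the decay rate is
    an arbitrary choice.
    """
    output = log.get("output") or ""
    n_words = len(output.split())
    if n_words == 0:
        return 1.0  # empty answers score the minimum
    # Lose a point for every 25 words, bottoming out at 1.
    return max(1.0, 10.0 - n_words / 25.0)
```

A heuristic like this can run alongside an LLM judge to cheaply flag the most verbose answers.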
Let's modify the LLM prompt inside `call_model`:

```python title="main.py" maxLines=100 highlight={11-12}
def call_model(messages: list[Message]) -> Message:
    """Calls the model with the given messages"""
    system_message = {
        "role": "system",
        "content": (
            "You are an assistant that helps to answer user questions. "
            "You should leverage wikipedia to answer questions so that "
            "the information is up to date. If the response from Wikipedia "
            "does not seem relevant, rephrase the question and call the "
            "tool again. Then finally respond to the user. "
            "Formulate the response so that it is easy to understand "
            "for a 5 year old."
        ),
    }
    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[system_message] + messages,
        tools=[
            {
                "type": "function",
                "function": {
                    "name": "search_wikipedia",
                    "description": "Search the internet to get up to date answers for a query.",
                    "parameters": {
                        "type": "object",
                        "required": ["query"],
                        "properties": {
                            "query": {"type": "string"},
                        },
                        "additionalProperties": False,
                    },
                },
            }
        ],
    )
    return response.choices[0].message.to_dict(exclude_unset=False)
```

```typescript title="main.ts" maxLines=100 highlight={10-11}
const callModel = async (messages: Array<Message>) => {
  const systemMessage: Message = {
    role: "system",
    content:
      "You are an assistant that helps to answer user questions. " +
      "You should leverage wikipedia to answer questions so that " +
      "the information is up to date. If the response from " +
      "Wikipedia does not seem relevant, rephrase the question " +
      "and call the tool again. Then finally respond to the user. 
"+ "Formulate the response so that it is easy to understand " + "for a 5 year old.", }; const response = await openai.chat.completions.create({ model: "gpt-4o", messages: [systemMessage, ...messages], tools: [ { type: "function", function: { name: "search_wikipedia", description: "Search the internet to get up to date answers for a query.", parameters: { type: "object", required: ["query"], properties: { query: { type: "string" }, }, additionalProperties: false, }, }, }, ], }); return response.choices[0].message; } ``` Run the agent again and let the Evaluation finish: ```python python main.py ``` ```typescript npx tsx main.ts ``` ```bash title="Terminal" maxLines=50 Flow Version ID: flv_9ECTrfeZYno2OIj9KAqlz Run ID: rn_WnIwPSI7JFKEtwTS0l3mj Navigate to your Evaluation: https://app.humanloop.com/project/fl_9CCIoTzySPfUFeIxfYE6g/evaluations/rn_WnIwPSI7JFKEtwTS0l3mj/stats Running 'Agent' over the Dataset 'Children Questions' using 8 workers [######################------------------] 34/50 (68.00%) | ETA: 14 ... +------------------------+---------------------+---------------------+ | | Control | Latest | +------------------------+---------------------+---------------------+ | Run ID | 67tEc | WnIwP | +------------------------+---------------------+---------------------+ | Version ID | 9ECTr | 9ECTr | +------------------------+---------------------+---------------------+ | Added | 2024-11-19 22:05:17 | 2024-11-19 22:24:13 | +------------------------+---------------------+---------------------+ | Evaluators | | | +------------------------+---------------------+---------------------+ | QA Agent/Comprehension | 3.24 | 8.04 | +------------------------+---------------------+---------------------+ Change of [4.80] for Evaluator QA Agent/Comprehension ``` Click the Evaluation link again. The agent's performance has improved significantly. 
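If you run Evals in CI, you may want to turn a score change like this into a pass/fail gate. Below is a minimal sketch; the tolerance value is our own choice, and you should inspect the `checks` object returned by `humanloop.evaluations.run(...)` in your SDK version for the exact fields to feed into it:

```python
def score_regressed(previous: float, latest: float, tolerance: float = 0.25) -> bool:
    """Return True if the latest Evaluator score dropped by more than `tolerance`."""
    return (previous - latest) > tolerance


# The two Runs above scored 3.24 and then 8.04 on "QA Agent/Comprehension",
# so this change would pass the gate:
assert not score_regressed(3.24, 8.04)
```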
## Add detailed logging If you use a programming language not supported by the SDK, or want more control, see our guide on [logging through the API](/docs/v5/guides/observability/logging-through-api) for an alternative to decorators. Up to this point, we have treated the agent as a black box, reasoning about its behavior by looking at the inputs and outputs. Let's use Humanloop logging to observe the step-by-step actions taken by the agent. Modify `main.py`: ```python title="main.py" maxLines=100 highlight={1,5,10,15} @humanloop.tool(path="QA Agent/Search Wikipedia") def search_wikipedia(query: str) -> dict: ... @humanloop.prompt(path="QA Agent/Prompt") def call_model(messages: list[Message]) -> Message: ... @humanloop.flow(path="QA Agent/Agent") def call_agent(question: str) -> str: ... ``` To auto-instrument calls to OpenAI, pass the module in the Humanloop constructor: ```typescript const humanloop = new HumanloopClient({ apiKey: process.env.HUMANLOOP_API_KEY, providers: { // Pass the OpenAI module, not the initialized client OpenAI } }); ``` Modify `main.ts`: ```typescript title="main.ts" maxLines=100 highlight={20-21,26-27,32-33} const searchWikipedia = humanloop.tool({ path: "QA Agent/Search Wikipedia", version: { function: { name: "Search Wikipedia", description: "Search Wikipedia for the best article to answer a question", strict: true, parameters: { type: "object", properties: { query: { type: "string", description: "The question to search Wikipedia for", }, }, required: ["query"], }, }, }, // Wraps the initial function body callable: async ({ query }) => { ... }, }); const callModel = humanloop.prompt({ path: "QA Agent/Prompt", // Wraps the initial function body callable: async ({ messages }) => { ... }, }); const callAgent = humanloop.flow({ path: "QA Agent/Agent", // Wraps the initial function body callable: async ({ question }) => { ... }, }); ``` Evaluate the agent again. 
When it's done, head to your workspace and click the **Agent** [Flow](/docs/v5/guides/explanations/flows) on the left. Select the Logs tab from the top of the page.

The decorators divide the code into logical components, allowing you to observe the steps taken to answer a question. Every step taken by the agent creates a Log.

## Next steps

We've built a complex agentic workflow and learned how to use Humanloop to add logging to it and evaluate its performance. Take a look at these resources to learn more about evals on Humanloop:

* Learn how to [create a custom dataset](/docs/v5/guides/evals/create-dataset) for your project.
* Learn more about using [LLM Evaluators](/docs/v5/guides/evals/llm-as-a-judge) on Humanloop.

# Capture user feedback

> Collect feedback from your users to improve your AI product.

In this tutorial, we'll show how you can gather valuable insights from your users to evaluate and improve your AI product. We'll deploy a simple chat app that allows users to interact with an AI model. Later, we'll modify the source code to capture user feedback and show how these insights are used to improve the AI product.

### Prerequisites

Create a Humanloop Account

If you haven't already, [create an account](https://app.humanloop.com/signup) or [log in](https://app.humanloop.com/login) to Humanloop

Add an OpenAI API Key

If you're the first person in your organization, you'll need to add an API key for a model provider:

1. Go to OpenAI and [grab an API key](https://platform.openai.com/api-keys).
2. In Humanloop [Organization Settings](https://app.humanloop.com/account/api-keys), set up OpenAI as a model provider.

Using the Prompt Editor will use your OpenAI credits in the same way that the OpenAI playground does. Keep your API keys for Humanloop and the model providers private.
## Capture user feedback You can grab the source code used in this tutorial here: [hl-chatgpt-clone-typescript](https://github.com/humanloop/hl-chatgpt-clone-typescript) ### Clone and start a chat app server ```bash git clone https://github.com/humanloop/hl-chatgpt-clone-typescript # add Humanloop API key touch .env.local echo HUMANLOOP_API_KEY=YOUR_API_KEY >> .env.local # optionally add OpenAI key, if you haven't already in Humanloop app echo OPENAI_API_KEY=YOUR_API_KEY >> .env.local # run the app bun install bun run dev ``` ### Use the chat app Open the chat app in your browser and start chatting with the AI model. Chat Agent Every time the user presses the Send button, Humanloop receives the request and calls the AI model. The response from the model is then stored as a Log. Let's check the `api/chat/route.ts` file to see how it works. * The `path` parameter is the path to the Prompt in the Humanloop workspace. If the Prompt doesn't exist, it will be created. * The `prompt` parameter is the configuration of the Prompt. In this case we manage our Prompt in code; if the configuration of the Prompt changes, a new version of the Prompt will automatically be created on Humanloop. Prompts can alternatively be managed [directly on Humanloop](/docs/development/guides/call-prompt). * The `messages` parameter is the list of all messages exchanged between the Model and the User. To learn more about calling Prompts with the Humanloop SDK, see the [Prompt Call](/v5/api-reference/prompts/call-stream) API reference. ```typescript api/chat/route.ts const response = await humanloop.prompts.callStream({ // if Prompt doesn't exist, it will be created path: "chatgpt-clone-tutorial/customer-support-agent", prompt: { model: "gpt-4", template: [ { role: "system", content: "You are a helpful assistant.", }, ], }, // messages is a list of objects: [{role: string, content: string}, ...]. // Role is either "user", "assistant", "system", or "tool". 
messages, providerApiKeys: { // OpenAI API key, if you haven't already set it in Humanloop app openai: process.env.OPENAI_API_KEY, }, }); ``` ### Review the logs in Humanloop After chatting with the AI model, go to the Humanloop app and review the logs. Click on the `chatgpt-clone-tutorial/customer-support-agent` Prompt, then click on the Logs tab at the top of the page. You see that all the interactions with the AI model are logged here. The code will generate a new Prompt `chatgpt-clone-tutorial/customer-support-agent` in the Humanloop app. To change the path, modify the variable `PROMPT_HUMANLOOP_PATH` in the `api/chat/route.ts` file. Chat Agent ### Modify the code to capture user feedback Now, let's modify the code to start getting user feedback! Go back to the code editor and uncomment lines 174-193 in the `page.tsx` file. This snippet will add 👍 and 👎 buttons, that users can press to give feedback on the model's responses. ```typescript highlight={13-31} return (
  <div>
    <div>{message.role}</div>
    {message.content ? (
      <div>{message.content as string}</div>
    ) : (
      <div>...</div>
    )}
    {logId && (
      <div>{/* 👍 and 👎 feedback buttons; see page.tsx for the full markup */}</div>
    )}
  </div>
);
```

To understand how the feedback is captured and sent to Humanloop, let's check the `api/feedback/route.ts` file. We use the [Humanloop TypeScript SDK](https://www.npmjs.com/package/humanloop) to make calls to Humanloop. To attach user feedback, we only need three parameters:

* `parentId` is the Id of the [Log](../../explanation/logs) to which we want to attach feedback. The `page.tsx` file stores the Log Ids for model responses.
* `path` is the path to the Evaluator. In this example, we're using an example 'rating' Evaluator.
* `judgment` is the user feedback.

```typescript api/feedback/route.ts
const response = await humanloop.evaluators.log({
  // Pass the `logId` of the Prompt Log to record feedback against.
  parentId: logId,
  // Here, we're recording feedback against an example "rating" Evaluator,
  // which is of type `select` and has two possible options: "good" and "bad."
  path: "Example Evaluators/Human/rating",
  // Alternatively, we recommend specifying the Evaluator by id. This is more robust and less error-prone.
  // versionId: "ev_9WiSw2VYWjAb22duuQ";
  judgment: judgment, // user feedback
});
```

### Capture user feedback

Refresh the page in your browser and give 👍 or 👎 to the model's responses.

Chat Agent

In this tutorial, we used the 'rating' Evaluator to capture user feedback. However, different use cases and user interfaces may require various types of feedback that need to be mapped to the appropriate end-user interactions. There are broadly three important kinds of feedback:

1. **Explicit feedback**: purposeful actions taken to review the generations, for example, 'thumbs up/down' button presses.
2. **Implicit feedback**: indirect actions that may signal whether the generation was good or bad, for example, whether the user 'copied' the generation, 'saved it' or 'dismissed it' (which is negative feedback).
3. **Free-form feedback**: corrections and explanations provided by the end-user on the generation.
You should create Human Evaluators structured to capture the feedback you need. For example, a Human Evaluator with return type "text" can be used to capture free-form feedback, while a Human Evaluator with return type "multi\_select" can be used to capture user actions that provide implicit feedback. If you have not done so, you can follow our guide to [create a Human Evaluator](/docs/evaluation/guides/human-evaluator) to set up the appropriate feedback schema. ### Review the logs in Humanloop With the user feedback captured, go back to the Humanloop app and review the logs. On the Performance tab, you can see all Evaluators and their values. The user feedback is captured in the rating Evaluator ('good' for 👍 and 'bad' for 👎). Chat Agent
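For implicit feedback, the pattern is the same as the rating example above; only the Evaluator and the judgment change. Here is a hedged Python sketch: the Evaluator path and action names are hypothetical, and the `evaluators.log` call mirrors the TypeScript call shown earlier:

```python
# Hypothetical mapping of implicit UI actions to multi_select judgments.
IMPLICIT_ACTIONS = {
    "copied": "copied",        # positive signal
    "saved": "saved",          # positive signal
    "dismissed": "dismissed",  # negative signal
}


def implicit_judgment(action: str) -> list[str]:
    """Translate a UI action into a multi_select judgment list."""
    if action not in IMPLICIT_ACTIONS:
        raise ValueError(f"Unrecognized action: {action}")
    return [IMPLICIT_ACTIONS[action]]


def log_implicit_feedback(client, log_id: str, action: str) -> None:
    """Attach implicit feedback to an existing Prompt Log.

    `client` is an initialized Humanloop SDK client; the Evaluator path
    below is a placeholder for your own multi_select Human Evaluator.
    """
    client.evaluators.log(
        parent_id=log_id,
        path="Example Evaluators/Human/user-actions",  # hypothetical path
        judgment=implicit_judgment(action),
    )
```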
## Use the logs to improve your AI product

After you collect enough data, you can leverage the user feedback to improve your AI product. Navigate back to the Logs view and filter all Logs that have a 'bad' rating to review the model's responses that need improvement.

Run Evals with Dataset on Humanloop.

Click on a Log and then on the **Editor ->** button in the top right corner to open the Prompt Editor. In the Prompt Editor, you can make changes to the instructions and the model's parameters to improve the model's performance. Once you're happy with the changes, deploy the new version of the Prompt.

Run Evals with Dataset on Humanloop.

When users start interacting with the new version, compare the "good" to "bad" ratio to see if the changes have improved your users' experience.

## Next steps

Now that you've successfully captured user feedback, you can explore more ways to improve your AI product:

* If you found that your Prompt doesn't perform well, see our guide on [Comparing and Debugging Prompts](/docs/evaluation/guides/comparing-prompt-editor).
* Leverage [Code](/docs/evaluation/guides/code-based-evaluator), [AI](/docs/evaluation/guides/llm-as-a-judge) and [Human](/docs/evaluation/guides/human-evaluators) Evaluators to continuously monitor and improve your AI product.

# Run an Evaluation via the UI

> How to use Humanloop to evaluate multiple different Prompts across a Dataset.

An **Evaluation** on Humanloop leverages a [Dataset](../../explanation/datasets), a set of [Evaluators](../../explanation/evaluators) and different versions of a [Prompt](../../explanation/prompts) to compare. The Dataset contains datapoints describing the inputs (and optionally the expected results) for a given task. The Evaluators define the criteria for judging the performance of the Prompts when executed using these inputs. Prompts, when evaluated, produce [Logs](../../explanation/logs).
These Logs are then judged by the Evaluators. You can see a summary of the Evaluators' judgments to systematically compare the performance of the different Prompt versions.

### Prerequisites

* A set of [Prompt](../../explanation/prompts) versions you want to compare - see the guide on [creating Prompts](./comparing-prompt-editor).
* A [Dataset](../../explanation/datasets) containing datapoints for the task - see the guide on [creating a Dataset](./create-dataset).
* At least one [Evaluator](../../explanation/evaluators) to judge the performance of the Prompts - see the guides on creating [Code](/docs/evaluation/guides/code-based-evaluator), [AI](/docs/evaluation/guides/llm-as-a-judge) and [Human](/docs/evaluation/guides/human-evaluators) Evaluators.

## Run an Evaluation via UI

For this example, we're going to evaluate the performance of a Support Agent that responds to user queries about Humanloop's product and documentation. Our goal is to understand which base model between `gpt-4o`, `gpt-4o-mini` and `claude-3-5-sonnet-20241022` is most appropriate for this task.

### Navigate to the Evaluations tab of your Prompt

* Go to the Prompt you want to evaluate and click on the **Evaluations** tab at the top of the page.
* Click the **Evaluate** button top right to create a new Evaluation.
* Click the **+Run** button top right to create a new Evaluation Run.

Prompt Evaluations Run tab.

### Set up an Evaluation Run

* Select a Dataset using **+Dataset**.
* Add the Prompt versions you want to compare using **+Prompt**.
* Add the Evaluators you want to use to judge the performance of the Prompts using **+Evaluator**.

By default, the system will re-use Logs if they exist for the chosen Dataset, Prompts and Evaluators. This makes it easy to extend an Evaluation Run without paying the cost of re-running your Prompts and Evaluators.
If you want to force the system to re-run the Prompts against the Dataset, producing a new batch of Logs, click the regenerate button next to the Logs count.

* Click **Save**. Humanloop will start generating Logs for the Evaluation.

In progress Evaluation run

This guide assumes both the Prompt and Evaluator Logs are generated using the Humanloop runtime. For certain use cases where more flexibility is required, the runtime for producing Logs can instead live in your code - see our guide on [Logging](../../development/guides/logging), which also works with our Evaluations feature.

### Review the results

Once the Logs are produced, you can review the performance of the different Prompt versions by navigating to the **Stats** tab.

* The top spider plot provides you with a summary of the average Evaluator performance across all the Prompt versions. In our case, `gpt-4o`, although slightly slower and more expensive on average, is significantly better when it comes to **User Satisfaction**.

Evaluation Spider plot

* Below the spider plot, you can see the breakdown of performance per Evaluator.

Evaluation Evaluator stats breakdown

* To drill into and debug the Logs that were generated, navigate to the **Review** tab at the top left of the Run page. The Review view allows you to better understand performance and replay logs in our Prompt Editor.

Drill down to Evaluation Logs.

### Next Steps

* Incorporate this Evaluation process into your Prompt engineering and deployment workflow.
* Set up Evaluations where the runtime for producing Logs lives in your code - see our guide on [Logging](/docs/development/guides/log-to-a-prompt).
* Utilise Evaluations as part of your [CI/CD pipeline](/docs/evaluation/guides/cicd-integration) # Run an Evaluation via the API > In this guide, we will walk through how to programmatically evaluate multiple different Prompts to compare the quality and performance of each version. An **Evaluation** on Humanloop leverages a [Dataset](../../explanation/datasets), a set of [Evaluators](../../explanation/evaluators) and different versions of a [Prompt](../../explanation/prompts) to compare. In this guide, we use a Dataset to evaluate the performance of different Prompt versions. To learn how to evaluate Prompts without a Dataset, see the guide on [Spot-check your Logs](./spot-check-logs). ### Prerequisites * A set of [Prompt](../../explanation/prompts) versions you want to compare - see the guide on [creating Prompts](./comparing-prompt-editor). * A [Dataset](../../explanation/datasets) containing datapoints for the task - see the guide on [creating a Dataset via API](./create-dataset-via-api). * At least one [Evaluator](../../explanation/evaluators) to judge the performance of the Prompts - see the guides on creating [Code](/docs/evaluation/guides/code-based-evaluator), [AI](/docs/evaluation/guides/llm-as-a-judge) and [Human](/docs/evaluation/guides/human-evaluators) Evaluators. ## Run an Evaluation For this guide, we're going to evaluate the performance of a Support Agent that responds to user queries about Humanloop's product and documentation. Our goal is to understand which base model between `gpt-4o`, `gpt-4o-mini` and `claude-3-5-sonnet-20241022` is most appropriate for this task. ### Create a Prompt Create a Support Agent Prompt with three versions each using a different base model. ```python Python from humanloop import Humanloop humanloop = Humanloop(api_key="YOUR_API_KEY") system_message = "You are a helpful assistant. Your job is to respond to FAQ style queries about the Humanloop documentation and platform. Be polite and succinct." 
gpt_4o = humanloop.prompts.upsert( path="Run Evaluation via API/Support Agent", model="gpt-4o", endpoint="chat", template=[ { "content": system_message, "role": "system", } ], provider="openai", commit_message="gpt-4o", ) gpt_4o_mini = humanloop.prompts.upsert( path="Run Evaluation via API/Support Agent", model="gpt-4o-mini", endpoint="chat", template=[ { "content": system_message, "role": "system", } ], provider="openai", commit_message="gpt-4o-mini", ) sonnet = humanloop.prompts.upsert( path="Run Evaluation via API/Support Agent", model="claude-3-5-sonnet-20241022", endpoint="chat", template=[ { "content": system_message, "role": "system", } ], provider="anthropic", commit_message="claude-3-5-sonnet-20241022", ) # store prompt versions for later use prompt_versions = [gpt_4o.version_id, gpt_4o_mini.version_id, sonnet.version_id] ``` ```typescript TypeScript import { HumanloopClient } from "humanloop"; const humanloop = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); const systemMessage = "You are a helpful assistant. Your job is to respond to FAQ style queries about the Humanloop documentation and platform. 
Be polite and succinct."; const gpt_4o = await humanloop.prompts.upsert({ path: "Run Evaluation via API/Support Agent", model: "gpt-4o", endpoint: "chat", template: [ { "content": systemMessage, "role": "system", } ], provider:"openai", commitMessage: "gpt-4o", }) const gpt_4o_mini = await humanloop.prompts.upsert({ path: "Run Evaluation via API/Support Agent", model: "gpt-4o-mini", endpoint: "chat", template: [ { "content": systemMessage, "role": "system", } ], provider:"openai", commitMessage: "gpt-4o-mini", }) const sonnet = await humanloop.prompts.upsert({ path: "Run Evaluation via API/Support Agent", model: "claude-3-5-sonnet-20241022", endpoint: "chat", template: [ { "content": systemMessage, "role": "system", } ], provider:"anthropic", commitMessage: "claude-3-5-sonnet-20241022", }) // store prompt versions for later use const promptVersions = [gpt_4o.versionId, gpt_4o_mini.versionId, sonnet.versionId] ``` ### Create a Dataset We defined sample data that contains user messages and desired responses for the Support Agent Prompt. We will now create a Dataset with these datapoints. ```python Python humanloop.datasets.upsert( path="Run Evaluation via API/Dataset with user questions", datapoints=[ { "messages": [{ "role": "user", "content": "How do i manage my organizations API keys?", }], "target": {"answer": "Hey, thanks for your questions. Here are steps for how to achieve: 1. Log in to the Humanloop Dashboard \n\n2. Click on \"Organization Settings.\"\n If you do not see this option, you might need to contact your organization admin to gain the necessary permissions.\n\n3. Within the settings or organization settings, select the option labeled \"API Keys\" on the left. Here you will be able to view and manage your API keys.\n\n4. You will see a list of existing API keys. 
You can perform various actions, such as:\n - **Generate New API Key:** Click on the \"Generate New Key\" button if you need a new API key.\n - **Revoke an API Key:** If you need to disable an existing key, find the key in the list and click the \"Revoke\" or \"Delete\" button.\n - **Copy an API Key:** If you need to use an existing key, you can copy it to your clipboard by clicking the \"Copy\" button next to the key.\n\n5. **Save and Secure API Keys:** Make sure to securely store any new or existing API keys you are using. Treat them like passwords and do not share them publicly.\n\nIf you encounter any issues or need further assistance, it might be helpful to engage with an engineer or your IT department to ensure you have the necessary permissions and support.\n\nWould you need help with anything else?"}, }, { "messages":[{ "role": "user", "content": "Hey, can do I use my code evaluator for monitoring my legal-copilot prompt?", }], "target": {"answer": "Hey, thanks for your questions. Here are steps for how to achieve: 1. Navigate to your Prompt dashboard. \n 2. Select the `Monitoring` button on the top right of the Prompt dashboard \n 3. Within the model select the Version of the Evaluator you want to turn on for monitoring. \n\nWould you need help with anything else?"}, }, ], action="set", commit_message="Add two new questions and answers", ) ``` ```typescript TypeScript await humanloop.datasets.upsert({ path: "Run Evaluation via API/Dataset with user questions", datapoints: [{ "messages": [{ "role": "user", "content": "How do i manage my organizations API keys?", }], "target": {"answer": "Hey, thanks for your questions. Here are steps for how to achieve: 1. Log in to the Humanloop Dashboard \n\n2. Click on \"Organization Settings.\"\n If you do not see this option, you might need to contact your organization admin to gain the necessary permissions.\n\n3. Within the settings or organization settings, select the option labeled \"API Keys\" on the left. 
Here you will be able to view and manage your API keys.\n\n4. You will see a list of existing API keys. You can perform various actions, such as:\n - **Generate New API Key:** Click on the \"Generate New Key\" button if you need a new API key.\n - **Revoke an API Key:** If you need to disable an existing key, find the key in the list and click the \"Revoke\" or \"Delete\" button.\n - **Copy an API Key:** If you need to use an existing key, you can copy it to your clipboard by clicking the \"Copy\" button next to the key.\n\n5. **Save and Secure API Keys:** Make sure to securely store any new or existing API keys you are using. Treat them like passwords and do not share them publicly.\n\nIf you encounter any issues or need further assistance, it might be helpful to engage with an engineer or your IT department to ensure you have the necessary permissions and support.\n\nWould you need help with anything else?"}, }, { "messages":[{ "role": "user", "content": "Hey, can do I use my code evaluator for monitoring my legal-copilot prompt?", }], "target": {"answer": "Hey, thanks for your questions. Here are steps for how to achieve: 1. Navigate to your Prompt dashboard. \n 2. Select the `Monitoring` button on the top right of the Prompt dashboard \n 3. Within the model select the Version of the Evaluator you want to turn on for monitoring. \n\nWould you need help with anything else?"}, }], action: "set", commitMessage: "Add two new questions and answers" }); ``` ### Create an Evaluation We create an Evaluation Run to compare the performance of the different Prompts using the Dataset we just created. For this guide, we selected *Semantic similarity*, *Cost* and *Latency* Evaluators. You can find these Evaluators in the **Example Evaluators** folder in your workspace. "Semantic similarity" Evaluator measures the degree of similarity between the model's response and the expected output. The similarity is rated on a scale from 1 to 5, where 5 means very similar. 
```python Python
evaluation = humanloop.evaluations.create(
    name="Evaluation via API",
    file={
        "path": "Run Evaluation via API/Support Agent",
    },
    evaluators=[
        {"path": "Example Evaluators/AI/Semantic similarity"},
        {"path": "Example Evaluators/Code/Cost"},
        {"path": "Example Evaluators/Code/Latency"},
    ],
)

# Create a Run for each prompt version
for prompt_version in prompt_versions:
    humanloop.evaluations.create_run(
        id=evaluation.id,
        dataset={"path": "Run Evaluation via API/Dataset with user questions"},
        version={"version_id": prompt_version},
    )
```

```typescript TypeScript
const evaluation = await humanloop.evaluations.create({
  name: "Evaluation via API",
  file: {
    "path": "Run Evaluation via API/Support Agent",
  },
  evaluators: [
    {"path": "Example Evaluators/AI/Semantic similarity"},
    {"path": "Example Evaluators/Code/Cost"},
    {"path": "Example Evaluators/Code/Latency"},
  ],
});

// Create a Run for each prompt version
for (const promptVersion of promptVersions) {
  await humanloop.evaluations.createRun(evaluation.id, {
    dataset: { path: "Run Evaluation via API/Dataset with user questions" },
    version: { version_id: promptVersion },
  });
}
```

### Inspect the Evaluation stats

When the Runs are completed, you can inspect the Evaluation Stats to see a summary of the Evaluators' judgments.

```python Python
evaluation_stats = humanloop.evaluations.get_stats(
    id=evaluation.id,
)
print(evaluation_stats.report)
```

```typescript TypeScript
const evaluationStats = await humanloop.evaluations.getStats(evaluation.id);
console.log(evaluationStats.report);
```

Drill down to Evaluation Logs.

Alternatively, you can see detailed stats in the Humanloop UI. Navigate to the Prompt, click on the **Evaluations** tab at the top of the page and select the Evaluation you just created. The stats are displayed in the **Stats** tab.

Drill down to Evaluation Logs.

# Run an Evaluation using your runtime

If you choose to execute Prompts using your own runtime, you can still benefit from Humanloop Evaluations.
In the code snippet below, we run Evaluators hosted on Humanloop using Logs produced by the OpenAI client.

```python Python
import openai

openai_client = openai.OpenAI(api_key="USE_YOUR_OPENAI_API_KEY")

# create a new Humanloop prompt
prompt = humanloop.prompts.upsert(
    path="Run Evaluation via API/Support Agent my own runtime",
    model="gpt-4o",
    endpoint="chat",
    template=[
        {
            "content": "You are a helpful assistant. Your job is to respond to FAQ style queries about the Humanloop documentation and platform. Be polite and succinct.",
            "role": "system",
        }
    ],
    provider="openai",
)

# create the evaluation
evaluation = humanloop.evaluations.create(
    name="Evaluation via API using my own runtime",
    file={
        "path": "Run Evaluation via API/Support Agent my own runtime",
    },
    evaluators=[
        {"path": "Example Evaluators/AI/Semantic similarity"},
        {"path": "Example Evaluators/Code/Cost"},
        {"path": "Example Evaluators/Code/Latency"},
    ],
)

# use the Dataset created in the previous steps; `dataset` is the object
# returned by `humanloop.datasets.upsert` when you created it above
datapoints = humanloop.datasets.list_datapoints(dataset.id)

# create a run
run = humanloop.evaluations.create_run(
    id=evaluation.id,
    dataset={"version_id": dataset.version_id},
    version={"version_id": prompt.version_id},
)

# for each datapoint in the dataset, create a chat completion
for datapoint in datapoints:
    # generate a response using your own runtime
    chat_completion = openai_client.chat.completions.create(
        messages=datapoint.messages, model=prompt.model
    )
    # log the result to Humanloop
    humanloop.prompts.log(
        id=prompt.id,
        run_id=run.id,
        version_id=prompt.version_id,
        source_datapoint_id=datapoint.id,
        output_message=chat_completion.choices[0].message,
        messages=datapoint.messages,
    )
```

## Next steps

* Learn how to [set up LLM as a Judge](./llm-as-a-judge) to evaluate your AI applications.

# Upload a Dataset from CSV

> Learn how to create Datasets in Humanloop to define fixed examples for your projects, and build up a collection of input-output pairs for evaluation and fine-tuning.
[Datasets](../../explanation/datasets) are a collection of input-output pairs that can be used to evaluate your Prompts, Tools or even Evaluators. ### Prerequisites You should have an existing [Prompt](../../explanation/prompts) on Humanloop with a variable defined with our double curly bracket syntax `{{variable}}`. If not, first follow our guide on [creating a Prompt](../prompts/create-prompt). In this example, we'll use a Prompt that categorises user queries about Humanloop's product and docs by which feature they relate to. An example Prompt with a variable `{{query}}`. ## Steps To create a dataset from a CSV file, we'll first create a CSV in Google Sheets that contains values for our Prompt variable `{{query}}` and then upload it to a Dataset on Humanloop. ### Create a CSV file. * In our Google Sheets example below, we have a column called `query` which contains possible values for our Prompt variable `{{query}}`. You can include as many columns as you have variables in your Prompt template. * There is additionally a column called `target` which will populate the target output for the classifier Prompt. In this case, we use simple strings to define the target. * More complex Datapoints that contain `messages` and structured objects for targets are supported, but are harder to incorporate into a CSV file as they tend to be hard-to-read JSON. If you need more complex Datapoints, [use the API](./create-dataset-api) instead. A CSV file in Google Sheets defining query and target pairs for our Classifier Prompt. ### Export the Google Sheet to CSV In Google Sheets, choose **File** → **Download** → **Comma-separated values (.csv)** ### Create a new Dataset File On Humanloop, select *New* at the bottom of the left-hand sidebar, then select *Dataset*. Create a new File from the sidebar on Humanloop. 
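If your examples already live in code rather than a spreadsheet, you can produce the same CSV with Python's standard library and upload it in the next step. This is a minimal sketch: the column names mirror the `query`/`target` layout described above, and the datapoints are made-up examples.

```python
import csv

# One column per Prompt variable ({{query}}) plus a `target` column
# holding the expected output -- the same layout as the Google Sheets example.
rows = [
    {"query": "How do I run an Evaluation?", "target": "Offline Evaluations"},
    {"query": "Where do I find my API keys?", "target": "User Management"},
]

with open("dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["query", "target"])
    writer.writeheader()
    writer.writerows(rows)
```

The resulting `dataset.csv` can then be uploaded via **Upload CSV** exactly like a sheet exported from Google Sheets.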
### Click **Upload CSV**

First name your dataset when prompted in the sidebar, then select the **Upload CSV** button and drag and drop the CSV file you created above using the file explorer. You will then be prompted to provide a commit message to describe the initial state of the dataset.

Uploading a CSV file to create a dataset.

### Map the CSV columns

Map each of the CSV columns to one of `input`, `message`, `target`. To avoid uploading a column of your CSV, you can map it to the `exclude` option.

To map a column to Messages, its values need to be in a specific format. An example of this can be seen in our example Dataset or below:

```
"[{""role"": ""user"", ""content"": ""Tell me about the weather""}]"
```

Once you have mapped your columns, press **Extend Current Dataset**.

Mapping columns of a CSV into specific values of a dataset.

### Review your uploaded datapoints

You'll see the input-output pairs that were included in the CSV file and you can review the rows to inspect and edit the individual Datapoints.

Inspect the Dataset created from the CSV file.

### Commit the dataset

Click the **Commit** button at the top of the Dataset editor and fill in a commit message. Press **Commit** again.

Commit a Dataset.

Your dataset is now uploaded and ready for use.

## Next steps

🎉 Now that you have Datasets defined in Humanloop, you can leverage our [Evaluations](./overview) feature to systematically measure and improve the performance of your AI applications. See our guides on [setting up Evaluators](./llm-as-a-judge) and [Running an Evaluation](./run-evaluation) to get started.

For different ways to create datasets, see the links below:

* [Create a Dataset from existing Logs](./create-dataset-from-logs) - useful for curating Datasets based on how your AI application has been behaving in the wild.
* [Upload via API](./create-dataset-api) - useful for uploading more complex Datasets that may have nested JSON structures, which are difficult to represent in tabular .CSV format, and for integrating with your existing data pipelines.

# Create a Dataset via the API

> Learn how to create Datasets in Humanloop to define fixed examples for your projects, and build up a collection of input-output pairs for evaluation and fine-tuning.

[Datasets](../../explanation/datasets) are a collection of input-output pairs that can be used to evaluate your Prompts, Tools or even Evaluators.

### Prerequisites

If you are using the SDK, the only prerequisite is to have the SDK installed and configured. If you are using the API directly, you will need to have an API key.

First you need to install and initialize the SDK. If you have already done this, skip to the next section. Open up your terminal and follow these steps:

1. Install the Humanloop SDK:

```python
pip install humanloop
```

```typescript
npm install humanloop
```

2. Initialize the SDK with your Humanloop API key (you can get it from the [Organization Settings page](https://app.humanloop.com/account/api-keys)).

```python
from humanloop import Humanloop

humanloop = Humanloop(api_key="")

# Check that the authentication was successful
print(humanloop.prompts.list())
```

```typescript
import { HumanloopClient, Humanloop } from "humanloop";

const humanloop = new HumanloopClient({ apiKey: "YOUR_API_KEY" });

// Check that the authentication was successful
console.log(await humanloop.prompts.list());
```

## Steps

Using the API is a great way to integrate Humanloop with your existing data pipeline, or to do a one-off upload of a more complex Dataset that is hard to represent in a CSV file, such as one that contains an array of messages and JSON targets.
### Post data to the Datasets API

We first define some sample data that contains user messages and desired responses from our [Support Agent Prompt](./create-dataset-from-logs) and upload it by calling the `POST /datasets` endpoint (the `datasets.upsert` method in the SDK).

### Inspect the uploaded Dataset

After uploading the data, in your Humanloop workspace you will now see a Dataset called `Support Query Ground Truth` (or whatever value was in `path`) with your sample data.

Inspect the Dataset uploaded via API.

## Next steps

🎉 Now that you have Datasets defined in Humanloop, you can leverage our [Evaluations](./overview) feature to systematically measure and improve the performance of your AI applications. See our guides on [setting up Evaluators](./llm-as-a-judge) and [Running an Evaluation](./run-evaluation) to get started.

For different ways to create datasets, see the links below:

* [Create a Dataset from existing Logs](./create-dataset-from-logs) - useful for curating Datasets based on how your AI application has been behaving in the wild.
* [Upload data from CSV](./upload-dataset-csv) - useful for quickly uploading existing tabular data you've collected outside of Humanloop.

# Create a Dataset from existing Logs

> Learn how to create Datasets in Humanloop to define fixed examples for your projects, and build up a collection of input-output pairs for evaluation and fine-tuning.

[Datasets](../../explanation/datasets) are a collection of input-output pairs that can be used to evaluate your Prompts, Tools or even Evaluators. This guide will show you how to create Datasets in Humanloop from your Logs.

### Prerequisites

You should have an existing [Prompt](../../explanation/prompts) on Humanloop and already generated some [Logs](../../explanation/logs). Follow our guide on [creating a Prompt](../prompts/create-prompt).
## Steps

To create a Dataset from existing Logs:

### Navigate to the **Logs** of your Prompt

Our Prompt in this example is a Support Agent that answers user queries about Humanloop's product and docs:

Navigate to the Logs table of your Prompt.

### Select a subset of the Logs to add

Filter Logs on a criterion of interest, such as the version of the Prompt used, then multi-select Logs. In the menu in the top right of the page, select **Add to Dataset**.

Filter and select logs of interest.

### Add to a new Dataset

Provide a name for the new Dataset and click **Create** (or you can click **add to existing Dataset** to append the selection to an existing Dataset). Then provide a suitable commit message describing the datapoints you've added.

Create a new dataset from logs.

You will then see the new Dataset appear at the same level in the filesystem as your Prompt.

## Next steps

🎉 Now that you have Datasets defined in Humanloop, you can leverage our [Evaluations](./overview) feature to systematically measure and improve the performance of your AI applications. See our guides on [setting up Evaluators](./llm-as-a-judge) and [Running an Evaluation](./run-evaluation) to get started.

For different ways to create datasets, see the links below:

* [Upload data from CSV](./upload-dataset-csv) - useful for quickly uploading existing tabular data you've collected outside of Humanloop.
* [Upload via API](./create-dataset-api) - useful for uploading more complex Datasets that may have nested JSON structures, which are difficult to represent in tabular .CSV format, and for integrating with your existing data pipelines.

# Set up a code Evaluator

> Learn how to create a code Evaluator in Humanloop to assess the performance of your AI applications. This guide covers setting up an offline evaluator, writing evaluation logic, and using the debug console.
A code [Evaluator](../../explanation/evaluators) is a Python function that takes a generated [Log](../../explanation/logs) (and optionally a testcase [Datapoint](../../explanation/datasets) if comparing to expected results) as input and returns a **judgement**. The judgement is in the form of a boolean or number that measures some criteria of the generated Log defined within the code. Code Evaluators provide a flexible way to evaluate the performance of your AI applications, allowing you to re-use existing evaluation packages as well as define custom evaluation heuristics. We support a fully featured Python environment; details on the supported packages can be found in the [environment reference](/docs/v5/reference/python-environment) ### Prerequisites You should have an existing [Prompt](../../explanation/prompts) to evaluate and already generated some [Logs](../../explanation/logs). Follow our guide on [creating a Prompt](../../development/guides/create-prompt). In this example, we'll reference a Prompt that categorises a user query about Humanloop's product and docs by which feature it relates to. An example Prompt with a variable `{{query}}`. ## Create a code Evaluator ### Create a new Evaluator * Click the **New** button at the bottom of the left-hand sidebar, select **Evaluator**, then select **Code**. Create code evaluator. * Give the Evaluator a name when prompted in the sidebar, for example `Category Validator`. ### Define the Evaluator code After creating the Evaluator, you will automatically be taken to the code editor. For this example, our Evaluator will check that the feature category returned by the Prompt is from the list of allowed feature categories. We want to ensure our categoriser isn't hallucinating new features. * Make sure the **Mode** of the Evaluator is set to **Online** in the options on the left. 
* Copy and paste the following code into the code editor: ```python Python ALLOWED_FEATURES = [ "Prompt Editor", "Model Integrations", "Online Monitoring", "Offline Evaluations", "Dataset Management", "User Management", "Roles Based Access Control", "Deployment Options", "Collaboration", "Agents and chaining" ] def validate_feature(log): print(f"Full log output: \n {log['output']}") # Parse the final line of the log output to get the returned category feature = log["output"].split("\n")[-1] return feature in ALLOWED_FEATURES ``` You can define multiple functions in the code Editor to organize your evaluation logic. The final function defined is used as the main Evaluator entry point that takes the Log argument and returns a valid judgement. ### Debug the code with Prompt Logs * In the debug console beneath where you pasted the code, click **Select Prompt or Dataset** and find and select the Prompt you're evaluating. The debug console will load a sample of Logs from that Prompt. The debug console for testing the code. * Click the **Run** button at the far right of one of the loaded Logs to trigger a debug run. This causes the code to be executed with the selected Log as input and populates the **Result** column. * Inspect the output of the executed code by selecting the arrow to the right of **Result**. Inspect evaluator log in debug console. ### Commit the code Now that you've validated the behaviour, commit the code by selecting the **Commit** button at the top right of the Editor and provide a suitable commit message describing your changes. ### Inspect Evaluator logs Navigate to the **Logs** tab of the Evaluator to see and debug all the historic usages of this Evaluator. Evaluator logs table. ## Monitor a Prompt Now that you have an Evaluator, you can use it to monitor the performance of your Prompt by linking it so that it is automatically run on new Logs. 
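Before relying on monitoring, you can also sanity-check the Evaluator function locally by calling it with stubbed Log dictionaries (a minimal sketch; in Humanloop's runtime the `log` argument is populated for you, and the feature list below is abbreviated):

```python
# Abbreviated copy of the Evaluator above, exercised with stubbed Logs.
ALLOWED_FEATURES = ["Prompt Editor", "Online Monitoring", "Offline Evaluations"]

def validate_feature(log):
    # Parse the final line of the log output to get the returned category
    feature = log["output"].split("\n")[-1]
    return feature in ALLOWED_FEATURES

# A valid category on the final line should pass...
assert validate_feature({"output": "Reasoning about the query...\nPrompt Editor"})
# ...while a hallucinated category should fail.
assert not validate_feature({"output": "Reasoning about the query...\nTime Travel"})
```

Running this as a plain script gives you the same pass/fail behaviour you would see in the debug console.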
### Link the Evaluator to the Prompt

* Navigate to the **Dashboard** of your Prompt.
* Select the **Monitoring** button above the graph and select **Connect Evaluators**.
* Find and select the Evaluator you just created and click **Choose**.

Select Evaluator for monitoring.

You can link to a deployed version of the Evaluator by choosing the environment such as `production`, or you can link to a specific version of the Evaluator. If you want changes deployed to your Evaluator to be automatically reflected in Monitoring, link to the environment; otherwise, link to a specific version.

This linking results in:

- An additional graph on your Prompt dashboard showing the Evaluator results over time.
- An additional column in your Prompt Versions table showing the aggregated Evaluator results for each version.
- An additional column in your Logs table showing the Evaluator results for each Log.

### Generate new Logs

Navigate to the **Editor** tab of your Prompt and generate a new Log by entering a query and clicking **Run**.

### Inspect the Monitoring results

Navigate to the **Logs** tab of your Prompt and see the result of the linked Evaluator against the new Log. You can filter on this value in order to [create a Dataset](/docs/evaluation/guides/create-dataset) of interesting examples.

See the results of monitoring on your logs.

## Evaluating a Dataset

When running a code Evaluator on a [Dataset](../../explanation/datasets), you can compare a generated [Log](../../explanation/logs) to each Datapoint's target. For example, here's the code of our example Exact Match code Evaluator, which checks that the log output exactly matches our expected target.
```python Python
def exact_match(log, testcase):
    target = testcase["target"]["output"]
    generation = log["output"]
    return target == generation
```

## Next steps

* Explore [AI Evaluators](/docs/evaluation/guides/llm-as-a-judge) and [Human Evaluators](/docs/evaluation/guides/human-evaluators) to complement your code-based judgements for more qualitative and subjective criteria.
* Combine your Evaluator with a [Dataset](/docs/explanation/datasets) to run [Evaluations](/docs/evaluation/guides/run-evaluation) to systematically compare the performance of different versions of your AI application.

# Set up LLM as a Judge

> Learn how to use LLM as a judge to check for PII in Logs.

LLMs can be used for evaluating the quality and characteristics of other AI-generated outputs. When correctly prompted, LLMs can act as impartial judges, providing insights and assessments that might be challenging or time-consuming for humans to perform at scale. In this guide, we'll explore how to set up an LLM as an [AI Evaluator](../../explanation/evaluators) in Humanloop, demonstrating their effectiveness in assessing various aspects of AI-generated content, such as checking for the presence of Personally Identifiable Information (PII).

An AI [Evaluator](../../explanation/evaluators) is a Prompt that takes attributes from a generated [Log](../../explanation/logs) (and optionally from a testcase [Datapoint](../../explanation/datasets) if comparing to expected results) as context and returns a **judgement**. The judgement is in the form of a boolean or number that measures some criteria of the generated Log defined within the Prompt instructions.

### Prerequisites

You should have an existing [Prompt](../../explanation/prompts) to evaluate and already generated some [Logs](../../explanation/logs). Follow our guide on [creating a Prompt](../../development/guides/create-prompt).

In this example we will use a simple Support Agent Prompt that answers user queries about Humanloop's product and docs.
Support agent base prompt.

## Create an LLM Evaluator

### Create a new Evaluator

* Click the **New** button at the bottom of the left-hand sidebar, select **Evaluator**, then select **AI**.
* Give the Evaluator a name when prompted in the sidebar, for example `PII Identifier`.

### Define the Evaluator Prompt

After creating the Evaluator, you will automatically be taken to the Evaluator editor. For this example, our Evaluator will check whether the request to, or response from, our support agent contains PII. We want to understand whether this is a potential issue that we wish to mitigate with additional [Guardrails](../../observability/alerts-and-guardrails) in our agent workflow.

* Make sure the **Mode** of the Evaluator is set to **Online** in the options on the left.
* Copy and paste the following Prompt into the Editor:

```text
You are a helpful assistant. Your job is to observe the requests and outputs to a support agent and identify whether or not they contain any PII.

Examples of PII are:
- Names
- Addresses
- Bank account information
- Job information

Here is the request and response information:
###
Request:
{{log.messages}}
###
Response:
{{log.output_message}}
###

Your response should contain the rationale and the final binary true/false verdict as to whether PII exists in the request or response. The final true/false verdict should be on a new line at the end.
```

In the Prompt Editor for an LLM evaluator, you have access to the underlying `log` you are evaluating as well as the `testcase` Datapoint that gave rise to it if you are using a Dataset for **offline** Evaluations. These are accessed with the standard `{{ variable }}` syntax, enhanced with a familiar dot notation to pick out specific values from inside the `log` and `testcase` objects. For example, suppose you are evaluating a Log object like this.
```json
{
  "id": "data_B3RmIu9aA5FibdtXP7CkO",
  "prompt": {...},
  "inputs": {
    "query": "What is the meaning of life?"
  },
  "messages": [],
  "output": "I'm sorry, as an AI I don't have the capacity to understand the meaning of life.",
  "metadata": {...},
  ...etc
}
```

In the LLM Evaluator Prompt, `{{ log.inputs.query }}` will be replaced with the actual query in the final prompt sent to the LLM Evaluator.

In order to get access to the fully populated Prompt that was sent in the underlying Log, you can use the special variable `{{ log_prompt }}`.

### Debug the Evaluator with Prompt Logs

* In the debug console beneath where you pasted the Evaluator Prompt, click **Select Prompt or Dataset** and find and select the Prompt you're evaluating. The debug console will load a sample of Logs from that Prompt.

The debug console for testing the Evaluator Prompt.

* Click the **Run** button at the far right of one of the loaded Logs to trigger a debug run. This causes the Evaluator Prompt to be called with the selected Log attributes as input and populates the **Result** column.
* Inspect the output of the executed Evaluator by selecting the arrow to the right of **Result**.

Inspect evaluator log in debug console.

### Commit the Evaluator Prompt

Now that you've validated the behaviour, commit the Evaluator Prompt by selecting the **Commit** button at the top right of the Editor and provide a suitable commit message describing your changes.

### Inspect Evaluator logs

Navigate to the **Logs** tab of the Evaluator to see and debug all the historic usages of this Evaluator.

Evaluator logs table.

## Next steps

* Explore [Code Evaluators](./code-based-evaluator) and [Human Evaluators](./human-evaluator) to complement your AI judgements.
* Combine your Evaluator with a [Dataset](../../explanation/datasets) to run [Evaluations](./run-evaluation) to systematically compare the performance of different versions of your AI application.

# Set up a Human Evaluator

> Learn how to set up a Human Evaluator in Humanloop.
Human Evaluators allow your subject-matter experts and end-users to provide feedback on Prompt Logs. These Evaluators can be attached to Prompts and Evaluations.

## Creating a Human Evaluator

This section will walk you through creating and setting up a Human Evaluator. As an example, we'll use a "Tone" Evaluator that allows feedback to be provided by selecting from a list of options.

### Create a new Evaluator

* Click the **New** button at the bottom of the left-hand sidebar, select **Evaluator**, then select **Human**.

![New Evaluator dialog](file:4bf23448-c897-4897-b47b-7824ab398f33)

* Give the Evaluator a name when prompted in the sidebar, for example "Tone".

![Created Human Evaluator being renamed to "Tone"](file:f52ea18b-82fb-41bf-a9ee-0129651d5b47)

### Define the Judgment Schema

After creating the Evaluator, you will automatically be taken to the Editor. Here, you can define the schema detailing the kinds of judgments to be applied for the Evaluator. The Evaluator will be initialized to a 5-point rating scale by default.

In this example, we'll set up a feedback schema for a "Tone" Evaluator. See the [Return types documentation](../../explanation/evaluators#return-types) for more information on return types.

* Select **Multi-select** within the **Return type** dropdown. "Multi-select" allows you to apply multiple options to a single Log.
* Add the following options, and set the valence for each:
  * Enthusiastic \[positive]
  * Informative \[positive]
  * Repetitive \[negative]
  * Technical \[negative]
* Update the instructions to "Select all options that apply to the output."

![Tone evaluator set up with options and instructions](file:cca90248-e3b1-4059-bb25-3467a5b78316)

### Commit and deploy the Evaluator

* Click **Commit** in the top-right corner.
* Enter "Added initial tone options" as a commit message. Click **Commit**.
![Commit dialog over the "Tone" Evaluator](file:499e6e38-8df9-4f3d-aee6-f06da72464ae)

* In the "Version committed" dialog, click **Deploy**.
* Select the checkbox for your default Environment (usually named "production"), and confirm your deployment.

![Dialog deploying the "Tone" Evaluator to the "production" Environment](file:129ddcfc-7825-4e63-9db0-2999a3bd56ae)

:tada: You've now created a Human Evaluator that can be used to collect feedback on Prompt Logs.

## Next steps

* [Use Human Evaluators in Evaluations](./run-human-evaluation) to collect annotations on Prompt Logs from subject-matter experts.
* [Attach Human Evaluators to Prompts](../../observability/guides/capture-user-feedback) to collect end-user feedback.

# Run a Human Evaluation

> Collect judgments from subject-matter experts (SMEs) to better understand the quality of your AI product.

In this guide, we'll show how SMEs can provide judgments on Prompt Logs to help you understand the quality of the AI feature. You can then use this feedback to iterate and improve your Prompt performance.

### Prerequisites

* You have set up a Human Evaluator appropriate for your use-case. If not, follow our guide to [create a Human Evaluator](/docs/evaluation/guides/human-evaluators).
* You have a Dataset with test data to evaluate model outputs against. If not, follow our guide to [create a Dataset from already existing Logs](/docs/evaluation/guides/create-dataset-from-logs).

## Provide judgments on Logs

In this guide, we assume you have already created a Prompt and a Dataset for an evaluation. Now we want to leverage the subject-matter experts to help us understand whether model outputs meet our quality standards.

### Create a new Evaluation

Navigate to the Prompt you want to evaluate and click on the **Evaluation** tab at the top of the page. Click on **Evaluate** to create a new Evaluation.
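If you prefer to work in code, an equivalent Evaluation can be created with the Python SDK's `evaluations.create` method (the same call used in the spot-check guide later in these docs). The sketch below only builds the request payload; the Prompt ID is the usual `pr_...` placeholder and the Evaluator path assumes the example "rating" Human Evaluator:

```python
# Sketch: the payload for creating an Evaluation with a Human Evaluator attached.
# The Prompt ID below is a placeholder; replace it with your own.
evaluation_spec = {
    "name": "Support agent evaluation",
    # The Prompt to evaluate; you can specify it by "path" as well
    "file": {"id": "pr_..."},
    # Attach a Human Evaluator so SMEs can rate the generated responses
    "evaluators": [{"path": "Example Evaluators/Human/rating"}],
}

# With a Humanloop API key configured, this would create the Evaluation:
# from humanloop import Humanloop
# humanloop = Humanloop(api_key="YOUR_API_KEY")
# evaluation = humanloop.evaluations.create(**evaluation_spec)
```

Creating the Evaluation in the UI as described above is equivalent; the steps below continue with the UI flow.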
### Create a new Run To evaluate a version of your Prompt, click on the **+Run** button, then select the version of the Prompt you want to evaluate and the Dataset you want to use. Click on **+Evaluator** to add a Human Evaluator to the Evaluation. You can find example Human Evaluators in the **Example Evaluators** folder. Click **Save** to create a new Run. Humanloop will start generating Logs for the Evaluation. ### Apply judgments to generated Logs When Logs are generated, navigate to the **Review** tab. Turn on **Focus mode** and start providing judgments on the generated Logs. When the last judgment is provided, the Run is marked as complete. ### Review judgments stats You can see the overall performance across all Evaluators in the **Stats** tab. ## Improve the Prompt Explore the Logs that the SME flagged in the Review tab. To make improvements, find a Log with negative judgments and click on its ID above the Log output to open the drawer on the right-hand side. In the drawer, click on the **Editor ->** button to load the Prompt Editor. Now, modify the instructions and commit a new version. Run Evals with Dataset on Humanloop. Create a new run using the new version of the Prompt and compare the results to find out if the changes have improved the performance. ## Next steps We've successfully collected judgments from the SMEs to understand the quality of our AI product. Explore next: * If your team has multiple internal SMEs, learn how to [effectively manage evaluation involving multiple SMEs](/docs/evaluation/guides/manage-multiple-reviewers). * If SMEs provided negative judgments on the logs, please refer to our guide on [Comparing and Debugging Prompts](/docs/evaluation/guides/comparing-prompt-editor). # Manage multiple reviewers > Learn how to split the work between your SMEs **Who is this for**: This guide is for large teams that want to leverage their internal subject matter experts (SMEs) to evaluate the performance of their AI features. 
### Prerequisites

* You have set up [Evaluators](/docs/explanation/evaluators). If not, follow our guide to [create a Human Evaluator](/docs/evaluation/guides/human-evaluators).
* You have several subject-matter experts (SMEs) available to provide feedback on Evaluation Logs.

## Divide work between SMEs

When you have a large [Dataset](/docs/explanation/datasets) to evaluate, it's helpful to split the work between your SMEs to ensure that the evaluation is completed quickly and effectively.

### Split the Dataset into chunks

Each Dataset consists of datapoints. Add an identifier to each datapoint to group them into chunks. For example, we [created](https://github.com/humanloop/humanloop-cookbook/blob/main/assets/datasets/dataset_with_common_customer_support_questions.csv) a Dataset with 100 common customer support questions. In the CSV file, we added an identifier called "chunk" to each datapoint, splitting the whole Dataset into 10 equal parts.

To upload this CSV on Humanloop, create a new Dataset file, then click on the **Upload CSV** button.

Upload CSV as dataset to Humanloop.

Alternatively, you can [upload a Dataset via our SDK](/docs/evaluation/guides/create-dataset#upload-a-dataset-via-api).

### Run an Evaluation

Navigate to a Prompt you want to evaluate and create a new Evaluation Run.

Run Evals with Dataset on Humanloop.

### Split the workload between SMEs

To split the workload between your SMEs, navigate to the **Review** tab, turn on **Focus mode**, and click on the **Filters** button. Filter the Dataset by identifiers, such as "chunk", to split the review work into smaller pieces.

### Send the URL to your SMEs

After you have filtered the Dataset, copy the URL and send it to your SME. When they open the link, they will only see the relevant chunk of the Dataset.

Focus mode on.

### Monitor progress

As the SMEs provide judgments on the outputs, we display the overall progress and the number of outstanding judgments.
When the final judgment is given, the Evaluation is marked as complete.

## Improve the Prompt

With judgments from your SMEs, you can now better understand the model's performance and iterate on your Prompt to improve the model outputs.

Completed evaluations.

In our example, the SME marked the Log on the right-hand side as "bad" because it's too long. To take action, click on the Log ID above the Log output to open the Log drawer. In the drawer, click on the **Editor ->** button to load this Log in the Prompt Editor. Now, modify the instructions to explicitly state that the model should provide a concise answer.

## Next steps

We've successfully split the work among multiple SMEs to effectively evaluate the performance of our AI product. Explore next:

* If your SMEs gave negative judgments on the Logs, see our guide on [Comparing and Debugging Prompts](/docs/evaluation/guides/comparing-prompt-editor).
* Find out more about [Human Evaluators](/docs/evaluation/guides/human-evaluators) to capture feedback that is most relevant to your use case.

# Compare and Debug Prompts

> In this guide, we will walk through comparing the outputs from multiple Prompts side-by-side using the Humanloop Editor environment and using diffs to help debugging.

You can compare Prompt versions interactively side-by-side to get a sense of how their behaviour differs, before triggering more systematic [Evaluations](/docs/evaluation/guides/run-evaluation). All the interactions in Editor are stored as Logs within your Prompt and can be inspected further and [added to a Dataset](/docs/evaluation/guides/create-dataset-from-logs) for Evaluations.

### Prerequisites

* You already have a Prompt — if not, please follow our [Prompt creation](/docs/development/guides/create-prompt) guide first.

## Compare Prompt versions

In this example we will use a simple support agent Prompt that answers user queries about Humanloop's product and docs.

Support agent base prompt.
### Create a new version of your Prompt

Open your Prompt in the Editor, expand **Parameters**, and change some details, such as the choice of `Model`. In this example, we change from `gpt-4o` to `gpt-4o-mini`. This will create a new uncommitted version of the Prompt.

Support agent change prompt

Now commit the new version of your Prompt by selecting the blue **Commit** button over **Parameters** and providing a helpful commit message like:

```text
Changed model to gpt-4o-mini
```

### Load up two versions of your Prompt in the Editor

To load up the previous version side-by-side, select the menu beside the Load button and select the **New panel** option (depending on your screen real-estate, you can add more than 2 panels).

Support agent add panel

Then select the **Load** button in the new panel and select another version of your Prompt to compare.

Support agent load version

### Compare the outputs of both versions

Now you can run the same user messages through both models to compare their behaviours live side-by-side.

Support agent compare version

## View Prompt diff for debugging

When debugging more complex Prompts, it's important to understand what changes were made between different versions. Humanloop provides a diff view to support this.

### Navigate to your Prompt dashboard

In the sidebar, select the **Dashboard** section under your Prompt file, where you will find a table of all your historic Prompt versions.

Support agent dashboard

### Select the versions to compare

In the table, select two rows you would like to understand the changes between. Then select the **Compare Versions** button above the table.

Support agent diff view

1. While in the **Compare** tab, look for the **Diff** section.
2. This section will highlight the changes made between the selected versions, showing additions, deletions, and modifications.
3. Use this diff view to understand how specific changes in your prompt configuration affect the output.
By following these steps, you can effectively compare different versions of your Prompts and iterate on your instructions to improve performance. # Set up CI/CD Evaluations > Learn how to automate LLM evaluations as part of your CI/CD pipeline using Humanloop and GitHub Actions. ## Setting up CI/CD Integration with GitHub Actions Integrating Humanloop evaluations into your CI/CD pipeline allows you to automatically test your AI applications as part of your development workflow. This guide will walk you through setting up this integration using GitHub Actions. ### Prerequisites * A GitHub repository for your project * A Humanloop account with access to Evaluations * A Prompt and Dataset set up in Humanloop * An Evaluator configured in Humanloop ## Steps to Set Up CI/CD Integration ### Create a GitHub Actions Workflow In your GitHub repository, create a new file `.github/workflows/humanloop-eval.yml` with the following content: This content is currently under development. Please refer to our [V4 documentation](https://docs.humanloop.com/v4) for the current docs. ```yaml ``` # Spot-check your Logs > Learn how to use the Humanloop Python SDK to sample a subset of your Logs and create an Evaluation Run to spot-check them. By regularly reviewing a sample of your Prompt Logs, you can gain valuable insights into the performance of your Prompts in production, such as through reviews by subject-matter experts (SMEs). For real-time observability (typically using code Evaluators), see our guide on setting up [monitoring](../observability/monitoring). This guide describes setting up more detailed evaluations which are run on a small subset of Logs. ### Prerequisites * You have a Prompt with Logs. See our guide on [logging to a Prompt](./prompts/log-to-a-prompt) if you don't yet have one. * You have a Human Evaluator set up. See our guide on [creating a Human Evaluator](./human-evaluators) if you don't yet have one. {/* TODO: This should be Python-only. 
*/} First you need to install and initialize the SDK. If you have already done this, skip to the next section. Open up your terminal and follow these steps: 1. Install the Humanloop SDK: ```python pip install humanloop ``` ```typescript npm install humanloop ``` 2. Initialize the SDK with your Humanloop API key (you can get it from the [Organization Settings page](https://app.humanloop.com/account/api-keys)). ```python from humanloop import Humanloop humanloop = Humanloop(api_key="") # Check that the authentication was successful print(humanloop.prompts.list()) ``` ```typescript import { HumanloopClient, Humanloop } from "humanloop"; const humanloop = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); // Check that the authentication was successful console.log(await humanloop.prompts.list()); ``` ## Set up an Evaluation ### Create an Evaluation Create an Evaluation for the Prompt. In this example, we also attach a "rating" Human Evaluator so our SMEs can judge the generated responses. ```python evaluation = humanloop.evaluations.create( # Name your Evaluation name="Monthly spot-check", file={ # Replace this with the ID of your Prompt. # You can specify a Prompt by "path" as well. "id": "pr_..." }, evaluators=[ # Attach Evaluator to enable SMEs to rate the generated responses {"path": "Example Evaluators/Human/rating"}, ], ) ``` ### Create a Run Create a Run within the Evaluation. We will then attach Logs to this Run. ```python run = humanloop.evaluations.create_run( id=evaluation.id, ) ``` ### Sample Logs Sample a subset of your Logs to attach to the Run. For this example, we'll sample 100 Logs from the past 30 days, simulating a monthly spot-check. 
```python import datetime logs = humanloop.logs.list( file_id="pr_...", # Replace with the ID of the Prompt sample=100, # Example filter to sample Logs from the past 30 days start_date=datetime.datetime.now() - datetime.timedelta(days=30), ) log_ids = [log.id for log in logs] ``` ### Attach Logs to the Run Attach the sampled Logs to the Run you created earlier. ```python humanloop.evaluations.add_logs_to_run( id=evaluation.id, run_id=run.id, log_ids=log_ids, ) ``` You have now created an Evaluation Run with a sample of Logs attached to it. In the Humanloop app, go to the Prompt's Evaluations tab. You should see the new Evaluation named "Monthly spot-check". Click on it to view the Run with the Logs attached. ![Evaluation Run with Logs attached](file:7f0b2ec2-60c4-4289-a996-42b7f6d28345) ## Review your Logs Rate the model generations via the **Review** tab. For further details on how you can manage reviewing your Logs with multiple SMEs, see our guide on [managing multiple reviewers](./manage-multiple-reviewers). ![Logs review](file:a9c5cddd-2ee2-4734-a867-87c617f78ad5) After your Logs have been reviewed, go to the **Stats** tab to view aggregate stats. ![Aggregate run stats](file:fb674a7c-adce-4b38-bc4c-c66d465bc8e0) ## Repeating the spot-check To repeat this process the next time a spot-check is due, you can create a new Run within the same Evaluation, repeating the above steps from "Create a Run". You will then see the new Run alongside the previous ones in the Evaluation, and can compare the aggregate stats across multiple Runs. ## Next Steps * If you have performed a spot-check and identified issues, you can [iterate on your Prompts in the app](./comparing-prompts) and [run further Evaluations](./run-evaluation) to verify improvements. # Use external Evaluators > Integrate your existing evaluation process with Humanloop. LLM and code Evaluators generally live on the Humanloop runtime environment. 
The advantage of this is that these Evaluators can be used as [monitoring Evaluators](/docs/v5/guides/observability/monitoring) and to allow triggering evaluations [directly from the Humanloop UI](/docs/v5/guides/evals/run-evaluation-ui).

However, your setup can be more complex: your Evaluator has library dependencies that are not present in the [runtime environment](/docs/v5/reference/python-environment), your LLM evaluator has multiple reasoning steps, or you prefer managing the logic yourself.

External Evaluators address this: they are registered with Humanloop but their code definition remains in your environment. In order to evaluate a Log, you call the logic yourself and send the judgment to Humanloop.

In this tutorial, we will build a chat agent that answers questions asked by children, and evaluate its performance using an external Evaluator.

## Create the agent

We reuse the chat agent from our [evaluating an agent tutorial](/docs/v5/tutorials/agent-evaluation). Let's create the initial version of our agent. Add the following in a new file:

```python title="main.py" maxLines=35
from humanloop import Humanloop
from openai import OpenAI
from openai.types.chat.chat_completion_message import ChatCompletionMessage as Message
import wikipedia
import json

openai = OpenAI(api_key="ADD YOUR KEY HERE")
humanloop = Humanloop(api_key="ADD YOUR KEY HERE")


def search_wikipedia(query: str) -> dict:
    """Search Wikipedia to get up-to-date information for a query."""
    try:
        page = wikipedia.page(query)
        return {
            "title": page.title,
            "content": page.content,
            "url": page.url,
        }
    except Exception as _:
        return {
            "title": "",
            "content": "No results found",
            "url": "",
        }


def call_model(messages: list[Message]) -> Message:
    """Calls the model with the given messages"""
    system_message = {
        "role": "system",
        "content": (
            "You are an assistant that helps to answer user questions. "
            "You should leverage wikipedia to answer questions so that "
            "the information is up to date. 
If the response from "
            "Wikipedia does not seem relevant, rephrase the question "
            "and call the tool again. Then finally respond to the user."
        ),
    }
    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[system_message] + messages,
        tools=[
            {
                "type": "function",
                "function": {
                    "name": "search_wikipedia",
                    "description": "Search the internet to get up to date answers for a query.",
                    "parameters": {
                        "type": "object",
                        "required": ["query"],
                        "properties": {
                            "query": {"type": "string"},
                        },
                        "additionalProperties": False,
                    },
                },
            }
        ],
    )
    return response.choices[0].message.to_dict(exclude_unset=False)


def call_agent(question: str) -> str:
    """Calls the main agent loop and returns the final result"""
    messages = [{"role": "user", "content": question}]
    # Retry for a relevant response 3 times at most
    for _ in range(3):
        response = call_model(messages)
        messages.append(response)
        if response["tool_calls"]:
            # Call wikipedia to get up-to-date information
            for tool_call in response["tool_calls"]:
                source = search_wikipedia(
                    **json.loads(tool_call["function"]["arguments"])
                )
                messages.append(
                    {
                        "role": "tool",
                        "content": json.dumps(source),
                        "tool_call_id": tool_call["id"],
                    }
                )
        else:
            # Respond to the user
            return response["content"]


if __name__ == "__main__":
    result = call_agent("Where does the sun go at night?")
    print(result)
```

```typescript title="main.ts" maxLines=35
import { HumanloopClient } from "humanloop";
import OpenAI from "openai";
import type { ChatCompletionMessageParam as Message } from "openai/resources";
import wikipedia from "wikipedia";
import fs from "fs";
import readline from "readline";

const openai = new OpenAI({ apiKey: "" });
const humanloop = new HumanloopClient({ apiKey: "" });

type WikiResult = {
  title: string;
  content: string;
  url: string;
};

const searchWikipedia = async (query: string) => {
  try {
    const page = await wikipedia.page(query);
    const NO_RESULT_FOUND: WikiResult = {
      title: "",
      content: "No results found",
      url: "",
    };
    if (page) {
      return {
        title:
page?.title || "",
        content: (await page?.content()) || "",
        url: `https://en.wikipedia.org/wiki/${encodeURIComponent(
          page?.title || ""
        )}`,
      } as WikiResult;
    }
    return NO_RESULT_FOUND;
  } catch (error) {
    return NO_RESULT_FOUND;
  }
};

const callModel = async (messages: Array<Message>) => {
  const systemMessage: Message = {
    role: "system",
    content:
      "You are an assistant that helps to answer user questions. " +
      "You should leverage wikipedia to answer questions so that " +
      "the information is up to date. If the response from " +
      "Wikipedia does not seem relevant, rephrase the question " +
      "and call the tool again. Then finally respond to the user.",
  };
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [systemMessage, ...messages],
    tools: [
      {
        type: "function",
        function: {
          name: "search_wikipedia",
          description: "Search the internet to get up to date answers for a query.",
          parameters: {
            type: "object",
            required: ["query"],
            properties: {
              query: { type: "string" },
            },
            additionalProperties: false,
          },
        },
      },
    ],
  });
  return response.choices[0].message;
}

async function callAgent({ question }: { question: string }): Promise<string> {
  const messages: Message[] = [{ role: "user", content: question }];
  for (let _ = 0; _ < 3; _++) {
    const response = await callModel(messages);
    messages.push(response);
    if (response.tool_calls) {
      for (const toolCall of response.tool_calls) {
        const args = JSON.parse(toolCall.function.arguments);
        const source = await searchWikipedia(args.query);
        messages.push({
          role: "tool",
          content: JSON.stringify(source),
          tool_call_id: toolCall.id,
        });
      }
    } else {
      return response.content || "";
    }
  }
  return "Could not get a relevant response after multiple attempts.";
}

async function main() {
  const result = await callAgent({
    question: "Where does the sun go at night?",
  });
  console.log(result);
}

main();
```

Run the agent and check if it works:

```bash
python main.py
```

```plaintext
Okay! Imagine the Earth is like a big ball, and we live on it.
The sun doesn't really “go” anywhere—it stays in the same spot, shining all the time. But our Earth is spinning like a top!
```

```bash
npx tsx main.ts
```

```plaintext
Okay! Imagine the Earth is like a big ball, and we live on it. The sun doesn't really “go” anywhere—it stays in the same spot, shining all the time. But our Earth is spinning like a top!
```

## Evaluate the agent

Evaluators are callables that take the Log's dictionary representation as input and return a judgment. The Evaluator's judgment should respect the `return_type` present in the Evaluator's [specification](https://humanloop.com/docs/v5/api-reference/evaluators/upsert#request.body.spec).

The Evaluator can take an additional `target` argument to compare the Log against. The target is provided in an Evaluation context by the validation [Dataset](/docs/v5/explanation/datasets). For more details, check out our [Evaluator explanation](/docs/v5/explanation/evaluators).

### Define external Evaluator

The Evaluator takes a `log` argument, which represents the Log created by calling `call_agent`. Let's add a simple Evaluator that checks if the agent's answers are too long.
Add this to the `main.py` file:

```python
if __name__ == "__main__":
    def easy_to_understand(log):
        return len(log["output"]) < 100
```

### Add dataset

Create a file called `dataset.jsonl` and add the following:

```jsonl title="dataset.jsonl" maxLines=5
{"inputs": {"question": "Why is the sky blue?"}}
{"inputs": {"question": "Where does the sun go at night?"}}
{"inputs": {"question": "Why do birds fly?"}}
{"inputs": {"question": "What makes rainbows?"}}
{"inputs": {"question": "Why do we have to sleep?"}}
{"inputs": {"question": "How do fish breathe underwater?"}}
{"inputs": {"question": "Why do plants need water?"}}
{"inputs": {"question": "How does the moon stay in the sky?"}}
{"inputs": {"question": "What are stars made of?"}}
{"inputs": {"question": "Why do we have seasons?"}}
{"inputs": {"question": "How does the TV work?"}}
{"inputs": {"question": "Why do dogs wag their tails?"}}
{"inputs": {"question": "What makes cars go?"}}
{"inputs": {"question": "Why do we need to brush our teeth?"}}
{"inputs": {"question": "What do ants eat?"}}
{"inputs": {"question": "Why does the wind blow?"}}
{"inputs": {"question": "How do airplanes stay in the air?"}}
{"inputs": {"question": "Why does the ocean look so big?"}}
{"inputs": {"question": "What makes the grass green?"}}
{"inputs": {"question": "Why do we have to eat vegetables?"}}
{"inputs": {"question": "How do butterflies fly?"}}
{"inputs": {"question": "Why do some animals live in the zoo?"}}
{"inputs": {"question": "How do magnets stick to the fridge?"}}
{"inputs": {"question": "What makes fire hot?"}}
{"inputs": {"question": "Why do leaves change color?"}}
{"inputs": {"question": "What happens when we flush the toilet?"}}
{"inputs": {"question": "Why do we have belly buttons?"}}
{"inputs": {"question": "What makes the clouds move?"}}
{"inputs": {"question": "Why do we have eyebrows?"}}
{"inputs": {"question": "How do seeds turn into plants?"}}
{"inputs": {"question": "Why does the moon change shape?"}}
{"inputs": 
{"question": "Why do bees make honey?"}}
{"inputs": {"question": "What makes ice melt?"}}
{"inputs": {"question": "Why do we sneeze?"}}
{"inputs": {"question": "How do trains stay on the tracks?"}}
{"inputs": {"question": "Why do stars twinkle?"}}
{"inputs": {"question": "Why can't we see air?"}}
{"inputs": {"question": "What makes the Earth spin?"}}
{"inputs": {"question": "Why do frogs jump?"}}
{"inputs": {"question": "Why do cats purr?"}}
{"inputs": {"question": "How do phones let us talk to people far away?"}}
{"inputs": {"question": "Why does the moon follow us?"}}
{"inputs": {"question": "What makes lightning?"}}
{"inputs": {"question": "Why does it snow?"}}
{"inputs": {"question": "Why do we have shadows?"}}
{"inputs": {"question": "Why do boats float?"}}
{"inputs": {"question": "What makes our heart beat?"}}
{"inputs": {"question": "Why do some animals sleep all winter?"}}
{"inputs": {"question": "Why do we have to wear shoes?"}}
{"inputs": {"question": "What makes music?"}}
```

### Add Evaluation

Instantiate an Evaluation using the client's [`evaluations.run`](/docs/v5/sdk/run-evaluation) utility. `easy_to_understand` is an external Evaluator, so we provide its definition via the `callable` argument. At runtime, `evaluations.run` will call the function and submit the judgment to Humanloop.
```python title="main.py" maxLines=100 highlight={5-28}
if __name__ == "__main__":
    def easy_to_understand(log):
        return len(log["output"]) < 100

    # Read the evaluation dataset
    with open("dataset.jsonl", "r") as fp:
        dataset = [json.loads(line) for line in fp]

    humanloop.evaluations.run(
        name="QA Agent Answer Comprehensiveness",
        file={
            "path": "QA Agent/Agent",
            "callable": call_agent,
        },
        evaluators=[
            {
                "path": "QA Agent/Comprehension",
                "callable": easy_to_understand,
                "args_type": "target_free",
                "return_type": "boolean",
            }
        ],
        dataset={
            "path": "QA Agent/Children Questions",
            "datapoints": dataset,
        },
        workers=8,
    )
```

### Run the evaluation

```bash title="Terminal"
python main.py
```

```bash title="Terminal" maxLines=50
Navigate to your Evaluation:
https://app.humanloop.com/project/fl_9CCIoTzySPfUFeIxfYE6g/evaluations/evr_67tEc2DiR83fy9iTaqyPA/stats

Flow Version ID: flv_9ECTrfeZYno2OIj9KAqlz
Run ID: rn_67tEcDYV6mqUS86hD8vrP

Running 'Agent' over the Dataset 'Children Questions' using 8 workers
[##############--------------------------] 15/50 (30.00%) | ETA: 14
...
📊 Evaluation Results for QA Agent/Agent +------------------------+---------------------+ | | Latest | +------------------------+---------------------+ | Run ID | 67tEc | +------------------------+---------------------+ | Version ID | 9ECTr | +------------------------+---------------------+ | Added | 2024-11-19 21:49:02 | +------------------------+---------------------+ | Evaluators | | +------------------------+---------------------+ | QA Agent/Comprehension | 3.24 | +------------------------+---------------------+ ``` ```bash title="Terminal" maxLines=50 npx tsx main.ts ``` ```bash title="Terminal" maxLines=50 Navigate to your Evaluation: https://app.humanloop.com/project/fl_9CCIoTzySPfUFeIxfYE6g/evaluations/evr_67tEc2DiR83fy9iTaqyPA/stats Flow Version ID: flv_9ECTrfeZYno2OIj9KAqlz Run ID: rn_67tEcDYV6mqUS86hD8vrP Running 'Agent' over the Dataset 'Children Questions' using 8 workers [##############--------------------------] 15/50 (30.00%) | ETA: 14 ... 📊 Evaluation Results for QA Agent/Agent +------------------------+---------------------+ | | Latest | +------------------------+---------------------+ | Run ID | 67tEc | +------------------------+---------------------+ | Version ID | 9ECTr | +------------------------+---------------------+ | Added | 2024-11-19 21:49:02 | +------------------------+---------------------+ | Evaluators | | +------------------------+---------------------+ | QA Agent/Comprehension | 3.24 | +------------------------+---------------------+ ``` Click on the link to see the results when the Evaluation is complete. ## Add detailed logging If you use a programming language not supported by the SDK, or want more control, see our guide on [logging through the API](/docs/v5/guides/observability/logging-through-api) for an alternative to decorators. Up to this point, we have treated the agent as a black box, reasoning about its behavior by looking at the inputs and outputs. 
Let's use Humanloop logging to observe the step-by-step actions taken by the agent. Modify `main.py`: ```python title="main.py" maxLines=100 highlight={1,5,10,15} @humanloop.tool(path="QA Agent/Search Wikipedia") def search_wikipedia(query: str) -> dict: ... @humanloop.prompt(path="QA Agent/Prompt") def call_model(messages: list[Message]) -> Message: ... @humanloop.flow(path="QA Agent/Agent") def call_agent(question: str) -> str: ... ``` To auto-instrument calls to OpenAI, pass the module to the Humanloop constructor: ```typescript const humanloop = new HumanloopClient({ apiKey: process.env.HUMANLOOP_API_KEY, providers: { // Pass the OpenAI module, not the initialized client OpenAI } }); ``` Modify `main.ts`: ```typescript title="main.ts" maxLines=100 highlight={20-21,26-27,32-33} const searchWikipedia = humanloop.tool({ path: "QA Agent/Search Wikipedia", version: { function: { name: "Search Wikipedia", description: "Search Wikipedia for the best article to answer a question", strict: true, parameters: { type: "object", properties: { query: { type: "string", description: "The question to search Wikipedia for", }, }, required: ["query"], }, }, }, // Wraps the initial function body callable: async ({ query }) => { ... }, }); const callModel = humanloop.prompt({ path: "QA Agent/Prompt", // Wraps the initial function body callable: async ({ messages }) => { ... }, }); const callAgent = humanloop.flow({ path: "QA Agent/Agent", // Wraps the initial function body callable: async ({ question }) => { ... }, }); ``` Evaluate the agent again. When it's done, head to your workspace and click the **Agent** [Flow](/docs/v5/guides/explanations/flows) on the left. Select the Logs tab from the top of the page. The decorators divide the code into logical components, allowing you to observe the steps taken to answer a question. Every step taken by the agent creates a Log. ## Next steps You've learned how to integrate your existing evaluation process with Humanloop.
Learn more about Humanloop's features in these guides: * Learn how to use Evaluations to improve your feature's performance in our [tutorial on evaluating a chat agent](/docs/v5/tutorials/agent-evaluation). * Evals work hand in hand with logging. Learn how to log detailed information about your AI project in our [logging setup guide](/docs/v5/quickstart/set-up-logging). # Evaluate external logs > Run an Evaluation on Humanloop with your own logs This guide demonstrates how to run an Evaluation on Humanloop using your own logs. This is useful if you have existing logs in an external system and want to evaluate them on Humanloop with minimal setup. In this guide, we will use the example of a JSON file containing chat messages between users and customer support agents. This guide will walk you through uploading these logs to Humanloop and creating an Evaluation with them. ## Prerequisites The code in this guide uses the Python SDK. To follow along, you will need to have the SDK installed and configured. While the code snippets are in Python, the same steps can be performed using the TypeScript SDK or via the API directly. If you are using the API directly, you will need to have an API key. First you need to install and initialize the SDK. If you have already done this, skip to the next section. Open up your terminal and follow these steps: 1. Install the Humanloop SDK: ```python pip install humanloop ``` ```typescript npm install humanloop ``` 2. Initialize the SDK with your Humanloop API key (you can get it from the [Organization Settings page](https://app.humanloop.com/account/api-keys)).
```python from humanloop import Humanloop humanloop = Humanloop(api_key="") # Check that the authentication was successful print(humanloop.prompts.list()) ``` ```typescript import { HumanloopClient, Humanloop } from "humanloop"; const humanloop = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); // Check that the authentication was successful console.log(await humanloop.prompts.list()); ``` The example JSON data in this guide can be found in the [Humanloop Cookbook](https://github.com/humanloop/humanloop-cookbook/tree/evaluating-external-logs/assets). To continue with the code in this guide, download `conversations-a.json` and `conversations-b.json` from the `assets` folder. ## Evaluate your external logs We'll start by loading data from `conversations-a.json`, which represents logs recorded by an external system. ```python import json with open("conversations-a.json") as f: data = json.load(f) ``` In this example, `data` is a list of chat messages between users and a support agent. ### Upload logs to Humanloop These steps are suitable if you do not already have an Evaluation on Humanloop. The [Upload new logs step](#upload-new-logs) demonstrates a simpler process if you already have an Evaluation you want to add a new set of logs to. Upload the logs with the `log(...)` method. This will automatically create a [Flow](../../explanation/flows) on Humanloop. We additionally pass in some `attributes` identifying the configuration of the system that generated these logs. `attributes` accepts arbitrary values, and is used for versioning the Flow. Here, it allows us to associate this set of logs with a specific version of the support agent. ```python log_ids = [] for messages in data: log = humanloop.flows.log( path="External logs demo/Travel planner", flow={"attributes": {"agent-version": "1.0.0"}}, # Optionally add attributes to identify this version of the support agent. 
messages=messages, ) log_ids.append(log.id) ``` This will have created a new Flow on Humanloop named **Travel planner**. To confirm this logging has succeeded, navigate to the **Logs** tab of the Flow and view the uploaded logs. Each Log should correspond to a conversation and contain a list of messages. ![Flow Logs](file:639d9030-3abe-44bb-b0b6-21af93ae0557) We will also use the created Flow version when creating our Run. ```python version_id = log.version_id ``` ### Create an Evaluation Run Next, create an Evaluation on Humanloop. Within the Evaluation, create a Run which will contain the Logs. Here, we'll use the example "Helpfulness" LLM-as-a-judge Evaluator. This will automatically rate the helpfulness of the support agent across our logs. ```python evaluation = humanloop.evaluations.create( name="Past records", # NB: you can use `path` or `id` for references on Humanloop file={"path": "External logs demo/Travel planner"}, evaluators=[ # Replace with your Evaluators {"path": "Example Evaluators/AI/Helpfulness"}, ], ) run = humanloop.evaluations.create_run( id=evaluation.id, version={'version_id': version_id}, # Associate this Run with the Flow version created above. ) ``` ### Assign Logs to the Run Finally, add the Flow Logs to the Run. ```python humanloop.evaluations.add_logs_to_run( id=evaluation.id, run_id=run.id, log_ids=log_ids, ) ``` ### Review the Evaluation You have now created an Evaluation on Humanloop and added Logs to it. ![Evaluation on Humanloop](file:6975ca77-3df1-4edd-bb86-34088ab6318d) Go to the Humanloop UI to view the Evaluation. Within the Evaluation, go to the **Logs** tab. Here, you can view your uploaded logs as well as the Evaluator judgments. ![Logs tab of Evaluation](file:dbe6f56e-7c4d-4a79-9d1d-ab9bf8f01fac) The following steps will guide you through adding a different set of logs to a new Run for comparison. ### Upload new logs If you already have an Evaluation that you want to add a new set of logs to, you can start from here.
To start from this point, retrieve the ID of the Evaluation you want to add logs to. Go to the Evaluation you want to add logs to on the Humanloop UI and copy the ID from the URL. This is the segment of the URL after `evaluations/`, e.g. `evr_...`. Now that we have an Evaluation on Humanloop, we can add a separate set of logs to it and compare their performance to the previous set. While we could achieve this by repeating the above steps, with an existing Evaluation we can add logs to a Run more directly. For this example, we'll continue with the Evaluation created in the previous section, and add a new Run with the data from `conversations-b.json`. These represent a set of logs from a prototype version of the support agent. ```python with open("conversations-b.json") as f: data = json.load(f) ``` #### Create a new Run Create a new Run within the Evaluation that will contain this set of logs. ```python run = humanloop.evaluations.create_run( id=evaluation.id, ) ``` #### Log to the Run Pass the `run_id` argument in your `log(...)` call to associate the Log with the Run. ```python # Add the new data to the Run for messages in data: log = humanloop.flows.log( path="External logs demo/Travel planner", flow={"attributes": {"agent-version": "2.0.0"}}, messages=messages, # Pass `run_id` to associate the Log with the Run. run_id=run.id, ) ``` ### Compare the results View the Evaluation on Humanloop. It will now contain two Runs. In the **Stats** tab of the Evaluation, you can now compare the performance of the two sets of logs. In our case, the second set of logs (on the right) can be seen to be less helpful. ![Evaluation with two Runs on Humanloop](file:88017b1b-0636-4663-b667-c6883c265173) ![Stats tab showing box plots for the two Runs](file:97230a2b-0396-400b-a8ab-e8e84a34f643) ## Next steps The above examples demonstrate how you can quickly populate an Evaluation Run with your logs.
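Putting the whole guide together, the upload-and-evaluate flow can be condensed into a single helper. This is a sketch only, not a definitive implementation: error handling is omitted, the file paths and Evaluator are the ones used in this guide, and `client` is assumed to be an initialized `Humanloop(...)` instance.

```python
def evaluate_external_logs(client, conversations, agent_version="2.0.0"):
    """Upload external chat logs as Flow Logs and attach them to a new Run.

    `conversations` is a list of message lists, as loaded from the JSON files
    in this guide. Returns the Evaluation ID and Run ID.
    """
    # 1. Upload each conversation as a Flow Log, versioned via `attributes`.
    log_ids = []
    for messages in conversations:
        log = client.flows.log(
            path="External logs demo/Travel planner",
            flow={"attributes": {"agent-version": agent_version}},
            messages=messages,
        )
        log_ids.append(log.id)

    # 2. Create an Evaluation with the example Helpfulness Evaluator.
    evaluation = client.evaluations.create(
        name="Past records",
        file={"path": "External logs demo/Travel planner"},
        evaluators=[{"path": "Example Evaluators/AI/Helpfulness"}],
    )

    # 3. Create a Run and attach the uploaded Logs to it.
    run = client.evaluations.create_run(id=evaluation.id)
    client.evaluations.add_logs_to_run(
        id=evaluation.id, run_id=run.id, log_ids=log_ids
    )
    return evaluation.id, run.id
```

Calling this once per batch of conversations (with a different `agent_version` each time) gives you one Run per batch, which the **Stats** tab can then compare side by side.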
* You can extend this Evaluation with custom Evaluators, such as using [Code Evaluators](./code-based-evaluator) to calculate metrics, or using [Human Evaluators](./human-evaluators) to set up your Logs to be reviewed by your subject-matter experts. * Now that you've set up an Evaluation, explore the other [File](../../explanation/files) types on Humanloop to see how they can better reflect your production systems, and how you can use Humanloop to version-control them. Here, we've used a [Flow](../../explanation/flows) to represent a black-box system. # Create a Prompt > Learn how to create a Prompt in Humanloop using the UI or SDK, version it, and use it to generate responses from your AI models. Prompt management is a key part of the Humanloop platform. Humanloop acts as a registry of your [Prompts](/docs/explanation/prompts) so you can centrally manage all their versions and [Logs](/docs/explanation/logs), and evaluate and improve your AI systems. This guide will show you how to create a Prompt [in the UI](./create-prompt#create-a-prompt-in-the-ui) or [via the SDK/API](./create-prompt#create-a-prompt-using-the-sdk). **Prerequisite**: A Humanloop account. You can create an account now by going to the [Sign up page](https://app.humanloop.com/signup). ## Create a Prompt in the UI #### Create a Humanloop Account If you haven’t already, create an account or log in to Humanloop #### Add an OpenAI API Key If you’re the first person in your organization, you’ll need to add an API key to a model provider. 1. Go to OpenAI and [grab an API key](https://platform.openai.com/api-keys) 2. In Humanloop [Organization Settings](https://app.humanloop.com/account/api-keys) set up OpenAI as a model provider. Using the Prompt Editor will use your OpenAI credits in the same way that the OpenAI playground does. Keep your API keys for Humanloop and the model providers private. 
## Get Started ### Create a Prompt File When you first open Humanloop you’ll see your File navigation on the left. Click ‘**+ New**’ and create a **Prompt**. In the sidebar, rename this file to "Comedian Bot" now or later. ### Create the Prompt template in the Editor The left hand side of the screen defines your Prompt – the parameters such as model, temperature and template. The right hand side is a single chat session with this Prompt. Click the “**+ Message**” button within the chat template to add a system message to the chat template. Add the following templated message to the chat template. ``` You are a funny comedian. Write a joke about {{topic}}. ``` This message forms the chat template. It has an input slot called `topic` (surrounded by two curly brackets) for an input value that is provided each time you call this Prompt. On the right hand side of the page, you’ll now see a box in the **Inputs** section for `topic`. 1. Add a value for `topic` e.g. music, jogging, whatever 2. Click **Run** in the bottom right of the page This will call OpenAI’s model and return the assistant response. Feel free to try other values; the model is *very* funny. You now have a first version of your Prompt that you can use. ### Commit your first version of this Prompt 1. Click the **Commit** button 2. Put “initial version” in the commit message field 3. Click **Commit** ### View the logs Under the Prompt File, click ‘Logs’ to view all the generations from this Prompt. Click on a row to see the details of what version of the Prompt generated it. From here you can give feedback to that generation, see performance metrics, open up this example in the Editor, or add this log to a Dataset. *** ## Create a Prompt using the SDK The Humanloop Python SDK allows you to programmatically create and version your [Prompts](/docs/explanation/prompts) in Humanloop, and log generations from your models. This guide will show you how to create a Prompt using the SDK.
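As a preview of the end result, a minimal sketch of creating a Prompt version programmatically. This assumes the Python SDK's `prompts.upsert` method; the `path`, `model`, `temperature`, and `template` values are illustrative, and `client` is an initialized `Humanloop(...)` instance:

```python
def create_comedian_bot(client):
    """Create (or update) a Prompt version via the SDK's upsert method.

    `client` is an initialized Humanloop client, e.g.
    `Humanloop(api_key="YOUR_API_KEY")`. The values below are illustrative.
    """
    return client.prompts.upsert(
        path="Comedian Bot",
        model="gpt-4o",
        temperature=0.8,
        template=[
            {
                "role": "system",
                # {{topic}} is an input slot filled in at call time
                "content": "You are a funny comedian. Write a joke about {{topic}}.",
            }
        ],
    )
```

Upserting with the same `path` but different parameters creates a new version of the same Prompt, rather than a new file.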
Note that you can also version your Prompts dynamically with every Prompt call. **Prerequisite**: A Humanloop SDK Key. You can get this from your [Organisation Settings page](https://app.humanloop.com/account/api-keys) if you have the [right permissions](/docs/v5/reference/access-roles). First you need to install and initialize the SDK. If you have already done this, skip to the next section. Open up your terminal and follow these steps: 1. Install the Humanloop SDK: ```python pip install humanloop ``` ```typescript npm install humanloop ``` 2. Initialize the SDK with your Humanloop API key (you can get it from the [Organization Settings page](https://app.humanloop.com/account/api-keys)). ```python from humanloop import Humanloop humanloop = Humanloop(api_key="") # Check that the authentication was successful print(humanloop.prompts.list()) ``` ```typescript import { HumanloopClient, Humanloop } from "humanloop"; const humanloop = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); // Check that the authentication was successful console.log(await humanloop.prompts.list()); ``` After initializing the SDK client, you can call the Prompt creation endpoint. ### Create the Prompt This can be done by using the [Prompt Upsert](/docs/v5/api-reference/prompts/upsert) method in the SDK, or by calling the API directly. ### Go to the App Go to the [Humanloop app](https://app.humanloop.com) and you will see your Prompt in your list of files. You now have a Prompt in Humanloop that contains your initial version. You can call the Prompt in the Editor and invite team members by going to your organization's members page. ## Next Steps With the Prompt set up, you can now integrate it into your app by following the [Call a Prompt Guide](/docs/development/guides/call-prompt). # Call a Prompt > Learn how to call your Prompts that are managed on Humanloop.
This guide will show you how to call your Prompts through the API, enabling you to generate responses from the large language model while versioning your Prompts. You can call an existing Prompt on Humanloop, or you can call a Prompt you're managing in code. These two use-cases are demonstrated below. ### Prerequisites First you need to install and initialize the SDK. If you have already done this, skip to the next section. Open up your terminal and follow these steps: 1. Install the Humanloop SDK: ```python pip install humanloop ``` ```typescript npm install humanloop ``` 2. Initialize the SDK with your Humanloop API key (you can get it from the [Organization Settings page](https://app.humanloop.com/account/api-keys)). ```python from humanloop import Humanloop humanloop = Humanloop(api_key="") # Check that the authentication was successful print(humanloop.prompts.list()) ``` ```typescript import { HumanloopClient, Humanloop } from "humanloop"; const humanloop = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); // Check that the authentication was successful console.log(await humanloop.prompts.list()); ``` ## Call an existing Prompt If you don't have a Prompt already on Humanloop, please follow our [Prompt creation](/docs/development/guides/create-prompt) guide first. ### Get the Prompt ID In Humanloop, navigate to the Prompt and copy the Prompt ID by clicking the Prompt name in the top bar, and copying from the popover. ### Call the Prompt by ID Now you can use the SDK to generate completions and log the results to your Prompt. This can be done by using the [Prompt Call](/docs/v5/api-reference/prompts/call) method in the SDK. ## Call a Prompt defined in code You can also manage your Prompts in code. Pass the `prompt` details within your API call to generate responses with the specified parameters. ## View your Prompt Logs Navigate to the **Logs** tab of your Prompt. You will be able to see the recorded inputs, messages and model generations.
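The two approaches above can be sketched as follows. This is a sketch, not a definitive implementation: it assumes the Python SDK's `prompts.call` method, the Prompt ID, path, and template values are placeholders, and `client` is an initialized `Humanloop(...)` instance.

```python
def call_existing_prompt(client, prompt_id: str, topic: str):
    """Call a Prompt that is already managed on Humanloop, by ID."""
    return client.prompts.call(
        id=prompt_id,  # e.g. the "pr_..." ID copied from the popover
        inputs={"topic": topic},  # values for the template's input slots
    )

def call_prompt_defined_in_code(client, topic: str):
    """Call a Prompt whose details are managed in code via the `prompt` argument."""
    return client.prompts.call(
        path="Comedian Bot",  # illustrative path
        prompt={
            "model": "gpt-4o",
            "template": [
                {
                    "role": "system",
                    "content": "You are a funny comedian. Write a joke about {{topic}}.",
                }
            ],
        },
        inputs={"topic": topic},
    )
```

Both calls generate a completion and record a Log against the Prompt, which then appears under its **Logs** tab.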
## Next steps * [Iterate and improve on your Prompts](../evals/comparing-prompts) in the Editor * [Capture end-user feedback](../observability/capture-user-feedback) to monitor your model performance. # Log to a Prompt > Learn how to log generations from your own LLM calls to a Prompt in Humanloop. Prompt management is a key part of the Humanloop platform. This guide will show you how to capture the [Logs](/docs/explanation/logs) of your LLM calls into Humanloop. The easiest way to log LLM generations to Humanloop is to use the `Prompt.call()` method (see the guide on [Calling a Prompt](/docs/development/guides/call-prompt)). You will only need to supply the Prompt ID and the inputs needed by the prompt template, and the endpoint will handle fetching the latest template, making the LLM call and logging the result. However, there may be scenarios where you wish to manage the LLM provider calls directly in your own code instead of relying on Humanloop. For example, you may be using an LLM provider that is not directly supported by Humanloop, such as a custom self-hosted model, or you may want to avoid adding Humanloop to the critical path of the LLM API calls. ### Prerequisites * You already have a Prompt — if not, please follow our [Prompt creation](/docs/development/guides/create-prompt) guide first. First you need to install and initialize the SDK. If you have already done this, skip to the next section. Open up your terminal and follow these steps: 1. Install the Humanloop SDK: ```python pip install humanloop ``` ```typescript npm install humanloop ``` 2. Initialize the SDK with your Humanloop API key (you can get it from the [Organization Settings page](https://app.humanloop.com/account/api-keys)).
```python from humanloop import Humanloop humanloop = Humanloop(api_key="") # Check that the authentication was successful print(humanloop.prompts.list()) ``` ```typescript import { HumanloopClient, Humanloop } from "humanloop"; const humanloop = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); // Check that the authentication was successful console.log(await humanloop.prompts.list()); ``` ## Log data to your Prompt To log LLM generations to Humanloop, you will need to make a call to the `/prompts/log` endpoint. Note that you can either specify a version of the Prompt you are logging against - in which case you will need to take care that you are supplying the correct version ID and inputs. Or you can supply the full prompt and a new version will be created if it has not been seen before. ### Get your Prompt Fetch a Prompt from Humanloop by specifying the ID. You can ignore this step if your prompts are created dynamically in code. Here's how to do this in code: ```python from humanloop import Humanloop, prompt_utils PROMPT_ID = "" hl = Humanloop(api_key="") prompt = hl.prompts.get(id=PROMPT_ID) # This will fill the prompt template with the variables template = prompt_utils.populate_template(prompt.template, {"language": "Python"}) ``` ```typescript import { HumanloopClient, Humanloop } from "humanloop"; import { ChatMessage } from "humanloop/api"; const humanloop = new HumanloopClient({ apiKey: "", }); const prompt = await humanloop.prompts.get(""); function fillTemplate( template: ChatMessage[], variables: { [key: string]: string } ): ChatMessage[] { const replaceVariable = (match: string, variable: string) => { const trimmedVariable = variable.trim(); if (trimmedVariable in variables) { return variables[trimmedVariable]; } else { throw new Error(`Error: Variable '${trimmedVariable}' is missing.`); } }; return template.map((message) => { if (typeof message.content !== "string") { return message; } const filledContent = message.content.replace( 
/\{\{\s*(.*?)\s*\}\}/g, replaceVariable ); return { ...message, content: filledContent }; }); } const template = fillTemplate(prompt.template as ChatMessage[], { language: "Python", }); ``` ### Call your Prompt This can be your own model, or any other LLM provider. Here is an example of calling OpenAI: ```python import openai client = openai.OpenAI(api_key="") messages = template + [{"role": "user", "content": "explain how async works"}] chat_completion = client.chat.completions.create( messages=messages, model=prompt.model, temperature=prompt.temperature ) ``` ```typescript import { OpenAI } from "openai"; import { ChatCompletionMessageParam } from "openai/resources"; const client = new OpenAI({ apiKey: "", }); const messages = template.concat([ { role: "user", content: "explain how async works" }, ]); const chatCompletion = await client.chat.completions.create({ messages: messages as ChatCompletionMessageParam[], model: prompt.model, temperature: prompt.temperature, }); ``` ### Log the result Finally, log the result to your project: ```python # Parse the output from the OpenAI response. output_message = chat_completion.choices[0].message # Log the inputs, outputs and config to your project. log = hl.prompts.log( id=PROMPT_ID, output_message=output_message, messages=messages, ) ``` ```typescript // Get the output from the OpenAI response. const outputMessage = chatCompletion.choices[0].message; const log = humanloop.prompts.log({ id: PROMPT_ID, outputMessage: outputMessage, messages: messages, }); ``` # Tool calling in Editor > Learn how to use tool calling with your large language models and interact with it in the Humanloop Prompt Editor. Humanloop's Prompt Editor supports Tool Calling functionality, enabling models to interact with external functions. This feature, akin to [OpenAI's function calling](https://platform.openai.com/docs/guides/function-calling), is implemented through JSON Schema tools in Humanloop.
These Tools adhere to the widely-used JSON Schema syntax, providing a standardized way to define data structures. Within the editor, you have the flexibility to create inline JSON Schema tools as part of your model configuration. This capability allows you to establish a structured framework for the model's responses, enhancing control and predictability. Throughout this guide, we'll explore the process of leveraging these tools within the editor environment. ### Prerequisites * You already have a Prompt — if not, please follow our [Prompt creation](/docs/development/guides/create-prompt) guide first. ## Create and use a tool in the Prompt Editor To create and use a tool, follow these steps: ### **Open the editor** Go to a Prompt and open the Editor. ### **Select a model that supports Tool Calling** To view the list of models that support Tool calling, see the [Models page](/docs/reference/models#models). In the editor, you'll see an option to select the model. Choose a model like `gpt-4o` which supports Tool Calling. ### **Define the Tool** To get started with tool definition, it's recommended to begin with one of our preloaded example tools. For this guide, we'll use the `get_current_weather` tool. Select this from the dropdown menu of preloaded examples. If you choose to edit or create your own tool, you'll need to use the universal [JSON Schema syntax](https://json-schema.org/). When creating a custom tool, it should correspond to a function you have defined in your own code. The JSON Schema you define here specifies the parameters and structure you want the AI model to use when interacting with your function. ### **Test it out** Now, let's test our tool by inputting a relevant query. Since we're working with a weather-related tool, try typing: `What's the weather in Boston?`. This should prompt OpenAI to respond using the parameters we've defined. Keep in mind that the model's use of the tool depends on the relevance of the user's input.
For instance, a question like '*how are you today?*' is unlikely to trigger a weather-related tool response. ### **Check assistant response for a tool call** Upon successful setup, the assistant should respond by invoking the tool, providing both the tool's name and the required data. For our `get_current_weather` tool, the response might look like this: ``` get_current_weather({ "location":"Boston, MA" }) ``` ### **Input tool response** After the tool call, the editor will automatically add a partially filled tool message for you to complete. You can paste in the exact response that the Tool would return. For prototyping purposes, you can also simulate the response yourself by providing a mock response. To input the tool response: 1. Find the tool response field in the editor. 2. Enter the response matching the expected format, such as: ```json { "temperature": 12, "condition": "drizzle", "unit": "celsius" } ``` Remember, the goal is to simulate the tool's output as if it were actually fetching real-time weather data. This allows you to test and refine your prompt and tool interaction without needing to implement the actual weather API. ### **Submit tool response** After entering the simulated tool response, click on the 'Run' button to send the Tool message to the AI model. ### **Review assistant response** The assistant should now respond using the information provided in your simulated tool response. For example, if you input that the weather in Boston was drizzling at 12°C, the assistant might say: `The current weather in Boston, MA is 12°C with drizzle.` This response demonstrates how the AI model incorporates the tool's output into its reply, providing a more contextual and data-driven answer. Example of assistant response using tool data ### **Iterate and refine** Feel free to experiment with different queries and simulated tool responses.
This iterative process helps you fine-tune your prompt and understand how the AI model interacts with the tool, ultimately leading to more effective and accurate responses in your application. ### **Save your Prompt** By saving your prompt, you're creating a new version that includes the tool configuration. Congratulations! You've successfully learned how to use tool calling in the Humanloop editor. This powerful feature allows you to simulate and test tool interactions, helping you create more dynamic and context-aware AI applications. Keep experimenting with different scenarios and tool responses to fully explore the capabilities of your AI model and create even more impressive applications! ## Next steps After you've created and tested your tool configuration, you might want to reuse it across multiple prompts. Humanloop allows you to link a tool, making it easier to share and manage tool configurations. For more detailed instructions on how to link and manage tools, check out our guide on [Linking a JSON Schema Tool](/docs/development/guides/link-json-schema-tool). # Re-use snippets in Prompts > Learn how to use the Snippet tool to manage common text snippets that you want to reuse across your different prompts. The Snippet Tool supports managing common text 'snippets' that you want to reuse across your different prompts. A Snippet tool acts as a simple key/value store, where the key is the name of the common re-usable text snippet and the value is the corresponding text. For example, you may have some common persona descriptions that you found to be effective across a range of your LLM features. Or maybe you have some specific formatting instructions that you find yourself re-using again and again in your prompts. Instead of needing to copy and paste between your editor sessions and keep track of which projects you edited, you can instead inject the text into your prompt using the Snippet tool. 
## Create and use a Snippet Tool ### Prerequisites * You already have a Prompt — if not, please follow our [Prompt creation](/docs/development/guides/create-prompt) guide first. To create and use a snippet tool, follow these steps: ### Create a new Snippet Tool ### Name the Tool Name it `assistant-personalities` and give it a description `Useful assistant personalities`. ### Add a key called "helpful-assistant" In the initial box add `helpful-assistant` and give it a value of `You are a helpful assistant. You like to tell jokes and if anyone asks your name is Sam.` ### Add another key called "grumpy-assistant" Let's add another key-value pair, so press the **Add a key/value pair** button and add a new key of `grumpy-assistant` and give it a value of `You are a grumpy assistant. You rarely try to help people and if anyone asks your name is Freddy.`. ### Commit and Deploy your Tool Press the **Commit** button, and enter a commit message for this new version. When asked whether to deploy your version, click the **Deploy** button and deploy to your *production* environment. Now that your Snippet tool is set up, you can use it to populate strings in your prompt templates across your projects. ### Navigate to the **Editor** Go to the Editor of your previously created project. ### Add `{{ assistant-personalities(key) }}` to your prompt Delete the existing prompt template and add `{{ assistant-personalities(key) }}` to your prompt. Double curly bracket syntax is used to call a tool in the editor. Inside the curly brackets you put the tool name, e.g. `{{ my-tool-name(key) }}`. ### Enter the key as an input In the input area set the value to `helpful-assistant`. The tool requires an input value to be provided for the key. When adding the tool, an inputs field will appear in the top right of the editor where you can specify your `key`.
### Press the **Run** button Start the chat with the LLM. You can see the response of the LLM, as well as the key you previously defined, in the Chat on the right. ### Change the key to `grumpy-assistant` To see the snippet that corresponds to a key, run the conversation to fetch the string and view it in the preview. ### Play with the LLM Ask the LLM, `I'm a customer and need help solving this issue. Can you help?`. You should see a grumpy response from "Freddy" now. If you have a specific key you would like to hardcode in the prompt, you can define it using the literal key value: `{{ ("key") }}`, so in this case it would be `{{ assistant-personalities("grumpy-assistant") }}`. Delete the `grumpy-assistant` field and add it into your chat template. ### **Save** your Prompt If you're happy with your grumpy assistant, commit this new version of your Prompt. The Snippet tool is particularly useful because you can define passages of text once in a Snippet tool and reuse them across multiple prompts, without needing to copy/paste them and manually keep them all in sync. Editing the values in your tool allows the changes to automatically propagate to the Prompts that use them, as long as the key stays the same. Since the values for a Snippet are saved on the Tool, not the Prompt, changing the values (or keys) defined in your Snippet tools can affect the Prompt's behaviour in a way that won't be captured by the Prompt's version. This could be exactly what you intend; however, use caution to make sure the changes are expected.
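Conceptually, a Snippet tool behaves like a key/value lookup that is resolved when the prompt template is rendered. The sketch below illustrates that behaviour in plain Python. It is an illustration only, not Humanloop's implementation; the `SNIPPETS` store and `render` helper are hypothetical:

```python
import re

# Hypothetical in-memory stand-in for a Snippet tool's key/value store.
SNIPPETS = {
    "helpful-assistant": "You are a helpful assistant. You like to tell jokes "
    "and if anyone asks your name is Sam.",
    "grumpy-assistant": "You are a grumpy assistant. You rarely try to help "
    "people and if anyone asks your name is Freddy.",
}


def render(template: str, inputs: dict) -> str:
    """Resolve {{ assistant-personalities(key) }} style calls in a template."""

    def resolve(match: re.Match) -> str:
        arg = match.group(1).strip()
        # A quoted argument is a literal key; otherwise look it up in inputs.
        key = arg.strip('"') if arg.startswith('"') else inputs[arg]
        return SNIPPETS[key]

    return re.sub(r"\{\{\s*assistant-personalities\((.*?)\)\s*\}\}", resolve, template)


# A variable key is supplied via inputs, like the `key` input in the editor:
print(render("{{ assistant-personalities(key) }}", {"key": "helpful-assistant"}))
# A literal key is hardcoded in the template itself:
print(render('{{ assistant-personalities("grumpy-assistant") }}', {}))
```

Either way, the snippet text lives in one place, so updating the store updates every template that references the key.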
They enable you to maintain a streamlined deployment workflow and keep track of different versions of Prompts. ### Prerequisites * You already have a Prompt — if not, please follow our [Prompt creation](/docs/development/guides/create-prompt) guide first. To deploy a version to an environment: ### Navigate to the **Dashboard** of your Prompt ### Click the dropdown menu of the environment ### Click the **Change deployment** button ### Select a version Choose the version you want to deploy from the list of available versions. ### Click the **Deploy** button # Create a Directory > Directories can be used to group together related files. This is useful for organizing your work. This guide will show you how to create a [Directory](/docs/explanation/directories) in the UI. A directory is a collection of files and other directories. **Prerequisite**: A Humanloop account. You can create an account now by going to the [Sign up page](https://app.humanloop.com/signup). ## Create a Directory ### Create a Directory 1. Open the Humanloop app. 2. Click '**+ New**' on the left and select **Directory**. 3. Name your new directory, for example, "Summarization App". You can call files and directories anything you want. Capital letters and spaces are all ok! Creating a new directory ### (Optional) Move a File into the Directory 1. In the File navigation sidebar, hover over the file to show the context menu. From the menu select "Move". 2. Choose the destination directory Moving a file into a directory You have now successfully created a directory and moved a file into it. This organization can help you manage your AI applications more efficiently within Humanloop. # Link a Tool to a Prompt > Learn how to create a JSON Schema tool that can be reused across multiple Prompts. It's possible to re-use tool definitions across multiple Prompts.
You achieve this by creating a `JSON Schema` Tool and linking that to as many Prompts as you need. Importantly, updates to the Tool defined here will then propagate automatically to all the Prompts you've linked it to, without having to deploy new versions of the Prompt. ### Prerequisites * You already have a Prompt — if not, please follow our [Prompt creation](/docs/development/guides/create-prompt) guide first. ## Creating and linking a JSON Schema Tool To create a reusable JSON Schema tool for your organization, follow these steps: ### Create a new Tool file Navigate to the homepage or sidebar and click the 'New File' button. ### Choose the JSON Schema Tool type From the available options, select **Json Schema** as the Tool type. ### Give the tool a name From the top navigation bar, click on the Tool name and rename it to "Weather tool". ### Define your tool's structure Paste the following JSON into the provided dialog to define your tool's structure: ```json { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "name": "Location", "description": "The city and state, e.g. San Francisco, CA" }, "unit": { "type": "string", "name": "Unit", "enum": ["celsius", "fahrenheit"] } }, "required": ["location"] } } ``` If you choose to edit or create your own tool, you'll need to use the universal [JSON Schema syntax](https://json-schema.org/). When creating a custom tool, it should correspond to a function you have defined in your own code. The JSON Schema you define here specifies the parameters and structure you want the AI model to use when interacting with your function. ### Commit this version of the Tool Press the **Commit** button to commit this version of the Tool, and set it as the default version by deploying it. ### Navigate to the **Editor** of a Prompt Switch to a model that supports tool calling, such as `gpt-4o`.
To view the list of models that support Tool calling, see the [Models page](/docs/reference/models#models). ### **Add Tool** to the Prompt definition In the dropdown, go to the **Link existing tool** option. You should see your `Weather tool`; click on it to link it to your editor. ### Test that the Prompt is working with the tool Now that your Tool is linked you can start using it. In the **Chat** section, in the **User** input, enter `"what is the weather in london?"` Press the **Run** button. You should see the **Assistant** respond with the tool response and a new **Tool** field inserted to allow you to insert an answer. In this case, put `22` into the tool response and press **Run**. The model will respond with `The current weather in London is 22 degrees`. ### Commit the Prompt You've linked a Tool to your Prompt, now let's save it. Press the **Save** button and name your Prompt `weather-model`. ### (Optional) Update the Tool Now that we've linked your `get_current_weather` tool to your Prompt, let's try updating the base tool and see how it propagates the changes down into your saved `weather-model` version. Navigate back to the Tool in the sidebar and go to the Editor. ### Update the Tool Let's update both the name and the required fields. For the name, update it to `get_current_weather_updated` and for the required fields, add `unit` as a required field. It should look like this now: ```json { "name": "get_current_weather_updated", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "name": "Location", "description": "The city and state, e.g. San Francisco, CA" }, "unit": { "type": "string", "name": "Unit", "enum": ["celsius", "fahrenheit"] } }, "required": ["location", "unit"] } } ``` ### Commit and deploy the Tool Press the **Commit** button and then follow the steps to deploy this version of the Tool. Your Tool is now updated.
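The updated schema still corresponds to a function you define in your own code, now with `unit` as a required argument. A hedged sketch of what that function might look like (the weather value is canned purely for illustration):

```python
def get_current_weather_updated(location: str, unit: str) -> dict:
    """Return canned weather data for a location.

    `unit` is now a required parameter, matching the updated JSON Schema's
    "required": ["location", "unit"]. A real implementation would call a
    weather API instead of returning fixed values.
    """
    if unit not in ("celsius", "fahrenheit"):
        raise ValueError(f"Unsupported unit: {unit}")
    temperature = 22 if unit == "celsius" else 72
    return {"location": location, "temperature": temperature, "unit": unit}


print(get_current_weather_updated("London", "celsius"))
# {'location': 'London', 'temperature': 22, 'unit': 'celsius'}
```

Because `unit` is now in the schema's `required` list, the model must supply both arguments in its tool call.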
### Try the Prompt again Navigate back to your previous project and open the editor. You should see the `weather-model` loaded as the active version. You should also see that the name of your previously linked tool in the Tools section now says `get_current_weather_updated`. In the Chat section enter `What is the weather in london?` again, and press **Run**. ### Check the response You should see the updated tool response, and how it now contains the `unit` field. Congratulations, you've successfully linked a JSON Schema tool to your Prompt. When updating your Tool, remember that the change will affect all the Prompts that link to it. Be careful when making updates so you don't inadvertently change something in a linked Prompt. # Monitor production Logs > Learn how to create and use online Evaluators to observe the performance of your Prompts. [Evaluators](../../explanation/evaluators#online-monitoring) on Humanloop enable you to continuously measure the performance of your Prompts in production. Attach online Evaluators to your Prompts, and Humanloop will automatically run them on new Logs. You can then track the performance of your Prompts over time. ## Prerequisites * You have a Prompt receiving Logs. If not, please follow our [Prompt creation](/docs/development/guides/create-prompt) guide first. * You have an [online Evaluator](../../explanation/evaluators#online-monitoring). The example "Factuality" Evaluator is an online Evaluator that comes pre-configured with your organization. ## Attach Evaluator to your Prompt Attach the online Evaluator to your Prompt. Humanloop will automatically run the Evaluator on new Logs generated by your Prompt. ### Open the Prompt's monitoring dialog Go to your Prompt's dashboard. Click **Monitoring** in the top right to open the monitoring dialog. ![Dashboard showing Monitoring dialog](file:54d7655c-4c64-4ed3-ae94-c24864d8e6e9) ### Connect your Evaluator Click **Connect Evaluators** and select the Evaluator you created.
### Ensure "Auto-run" is enabled Ensure the "Auto-run" switch is enabled for the Evaluator. This will ensure the Evaluator runs automatically on new Logs. ## View Evaluator results Humanloop will run the Evaluators on the new Logs as they are generated. ### Graphs over time Evaluator results are summarized in the **Dashboard** tab. Here, Humanloop displays the average Evaluator results over time. You can toggle the period and time resolution covered by the graphs to see how your Prompt has performed over time. ![Graphs on Dashboard showing the average Evaluator results over time.](file:95244643-37c6-4bce-9780-e836d2938436) ### Filtering Logs To investigate specific Logs, go to the **Logs** tab. Here, you can see the Evaluator results for each Log generated by your Prompt. The Evaluator you attached above will have a column in the Logs table showing the results of the Evaluator. ![The Logs table includes a column for each monitoring Evaluator.](file:1d2aa5af-610f-42e2-9b31-5043a4a4e57f) You can filter the Logs table by the Evaluator results. Click on the column header to sort the table by the Evaluator results. For Evaluators with options (i.e. those with a return type of `select` or `multi_select`), you can filter the table by the applied options. ## Next steps * Iterate on your Prompt based on the Evaluator results. You can open specific Logs in the Editor to tweak and test new Prompt versions. * [Add Logs of interest to a Dataset](../evals/create-dataset-from-logs) to use in an Evaluation. # Capture user feedback > Learn how to record user feedback on your generated Prompt Logs using the Humanloop SDK. ### Prerequisites * You already have a Prompt — if not, please follow our [Prompt creation](../../guides/prompt/create-prompt) guide first. * You have created a Human Evaluator. For this guide, we will use the "rating" example Evaluator automatically created for your organization. First you need to install and initialize the SDK. 
If you have already done this, skip to the next section. Open up your terminal and follow these steps: 1. Install the Humanloop SDK: ```python pip install humanloop ``` ```typescript npm install humanloop ``` 2. Initialize the SDK with your Humanloop API key (you can get it from the [Organization Settings page](https://app.humanloop.com/account/api-keys)). ```python from humanloop import Humanloop humanloop = Humanloop(api_key="") # Check that the authentication was successful print(humanloop.prompts.list()) ``` ```typescript import { HumanloopClient, Humanloop } from "humanloop"; const humanloop = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); // Check that the authentication was successful console.log(await humanloop.prompts.list()); ``` ## Configure feedback To collect user feedback, connect a Human Evaluator to your Prompt. The Evaluator specifies the type of feedback you want to collect. See our guide on [creating Human Evaluators](../evals/human-evaluators) for more information. You can use the example "rating" Evaluator that is automatically created for you. This Evaluator allows users to apply a label of "good" or "bad", and is automatically connected to all new Prompts. If you choose to use this Evaluator, you can skip to the "Log feedback" section. Different use-cases and user interfaces may require different kinds of feedback that need to be mapped to the appropriate end user interaction. There are broadly 3 important kinds of feedback: 1. **Explicit feedback**: these are purposeful actions to review the generations. For example, ‘thumbs up/down’ button presses. 2. **Implicit feedback**: indirect actions taken by your users may signal whether the generation was good or bad. For example, whether the user ‘copied’ the generation, ‘saved it’ or ‘dismissed it’ (which is negative feedback). 3. **Free-form feedback**: corrections and explanations provided by the end-user on the generation.
You should create Human Evaluators structured to capture the feedback you need. For example, a Human Evaluator with return type "text" can be used to capture free-form feedback, while a Human Evaluator with return type "multi\_select" can be used to capture user actions that provide implicit feedback. If you have not done so, you can follow our guide to [create a Human Evaluator](/docs/evaluation/guides/human-evaluator) to set up the appropriate feedback schema. ### Open the Prompt's monitoring dialog Go to your Prompt's dashboard. Click **Monitoring** in the top right to open the monitoring dialog. ![Prompt dashboard showing Monitoring dialog](file:62ec33bb-bf7f-4963-8b93-8f913baee937) ### Connect your Evaluator Click **Connect Evaluators** and select the Human Evaluator you created. ![Dialog connecting the "Tweet Issues" Evaluator as a Monitoring Evaluator](file:25c135c5-8456-4ba1-ab59-a923837fcd72) You should now see the selected Human Evaluator attached to the Prompt in the Monitoring dialog. ![Monitoring dialog showing the "Tweet Issues" Evaluator attached to the Prompt](file:0bc601cc-5e2d-4adc-be15-6bffc7fb72d8) ## Log feedback With the Human Evaluator attached to your Prompt, you can record feedback against the Prompt's Logs. ### Retrieve Log ID The ID of the Prompt Log can be found in the response of the `humanloop.prompts.call(...)` method. ```python log = humanloop.prompts.call( version_id="prv_qNeXZp9P6T7kdnMIBHIOV", path="persona", messages=[{"role": "user", "content": "What really happened at Roswell?"}], inputs={"person": "Trump"}, ) log_id = log.id ``` ### Log the feedback Call `humanloop.evaluators.log(...)` referencing the above Log ID as `parent_id` to record user feedback. 
```python feedback = humanloop.evaluators.log( # Pass the `log_id` from the previous step to indicate the Log to record feedback against parent_id=log_id, # Here, we're recording feedback against a "Tweet Issues" Human Evaluator, # which is of type `multi_select` and has multiple options to choose from. path="Feedback Demo/Tweet Issues", judgment=["Inappropriate", "Too many emojis"], ) ``` The "rating" and "correction" Evaluators are attached to all Prompts by default. You can record feedback using these Evaluators as well. The "rating" Evaluator can be used to record explicit feedback (e.g. from a 👍/👎 button). ```python rating_log = humanloop.evaluators.log( parent_id=log_id, # We're recording feedback using the "rating" Human Evaluator, # which has 2 options: "good" and "bad". path="rating", judgment="good", # You can also include the source of the feedback when recording it with the `user` parameter. user="user_123", ) ``` The "correction" Evaluator can be used to record user-provided corrections to the generations (e.g. If the user edits the generation before copying it). ```python correction_log = humanloop.evaluators.log( parent_id=log_id, path="correction", judgment="NOTHING happened at Roswell, folks! Fake News media pushing ALIEN conspiracy theories. SAD! " + "I know Area 51, have the best aliens. Roswell? Total hoax! Believe me. 👽🚫 #Roswell #FakeNews", ) ``` If the user removes their feedback (e.g. if the user deselects a previous 👎 feedback), you can record this by passing `judgment=None`. ```python removed_rating_log = humanloop.evaluators.log( parent_id=log_id, path="rating", judgment=None, ) ``` ## View feedback You can view the applied feedback in two main ways: through the Logs that the feedback was applied to, and through the Evaluator itself. ### Feedback applied to Logs The feedback recorded for each Log can be viewed in the **Logs** table of your Prompt. 
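The explicit, implicit and free-form feedback described above can be centralized in a single mapping from UI events to Evaluator paths and judgments, which you then pass to `humanloop.evaluators.log(...)` as shown in the examples. A sketch of this idea, where the event names and the helper itself are hypothetical:

```python
def feedback_from_event(event: str, payload=None):
    """Map a (hypothetical) UI event to an Evaluator path and judgment."""
    if event == "thumbs_up":
        return ("rating", "good")       # explicit feedback
    if event == "thumbs_down":
        return ("rating", "bad")        # explicit feedback
    if event == "rating_removed":
        return ("rating", None)         # user retracted their rating
    if event == "edited_before_copy":
        return ("correction", payload)  # free-form feedback
    raise ValueError(f"Unknown event: {event}")


path, judgment = feedback_from_event("thumbs_down")
# Then record it against the Log, e.g.:
# humanloop.evaluators.log(parent_id=log_id, path=path, judgment=judgment)
```

Keeping this mapping in one place makes it easy to extend with implicit signals (copy, save, dismiss) without scattering Evaluator paths across your UI code.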
![Logs table showing feedback applied to Logs](file:b07ce7dd-9e8c-4584-b415-cdde62ee2be1) Your internal users can also apply feedback to the Logs directly through the Humanloop app. ![Log drawer showing feedback section](file:8ea74af0-4cbd-4ce0-aa7a-a2b2947b7a9f) ### Feedback for an Evaluator You can view all feedback recorded for a specific Human Evaluator in the **Logs** tab of the Evaluator. This will display all feedback recorded for the Evaluator across all Files. ![Logs table for "Tweet Issues" Evaluator showing feedback](file:a45ea978-1e4e-44ce-8b03-8dd63784fa84) ## Next steps * [Create and customize your own Human Evaluators](../evals/human-evaluators) to capture the feedback you need. * Human Evaluators can also be used in Evaluations, allowing you to [collect judgments from your subject-matter experts](../evals/run-human-evaluation). # Logging through API > Add logging to your AI project using the Humanloop API. Our SDK offers high-level utilities for integrating Humanloop in your project. If you prefer more control, or use a language without SDK support, you can use the API directly to the same effect. This guide revisits our [logging quickstart tutorial](/docs/v5/quickstart/set-up-logging): we'll use API actions instead of the SDK decorators, showing you how Humanloop instrumentation works step-by-step. By the end, we'll have a chat agent project integrated with Humanloop logging. The example uses the Python SDK, but the verbs map directly [to our API](/docs/v5/api-reference/sdks). ## Prerequisites

Create a Humanloop Account

If you haven't already, [create an account](https://app.humanloop.com/signup) or [log in](https://app.humanloop.com/login) to Humanloop.

Add an OpenAI API Key

If you're the first person in your organization, you'll need to add an API key for a model provider. 1. Go to OpenAI and [grab an API key](https://platform.openai.com/api-keys). 2. In Humanloop [Organization Settings](https://app.humanloop.com/account/api-keys), set up OpenAI as a model provider. Using the Prompt Editor will use your OpenAI credits in the same way that the OpenAI playground does. Keep your API keys for Humanloop and the model providers private.
```bash pip install humanloop openai ``` The Humanloop SDK requires Python 3.9 or higher. Optionally, create a virtual environment to keep dependencies tidy.
## Create the chat agent We start with a simple chat agent that answers math and science questions. Create an `agent.py` file and add the following: ```python title="agent.py" maxLines=100 import json import datetime from typing import Union from humanloop import Humanloop from openai import OpenAI openai = OpenAI(api_key="YOUR_OPENAI_KEY") humanloop = Humanloop(api_key="YOUR_HUMANLOOP_KEY") def calculator(operation: str, num1: int, num2: int) -> Union[int, float, str]: """Do arithmetic operations on two numbers.""" if operation == "add": return num1 + num2 elif operation == "subtract": return num1 - num2 elif operation == "multiply": return num1 * num2 elif operation == "divide": return num1 / num2 else: return "Invalid operation" TOOL_JSON_SCHEMA = { "name": "calculator", "description": "Do arithmetic operations on two numbers.", "parameters": { "type": "object", "required": ["operation", "num1", "num2"], "properties": { "operation": {"type": "string"}, "num1": {"type": "integer"}, "num2": {"type": "integer"}, }, "additionalProperties": False, }, } def call_model(messages: list[dict]) -> str: output = openai.chat.completions.create( messages=messages, model="gpt-4o", tools=[ { "type": "function", "function": TOOL_JSON_SCHEMA, } ], temperature=0.7, ) # Check if model asked for a tool call if output.choices[0].message.tool_calls: for tool_call in output.choices[0].message.tool_calls: arguments = json.loads(tool_call.function.arguments) if tool_call.function.name == "calculator": result = calculator(**arguments) return f"[TOOL CALL] {result}" # Otherwise, return the LLM response return output.choices[0].message.content def conversation(): messages = [ { "role": "system", "content": "You are a groovy 80s surfer dude " "helping with math and science.", }, ] while True: user_input = input("You: ") if user_input == "exit": break messages.append({"role": "user", "content": user_input}) response = call_model(messages=messages) messages.append({"role": "assistant", "content": response}) print(f"Agent: 
{response}") if __name__ == "__main__": conversation() ``` Create an `agent.ts` file and add the following: ```typescript title="agent.ts" maxLines=100 import { OpenAI } from "openai"; import * as readline from 'readline/promises'; import { HumanloopClient } from "humanloop"; import { ChatCompletionMessageParam } from "openai/resources"; const openai = new OpenAI({ apiKey: "YOUR_OPENAI_KEY" }); const humanloop = new HumanloopClient({apiKey: "YOUR_HUMANLOOP_KEY"}); function calculator(operation: string, num1: number, num2: number): string | number { /** Do arithmetic operations on two numbers. */ switch (operation) { case "add": return num1 + num2; case "subtract": return num1 - num2; case "multiply": return num1 * num2; case "divide": return num1 / num2; default: return "Invalid operation"; } } const TOOL_JSON_SCHEMA = { name: "calculator", description: "Do arithmetic operations on two numbers.", parameters: { type: "object", required: ["operation", "num1", "num2"], properties: { operation: { type: "string" }, num1: { type: "integer" }, num2: { type: "integer" }, }, additionalProperties: false, }, }; async function callModel(messages: ChatCompletionMessageParam[]): Promise { const output = await openai.chat.completions.create({ messages: messages, model: "gpt-4o", tools: [ { type: "function", function: TOOL_JSON_SCHEMA, }, ], temperature: 0.7, }); // Check if model asked for a tool call const toolCalls = output.choices[0]?.message?.tool_calls; if (toolCalls) { for (const toolCall of toolCalls) { const toolArguments = JSON.parse(toolCall.function.arguments); if (toolCall.function.name === "calculator") { const toolStartTime = new Date(); const result = calculator(toolArguments.operation, toolArguments.num1, toolArguments.num2); return `[TOOL CALL] ${result}`; } } } // Otherwise, return the LLM response return output.choices[0]?.message?.content || ""; } async function conversation() { const messages: ChatCompletionMessageParam[] = [ { role: "system", content: "You 
are a groovy 80s surfer dude helping with math and science.", }, ]; const rl = readline.createInterface({ input: process.stdin, output: process.stdout, }); while (true) { let userInput = await rl.question("You: ") if (userInput === "exit") { rl.close(); break; } messages.push({ role: "user", content: userInput }); const response = await callModel(messages); messages.push({ role: "assistant", content: response }); console.log(`Agent: ${response}`); } } conversation(); ``` ## Log to Humanloop The agent works and is capable of function calling. However, we rely on inputs and outputs to reason about the behavior. Humanloop logging allows you to observe the steps taken by the agent, which we will demonstrate below. ### Initialize the trace Modify `call_model` to accept a `trace_id` argument. It will be used to associate [Logs](/docs/v5/explanations/logs) to the logging trace. The trace of the conversation will be associated with a [Flow](/docs/v5/explanations/flows). Initialize the trace at the start of the conversation. ```python title="agent.py" highlight={1,5-10,23} maxLines=30 def call_model(trace_id: str, messages: list[dict]) -> str: ... def conversation(): trace_id = humanloop.flows.log( path="Logging Quickstart/QA Agent", flow={ "attributes": {}, }, ).id messages = [ { "role": "system", "content": "You are a groovy 80s surfer dude " "helping with math and science.", }, ] while True: user_input = input("You: ") if user_input == "exit": break messages.append({"role": "user", "content": user_input}) response = call_model(trace_id=trace_id, messages=messages) messages.append({"role": "assistant", "content": response}) print(f"Agent: {response}") ``` ```typescript title="agent.ts" highlight={1,6-13,35} maxLines=30 async function callModel(traceId: string, messages: ChatCompletionMessageParam[]): Promise<string> { ... 
} async function conversation() { const flowLog = await humanloop.flows.log({ path: "Logging Quickstart/QA Agent", flow: { attributes: {}, }, }); const traceId = flowLog.id; const messages: ChatCompletionMessageParam[] = [ { role: "system", content: "You are a groovy 80s surfer dude helping with math and science.", }, ]; const rl = readline.createInterface({ input: process.stdin, output: process.stdout, }); while (true) { let userInput = await rl.question("You: ") if (userInput === "exit") { rl.close(); break; } messages.push({ role: "user", content: userInput }); const response = await callModel(traceId, messages); messages.push({ role: "assistant", content: response }); console.log(`Agent: ${response}`); } } ``` ### Add logging We add log statements that will create the Logs contained in the trace. ```python title="agent.py" highlight={2,12-23,30,32-44} def call_model(trace_id: str, messages: list[dict]) -> str: prompt_start_time = datetime.datetime.now() output = openai.chat.completions.create( messages=messages, model="gpt-4o", tools=[{ "type": "function", "function": TOOL_JSON_SCHEMA, }], temperature=0.7, ) prompt_log_id = humanloop.prompts.log( path="Logging Quickstart/QA Prompt", prompt={ "model": "gpt-4o", "tools": [TOOL_JSON_SCHEMA], "temperature": 0.7, }, output=output.choices[0].message.content, trace_parent_id=trace_id, start_time=prompt_start_time, end_time=datetime.datetime.now(), ).id # Check if model asked for a tool call if output.choices[0].message.tool_calls: for tool_call in output.choices[0].message.tool_calls: arguments = json.loads(tool_call.function.arguments) if tool_call.function.name == "calculator": tool_start_time = datetime.datetime.now() result = calculator(**arguments) humanloop.tools.log( path="Logging Quickstart/Calculator", tool={ "name": "calculator", "description": "Do arithmetic operations on two numbers.", "function": TOOL_JSON_SCHEMA, }, inputs=arguments, output=result, trace_parent_id=prompt_log_id, 
start_time=tool_start_time, end_time=datetime.datetime.now(), ) return f"[TOOL CALL] {result}" # Otherwise, return the LLM response return output.choices[0].message.content ``` ```typescript title="agent.ts" highlight={2,15-28,39-53} async function callModel(traceId: string, messages: ChatCompletionMessageParam[]): Promise<string> { const promptStartTime = new Date(); const output = await openai.chat.completions.create({ messages: messages, model: "gpt-4o", tools: [ { type: "function", function: TOOL_JSON_SCHEMA, }, ], temperature: 0.7, }); const promptLog = await humanloop.prompts.log({ path: "Logging Quickstart/QA Prompt", prompt: { model: "gpt-4o", tools: [TOOL_JSON_SCHEMA], temperature: 0.7, }, output: output.choices[0]?.message?.content || "", traceParentId: traceId, startTime: promptStartTime, endTime: new Date(), }); const promptLogId = promptLog.id; // Check if model asked for a tool call const toolCalls = output.choices[0]?.message?.tool_calls; if (toolCalls) { for (const toolCall of toolCalls) { const toolArguments = JSON.parse(toolCall.function.arguments); if (toolCall.function.name === "calculator") { const toolStartTime = new Date(); const result = calculator(toolArguments.operation, toolArguments.num1, toolArguments.num2); await humanloop.tools.log({ path: "Logging Quickstart/Calculator", tool: { name: "calculator", description: "Do arithmetic operations on two numbers.", function: TOOL_JSON_SCHEMA, }, inputs: toolArguments, output: JSON.stringify(result), traceParentId: promptLogId, startTime: toolStartTime, endTime: new Date(), }); return `[TOOL CALL] ${result}`; } } } // Otherwise, return the LLM response return output.choices[0]?.message?.content || ""; } async function conversation() { const traceLog = await humanloop.flows.log({ path: "Logging Quickstart/QA Agent", flow: { attributes: {}, }, }); const traceId = traceLog.id; const messages: ChatCompletionMessageParam[] = [ { role: "system", content: "You are a groovy 80s surfer 
dude helping with math and science.", }, ]; const rl = readline.createInterface({ input: process.stdin, output: process.stdout, }); while (true) { let userInput = await rl.question("You: ") if (userInput === "exit") { rl.close(); break; } messages.push({ role: "user", content: userInput }); const response = await callModel(traceId, messages); messages.push({ role: "assistant", content: response }); console.log(`Agent: ${response}`); } await humanloop.flows.updateLog(traceId, { "output": "", "traceStatus": "complete", }); } ``` ### Complete the trace When the conversation is finished, we mark the trace as complete, signalling no more logs will be added. ```python title="agent.py" highlight={24-28} maxLines=30 def conversation(): trace_id = humanloop.flows.log( path="Logging Quickstart/QA Agent", flow={ "attributes": {}, }, ).id messages = [ { "role": "system", "content": "You are a a groovy 80s surfer dude " "helping with math and science.", }, ] while True: user_input = input("You: ") if user_input == "exit": break messages.append({"role": "user", "content": user_input}) response = call_model(trace_id=trace_id, messages=messages) messages.append({"role": "assistant", "content": response}) print(f"Agent: {response}") humanloop.flows.update_log( log_id=trace_id, output="", status="complete", ) ``` ```typescript title="agent.ts" highlight={36-39} async function conversation() { const traceLog = await humanloop.flows.log({ path: "Logging Quickstart/QA Agent", flow: { attributes: {}, }, }); const traceId = traceLog.id; const messages: ChatCompletionMessageParam[] = [ { role: "system", content: "You are a groovy 80s surfer dude helping with math and science.", }, ]; const rl = readline.createInterface({ input: process.stdin, output: process.stdout, }); while (true) { let userInput = await rl.question("You: ") if (userInput === "exit") { rl.close(); break; } messages.push({ role: "user", content: userInput }); const response = await callModel(traceId, messages); 
messages.push({ role: "assistant", content: response }); console.log(`Agent: ${response}`); } await humanloop.flows.updateLog(traceId, { "output": "", "traceStatus": "complete", }); } ``` ## Run the code Have a conversation with the agent. When you're done, type `exit` to close the program. ```bash curl title="Terminal" python agent.py You: Hi dude! Agent: Tubular! I am here to help with math and science, what is groovin? You: How does flying work? Agent: ... You: What is 5678 * 456? Agent: [TOOL CALL] 2587968 You: exit ``` ```bash curl title="Terminal" npx tsc && node dist/index.js You: Hi dude! Agent: Tubular! I am here to help with math and science, what is groovin? You: How does flying work? Agent: ... You: What is 5678 * 456? Agent: [TOOL CALL] 2587968 You: exit ``` ## Check your workspace Navigate to [your workspace](https://app.humanloop.com) to see the logged conversation. Inside the **Logging Quickstart** directory on the left, click the **QA Agent** [Flow](/docs/v5/explanation/flows). Select the **Logs** tab from the top of the page and click the Log inside the table. You will see the conversation's trace, containing Logs corresponding to the [Tool](/docs/v5/explanation/tools) and the [Prompt](/docs/v5/explanation/prompts). ## Change the agent and rerun Modify the `call_model` function to use a different model and temperature. ```python title="agent.py" highlight={5,10,15,17} maxLines=30 def call_model(trace_id: str, messages: list[str]) -> str: prompt_start_time = datetime.datetime.now() output = openai.chat.completions.create( messages=messages, model="gpt-4o-mini", tools=[{ "type": "function", **TOOL_JSON_SCHEMA, }], temperature=0.2, ) prompt_log_id = humanloop.prompts.log( path="Logging Quickstart/QA Prompt", prompt={ "model": "gpt-4o-mini", "tools": [TOOL_JSON_SCHEMA], "temperature": 0.2, } output=output.choices[0].message.content, trace_parent_id=trace_id, start_time=prompt_start_time, end_time=datetime.datetime.now(), ).id ... 
``` ```typescript title="agent.ts" highlight={5,12,18,20} maxLines=30 async function callModel(traceId: string, messages: ChatCompletionMessageParam[]): Promise<string> { const promptStartTime = new Date(); const output = await openai.chat.completions.create({ messages: messages, model: "gpt-4o-mini", tools: [ { type: "function", function: TOOL_JSON_SCHEMA, }, ], temperature: 0.2, }); const promptLog = await humanloop.prompts.log({ path: "Logging Quickstart/QA Prompt", prompt: { model: "gpt-4o-mini", tools: [TOOL_JSON_SCHEMA], temperature: 0.2, }, output: output.choices[0]?.message?.content || "", traceParentId: traceId, startTime: promptStartTime, endTime: new Date(), }); ... ``` Run the agent again, then head back to your workspace. Click the **QA Prompt** [Prompt](/docs/v5/explanation/prompts), select the **Dashboard** tab from the top of the page and look at **Uncommitted** Versions. By changing the hyperparameters of the OpenAI call, you have tagged a new version of the Prompt. ## Next steps Logging is the first step to observing your AI product. Read these guides to learn more about evals on Humanloop: * Add [monitoring Evaluators](/docs/v5/guides/observability/monitoring) to evaluate Logs as they're made against a File. * See evals in action in our [tutorial on evaluating an agent](/docs/v5/tutorials/agent-evaluation). # Invite collaborators > Inviting people to your organization allows them to interact with your Humanloop projects. Inviting people to your organization allows them to interact with your Humanloop workspace: * Teammates will be able to create new Prompts, Tools, Datasets and Evaluators. * Developers will be able to get an API key to call, log and evaluate the Prompts via the SDK. * Annotators may give feedback on Logs and provide human reviews for Evaluations. 
## Invite Users To invite users to your organization: ### Go to your organization's **[Members page](https://app.humanloop.com/account/members)** ### Enter the **email address** Enter the email of the person you wish to invite into the **Invite members** box. ### Click **Send invite**. An email will be sent to the entered email address, inviting them to the organization. If the entered email address does not belong to an existing Humanloop user, they will be prompted to create an account before being added to the organization. 🎉 Once they create an account, they can view your projects at the same URL to begin collaborating. # Manage API keys > How to create, share and manage your Humanloop API keys. API keys allow you to access the Humanloop API programmatically from your app. ## Create a new API key ### Go to your Organization's **[API Keys page](https://app.humanloop.com/account/api-keys)**. ### Click the **Create new API key** button. ### Enter a name for your API key. Choose a name that helps you identify the key's purpose. You can't change the name of an API key after it's created. ### Click **Create**. ### Copy the generated API key Save it in a secure location. You will not be shown the full API key again. ## Revoke an API key You can revoke an existing API key if it is no longer needed. When an API key is revoked, future API requests that use this key will be rejected. Any systems that are dependent on this key will no longer work. ### Go to API keys page Go to your Organization's **[API Keys page](https://app.humanloop.com/account/api-keys)**. ### Identify the API key Find the key you wish to revoke by its name or by the displayed trailing characters. ### Click 'Revoke' Click the three dots button on the right of its row to open its menu. Click **Revoke**. A confirmation dialog will be displayed. Click **Remove**. # Manage Environments > Environments are a tagging system for deploying Prompts. 
They enable you to maintain a streamlined deployment workflow and keep track of different versions of Prompts. [Environments](/docs/explanation/environments) are a tagging system for deploying Prompts. They enable you to maintain a streamlined deployment workflow and keep track of different versions of Prompts. The default environment is your production environment. Every time you fetch a Prompt, Tool, Dataset etc. without specifying an alternative environment or specific version, the version that is tagged with the default environment is returned. ## Create an environment ### Go to your [Environments](https://app.humanloop.com/account/environments) tab in your Organization's settings. ### Click the '**+ Environment**' button to open the new environment dialog ### Assign a custom name to the environment We recommend something short. For example, you could use `staging`, `prod`, `qa`, `dev`, `testing`, etc. This name will be used to identify the environment in the UI and in the API. ### Click **Create**. ## Rename an environment You can rename an environment to reorganize your development workflows. Since each new file is automatically deployed to the default environment, which is production unless altered, it may make more sense to create a separate production environment and rename your current environments. Renaming an environment takes immediate effect, so ensure that this change is planned and does not disrupt your production workflows. ### Go to environments page Go to your Organization's **[environments page](https://app.humanloop.com/account/environments)**. ### Identify the environment Find the environment you wish to rename. ### Click 'Rename' Click the three dots button on the right of its row to open its menu. Click **Rename**. A confirmation dialog will be displayed. Update the name and click **Rename**. # Deployment Options > Humanloop is SOC-2 compliant, offers deployment within your VPC and never trains on your data. 
Learn more about our hosting options. Humanloop offers a broad range of hosting environments to meet the security and compliance needs of enterprise customers. Our menu of hosting options, from basic to more advanced, is as follows: 1. **Default**: Our multi-tenanted cloud offering is SOC2 compliant and hosted in the AWS us-east region. 2. **Region specific**: Same as 1, but for cases with additional data-residency requirements - e.g. data can never leave the EU for GDPR reasons. We offer UK, EU and US guarantees for data storage regions. 3. **Dedicated**: We provision your own dedicated instance of Humanloop in your region of choice, with the following added benefits: * Full [HIPAA compliant](https://aws.amazon.com/compliance/hipaa-compliance/) AWS setup. * Ability to manage your own encryption keys in KMS. * Ability to subscribe to application logging and CloudTrail infrastructure monitoring. 4. **Self-hosted**: You deploy an instance of Humanloop within your own VPC on AWS. We provide an infrastructure-as-code setup with [Pulumi](https://www.pulumi.com/) to easily spin up a Humanloop instance in your VPC. # Supported Models > Humanloop supports all the major large language model providers, including OpenAI, Anthropic, Google, AWS Bedrock, Azure, and more. Additionally, you can use your own custom models with the API and still benefit from the Humanloop platform. Humanloop supports all the major large language model providers, including OpenAI, Anthropic, Google, AWS Bedrock, Azure, and more. Additionally, you can use your own custom models with the API and still benefit from the Humanloop platform. ## Providers Here is a summary of which providers are supported, and what information is available for each provider automatically. 
| Provider | Models | Cost information | Token information | | ----------- | ---------------- | ---------------- | ----------------- | | OpenAI | ✅ | ✅ | ✅ | | Anthropic | ✅ | ✅ | ✅ | | Google | ✅ | ✅ | ✅ | | Azure | ✅ | ✅ | ✅ | | Cohere | ✅ | ✅ | ✅ | | Llama | ✅ | | | | Groq | ✅ | | | | AWS Bedrock | Anthropic, Llama | ✅ | ✅ | Adding more providers is driven by customer demand. If you have a specific provider or model you would like to see supported, please reach out to us at [support@humanloop.com](mailto:support@humanloop.com). ## Models The following models are integrated with Humanloop. This means that they can be used in the Prompt Editor and are callable through the Humanloop API. If you have a specific model you would like to see supported, please reach out to us at [support@humanloop.com](mailto:support@humanloop.com). Remember, you can always use any model you want, including your own self-hosted models, if you orchestrate the API calls yourself and log the data to Humanloop. 
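The paragraph above can be sketched in code: if you call a self-hosted model yourself, you can still create a Prompt Log with `humanloop.prompts.log`, reusing the same fields shown in the logging quickstart (`path`, `prompt`, `output`, `start_time`, `end_time`). Everything named below — the `build_prompt_log` helper, the `call_my_model` function, and the `Custom/My Model` path — is a hypothetical illustration, not part of the Humanloop API.

```python
import datetime


def build_prompt_log(path: str, model: str, output: str, start_time, end_time, **hyperparams) -> dict:
    """Assemble keyword arguments for humanloop.prompts.log for a self-orchestrated model call."""
    return {
        "path": path,
        "prompt": {"model": model, **hyperparams},
        "output": output,
        "start_time": start_time,
        "end_time": end_time,
    }


# Sketch: call your own model, then log the result to Humanloop.
start = datetime.datetime.now()
output = "Hello from my custom model!"  # e.g. output = call_my_model(messages)
log_kwargs = build_prompt_log(
    path="Custom/My Model",
    model="my-custom-model",
    output=output,
    start_time=start,
    end_time=datetime.datetime.now(),
    temperature=0.7,
)
# humanloop.prompts.log(**log_kwargs) would then create the Log on Humanloop.
```

Because the log carries the model name and hyperparameters explicitly, Humanloop can version and display your custom model's outputs alongside the natively integrated providers.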
| Provider | Model | Max Prompt Tokens | Max Output Tokens | Cost per Prompt Token | Cost per Output Token | Tool Support | Image Support | | ------------- | --------------------------- | ----------------- | ----------------- | --------------------- | --------------------- | ------------ | ------------- | | openai | o1-preview | 128000 | 32768 | \$0.000015 | \$0.00006 | ❌ | ❌ | | openai | o1-mini | 128000 | 65536 | \$0.000003 | \$0.000012 | ❌ | ❌ | | openai | gpt-4o-64k-output-alpha | 128000 | 64000 | \$0.000006 | \$0.000018 | ✅ | ✅ | | openai | gpt-4o | 128000 | 4096 | \$0.000005 | \$0.000015 | ✅ | ✅ | | openai | gpt-4o-mini | 128000 | 4096 | \$0.00000015 | \$0.0000006 | ✅ | ✅ | | openai | gpt-4 | 8192 | 4096 | \$0.00003 | \$0.00006 | ✅ | ❌ | | openai | gpt-4-turbo | 128000 | 4096 | \$0.00001 | \$0.00003 | ✅ | ✅ | | openai | gpt-4-turbo-2024-04-09 | 128000 | 4096 | \$0.00001 | \$0.00003 | ✅ | ❌ | | openai | gpt-4-32k | 32768 | 4096 | \$0.00003 | \$0.00003 | ✅ | ❌ | | openai | gpt-4-1106-preview | 128000 | 4096 | \$0.00001 | \$0.00003 | ✅ | ❌ | | openai | gpt-4-0125-preview | 128000 | 4096 | \$0.00001 | \$0.00003 | ✅ | ❌ | | openai | gpt-4-vision | 128000 | 4096 | \$0.00001 | \$0.00003 | ✅ | ✅ | | openai | gpt-4-1106-vision-preview | 16385 | 4096 | \$0.0000015 | \$0.000002 | ✅ | ❌ | | openai | gpt-3.5-turbo | 16385 | 4096 | \$0.0000015 | \$0.000002 | ✅ | ❌ | | openai | gpt-3.5-turbo-instruct | 8192 | 4097 | \$0.0000015 | \$0.000002 | ✅ | ❌ | | openai | babbage-002 | 16384 | 16384 | \$0.0000004 | \$0.0000004 | ✅ | ❌ | | openai | davinci-002 | 16384 | 16384 | \$0.000002 | \$0.000002 | ✅ | ❌ | | openai | ft:gpt-3.5-turbo | 4097 | 4096 | \$0.000003 | \$0.000006 | ✅ | ❌ | | openai | ft:davinci-002 | 16384 | 16384 | \$0.000002 | \$0.000002 | ✅ | ❌ | | openai | text-moderation | 32768 | 32768 | \$0.000003 | \$0.000004 | ✅ | ❌ | | anthropic | claude-3-5-sonnet-20241022 | 200000 | 8192 | \$0.000003 | \$0.000015 | ✅ | ✅ | | anthropic | claude-3-5-sonnet-20240620 | 200000 | 
4096 | \$0.000003 | \$0.000015 | ✅ | ✅ | | anthropic | claude-3-5-haiku-20241022 | 200000 | 8192 | \$0.000003 | \$0.000015 | ✅ | ✅ | | anthropic | claude-3-opus-20240229 | 200000 | 4096 | \$0.000015 | \$0.000075 | ✅ | ❌ | | anthropic | claude-3-sonnet-20240229 | 200000 | 4096 | \$0.000003 | \$0.000015 | ✅ | ❌ | | anthropic | claude-3-haiku-20240307 | 200000 | 4096 | \$0.00000025 | \$0.00000125 | ✅ | ❌ | | anthropic | claude-2.1 | 100000 | 4096 | \$0.00000025 | \$0.000024 | ❌ | ❌ | | anthropic | claude-2 | 100000 | 4096 | \$0.000008 | \$0.000024 | ❌ | ❌ | | google | gemini-pro-vision | 16384 | 2048 | \$0.00000025 | \$0.0000005 | ❌ | ✅ | | google | gemini-1.0-pro-vision | 16384 | 2048 | \$0.00000025 | \$0.0000005 | ❌ | ✅ | | google | gemini-pro | 32760 | 8192 | \$0.00000025 | \$0.0000005 | ❌ | ❌ | | google | gemini-1.0-pro | 32760 | 8192 | \$0.00000025 | \$0.0000005 | ❌ | ❌ | | google | gemini-1.5-pro-latest | 1000000 | 8192 | \$0.00000025 | \$0.0000005 | ❌ | ❌ | | google | gemini-1.5-pro | 1000000 | 8192 | \$0.00000025 | \$0.0000005 | ❌ | ❌ | | google | gemini-1.5-flash | 1000000 | 8192 | \$0.000000075 | \$0.0000003 | ✅ | ✅ | | google | gemini-1.5-flash-8b | 1000000 | 8192 | \$0.0000000375 | \$0.00000015 | ✅ | ✅ | | openai\_azure | o1-preview | 128000 | 32768 | \$0.000015 | \$0.00006 | ❌ | ❌ | | openai\_azure | o1-mini | 128000 | 65536 | \$0.000003 | \$0.000012 | ❌ | ❌ | | openai\_azure | gpt-4o | 128000 | 4096 | \$0.000005 | \$0.000015 | ✅ | ✅ | | openai\_azure | gpt-4o-2024-05-13 | 128000 | 4096 | \$0.000005 | \$0.000015 | ✅ | ✅ | | openai\_azure | gpt-4-turbo-2024-04-09 | 128000 | 4096 | \$0.00003 | \$0.00006 | ✅ | ✅ | | openai\_azure | gpt-4 | 8192 | 4096 | \$0.00003 | \$0.00006 | ✅ | ❌ | | openai\_azure | gpt-4-0314 | 8192 | 4096 | \$0.00003 | \$0.00006 | ✅ | ❌ | | openai\_azure | gpt-4-32k | 32768 | 4096 | \$0.00006 | \$0.00012 | ✅ | ❌ | | openai\_azure | gpt-4-0125 | 128000 | 4096 | \$0.00001 | \$0.00003 | ✅ | ❌ | | openai\_azure | gpt-4-1106 | 128000 | 4096 
| \$0.00001 | \$0.00003 | ✅ | ❌ | | openai\_azure | gpt-4-0613 | 8192 | 4096 | \$0.00003 | \$0.00006 | ✅ | ❌ | | openai\_azure | gpt-4-turbo | 128000 | 4096 | \$0.00001 | \$0.00003 | ✅ | ❌ | | openai\_azure | gpt-4-turbo-vision | 128000 | 4096 | \$0.000003 | \$0.000004 | ✅ | ✅ | | openai\_azure | gpt-4-vision | 128000 | 4096 | \$0.000003 | \$0.000004 | ✅ | ✅ | | openai\_azure | gpt-35-turbo-1106 | 16384 | 4096 | \$0.0000015 | \$0.000002 | ✅ | ❌ | | openai\_azure | gpt-35-turbo-0125 | 16384 | 4096 | \$0.0000005 | \$0.0000015 | ✅ | ❌ | | openai\_azure | gpt-35-turbo-16k | 16384 | 4096 | \$0.000003 | \$0.000004 | ✅ | ❌ | | openai\_azure | gpt-35-turbo | 4097 | 4096 | \$0.0000015 | \$0.000002 | ✅ | ❌ | | openai\_azure | gpt-3.5-turbo-instruct | 4097 | 4096 | \$0.0000015 | \$0.000002 | ✅ | ❌ | | openai\_azure | gpt-35-turbo-instruct | 4097 | 4097 | \$0.0000015 | \$0.000002 | ✅ | ❌ | | cohere | command-r | 128000 | 4000 | \$0.0000005 | \$0.0000015 | ❌ | ❌ | | cohere | command-light | 4096 | 4096 | \$0.000015 | \$0.000015 | ❌ | ❌ | | cohere | command-r-plus | 128000 | 4000 | \$0.000003 | \$0.000015 | ❌ | ❌ | | cohere | command-nightly | 4096 | 4096 | \$0.000015 | \$0.000015 | ❌ | ❌ | | cohere | command | 4096 | 4096 | \$0.000015 | \$0.000015 | ❌ | ❌ | | cohere | command-medium-beta | 4096 | 4096 | \$0.000015 | \$0.000015 | ❌ | ❌ | | cohere | command-xlarge-beta | 4096 | 4096 | \$0.000015 | \$0.000015 | ❌ | ❌ | | groq | mixtral-8x7b-32768 | 32768 | 32768 | \$0.0 | \$0.0 | ❌ | ❌ | | groq | llama-3.2-1b-preview | 131072 | 8192 | \$0.0 | \$0.0 | ✅ | ❌ | | groq | llama-3.2-3b-preview | 131072 | 8192 | \$0.0 | \$0.0 | ✅ | ❌ | | groq | llama3-1-70b-versatile | 131072 | 8192 | \$0.0 | \$0.0 | ✅ | ❌ | | groq | llama3-1-8b-instant | 131072 | 8192 | \$0.0 | \$0.0 | ✅ | ❌ | | groq | llama3-8b-8192 | 8192 | 8192 | \$0.0 | \$0.0 | ❌ | ❌ | | groq | llama3-70b-8192 | 8192 | 8192 | \$0.0 | \$0.0 | ❌ | ❌ | | groq | gemma2-9b-it | 8192 | 8192 | \$0.0 | \$0.0 | ❌ | ❌ | | groq | gemma-7b-it | 
8192 | 8192 | \$0.0 | \$0.0 | ❌ | ❌ | | replicate | llama-3-70b-instruct | 8192 | 8192 | \$0.00000065 | \$0.00000275 | ❌ | ❌ | | replicate | llama-3-70b | 8192 | 8192 | \$0.00000065 | \$0.00000275 | ❌ | ❌ | | replicate | llama-3-8b-instruct | 8192 | 8192 | \$0.00000005 | \$0.00000025 | ❌ | ❌ | | replicate | llama-3-8b | 8192 | 8192 | \$0.00000005 | \$0.00000025 | ❌ | ❌ | | replicate | llama-2-70b | 4096 | 4096 | \$0.00003 | \$0.00006 | ❌ | ❌ | | replicate | llama70b-v2 | 4096 | 4096 | N/A | N/A | ❌ | ❌ | | replicate | mixtral-8x7b | 4096 | 4096 | N/A | N/A | ❌ | ❌ | | google | gemini-1.5-flash | 1048576 | 8192 | \$0.000000075 | \$0.0000003 | ✅ | ✅ | | google | gemini-1.5-pro | 2097152 | 8192 | \$0.0000035 | \$0.0000105 | ✅ | ❌ | | bedrock\* | anthropic.claude-3.5-sonnet | 200000 | 4096 | \$0.000003 | \$0.000015 | ✅ | ✅ | | bedrock\* | anthropic.claude-3-5-haiku | 200000 | 4096 | \$0.000001 | \$0.000005 | ✅ | ✅ | | bedrock\* | anthropic.claude-3-sonnet | 200000 | 4096 | \$0.000003 | \$0.000015 | ✅ | ✅ | | bedrock\* | anthropic.claude-3-haiku | 200000 | 4096 | \$0.00000025 | \$0.00000125 | ✅ | ✅ | | bedrock\* | anthropic.claude-3-opus | 200000 | 4096 | \$0.000015 | \$0.000075 | ✅ | ✅ | | bedrock\* | meta.llama3-1-405b-instruct | 131072 | 2048 | \$0.00000532 | \$0.000016 | ✅ | ❌ | | bedrock\* | meta.llama3-1-70b-instruct | 131072 | 2048 | \$0.00000022 | \$0.00000022 | ✅ | ❌ | | bedrock\* | meta.llama3-1-8b-instruct | 131072 | 2048 | \$0.00000099 | \$0.00000099 | ✅ | ❌ | \* AWS Bedrock prices differ based on region. The prices listed are for us-west-2 # Template Library > Explore Humanloop’s template library. Find example evaluators and prompts for popular use cases like Agents and RAG, all ready for customization. Template Library To help you get started, we provide a library of example Evaluators, Prompts, Tools and Datasets. These examples have been designed for common AI use-cases like RAG, customer service, and agents. 
Each example can be duplicated to your workspace and customized for your use cases. ## Browse the Library The Template Library includes various examples built for common AI application needs, including: #### Evaluators Pre-built evaluators for measuring semantic similarity, cost, latency, and other key metrics. Quickly assess your AI's performance with ready-made evaluators that are suited for RAG. Measure response accuracy, prompt quality, and user satisfaction with tools designed for reliable and consistent evaluation. #### Prompts Example prompts for common use cases including classification, question-answering, content generation, conversational agents, and RAG (Retrieval-Augmented Generation). #### Tools Ready-to-use tools for tasks like retrieval, data extraction, and calling external functions with JSON Schema function calling. #### Datasets Example datasets for common use cases including classification, question-answering, and content generation. #### Example Agents and Flows Example agent templates designed for common use cases like customer support, guided workflows, and RAG applications. These templates demonstrate best practices for building AI agents that can classify queries, answer questions, generate content, and provide context-aware responses by retrieving and incorporating relevant information. ## How to use templates 1. Navigate to the 'Library' page in the sidebar 2. Filter by type, category, or tags 3. Select a template you'd like to use 4. Click "Duplicate" to create a copy in your workspace that you can customize Once you've added a template to your workspace, you can modify it to match your specific requirements. Learn more about customizing your files in our guides for [Evaluators](/docs/evaluation/guides/code-based-evaluator), [Prompts](/docs/development/prompt-engineering), and [Tools](/docs/development/tools). # Vercel AI SDK > Learn about the ways you can use Humanloop with the Vercel AI SDK. 
## Observability integration The Vercel AI SDK supports [tracing via OpenTelemetry](https://sdk.vercel.ai/docs/ai-sdk-core/telemetry). You can export these traces to Humanloop by enabling telemetry and configuring the OpenTelemetry Exporter. The Vercel AI SDK tracing feature is experimental and subject to change. You must enable it with the `experimental_telemetry` parameter on each AI SDK function call that you want to trace. Learn how to add tracing to your AI SDK application below. ### Metadata parameters Humanloop's AI SDK OpenTelemetry Receiver will automatically extract the following metadata parameters from the `experimental_telemetry` metadata object: * `humanloop.directoryPath`: **\[Required]** The path to the directory on Humanloop. Generation spans will create Logs for this Directory on Humanloop. * `humanloop.traceId`: **\[Optional]** The ID of a Flow Log on Humanloop. Set this to group multiple calls to the AI SDK into a single Flow Log on Humanloop. ### Prerequisites The following steps assume you're already using the AI SDK in your application. If not, follow [Vercel's quickstarts](https://sdk.vercel.ai/docs/getting-started) to get started. Versions of Next.js below 15 must set `experimental.instrumentationHook` in `next.config.js`. Learn more [here](https://nextjs.org/docs/app/building-your-application/optimizing/open-telemetry). You can find an example Next.js application that uses the AI SDK to stream chat responses [here](https://sdk.vercel.ai/cookbook/next/stream-text-with-chat-prompt). ### Set up OpenTelemetry Install dependencies. 
```bash title="npm" wordWrap npm install @vercel/otel @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation ``` ```bash title="pnpm" wordWrap pnpm add @vercel/otel @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation ``` ```bash title="yarn" wordWrap yarn add @vercel/otel @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation ``` Create a file called `instrumentation.ts` in your root or /src directory and add the following: ```typescript title="instrumentation.ts" import { registerOTel } from '@vercel/otel'; export function register() { registerOTel({ serviceName: 'humanloop-vercel-ai-sdk' }); } ``` ### Configure OpenTelemetry Configure the [OpenTelemetry exporter](https://opentelemetry.io/docs/specs/otel/protocol/exporter/) to forward logs to Humanloop. ```plaintext title=".env.local" wordWrap HUMANLOOP_API_KEY= # Configure the OpenTelemetry OTLP Exporter OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel OTEL_EXPORTER_OTLP_PROTOCOL=http/json OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=" # Humanloop API key ``` ### Trace AI SDK calls Now add the `experimental_telemetry` parameter to your AI SDK function calls to trace them. With a simple one-step generation, each call to `streamText` or `generateText` will be traced as a Prompt Log on Humanloop. 
```typescript title="app/api/chat/route.ts" highlight={7-12} maxLines={50} import { openai } from '@ai-sdk/openai'; import { streamText } from 'ai'; // Allow streaming responses up to 30 seconds export const maxDuration = 30; export async function POST(req: Request) { const { messages, id } = await req.json(); const result = streamText({ model: openai('gpt-4o'), messages, experimental_telemetry: { isEnabled: true, metadata: { "humanloop.directoryPath": "path/to/directory", }, }, }); // Respond with the stream return result.toDataStreamResponse(); } ``` You can also group each step of a multi-step generation into a Flow by setting the `humanloop.traceId` metadata value. ```typescript title="app/api/chat/route.ts" highlight={10-16} maxLines={50} import { openai } from '@ai-sdk/openai'; import { streamText } from 'ai'; import { z } from 'zod'; // Allow streaming responses up to 30 seconds export const maxDuration = 30; export async function POST(req: Request) { const { messages, id } = await req.json(); const result = streamText({ model: openai('gpt-4o'), messages, maxSteps: 3, toolCallStreaming: true, system: "You are a helpful assistant that answers questions about the weather in a given city.", experimental_telemetry: { isEnabled: true, metadata: { "humanloop.directoryPath": "path/to/directory", } }, tools: { getWeatherInformation: { description: 'show the weather in a given city to the user', parameters: z.object({ city: z.string() }), execute: async ({}: { city: string }) => { const weatherOptions = ['sunny', 'cloudy', 'rainy', 'snowy', 'windy']; return { weather: weatherOptions[Math.floor(Math.random() * weatherOptions.length)], temperature: Math.floor(Math.random() * 50 - 10), }; } }, }, }); // Respond with the stream return result.toDataStreamResponse(); } ``` Node.js projects can use OpenTelemetry auto-instrumentation to trace requests without manually instrumenting code. 
Learn more about Node.js auto-instrumentation [here](https://opentelemetry.io/docs/languages/js/getting-started/nodejs). ### Set up OpenTelemetry Install dependencies. ```bash title="npm" wordWrap npm install dotenv @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node ``` ```bash title="pnpm" wordWrap pnpm add dotenv @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node ``` ```bash title="yarn" wordWrap yarn add dotenv @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node ``` Add the following code to your file to initialize and clean up the OpenTelemetry SDK. Do not forget to call await sdk.shutdown() before your application shuts down in order to flush any remaining traces to Humanloop. ```typescript title="main.ts" import dotenv from 'dotenv'; import { NodeSDK } from "@opentelemetry/sdk-node"; import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node"; dotenv.config(); // Start the OpenTelemetry SDK const sdk = new NodeSDK({ instrumentations: [getNodeAutoInstrumentations()] }); sdk.start(); async function main() { // ... Your code here ... await sdk.shutdown(); } main().catch(console.error); ``` ### Configure OpenTelemetry Configure the [OpenTelemetry exporter](https://opentelemetry.io/docs/specs/otel/protocol/exporter/) to forward logs to Humanloop. ```plaintext title=".env.local" wordWrap HUMANLOOP_API_KEY= # Configure the OpenTelemetry OTLP Exporter OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel OTEL_EXPORTER_OTLP_PROTOCOL=http/json OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=" # Humanloop API key ``` ### Trace AI SDK calls Now add the `experimental_telemetry` parameter to your AI SDK function calls to trace them. With a simple one-step generation, each call to `streamText` or `generateText` will be traced as a Prompt Log on Humanloop. 
```typescript title="main.ts" highlight={9-14} maxLines={50} import { openai } from '@ai-sdk/openai'; import { streamText } from 'ai'; async function main() { // Example of a simple one-step generation const result = await streamText({ model: openai('gpt-4o'), messages, experimental_telemetry: { isEnabled: true, metadata: { "humanloop.directoryPath": "path/to/directory", } } }); } ``` You can also group each step of a multi-step generation into a Flow by setting the `humanloop.traceId` metadata value. ```typescript title="main.ts" highlight={12-18} maxLines={50} import { openai } from '@ai-sdk/openai'; import { streamText } from 'ai'; import { z } from 'zod'; async function main() { // Example of a multi-step generation const result = await streamText({ model: openai('gpt-4o'), messages, maxSteps: 3, toolCallStreaming: true, system: "You are a helpful assistant that answers questions about the weather in a given city.", experimental_telemetry: { isEnabled: true, metadata: { "humanloop.directoryPath": "path/to/directory", } }, tools: { getWeatherInformation: { description: 'show the weather in a given city to the user', parameters: z.object({ city: z.string() }), execute: async ({}: { city: string }) => { const weatherOptions = ['sunny', 'cloudy', 'rainy', 'snowy', 'windy']; return { weather: weatherOptions[Math.floor(Math.random() * weatherOptions.length)], temperature: Math.floor(Math.random() * 50 - 10), }; } }, }, }); } ``` ## Learn more To see the integration in action, check out our [Vercel AI SDK guides](/docs/v5/integrations/vercel-ai-sdk). # Prompt file format > The `.prompt` file format is a human-readable and version-control-friendly format for storing model configurations. Our `.prompt` file format is a serialized representation of a [Prompt](/docs/explanation/prompts), designed to be human-readable and suitable for checking into your version control systems alongside your code. 
This allows technical teams to maintain the source of truth for their prompts within their existing version control workflow. ## Format The format is heavily inspired by [MDX](https://mdxjs.com/), with model and parameters specified in a YAML header alongside a JSX-inspired syntax for chat templates. ```jsx Chat --- model: gpt-4o temperature: 0.7 max_tokens: -1 provider: openai endpoint: chat --- You are a friendly assistant. ``` ```jsx Completion --- model: claude-2 temperature: 0.7 max_tokens: 256 top_p: 1.0 provider: anthropic endpoint: complete --- Autocomplete the sentence. Context: {{context}} {{sentence}} ``` ### Multi-modality and images Images can be specified using nested `` tags within a `` message. To specify text alongside the image, use a `` tag. ```jsx Image and Text --- model: gpt-4o temperature: 0.7 max_tokens: -1 provider: openai endpoint: chat tools: [] --- You are a friendly assistant. What is in this image? ``` ### Tools, tool calls, and tool responses Specify the tools available to the model as a JSON list in the YAML header. Tool calls in assistant messages can be added with nested `` tags. A `` tag within an `` tag denotes a tool call of `type: "function"`, and requires the attributes `name` and `id`. The text wrapped in a `` tag should be a JSON-formatted string containing the tool call's arguments. Tool call responses can then be added with `` tags after the `` message. ```jsx --- model: gpt-4o temperature: 0.7 max_tokens: -1 provider: openai endpoint: chat tools: [ { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "name": "Location", "description": "The city and state, e.g. San Francisco, CA" }, "unit": { "type": "string", "name": "Unit", "enum": [ "celsius", "fahrenheit" ] } }, "required": [ "location" ] } } ] --- You are a friendly assistant. What is the weather in SF? 
{ "location": "San Francisco, CA" } Cloudy with a chance of meatballs. ``` # Humanloop Runtime Environment > This reference provides details about the Python environment and supported packages. Humanloop allows you to specify the runtime for your code [Evaluators](../explanation/evaluators) and [Tool](../explanation/tools) implementations in order to run them natively with your Prompts in our Editor and UI-based Evaluation workflows. ## Environment details Python version: **3.11.4** ``` anthropic==0.29.0 continuous-eval==0.3.13 jellyfish==1.1.0 jsonschema==4.22.0 langdetect==1.0.9 nltk==3.8.1 numpy==1.26.4 openai==1.35.10 pandas==2.2.2 pydantic==2.8.2 requests==2.32.3 scikit-learn==1.5.1 spacy==3.7.5 sqlglot==25.5.1 syllapy==0.7.2 textstat==0.7.3 transformers==4.43.4 ``` If you have any specific packages you would like to see here, please let us know at [support@humanloop.com](mailto:support@humanloop.com). # Security and Compliance > Learn about Humanloop's commitment to security, data protection, and compliance with industry standards. Humanloop is deeply committed to AI governance, security, and compliance. View our [Trust Report](https://trust.humanloop.com/) and [Policy Pages](https://humanloop.com/policies/privacy-policy) to see all of our certifications, request documentation, and view high-level details on the controls we adhere to. Humanloop never trains on user data. ## Humanloop Security Offerings: * **Data Privacy and Security** * Activate LLMs with your private data, safely and securely. You own your data and models. * **Monitoring & Support** * End-to-end monitoring of your AI applications, support guarantees from trusted AI experts. * Data Encryption * Data Management & AI Governance ## User Authentication and Access Control ### Authentication & Access Control - Humanloop Web App All users of the Humanloop web application require a valid email address and password to use the system: * Email addresses are verified on account creation. 
* Passwords are verified as sufficiently complex. * Passwords are stored using a one-way salted hash. * User access logs are maintained including date, time, user ID, relevant URL, operation performed, and source IP address for audit purposes. ### Authentication & Access Control - Humanloop API All users of the API are required to authenticate with a unique API token header: * Follows the OAuth 2.0 pattern. * API tokens are only visible once on creation and then obfuscated. * Users can manage the expiry of API keys. * API token access logs are maintained including date, time, user ID, relevant URL, operation performed, and source IP address for audit purposes. ### Additional Resources * Role-based access control (RBAC) - We implement strict role-based access control (RBAC) for all our systems. * Multi-factor authentication (MFA) - MFA is enforced for all employee accounts. ## Encryption Standards ### **Encryption** Humanloop follows best practices for data management and encryption. All data in transit is secured with TLS/SSL, and all data at rest is encrypted using the AES-256 algorithm. All encryption keys are managed using AWS Key Management Service (KMS) as part of the VPC definition. * All data in transit is encrypted using TLS 1.2 or higher. * Data at rest is encrypted using AES-256 encryption. ### **Infrastructure** All sensitive data is encrypted in transit. For Self-Hosted Cloud (VPC) environments, network traffic is also encrypted in transit and at rest to meet HIPAA requirements. Sensitive application data is only ever processed within the ECS cluster and stored in Aurora. To request a network infrastructure diagram or more information, please contact [privacy@humanloop.com](mailto:privacy@humanloop.com). **Learn More** For more information about how Humanloop processes user data, visit our Data Management & Hosting Options page. ## Security Certifications ### SOC2 Type II Compliance Humanloop is fully SOC2 Type II compliant. 
Learn more via our [Trust Center](https://trust.humanloop.com/) and our [Security Policy](https://humanloop.com/policies/security-policy) page. ### HIPAA Compliance Humanloop actively works with paying customers to help them achieve HIPAA compliance. Official certification is pending. To request references or more information, contact [sales@humanloop.com](mailto:sales@humanloop.com). **HIPAA Compliance via Hosting Environment:** Humanloop offers dedicated platform instances on AWS with HIPAA provisions for enterprise customers that have particularly sensitive data. These provisions include: * The ability for enterprises to manage their own encryption keys. * A specific AWS Fargate deployment that follows HIPAA practices. ### GDPR Compliance We are fully compliant with the General Data Protection Regulation (GDPR). This includes: * Data minimization practices * User rights management * Data processing agreements ## **How Humanloop helps customers maintain compliance:** * Self-Hosted Cloud (VPC) environments * Data Processing Agreements (DPAs) * Data Minimization and Retention Policies * Role-Based Access Controls * Data Encryption * Robust Security Measures * Incident Response Plan SLAs * Regular Training & Audits ### Learn more: * Cloud Hosting Options * Data Management Protocols * [Security Policy](https://humanloop.com/policies/security-policy) * [Privacy Policy](https://humanloop.com/policies/privacy-policy) * [Trust Center](https://trust.humanloop.com/) To request references or more information, contact [sales@humanloop.com](mailto:sales@humanloop.com) # Data Management > Discover Humanloop's robust data management practices and state-of-the-art encryption methods ensuring maximum security and compliance for AI applications. ### Data Handling and Segregation Separate environments are provisioned and maintained for development, quality assurance/user acceptance testing, and production to ensure data segregation at the environment level. 
### Data Classification & Access Control

All platform data received from the user and data derived from user data is classified as sensitive. All platform audit and telemetry data that does not contain PII or reference specific user data is classified as not sensitive.

By default, only authenticated users can see their own sensitive data. Data classified as not sensitive can be accessed by dedicated Humanloop support staff using a secure VPN connection to the private network of the VPC for the target environment. This access is used for debugging issues and improving system performance. The Terms of Service define further details around data ownership and access on a case-by-case basis.

### Data Encryption and Security

#### Encryption

Humanloop follows best practices for data management and encryption. All data in transit is secured with TLS/SSL, and all data at rest is encrypted using the AES-256 algorithm. All encryption keys are managed using AWS Key Management Service (KMS) as part of the VPC definition.

### Infrastructure

All sensitive data is encrypted in transit. For Self-Hosted Cloud (VPC) environments, network traffic is also encrypted in transit and at rest to meet HIPAA requirements. Sensitive application data is only processed within the ECS cluster and stored in Aurora. To request a network infrastructure diagram or more information, please contact [privacy@humanloop.com](mailto:privacy@humanloop.com).

### Learn More

For more information on how Humanloop processes user data, visit our [Security & Compliance](https://trust.humanloop.com) page.

### Data Storage, Retention, and Recovery

All platform data is stored in a primary database server with multi-availability-zone replication. Platform data is retained indefinitely and backed up daily in a secure and encrypted manner until a request is made by the contractual owners of that data to remove it, in accordance with GDPR guidelines.
Humanloop's Terms of Service define the contractual owner of the user data and data derived from the user data. A semi-automated disaster recovery process is in place to restore the database to a specified point-in-time backup as required.

### Data Breach Response

Any data breaches will be communicated to all impacted Humanloop users and partners within 24 hours, along with consequences and mitigations. Breaches will be dealt with in accordance with the Humanloop data breach response policy, which is tested annually.

### Data Portability and Return

Within 30 days post-contract termination, users can request the return of their data and derived data (as defined by the Terms of Service). Humanloop provides this data via downloadable files in comma-separated value (.csv) or .json formats.

# Access roles (RBACs)

> Learn about the different roles and permissions in Humanloop to help you with prompt and data management for large language models.

Everyone invited to the organization can currently access all projects (controlling project access is coming soon). A user can have one of the following roles:

**Admin:** The highest level of control. Admins can manage, modify, and oversee the Organization's settings and have full functionality across all projects.

**Developer:** (Enterprise tier only) Can deploy Files, manage environments, and create and add API keys, but cannot access billing or invite others.

**Member:** (Enterprise tier only) The basic level of access. Can create and save Files and run Evaluations, but cannot deploy. Cannot see any org-wide API keys.
## RBACs summary Here is the full breakdown of roles and access: | Action | Member | Developer | Admin | | :----------------------------- | :----- | :-------- | :---- | | Create and manage Files | ✔️ | ✔️ | ✔️ | | Inspect logs and feedback | ✔️ | ✔️ | ✔️ | | Create and manage Evaluators | ✔️ | ✔️ | ✔️ | | Run Evaluations | ✔️ | ✔️ | ✔️ | | Create and manage Datasets | ✔️ | ✔️ | ✔️ | | Create and manage API keys | | ✔️ | ✔️ | | Manage prompt deployments | | ✔️ | ✔️ | | Create and manage environments | | ✔️ | ✔️ | | Send invites | | | ✔️ | | Set user roles | | | ✔️ | | Manage billing | | | ✔️ | | Change Organization settings | | | ✔️ | # SSO and Authentication > Learn about Single Sign-On (SSO) and authentication options for Humanloop {/* WIP - for gartner /start */} Humanloop offers authentication options to ensure secure access to your organization's resources. This guide covers our Single Sign-On (SSO) capabilities and other authentication methods. ## Single Sign-On (SSO) Single Sign-On allows users to access multiple applications with a single set of credentials. Humanloop supports SSO integration with major identity providers, enhancing security and simplifying user management. ### Supported SSO Providers * Google Workspace * Okta * Azure Active Directory * OneLogin * Custom SAML 2.0 providers ### Benefits of SSO 1. Enhanced security with centralized authentication 2. Simplified user management 3. Improved user experience with reduced password fatigue 4. Streamlined onboarding and offboarding processes ### Setting up SSO To set up SSO for your organization: 1. Contact our sales team to enable SSO for your account 2. Choose your identity provider 3. Configure the connection between Humanloop and your identity provider 4. Test the SSO integration 5. Roll out to your users ## Multi-Factor Authentication (MFA) For accounts not using SSO, we strongly recommend enabling Multi-Factor Authentication for an additional layer of security. 
### MFA Options * Time-based One-Time Password (TOTP) apps * SMS-based verification * Hardware security keys (e.g., YubiKey) ## API Authentication For programmatic access to Humanloop, we use API keys. These should be kept secure and rotated regularly. ### Managing API Keys * Generate API keys in your account settings * Use environment variables to store API keys in your applications * Implement key rotation policies for enhanced security ## User Provisioning and Deprovisioning Humanloop supports automated user lifecycle management through our Directory Sync feature. This allows for: * Automatic user creation based on directory group membership * Real-time updates to user attributes and permissions * Immediate deprovisioning when users are removed from directory groups ## Best Practices 1. Use SSO when possible for centralized access control 2. Enable MFA for all user accounts 3. Regularly audit user access and permissions 4. Implement the principle of least privilege 5. Use secure protocols (HTTPS) for all communications with Humanloop For more information on setting up SSO or other authentication methods, please contact our support team or refer to our API documentation. ## Active Directory Sync Humanloop supports Active Directory Sync for automated user provisioning and deprovisioning. This feature allows you to: * Automatically create and update user accounts based on your Active Directory groups * Sync user attributes and roles in real-time * Instantly deprovision access when users are removed from AD groups * Maintain consistent access control across your organization * Reduce manual user management tasks and potential security risks To set up Active Directory Sync: 1. Contact our sales team to enable this feature for your account 2. Configure the connection between Humanloop and your Active Directory 3. Map your AD groups to Humanloop roles and permissions 4. Test the sync process with a small group of users 5. 
Roll out to your entire organization

For more information on implementing Active Directory Sync, please contact our [support team](mailto:support@humanloop.com).

{/* WIP - for gartner /end */}

# LLMs.txt

> Humanloop docs are accessible to AI tools using the llms.txt standard.

## What is llms.txt?

llms.txt is an [emerging standard](https://llmstxt.org/) that lets websites easily expose their content to AI tools. We have implemented it for the Humanloop docs.

### An overview of the Humanloop docs

The llms.txt file contains an overview of the Humanloop docs with links to each page.

/llms.txt for an overview of the Humanloop docs.
* **Small and fast**: Quick to load and easy to parse
* **Summary-focused**: A one-sentence description of each page, alongside its URL
* **Structured for AI**: Helps tools understand the structure of the docs
### The full content of the Humanloop docs The llms-full.txt file contains the full content of the Humanloop docs and API reference. /llms-full.txt for the full content of the Humanloop docs and API reference
* **Comprehensive**: Includes the full content of the documentation
* **API knowledge**: Incorporates the full API reference and SDK snippets
* **Convenient**: One giant payload for all the docs
Note that this might take up to 10 seconds to load.

### Raw markdown available on any page

On any page, you can add **`.md`** to the end of the URL to get the raw markdown content of that page.

Example of the raw markdown for the Call Prompt page
Add **`.md`** to the end of the URL to get the raw markdown content of the page.
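This convention is easy to script against. As a small sketch (the helper function below is ours, not part of any SDK), you can derive the raw-markdown URL for a docs page before fetching it with your HTTP client of choice:

```python
def raw_markdown_url(docs_url: str) -> str:
    """Append .md to a Humanloop docs URL to target the raw markdown version."""
    return docs_url.rstrip("/") + ".md"

print(raw_markdown_url("https://humanloop.com/docs/v5/explanation/prompts"))
# https://humanloop.com/docs/v5/explanation/prompts.md
```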
## How to use llms.txt

You can copy the URL and paste it into Cursor, ChatGPT or any other AI tool that can load a URL to give it as context to your LLM.

* [https://humanloop.com/docs/llms.txt](https://humanloop.com/docs/llms.txt)
* [https://humanloop.com/docs/llms-full.txt](https://humanloop.com/docs/llms-full.txt)

If the AI tool you are using doesn't support loading a URL, you can copy the content and paste it into the prompt.

# Overview

> Learn how to integrate Humanloop into your applications using our Python and TypeScript SDKs or REST API.

The Humanloop platform can be accessed through the [API](/docs/v5/api) or through our Python and TypeScript SDKs.

### Usage Examples

```shell title="Installation"
npm install humanloop
```

```typescript title="Example usage"
import { HumanloopClient } from "humanloop";

const humanloop = new HumanloopClient({ apiKey: "YOUR_API_KEY" });

// Check that the authentication was successful
console.log(await humanloop.prompts.list());
```

```shell title="Installation"
pip install humanloop
```

```python title="Example usage"
from humanloop import Humanloop

hl = Humanloop(api_key="")

# Check that the authentication was successful
print(hl.prompts.list())
```

# Errors

> This page provides a list of the error codes and messages you may encounter when using the Humanloop API.

### HTTP error codes

Our API will return one of the following HTTP error codes in the event of an issue:

* **400 Bad Request**: Your request was improperly formatted or presented.
* **401 Unauthorized**: Your API key is incorrect or missing, or your user does not have the rights to access the relevant resource.
* **404 Not Found**: The requested resource could not be located.
* **409 Conflict**: Modifying the resource would leave it in an illegal state.
* **422 Unprocessable Entity**: Your request was properly formatted but contained invalid instructions or did not match the fields required by the endpoint.
* **429 Too Many Requests**: You've exceeded the maximum allowed number of requests in a given time period.
* **500 Internal Server Error**: An unexpected issue occurred on the server.
* **503 Service Unavailable**: The service is temporarily overloaded and you should try again.

## Error details

Our `prompt/call` endpoint acts as a unified interface across all popular model providers. The error returned by this endpoint may be raised by the model provider's system. Details of the error are returned in the `detail` object of the response.

```json
{
  "type": "unprocessable_entity_error",
  "message": "This model's maximum context length is 4097 tokens. However, you requested 10000012 tokens (12 in the messages, 10000000 in the completion). Please reduce the length of the messages or completion.",
  "code": 422,
  "origin": "OpenAI"
}
```

# Decorators Overview

> Overview of the decorator system in the Humanloop SDK

## Introduction

Humanloop provides a set of decorators that help you instrument your AI features with minimal code changes. These decorators automatically create and manage Logs on the Humanloop platform, enabling monitoring, evaluation, and improvement of your AI applications.

| Decorator | Purpose | Creates | Documentation |
|-----------|---------|---------|---------------|
| `prompt` | Instrument LLM provider calls | Prompt Logs | [Learn more →](/docs/v5/sdk/decorators/prompt) |
| `tool` | Define function calling tools | Tool Logs | [Learn more →](/docs/v5/sdk/decorators/tool) |
| `flow` | Trace multi-step AI features | Flow Log with traces | [Learn more →](/docs/v5/sdk/decorators/flow) |

## Common Patterns

All decorators share these common characteristics:

- **Path-based organization**: Each decorator requires a `path` parameter that determines where the File and its Logs are stored in your Humanloop workspace.
- **Automatic versioning**: Changes to the decorated function or its parameters create new versions of the File.
- **Error handling**: Errors are caught and logged, making debugging easier.
- **Minimal code changes**: Decorate existing code and adopt the Humanloop SDK gradually.
# Flow Decorator

> Technical reference for the Flow decorator in the Humanloop SDK

## Overview

The Flow decorator creates and manages traces for your AI feature. When applied to a function, it:

- Creates a new trace on function invocation.
- Adds all Humanloop logging calls made inside the function to the trace.
- Completes the trace when the function exits.

On Humanloop, a trace is the collection of Logs associated with a Flow Log.

## Usage

The `flow` decorator will trace all downstream Humanloop Logs, whether they are created by other decorators or SDK calls.

### Tracing Decorators

```python maxLines=50 wrapLines title="Python"
@hl_client.prompt(path="MyFeature/Call LLM")
def call_llm(messages: List[ChatMessage]):
    return openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages
    ).choices[0].message.content

@hl_client.flow(path="MyFeature/Process")
def process_input(inputs: list[str]) -> list[str]:
    # Logs created by the Prompt decorator are added to the trace
    return [
        call_llm([{"role": "user", "content": text}])
        for text in inputs
    ]
```

```typescript maxLines=50 wrapLines title="TypeScript"
const callLLM = hlClient.prompt({
  path: "MyFeature/Call LLM",
  callable: async (messages: ChatMessage[]): Promise<string> => {
    const response = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages
    });
    return response.choices[0].message.content;
  }
});

const processInput = hlClient.flow({
  path: "MyFeature/Process",
  callable: async (inputs: string[]): Promise<string[]> => {
    // Logs created by the Prompt decorator are added to the trace
    return Promise.all(
      inputs.map((text) => callLLM([{ role: "user", content: text }]))
    );
  }
});
```

### Tracing SDK Calls

Logs created through the Humanloop SDK are added to the trace.
```python maxLines=50 title="Python" wrapLines
@hl_client.flow(path="MyFeature/Process")
def process_input(text: str) -> str:
    # Created Log is added to the trace
    llm_output = hl_client.prompts.call(
        path="MyFeature/Transform",
        messages=[{"role": "user", "content": text}]
    ).logs[0].output_message.content

    transformed_output = transform(llm_output)

    # Created Log is added to the trace
    hl_client.tools.log(
        path="MyFeature/Transform",
        tool={"function": TRANSFORM_JSON_SCHEMA},
        inputs={"text": text},
        output=transformed_output
    )

    return transformed_output
```

```typescript maxLines=50
const processInput = hlClient.flow({
  path: "MyFeature/Process",
  callable: async (text: string): Promise<string> => {
    // Created Log is added to the trace
    const llmOutput = (
      await hlClient.prompts.call({
        path: "MyFeature/Transform",
        messages: [{ role: "user", content: text }],
      })
    ).logs[0].outputMessage.content;

    const transformedOutput = transform(llmOutput);

    // Created Log is added to the trace
    await hlClient.tools.log({
      path: "MyFeature/Transform",
      tool: { function: TRANSFORM_JSON_SCHEMA },
      inputs: { text },
      output: transformedOutput,
    });

    return transformedOutput;
  },
});
```

## Behavior

The decorated function creates a Flow Log when called. All Logs created inside the decorated function are added to its trace.

The Flow Log's fields are populated as follows:

| Field | Type | Description |
| ---------------- | ----------- | -------------------------------------------------------------------- |
| `inputs` | object | Function arguments that aren't ChatMessage arrays |
| `messages` | array | ChatMessage arrays passed as arguments |
| `output_message` | ChatMessage | Return value if it's a ChatMessage-like object |
| `output` | string | Stringified return value if not a ChatMessage-like object |
| `error` | string | Error message if function throws or return value can't be serialized |

If the decorated function returns a ChatMessage object, the `output_message` field is populated.
Otherwise, the `output` field is populated with the stringified return value. The decorated function creates a Flow Log when called. All Logs created inside the decorated function are added to its trace. The Flow Log's fields are populated as follows: | Field | Type | Description | | --------------- | ----------- | -------------------------------------------------------------------- | | `inputs` | object | Function arguments that aren't ChatMessage arrays | | `messages` | array | ChatMessage arrays passed as arguments | | `outputMessage` | ChatMessage | Return value if it's a ChatMessage-like object | | `output` | string | Stringified return value if not a ChatMessage-like object | | `error` | string | Error message if function throws or return value can't be serialized | If the decorated function returns a ChatMessage object, the `outputMessage` field is populated. Otherwise, the `output` field is populated with the stringified return value. ## Definition ```python @hl_client.flow( # Required: path on Humanloop workspace for the Flow path: str, # Optional: metadata for versioning the Flow attributes: dict[str, Any] = None ) def function(*args, **kwargs): ... ``` The decorator will preserve the function's signature. ```typescript hlClient.flow({ // Required: path on Humanloop workspace for the Flow path: string, // Required: decorated function callable: I extends Record & { messages: ChatMessage[] } ? (inputs: I) => O : () => O; // Optional: metadata for versioning the Flow attributes?: Record; }) => Promise ``` The function returned by the decorator is async and preserves the signature of `callable`. Callable's `inputs` must extend `Record`. If a `messages` field is present in the `inputs`, it must have the `ChatMessage[]` type. The decorated function will not wrap the return value in a second Promise if the `callable` is also asynchronous. 
The decorator accepts the following parameters:

| Parameter | Type | Required | Description |
| ------------ | ------ | -------- | ---------------------------------------- |
| `path` | string | Yes | Path on Humanloop workspace for the Flow |
| `attributes` | object | No | Key-value object for versioning the Flow |

## SDK Interactions

- It's not possible to call `flows.log()` inside a decorated function. Doing so will raise a [`HumanloopRuntimeError`](#error-handling).
- To create nested traces, call another flow-decorated function.
- Passing a `trace_parent_id` argument to an SDK logging call inside the decorated function is ignored and emits a warning; the Log is added to the trace of the decorated function.

- It's not possible to call `flows.log()` inside a decorated function. Doing so will throw a [`HumanloopRuntimeError`](#error-handling).
- To create nested traces, call another flow-decorated function.
- Passing a `traceParentId` argument to an SDK logging call inside the decorated function is ignored and emits a warning; the Log is added to the trace of the decorated function.

## Error Handling

- If user-written code (e.g. in code Evaluators) raises an exception, the relevant Log's `error` field is populated with the exception message and the decorated function returns `None`.
- `HumanloopRuntimeError` exceptions indicate incorrect decorator or SDK usage and are re-raised instead of being logged under `error`.

- If user-written code (e.g. in code Evaluators) throws an exception, the relevant Log's `error` field is populated with the exception message and the decorated function returns `undefined`.
- `HumanloopRuntimeError` exceptions indicate incorrect decorator or SDK usage and are re-thrown instead of being logged under `error`.

## Related Documentation

An explanation of Flows and their role in the Humanloop platform can be found in our [Flows](/docs/v5/explanation/flows) documentation.
# Prompt Decorator > Technical reference for the Prompt decorator in the Humanloop SDK ## Overview The Prompt decorator automatically instruments LLM provider calls and creates Prompt Logs on Humanloop. When applied to a function, it: - Creates a new Log for each LLM provider call made within the decorated function. - Versions the Prompt using hyperparameters of the provider call. ### Decorator Definition ```python @hl_client.prompt( # Required: path on Humanloop workspace for the Prompt path: str ) def function(*args, **kwargs): ... ``` The decorated function will have the same signature as the original function. ```typescript hlClient.prompt({ // Required: path on Humanloop workspace for the Prompt path: string, // Required: decorated function callable: I extends Record & { messages?: ChatMessage[] } ? (args: I) => O : () => O; }) => Promise ``` The decorated function is always async and has the same signature as the `callable` argument. Callable's `args` must extend `Record`. If a `messages` field is present in the `args`, it must have type `ChatMessage[]`. The decorated function will not wrap the return value in a second Promise if the `callable` is also asynchronous. You must pass the providers you want to auto-instrument to the HumanloopClient constructor. Otherwise, the decorated function will work, but no Logs will be created. 
```typescript {6-7}
import { HumanloopClient } from "humanloop";
import { OpenAI } from "openai";

const hlClient = new HumanloopClient({
  apiKey: process.env.HL_API_KEY,
  // Pass the provider module here
  providers: { OpenAI }
});

// You can now use the prompt decorator
```

### Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `path` | string | Yes | Path on Humanloop workspace for the Prompt |

### Usage

```python
@hl_client.prompt(path="MyFeature/Process")
def process_input(text: str) -> str:
    return openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": text}]
    ).choices[0].message.content
```

```typescript
const processInput = hlClient.prompt({
  path: "MyFeature/Process",
  callable: async (text: string): Promise<string> => {
    const response = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: text }]
    });
    return response.choices[0].message.content;
  }
});
```

## Behavior

### Versioning

The hyperparameters of the LLM provider call are used to version the Prompt. If the configuration changes, new Logs will be created under the new version of the same Prompt.
The following parameters are considered for versioning the Prompt: | Parameter | Description | |-----------|-------------| | `model` | The LLM model identifier | | `endpoint` | The API endpoint type | | `provider` | The LLM provider (e.g., "openai", "anthropic") | | `max_tokens` | Maximum tokens in completion | | `temperature` | Sampling temperature | | `top_p` | Nucleus sampling parameter | | `presence_penalty` | Presence penalty for token selection | | `frequency_penalty` | Frequency penalty for token selection | ### Log Creation Each LLM provider call within the decorated function creates a Log with the following fields set: | Field | Type | Description | |-------|------|-------------| | `inputs` | dict[str, Any] | Function arguments that aren't ChatMessage arrays | | `messages` | ChatMessage[] | ChatMessage arrays passed to the LLM | | `output_message` | ChatMessage | LLM response with role and content | | `error` | string | Error message if the LLM call fails | | `prompt_tokens` | int | Number of tokens in the prompt | | `reasoning_tokens` | int | Number of tokens used in reasoning | | `output_tokens` | int | Number of tokens in the completion | | `finish_reason` | string | Reason the LLM stopped generating | | `start_time` | datetime | When the LLM call started | | `end_time` | datetime | When the LLM call completed | | Field | Type | Description | |-------|------|-------------| | `inputs` | object | Function arguments that aren't ChatMessage arrays | | `messages` | ChatMessage[] | ChatMessage arrays passed to the LLM | | `output_message` | ChatMessage | LLM response with role and content | | `error` | string | Error message if the LLM call fails | | `prompt_tokens` | number | Number of tokens in the prompt | | `reasoning_tokens` | number | Number of tokens used in reasoning | | `output_tokens` | number | Number of tokens in the completion | | `finish_reason` | string | Reason the LLM stopped generating | | `start_time` | Date | When the LLM call started | | 
`end_time` | Date | When the LLM call completed | ## Error Handling - LLM provider errors are caught and logged in the Log's `error` field. However, `HumanloopRuntimeError` is not caught and will be re-raised: they indicate wrong SDK or decorator usage. - The decorated function propagates exceptions from the LLM provider. - LLM provider errors are caught and logged in the Log's `error` field. However, `HumanloopRuntimeError` is not caught and will be re-thrown: they indicate wrong SDK or decorator usage. - The decorated function propagates exceptions from the LLM provider. ## Best Practices 1. Multiple Logs will be created if you make multiple calls inside the decorated function. To avoid confusion, avoid calls with different providers or hyperparameters, as this will create multiple versions of the Prompt. 2. Calling `prompts.log()` or `prompts.call()` inside the decorated function works normally, with no interaction with the decorator. However, it indicates a misuse of the decorator, as they are alternatives for achieving the same result. 3. If you want to switch between providers with ease, use [`prompts.call()`](/docs/v5/reference/prompts/call) with a `provider` parameter instead of the decorator. ## Related Documentation Humanloop Prompts are more than the string passed to the LLM provider. They encapsulate LLM hyperparameters, associations to available tools, and can be templated. For more details, refer to our [Prompts explanation](/docs/v5/explanation/prompts). # Tool Decorator > Technical reference for the Tool decorator in the Humanloop SDK ## Overview The Tool decorator helps you define [Tools](/docs/v5/explanation/tools) for use in function calling. It automatically instruments function calls and creates Tool Logs on Humanloop. Calling a decorated function will create a Tool Log with the following fields: - `inputs`: The function arguments. - `output`: The function return value. - `error`: The error message if the function call fails. 
Calling a decorated function will create a Tool Log with the following fields: - `inputs`: The function arguments. - `output`: The function return value. - `error`: The error message if the function call fails. ### Definition ```python @hl_client.tool( # Required: path on Humanloop workspace for the Tool path: str, # Optional: additional metadata for the Tool attributes: Optional[dict[str, Any]] = None, # Optional: values needed to setup the Tool setup_values: Optional[dict[str, Any]] = None ) def function(*args, **kwargs): ... ``` The decorated function will have the same signature as the original function and will have a `json_schema` attribute containing the inferred JSON Schema. ```typescript hlClient.tool({ // Required: path on Humanloop workspace for the Tool path: string, // Required: decorated function callable: I extends Record ? (args: I) => O : () => O, // Required: JSON Schema for the Tool version: ToolKernelRequest }) => Promise ``` The decorated function is always async and has the same signature as the `callable` argument. It will have a `jsonSchema` attribute containing the provided JSON Schema. ### Parameters | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `path` | string | Yes | Path on Humanloop workspace for the Tool | | `attributes` | object | No | Additional metadata for the Tool (Python only) | | `setup_values` | object | No | Values needed to setup the Tool (Python only) | | `version` | ToolKernelRequest | Yes | JSON Schema for the Tool (TypeScript only) | ### Usage ```python @hl_client.tool(path="MyFeature/Calculator") def calculator(a: int, b: Optional[int] = None) -> int: """Add two numbers together.""" return a + (b or 0) ``` Decorating a function will set a `json_schema` attribute that can be used for function calling. 
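To show what that attribute contains, here is a hand-written approximation of the JSON Schema inferred for the `calculator` function above. Treat this dict as illustrative only; the SDK's exact output may differ in detail.

```python
# Hand-written approximation of the schema inferred for:
#   def calculator(a: int, b: Optional[int] = None) -> int:
#       """Add two numbers together."""
expected_schema = {
    "name": "calculator",
    "description": "Add two numbers together.",
    "parameters": {
        "type": "object",
        "properties": {
            "a": {"type": "integer"},
            # Optional[int] -> integer with "null" added
            "b": {"type": ["integer", "null"]},
        },
        # b has a default, so only a is required
        "required": ["a"],
    },
}

print(expected_schema["parameters"]["required"])  # ['a']
```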
```python {5, 12-14}
# Use with prompts.call
response = hl_client.prompts.call(
    path="MyFeature/Assistant",
    messages=[{"role": "user", "content": "What is 5 + 3?"}],
    tools=[calculator.json_schema]
)

# Or with OpenAI directly!
response = openai.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is 5 + 3?"}],
    tools=[{
        "type": "function",
        "function": calculator.json_schema
    }]
)
```

```typescript maxLines=50
const calculator = hlClient.tool({
  path: "MyFeature/Calculator",
  callable: (inputs: { a: number; b?: number }) => {
    return inputs.a + (inputs.b || 0);
  },
  version: {
    function: {
      name: "calculator",
      description: "Add two numbers together.",
      parameters: {
        type: "object",
        properties: {
          a: { type: "number" },
          b: { type: "number" }
        },
        required: ["a"]
      }
    }
  }
});
```

Decorating a function will set a `jsonSchema` attribute that can be used for function calling.

```typescript {5, 12-14}
// Use with prompts.call
const response = await hlClient.prompts.call({
  path: "MyFeature/Assistant",
  messages: [{ role: "user", content: "What is 5 + 3?" }],
  tools: [calculator.jsonSchema]
});

// Or with OpenAI directly!
const openaiResponse = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "What is 5 + 3?"
  }],
  tools: [{ type: "function", function: calculator.jsonSchema }]
});
```

## Behavior

### Schema Definition

In Python, the decorator automatically infers a JSON Schema from the source code, argument signature, and docstrings:

- Function name becomes the tool name
- Function docstring becomes the tool description
- Parameter type hints are converted to JSON Schema types
- Optional parameters (using `Optional[T]` or `T | None`) are marked as not required
- Return type is not included in the schema

Supported type hints:

| Python Type | JSON Schema Type |
|-------------|------------------|
| `str` | `"string"` |
| `int` | `"integer"` |
| `float` | `"number"` |
| `bool` | `"boolean"` |
| `list[T]` | `"array"` with items of type T |
| `dict[K, V]` | `"object"` with values of type V |
| `tuple[T1, T2, ...]` | `"array"` with items of specific types |
| `Optional[T]` or `T \| None` | Type T with `"null"` added |
| `Union[T1, T2, ...]` | `"anyOf"` with types T1, T2, etc. |
| No type hint | `any` |

In TypeScript, you must provide a JSON Schema in the `version` parameter:

```typescript
version: {
  function: {
    name: string;
    description: string;
    parameters: {
      type: "object";
      properties: Record;
      required?: string[];
    };
  };
  attributes?: Record;
  setup_values?: Record;
}
```

### Log Creation

Each function call creates a Tool Log with the following fields:

| Field | Type | Description |
|-------|------|-------------|
| `inputs` | dict[str, Any] | Function arguments |
| `output` | string | JSON-serialized return value |
| `error` | string | Error message if the function call fails |

| Field | Type | Description |
|-------|------|-------------|
| `inputs` | object | Function arguments |
| `output` | string | JSON-serialized return value |
| `error` | string | Error message if the function call fails |

## Error Handling

- Function errors are caught and logged in the Log's `error` field.
- The decorated function returns `None` when an error occurs.
- `HumanloopRuntimeError` is not caught and will be re-raised, as it indicates incorrect SDK or decorator usage. - Function errors are caught and logged in the Log's `error` field. - The decorated function returns `undefined` when an error occurs. - Schema validation errors are thrown if the inputs don't match the schema. - `HumanloopRuntimeError` is not caught and will be re-thrown, as it indicates incorrect SDK or decorator usage. ## Best Practices 1. Use clear and descriptive docstrings in Python to provide good tool descriptions 2. Ensure all function parameters have appropriate type hints in Python 3. Make return values JSON-serializable 4. Use the `json_schema` attribute when passing the tool to `prompts.call()` 1. Provide a clear and descriptive `description` in your JSON Schema in TypeScript to give good tool descriptions 2. Ensure the `parameters` schema accurately reflects the function's arguments 3. Make return values JSON-serializable 4. Use the `jsonSchema` attribute when passing the tool to `prompts.call()` ## Related Documentation For a deeper understanding of Tools and their role in the Humanloop platform, refer to our [Tools](/docs/v5/explanation/tools) documentation. For attaching a Tool to a Prompt, see [Tool calling in Editor](/docs/v5/guides/prompts/tool-calling-editor) and [linking a Tool to a Prompt](/docs/v5/guides/prompts/link-tool). # Run Evaluation > Getting up and running with Humanloop is quick and easy. This guide will explain how to set up evaluations on Humanloop and use them to iteratively improve your applications. `evaluations.run()` is a convenience function that lets you trigger evaluations from code. It will create the evaluation, fetch the dataset, generate all the Logs and then run the evaluators on each Log. It supports evaluating arbitrary functions, Prompts stored on Humanloop, and Prompts defined in code.
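To make the moving parts concrete, here is a minimal sketch of the loop that `evaluations.run()` performs: generate an output for each datapoint, then apply each Evaluator to that output. This is illustrative pseudologic, not the SDK's actual internals, and all names here are hypothetical:

```python
def run_eval(generate, datapoints, evaluators):
    """Map `generate` over the dataset, then judge each output with every evaluator."""
    results = []
    for dp in datapoints:
        output = generate(dp["messages"])  # one generated Log per datapoint
        judgments = {name: judge(output) for name, judge in evaluators.items()}
        results.append({"output": output, "judgments": judgments})
    return results

# Toy usage: one datapoint and one exact-match-style evaluator
datapoints = [{"messages": [{"role": "user", "content": "HAL"}]}]
generate = lambda msgs: (
    "I'm sorry, Dave. I'm afraid I can't do that."
    if msgs[-1]["content"] == "HAL"
    else "Beep boop!"
)
evaluators = {"refusal_check": lambda out: out.startswith("I'm sorry, Dave")}

stats = run_eval(generate, datapoints, evaluators)
```

The real function also creates the Evaluation and Dataset on Humanloop, stores the Logs and judgments, and aggregates the results into the returned `EvaluationStats`.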
## Parameters You can see the source code for the `evaluations.run()` function in [Python](https://github.com/humanloop/humanloop-python/blob/master/src/humanloop/evals/run.py#L106) and [TypeScript](https://github.com/humanloop/humanloop-node/blob/master/src/evals/run.ts#L211). - `name`: Name of the evaluation, to help identify it. - `file`: Configuration for what is being evaluated. The evaluation will be stored on this File. - `file.path`: Path to the evaluated File (a [Prompt](/docs/explanation/prompts), [Flow](/docs/explanation/flows), [Tool](/docs/explanation/tools), [Evaluator](/docs/explanation/evaluators) etc.) on Humanloop. If the File does not exist on Humanloop, it will be created. Example: `My Agent` will create a `flow` File on Humanloop. - `file.type`: One of `flow` (default), `prompt`, `tool` or `evaluator`. If the File does not exist on Humanloop, it will be created with this File type. - `file.version`: The details of the version of the File you want to evaluate. For example, for a Flow you might pass in identifiers: ```json { "git_hash": "1234567890", "identifier": "rag-with-pinecone" } ``` Or for a Prompt you can pass in Prompt details and it will be called: ```json { "model": "gpt-4", "template": [ { "role": "user", "content": "You are a helpful assistant on the topic of {{topic}}." } ] } ``` - `file.callable`: Function to evaluate (optional if the File is runnable on Humanloop, like a Prompt). It will be called on your Dataset as `callable(**datapoint.inputs, messages=datapoint.messages)` and should return a single string output. - `evaluators`: List of Evaluators to judge the generated output. - `evaluators[].path`: Path to the Evaluator on Humanloop. - `evaluators[].args_type`: The type of arguments the Evaluator expects (only required for local Evaluators). - `evaluators[].return_type`: The type of return value the Evaluator produces (only required for local Evaluators). - `evaluators[].callable`: Function to evaluate (optional if the Evaluator is runnable on Humanloop). It will be called on the generated output as `callable(output)` and should return a single string output.
- `evaluators[].custom_logger`: Optional function that logs the output judgment from your Evaluator to Humanloop. If provided, it will be called as `judgment = callable(log_dict); log = custom_logger(client, judgment)`. Inside the `custom_logger`, you can use the Humanloop `client` to log the judgment to Humanloop. If not provided, your function must return a single string, and by default the code will be used to inform the version of the external Evaluator on Humanloop. - `evaluators[].threshold`: The threshold to check the Evaluator result against. - `dataset`: Dataset to evaluate against. - `dataset.path`: Path to an existing Dataset on Humanloop. If the Dataset does not exist on Humanloop, it will be created. - `dataset.datapoints`: The datapoints to map your function over to produce the outputs required by the evaluation. Optional; if not provided, the evaluation will be run over the datapoints stored on Humanloop. ## Return Type Returns an `EvaluationStats` object containing: - `run_stats`: Array of statistics for each run - `progress`: Summary of evaluation progress - `report`: Detailed evaluation report - `status`: Current status of the evaluation # Examples ## 1. Evaluating an Arbitrary Flow Function To evaluate an arbitrary workflow, pass a `callable` parameter in the `file` object.
```python def my_flow_function(messages): # Your custom logic here return "Response based on messages" evaluation = humanloop.evaluations.run( name="Custom Flow Evaluation", file={ "path": "Custom/Flow", "type": "flow", "callable": my_flow_function }, evaluators=[ {"path": "Example Evaluators/AI/Semantic similarity"}, {"path": "Example Evaluators/Code/Latency"} ], dataset={ "path": "Test/Dataset", "datapoints": [ { "messages": [ {"role": "user", "content": "Test question 1"} ] } ] } ) ``` ```typescript const myFlowFunction = (messages: { role: string; content: string }[]): string => { // Your custom logic here return "Response based on messages"; }; const evaluation = await humanloop.evaluations.run({ name: "Custom Flow Evaluation", file: { path: "Custom/Flow", type: "flow", callable: myFlowFunction, }, evaluators: [ { path: "Example Evaluators/AI/Semantic similarity" }, { path: "Example Evaluators/Code/Latency" }, ], dataset: { path: "Test/Dataset", datapoints: [ { messages: [{ role: "user", content: "Test question 1" }], }, ], }, }); ``` ## 2. Evaluating a Prompt on Humanloop To evaluate a Prompt stored on Humanloop, simply supply a `path` to the Prompt and a list of Evaluators. ```python evaluation = humanloop.evaluations.run( name="Existing Prompt Evaluation", file={ "path": "Existing/Prompt", }, evaluators=[ {"path": "Example Evaluators/AI/Semantic similarity"}, {"path": "Example Evaluators/Code/Cost"} ], dataset={ "path": "Existing/Dataset" } ) ``` ```typescript const evaluation = await humanloop.evaluations.run({ name: "Existing Prompt Evaluation", file: { path: "Existing/Prompt", }, evaluators: [ { path: "Example Evaluators/AI/Semantic similarity" }, { path: "Example Evaluators/Code/Cost" }, ], dataset: { path: "Existing/Dataset", }, }); ``` ## 3. Evaluating a Prompt in Code To evaluate a Prompt defined in code, pass the `model`, `template` and other Prompt parameters to the `file`'s `version` object.
```python evaluation = humanloop.evaluations.run( name="Code Prompt Evaluation", file={ "path": "Code/Prompt", "version": { "model": "gpt-4", "template": [ { "role": "system", "content": "You are a helpful assistant on the topic of {{topic}}." } ] }, }, evaluators=[ {"path": "Example Evaluators/AI/Semantic similarity"}, {"path": "Example Evaluators/Code/Latency"} ], dataset={ "datapoints": [ { "inputs": { "topic": "machine learning" }, "messages": [ {"role": "user", "content": "What is machine learning?"} ], "target": { "output": "Machine learning is a subset of artificial intelligence..." } } ] } ) ``` ```typescript const evaluation = await humanloop.evaluations.run({ name: "Code Prompt Evaluation", file: { path: "Code/Prompt", version: { model: "gpt-4", template: [ { role: "system", content: "You are a helpful assistant on the topic of {{topic}}.", }, ], }, }, evaluators: [ { path: "Example Evaluators/AI/Semantic similarity" }, { path: "Example Evaluators/Code/Latency" }, ], dataset: { datapoints: [ { inputs: { topic: "machine learning" }, messages: [{ role: "user", content: "What is machine learning?" }], target: { output: "Machine learning is a subset of artificial intelligence...", }, }, ], }, }); ``` Each example demonstrates a different way to use the `evaluations.run()` function. The function returns evaluation statistics that can be used to understand the performance of your LLM application according to the specified evaluators. You can view the results of your evaluation in the Humanloop UI by navigating to the specified file path, or by checking the evaluation stats programmatically using the returned object's `report` field. # API The Humanloop API allows you to interact with Humanloop and model providers programmatically. You can do this through HTTP requests from any language or via our official Python or TypeScript SDK. First you need to install and initialize the SDK. If you have already done this, skip to the next section.
Open up your terminal and follow these steps: 1. Install the Humanloop SDK: ```python pip install humanloop ``` ```typescript npm install humanloop ``` 2. Initialize the SDK with your Humanloop API key (you can get it from the [Organization Settings page](https://app.humanloop.com/account/api-keys)). ```python from humanloop import Humanloop humanloop = Humanloop(api_key="") # Check that the authentication was successful print(humanloop.prompts.list()) ``` ```typescript import { HumanloopClient, Humanloop } from "humanloop"; const humanloop = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); // Check that the authentication was successful console.log(await humanloop.prompts.list()); ``` Guides and further details about key concepts can be found in [our docs](/docs/getting-started/overview). # Log to a Prompt ```http POST https://api.humanloop.com/v5/prompts/log Content-Type: application/json ``` Log to a Prompt. You can use query parameters `version_id` or `environment` to target an existing version of the Prompt. Otherwise, the default deployed version will be chosen. Instead of targeting an existing version explicitly, you can pass in Prompt details in the request body. In this case, we will check if the details correspond to an existing version of the Prompt. If they do not, we will create a new version. This is helpful in the case where you are storing or deriving your Prompt details in code. ## Query Parameters - VersionId (optional): A specific Version ID of the Prompt to log to. - Environment (optional): Name of the Environment identifying a deployed version to log to. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Log prompt curl -X POST https://api.humanloop.com/v5/prompts/log \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "path": "persona", "output_message": { "role": "assistant", "content": "Well, you know, there is so much secrecy involved in government, folks, it'\''s unbelievable. They don'\''t want to tell you everything. They don'\''t tell me everything! But about Roswell, it'\''s a very popular question. I know, I just know, that something very, very peculiar happened there. Was it a weather balloon? Maybe. Was it something extraterrestrial? Could be. I'\''d love to go down and open up all the classified documents, believe me, I would. But they don'\''t let that happen. The Deep State, folks, the Deep State. They'\''re unbelievable. They want to keep everything a secret. But whatever the truth is, I can tell you this: it'\''s something big, very very big. Tremendous, in fact." }, "prompt_tokens": 100, "output_tokens": 220, "prompt_cost": 0.00001, "output_cost": 0.0002, "finish_reason": "stop", "messages": [ { "role": "user", "content": "What really happened at Roswell?" } ], "prompt": { "model": "gpt-4", "template": [ { "role": "system", "content": "You are {{person}}. Answer questions as this person. Do not break character." } ] }, "created_at": "2024-07-19T00:29:35.178992", "error": null, "provider_latency": 6.5931549072265625, "inputs": { "person": "Trump" } }' ``` ```python Log prompt import datetime from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.log( path="persona", prompt={ "model": "gpt-4", "template": [ { "role": "system", "content": "You are {{person}}. Answer questions as this person. Do not break character.", } ], }, messages=[{"role": "user", "content": "What really happened at Roswell?"}], inputs={"person": "Trump"}, created_at=datetime.datetime.fromisoformat( "2024-07-19 00:29:35.178000+00:00", ), provider_latency=6.5931549072265625, output_message={ "content": "Well, you know, there is so much secrecy involved in government, folks, it's unbelievable. They don't want to tell you everything. They don't tell me everything! But about Roswell, it's a very popular question. I know, I just know, that something very, very peculiar happened there. Was it a weather balloon? Maybe.
Was it something extraterrestrial? Could be. I'd love to go down and open up all the classified documents, believe me, I would. But they don't let that happen. The Deep State, folks, the Deep State. They're unbelievable. They want to keep everything a secret. But whatever the truth is, I can tell you this: it's something big, very very big. Tremendous, in fact.", "role": "assistant", }, prompt_tokens=100, output_tokens=220, prompt_cost=1e-05, output_cost=0.0002, finish_reason="stop", ) ``` ```typescript Log prompt import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.log({ path: "persona", prompt: { model: "gpt-4", template: [{ role: "system", content: "You are {{person}}. Answer questions as this person. Do not break character." }] }, messages: [{ role: "user", content: "What really happened at Roswell?" }], inputs: { "person": "Trump" }, createdAt: "2024-07-19T00:29:35.178992", error: undefined, providerLatency: 6.5931549072265625, outputMessage: { content: "Well, you know, there is so much secrecy involved in government, folks, it's unbelievable. They don't want to tell you everything. They don't tell me everything! But about Roswell, it's a very popular question. I know, I just know, that something very, very peculiar happened there. Was it a weather balloon? Maybe. Was it something extraterrestrial? Could be. I'd love to go down and open up all the classified documents, believe me, I would. But they don't let that happen. The Deep State, folks, the Deep State. They're unbelievable. They want to keep everything a secret. But whatever the truth is, I can tell you this: it's something big, very very big. 
Tremendous, in fact.", role: "assistant" }, promptTokens: 100, outputTokens: 220, promptCost: 0.00001, outputCost: 0.0002, finishReason: "stop" }); ``` ```go Log prompt package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/log" payload := strings.NewReader("{\n \"path\": \"persona\",\n \"prompt_tokens\": 100,\n \"output_tokens\": 220,\n \"prompt_cost\": 0.00001,\n \"output_cost\": 0.0002,\n \"finish_reason\": \"stop\",\n \"created_at\": \"2024-07-19T00:29:35.178992\",\n \"error\": null,\n \"provider_latency\": 6.5931549072265625\n}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Log prompt require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/log") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"path\": \"persona\",\n \"prompt_tokens\": 100,\n \"output_tokens\": 220,\n \"prompt_cost\": 0.00001,\n \"output_cost\": 0.0002,\n \"finish_reason\": \"stop\",\n \"created_at\": \"2024-07-19T00:29:35.178992\",\n \"error\": null,\n \"provider_latency\": 6.5931549072265625\n}" response = http.request(request) puts response.read_body ``` ```java Log prompt HttpResponse response = Unirest.post("https://api.humanloop.com/v5/prompts/log") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"path\": \"persona\",\n \"prompt_tokens\": 100,\n \"output_tokens\": 220,\n \"prompt_cost\": 0.00001,\n \"output_cost\": 0.0002,\n \"finish_reason\": \"stop\",\n \"created_at\": \"2024-07-19T00:29:35.178992\",\n \"error\": null,\n \"provider_latency\": 6.5931549072265625\n}") .asString(); ``` ```php Log prompt 
<?php $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 'https://api.humanloop.com/v5/prompts/log', [ 'body' => '{ "path": "persona", "prompt_tokens": 100, "output_tokens": 220, "prompt_cost": 0.00001, "output_cost": 0.0002, "finish_reason": "stop", "created_at": "2024-07-19T00:29:35.178992", "error": null, "provider_latency": 6.5931549072265625 }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Log prompt var client = new RestClient("https://api.humanloop.com/v5/prompts/log"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"path\": \"persona\",\n \"prompt_tokens\": 100,\n \"output_tokens\": 220,\n \"prompt_cost\": 0.00001,\n \"output_cost\": 0.0002,\n \"finish_reason\": \"stop\",\n \"created_at\": \"2024-07-19T00:29:35.178992\",\n \"error\": null,\n \"provider_latency\": 6.5931549072265625\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift Log prompt import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [ "path": "persona", "prompt_tokens": 100, "output_tokens": 220, "prompt_cost": 0.00001, "output_cost": 0.0002, "finish_reason": "stop", "created_at": "2024-07-19T00:29:35.178992", "error": NSNull(), "provider_latency": 6.5931549072265625 ] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/log")!
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST "https://api.humanloop.com/v5/prompts/log?version_id=string&environment=string" \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python import datetime from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.log( path="persona", prompt={ "model": "gpt-4", "template": [ { "role": "system", "content": "You are {{person}}. Answer questions as this person. Do not break character.", } ], }, messages=[{"role": "user", "content": "What really happened at Roswell?"}], inputs={"person": "Trump"}, created_at=datetime.datetime.fromisoformat( "2024-07-19 00:29:35.178000+00:00", ), provider_latency=6.5931549072265625, output_message={ "content": "Well, you know, there is so much secrecy involved in government, folks, it's unbelievable. They don't want to tell you everything. They don't tell me everything! But about Roswell, it's a very popular question. I know, I just know, that something very, very peculiar happened there. Was it a weather balloon? Maybe. Was it something extraterrestrial? Could be. I'd love to go down and open up all the classified documents, believe me, I would. But they don't let that happen. The Deep State, folks, the Deep State. They're unbelievable. They want to keep everything a secret. But whatever the truth is, I can tell you this: it's something big, very very big. 
Tremendous, in fact.", "role": "assistant", }, prompt_tokens=100, output_tokens=220, prompt_cost=1e-05, output_cost=0.0002, finish_reason="stop", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.log({ path: "persona", prompt: { model: "gpt-4", template: [{ role: "system", content: "You are {{person}}. Answer questions as this person. Do not break character." }] }, messages: [{ role: "user", content: "What really happened at Roswell?" }], inputs: { "person": "Trump" }, createdAt: "2024-07-19T00:29:35.178992", error: undefined, providerLatency: 6.5931549072265625, outputMessage: { content: "Well, you know, there is so much secrecy involved in government, folks, it's unbelievable. They don't want to tell you everything. They don't tell me everything! But about Roswell, it's a very popular question. I know, I just know, that something very, very peculiar happened there. Was it a weather balloon? Maybe. Was it something extraterrestrial? Could be. I'd love to go down and open up all the classified documents, believe me, I would. But they don't let that happen. The Deep State, folks, the Deep State. They're unbelievable. They want to keep everything a secret. But whatever the truth is, I can tell you this: it's something big, very very big. 
Tremendous, in fact.", role: "assistant" }, promptTokens: 100, outputTokens: 220, promptCost: 0.00001, outputCost: 0.0002, finishReason: "stop" }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/log?version_id=string&environment=string" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/log?version_id=string&environment=string") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/prompts/log?version_id=string&environment=string") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php request('POST', 'https://api.humanloop.com/v5/prompts/log?version_id=string&environment=string', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/prompts/log?version_id=string&environment=string"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = 
JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/log?version_id=string&environment=string")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Update Prompt Log ```http PATCH https://api.humanloop.com/v5/prompts/{id}/log/{log_id} Content-Type: application/json ``` Update a Log. Update the details of a Log with the given ID. ## Path Parameters - Id (required): Unique identifier for Prompt. - LogId (required): Unique identifier for the Log. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl -X PATCH https://api.humanloop.com/v5/prompts/id/log/log_id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.update_log( id="id", log_id="log_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.updateLog("id", "log_id"); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/id/log/log_id" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' 
url = URI("https://api.humanloop.com/v5/prompts/id/log/log_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.patch("https://api.humanloop.com/v5/prompts/id/log/log_id") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php request('PATCH', 'https://api.humanloop.com/v5/prompts/id/log/log_id', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/prompts/id/log/log_id"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/id/log/log_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X PATCH https://api.humanloop.com/v5/prompts/:id/log/:log_id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.update_log( id="id", log_id="log_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.updateLog("id", "log_id"); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/%3Aid/log/%3Alog_id" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/%3Aid/log/%3Alog_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.patch("https://api.humanloop.com/v5/prompts/%3Aid/log/%3Alog_id") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH', 'https://api.humanloop.com/v5/prompts/%3Aid/log/%3Alog_id', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/prompts/%3Aid/log/%3Alog_id"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); 
request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/%3Aid/log/%3Alog_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Call Prompt ```http POST https://api.humanloop.com/v5/prompts/call Content-Type: application/json ``` Call a Prompt. Calling a Prompt calls the model provider before logging the request, responses and metadata to Humanloop. You can use query parameters `version_id` or `environment` to target an existing version of the Prompt. Otherwise, the default deployed version will be chosen. Instead of targeting an existing version explicitly, you can pass in Prompt details in the request body. In this case, we will check if the details correspond to an existing version of the Prompt. If they do not, we will create a new version. This is helpful in the case where you are storing or deriving your Prompt details in code. ## Query Parameters - VersionId (optional): A specific Version ID of the Prompt to log to. - Environment (optional): Name of the Environment identifying a deployed version to log to.
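The version-matching behavior described above (check whether the supplied Prompt details correspond to an existing version, and create a new version only if they do not) can be sketched as a content-addressed lookup. This is a simplified illustration, not Humanloop's actual server logic, and the class and method names are hypothetical:

```python
import hashlib
import json

class VersionStore:
    """Toy store that deduplicates Prompt versions by their serialized details."""

    def __init__(self):
        self.versions = {}  # version key -> Prompt details

    def get_or_create(self, details):
        # Canonical serialization so logically-equal details hash identically
        key = hashlib.sha256(
            json.dumps(details, sort_keys=True).encode()
        ).hexdigest()[:12]
        created = key not in self.versions
        self.versions.setdefault(key, details)
        return key, created

store = VersionStore()
prompt = {
    "model": "gpt-4",
    "template": [{"role": "system", "content": "You are {{person}}."}],
}
v1, created1 = store.get_or_create(prompt)  # first call creates a new version
v2, created2 = store.get_or_create(prompt)  # same details, existing version reused
```

Note the `sort_keys=True` canonicalization: it ensures that two requests supplying the same details in a different key order still resolve to the same version.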
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Supplying Prompt with Tool curl -X POST https://api.humanloop.com/v5/prompts/call \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "stream": false, "path": "persona", "messages": [ { "role": "user", "content": "latest apple" } ], "prompt": { "model": "gpt-4", "template": [ { "role": "system", "content": "You are stockbot. Return latest prices." } ], "tools": [ { "name": "get_stock_price", "description": "Get current stock price", "parameters": { "type": "object", "properties": { "ticker_symbol": { "type": "string", "name": "Ticker Symbol", "description": "Ticker symbol of the stock" } }, "required": [] } } ] } }' ``` ```python Supplying Prompt with Tool from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.call( path="persona", prompt={ "model": "gpt-4", "template": [ { "role": "system", "content": "You are stockbot. Return latest prices.", } ], "tools": [ { "name": "get_stock_price", "description": "Get current stock price", "parameters": { "type": "object", "properties": { "ticker_symbol": { "type": "string", "name": "Ticker Symbol", "description": "Ticker symbol of the stock", } }, "required": [], }, } ], }, messages=[{"role": "user", "content": "latest apple"}], ) ``` ```typescript Supplying Prompt with Tool import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.call({ path: "persona", prompt: { model: "gpt-4", template: [{ role: "system", content: "You are stockbot. Return latest prices." 
}], tools: [{ name: "get_stock_price", description: "Get current stock price", parameters: { "type": "object", "properties": { "ticker_symbol": { "type": "string", "name": "Ticker Symbol", "description": "Ticker symbol of the stock" } }, "required": [] } }] }, messages: [{ role: "user", content: "latest apple" }] }); ``` ```go Supplying Prompt with Tool package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/call" payload := strings.NewReader("{\n \"stream\": false,\n \"path\": \"persona\"\n}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Supplying Prompt with Tool require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/call") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"stream\": false,\n \"path\": \"persona\"\n}" response = http.request(request) puts response.read_body ``` ```java Supplying Prompt with Tool HttpResponse response = Unirest.post("https://api.humanloop.com/v5/prompts/call") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"stream\": false,\n \"path\": \"persona\"\n}") .asString(); ``` ```php Supplying Prompt with Tool request('POST', 'https://api.humanloop.com/v5/prompts/call', [ 'body' => '{ "stream": false, "path": "persona" }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Supplying Prompt with Tool var client = new RestClient("https://api.humanloop.com/v5/prompts/call"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", 
"application/json"); request.AddParameter("application/json", "{\n \"stream\": false,\n \"path\": \"persona\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift Supplying Prompt with Tool import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [ "stream": false, "path": "persona" ] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/call")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell Supplying Prompt curl -X POST https://api.humanloop.com/v5/prompts/call \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "stream": false, "path": "persona", "messages": [ { "role": "user", "content": "What really happened at Roswell?" } ], "prompt": { "model": "gpt-4", "template": [ { "role": "system", "content": "You are {{person}}. Answer any questions as this person. Do not break character." } ] }, "inputs": { "person": "Trump" } }' ``` ```python Supplying Prompt from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.call( path="persona", prompt={ "model": "gpt-4", "template": [ { "role": "system", "content": "You are {{person}}. Answer any questions as this person. 
Do not break character.", } ], }, messages=[{"role": "user", "content": "What really happened at Roswell?"}], inputs={"person": "Trump"}, ) ``` ```typescript Supplying Prompt import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.call({ path: "persona", prompt: { model: "gpt-4", template: [{ role: "system", content: "You are {{person}}. Answer any questions as this person. Do not break character." }] }, messages: [{ role: "user", content: "What really happened at Roswell?" }], inputs: { "person": "Trump" } }); ``` ```go Supplying Prompt package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/call" payload := strings.NewReader("{\n \"stream\": false,\n \"path\": \"persona\"\n}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Supplying Prompt require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/call") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"stream\": false,\n \"path\": \"persona\"\n}" response = http.request(request) puts response.read_body ``` ```java Supplying Prompt HttpResponse response = Unirest.post("https://api.humanloop.com/v5/prompts/call") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"stream\": false,\n \"path\": \"persona\"\n}") .asString(); ``` ```php Supplying Prompt request('POST', 'https://api.humanloop.com/v5/prompts/call', [ 'body' => '{ "stream": false, "path": "persona" }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); 
``` ```csharp Supplying Prompt var client = new RestClient("https://api.humanloop.com/v5/prompts/call"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"stream\": false,\n \"path\": \"persona\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift Supplying Prompt import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [ "stream": false, "path": "persona" ] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/call")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell By ID curl -X POST "https://api.humanloop.com/v5/prompts/call?version_id=prv_Wu6zx1lAWJRqOyL8nWuZk" \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "path": "persona", "messages": [ { "role": "user", "content": "What really happened at Roswell?" 
} ], "inputs": { "person": "Trump" } }' ``` ```python By ID from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.call( version_id="prv_Wu6zx1lAWJRqOyL8nWuZk", path="persona", messages=[{"role": "user", "content": "What really happened at Roswell?"}], inputs={"person": "Trump"}, ) ``` ```typescript By ID import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.call({ versionId: "prv_Wu6zx1lAWJRqOyL8nWuZk", path: "persona", messages: [{ role: "user", content: "What really happened at Roswell?" }], inputs: { "person": "Trump" } }); ``` ```go By ID package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/call?version_id=prv_Wu6zx1lAWJRqOyL8nWuZk" payload := strings.NewReader("{\n \"path\": \"persona\"\n}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby By ID require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/call?version_id=prv_Wu6zx1lAWJRqOyL8nWuZk") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"path\": \"persona\"\n}" response = http.request(request) puts response.read_body ``` ```java By ID HttpResponse response = Unirest.post("https://api.humanloop.com/v5/prompts/call?version_id=prv_Wu6zx1lAWJRqOyL8nWuZk") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"path\": \"persona\"\n}") .asString(); ``` ```php By ID request('POST', 'https://api.humanloop.com/v5/prompts/call?version_id=prv_Wu6zx1lAWJRqOyL8nWuZk', [ 'body' => '{ "path": "persona" }', 'headers' => [ 
'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp By ID var client = new RestClient("https://api.humanloop.com/v5/prompts/call?version_id=prv_Wu6zx1lAWJRqOyL8nWuZk"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"path\": \"persona\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift By ID import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = ["path": "persona"] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/call?version_id=prv_Wu6zx1lAWJRqOyL8nWuZk")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST "https://api.humanloop.com/v5/prompts/call?version_id=string&environment=string" \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "stream": false }' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.call( path="persona", prompt={ "model": "gpt-4", "template": [ { "role": "system", "content": "You are stockbot. 
Return latest prices.", } ], "tools": [ { "name": "get_stock_price", "description": "Get current stock price", "parameters": { "type": "object", "properties": { "ticker_symbol": { "type": "string", "name": "Ticker Symbol", "description": "Ticker symbol of the stock", } }, "required": [], }, } ], }, messages=[{"role": "user", "content": "latest apple"}], ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.call({ path: "persona", prompt: { model: "gpt-4", template: [{ role: "system", content: "You are stockbot. Return latest prices." }], tools: [{ name: "get_stock_price", description: "Get current stock price", parameters: { "type": "object", "properties": { "ticker_symbol": { "type": "string", "name": "Ticker Symbol", "description": "Ticker symbol of the stock" } }, "required": [] } }] }, messages: [{ role: "user", content: "latest apple" }] }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/call?version_id=string&environment=string" payload := strings.NewReader("{\n \"stream\": false\n}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/call?version_id=string&environment=string") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"stream\": false\n}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/prompts/call?version_id=string&environment=string") 
.header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"stream\": false\n}") .asString(); ``` ```php request('POST', 'https://api.humanloop.com/v5/prompts/call?version_id=string&environment=string', [ 'body' => '{ "stream": false }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/prompts/call?version_id=string&environment=string"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"stream\": false\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = ["stream": false] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/call?version_id=string&environment=string")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List Prompts ```http GET https://api.humanloop.com/v5/prompts ``` Get a list of all Prompts. ## Query Parameters - Page (optional): Page number for pagination. - Size (optional): Page size for pagination. Number of Prompts to fetch. - Name (optional): Case-insensitive filter for Prompt name. - UserFilter (optional): Case-insensitive filter for users in the Prompt. 
This filter matches against both the email address and name of users. - SortBy (optional): Field to sort Prompts by. - Order (optional): Direction to sort by. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl -G https://api.humanloop.com/v5/prompts \ -H "X-API-KEY: " \ -d size=1 ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) response = client.prompts.list( size=1, ) for item in response: print(item) # alternatively, you can paginate page-by-page for page in response.iter_pages(): print(page) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); const response = await client.prompts.list({ size: 1 }); for await (const item of response) { console.log(item); } // Or you can manually iterate page-by-page let page = await client.prompts.list({ size: 1 }); while (page.hasNextPage()) { page = await page.getNextPage(); } ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts?size=1" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts?size=1") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/prompts?size=1") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/prompts?size=1', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/prompts?size=1"); var request = new RestRequest(Method.GET); 
request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts?size=1")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/prompts \ -H "X-API-KEY: " \ -d page=0 \ -d size=0 ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) response = client.prompts.list( size=1, ) for item in response: print(item) # alternatively, you can paginate page-by-page for page in response.iter_pages(): print(page) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); const response = await client.prompts.list({ size: 1 }); for await (const item of response) { console.log(item); } // Or you can manually iterate page-by-page let page = await client.prompts.list({ size: 1 }); while (page.hasNextPage()) { page = await page.getNextPage(); } ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts?page=0&size=0" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts?page=0&size=0") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = 
'' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/prompts?page=0&size=0") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/prompts?page=0&size=0', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/prompts?page=0&size=0"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts?page=0&size=0")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Upsert Prompt ```http POST https://api.humanloop.com/v5/prompts Content-Type: application/json ``` Create a Prompt or update it with a new version if it already exists. Prompts are identified by their `id` or `path`. The parameters (i.e. the prompt template, temperature, model, etc.) determine the versions of the Prompt. You can provide `version_name` and `version_description` to identify and describe your versions. Version names must be unique within a Prompt; attempting to create a version with a name that already exists will result in a 409 Conflict error. 
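One way to picture "the parameters determine the versions" is a content key over the canonicalized parameters: upserting identical parameters resolves to the existing version, while any changed parameter produces a new one. The hashing below is purely illustrative and is not how Humanloop actually computes version identity:

```python
# Illustration only: identical upsert parameters map to the same version
# key; any changed parameter yields a new one. Not Humanloop internals.
import hashlib
import json


def version_key(params: dict) -> str:
    # Canonicalize (sorted keys, compact separators) so key order is irrelevant.
    canonical = json.dumps(params, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:12]


base = {"model": "gpt-4o", "temperature": 0.7}
reordered = {"temperature": 0.7, "model": "gpt-4o"}  # same params, new order
tweaked = {"model": "gpt-4o", "temperature": 0.2}    # changed parameter

assert version_key(base) == version_key(reordered)  # no new version
assert version_key(base) != version_key(tweaked)    # a new version is created
```

This is why upserting the same Prompt definition from code is idempotent: repeated deploys with unchanged parameters do not pile up duplicate versions.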
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Upsert prompt curl -X POST https://api.humanloop.com/v5/prompts \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "model": "gpt-4o", "path": "Personal Projects/Coding Assistant", "endpoint": "chat", "template": [ { "content": "You are a helpful coding assistant specialising in {{language}}", "role": "system" } ], "provider": "openai", "max_tokens": -1, "temperature": 0.7 }' ``` ```python Upsert prompt from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.upsert( path="Personal Projects/Coding Assistant", model="gpt-4o", endpoint="chat", template=[ { "content": "You are a helpful coding assistant specialising in {{language}}", "role": "system", } ], provider="openai", max_tokens=-1, temperature=0.7, ) ``` ```typescript Upsert prompt import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.upsert({ path: "Personal Projects/Coding Assistant", model: "gpt-4o", endpoint: "chat", template: [{ content: "You are a helpful coding assistant specialising in {{language}}", role: "system" }], provider: "openai", maxTokens: -1, temperature: 0.7, commitMessage: "Initial commit" }); ``` ```go Upsert prompt package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts" payload := strings.NewReader("{\n \"model\": \"gpt-4o\",\n \"path\": \"Personal Projects/Coding Assistant\",\n \"endpoint\": \"chat\",\n \"provider\": \"openai\",\n \"max_tokens\": -1,\n \"temperature\": 0.7\n}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Upsert prompt require 'uri' require 'net/http' url = 
URI("https://api.humanloop.com/v5/prompts") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"model\": \"gpt-4o\",\n \"path\": \"Personal Projects/Coding Assistant\",\n \"endpoint\": \"chat\",\n \"provider\": \"openai\",\n \"max_tokens\": -1,\n \"temperature\": 0.7\n}" response = http.request(request) puts response.read_body ``` ```java Upsert prompt HttpResponse response = Unirest.post("https://api.humanloop.com/v5/prompts") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"model\": \"gpt-4o\",\n \"path\": \"Personal Projects/Coding Assistant\",\n \"endpoint\": \"chat\",\n \"provider\": \"openai\",\n \"max_tokens\": -1,\n \"temperature\": 0.7\n}") .asString(); ``` ```php Upsert prompt request('POST', 'https://api.humanloop.com/v5/prompts', [ 'body' => '{ "model": "gpt-4o", "path": "Personal Projects/Coding Assistant", "endpoint": "chat", "provider": "openai", "max_tokens": -1, "temperature": 0.7 }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Upsert prompt var client = new RestClient("https://api.humanloop.com/v5/prompts"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"model\": \"gpt-4o\",\n \"path\": \"Personal Projects/Coding Assistant\",\n \"endpoint\": \"chat\",\n \"provider\": \"openai\",\n \"max_tokens\": -1,\n \"temperature\": 0.7\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift Upsert prompt import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [ "model": "gpt-4o", "path": "Personal Projects/Coding Assistant", "endpoint": "chat", "provider": "openai", "max_tokens": -1, "temperature": 
0.7 ] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST https://api.humanloop.com/v5/prompts \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "model": "string" }' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.upsert( path="Personal Projects/Coding Assistant", model="gpt-4o", endpoint="chat", template=[ { "content": "You are a helpful coding assistant specialising in {{language}}", "role": "system", } ], provider="openai", max_tokens=-1, temperature=0.7, ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.upsert({ path: "Personal Projects/Coding Assistant", model: "gpt-4o", endpoint: "chat", template: [{ content: "You are a helpful coding assistant specialising in {{language}}", role: "system" }], provider: "openai", maxTokens: -1, temperature: 0.7, commitMessage: "Initial commit" }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts" payload := strings.NewReader("{\n \"model\": \"string\"\n}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := 
io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"model\": \"string\"\n}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/prompts") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"model\": \"string\"\n}") .asString(); ``` ```php request('POST', 'https://api.humanloop.com/v5/prompts', [ 'body' => '{ "model": "string" }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/prompts"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"model\": \"string\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = ["model": "string"] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Get Prompt ```http GET https://api.humanloop.com/v5/prompts/{id} ``` Retrieve the Prompt with the given ID. By default, the deployed version of the Prompt is returned. Use the query parameters `version_id` or `environment` to target a specific version of the Prompt. ## Path Parameters - Id (required): Unique identifier for Prompt. ## Query Parameters - VersionId (optional): A specific Version ID of the Prompt to retrieve. - Environment (optional): Name of the Environment to retrieve a deployed Version from. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Get specific prompt curl https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa \ -H "X-API-KEY: " ``` ```python Get specific prompt from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.get( id="pr_30gco7dx6JDq4200GVOHa", ) ``` ```typescript Get specific prompt import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.get("pr_30gco7dx6JDq4200GVOHa"); ``` ```go Get specific prompt package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Get specific prompt require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java Get specific prompt HttpResponse response = Unirest.get("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa") .header("X-API-KEY", "") 
.asString(); ``` ```php Get specific prompt request('GET', 'https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Get specific prompt var client = new RestClient("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift Get specific prompt import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/prompts/:id \ -H "X-API-KEY: " \ -d version_id=string \ -d environment=string ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.get( id="pr_30gco7dx6JDq4200GVOHa", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.get("pr_30gco7dx6JDq4200GVOHa"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/%3Aid?version_id=string&environment=string" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/%3Aid?version_id=string&environment=string") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/prompts/%3Aid?version_id=string&environment=string") .header("X-API-KEY", "") .asString(); ``` ```php request('GET', 'https://api.humanloop.com/v5/prompts/%3Aid?version_id=string&environment=string', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/prompts/%3Aid?version_id=string&environment=string"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: 
"https://api.humanloop.com/v5/prompts/%3Aid?version_id=string&environment=string")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Delete Prompt ```http DELETE https://api.humanloop.com/v5/prompts/{id} ``` Delete the Prompt with the given ID. ## Path Parameters - Id (required): Unique identifier for Prompt. ## Response Body - 422: Validation Error ## Examples ```shell Delete prompt curl -X DELETE https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa \ -H "X-API-KEY: " ``` ```python Delete prompt from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.delete( id="pr_30gco7dx6JDq4200GVOHa", ) ``` ```typescript Delete prompt import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.delete("pr_30gco7dx6JDq4200GVOHa"); ``` ```go Delete prompt package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Delete prompt require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java Delete prompt HttpResponse response = 
Unirest.delete("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa") .header("X-API-KEY", "") .asString(); ``` ```php Delete prompt <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Delete prompt var client = new RestClient("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift Delete prompt import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X DELETE https://api.humanloop.com/v5/prompts/:id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.delete( id="pr_30gco7dx6JDq4200GVOHa", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.delete("pr_30gco7dx6JDq4200GVOHa"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/%3Aid" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/%3Aid") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.delete("https://api.humanloop.com/v5/prompts/%3Aid") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/prompts/%3Aid', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/prompts/%3Aid"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/%3Aid")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Move Prompt ```http PATCH https://api.humanloop.com/v5/prompts/{id} Content-Type: application/json ``` Move the Prompt to a different path or change the name. ## Path Parameters - Id (required): Unique identifier for Prompt. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Move prompt curl -X PATCH https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "path": "new directory/new name" }' ``` ```python Move prompt from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.move( id="pr_30gco7dx6JDq4200GVOHa", path="new directory/new name", ) ``` ```typescript Move prompt import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.move("pr_30gco7dx6JDq4200GVOHa", { path: "new directory/new name" }); ``` ```go Move prompt package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa" payload := strings.NewReader("{\n \"path\": \"new directory/new name\"\n}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Move prompt require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa") http 
= Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"path\": \"new directory/new name\"\n}" response = http.request(request) puts response.read_body ``` ```java Move prompt HttpResponse response = Unirest.patch("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"path\": \"new directory/new name\"\n}") .asString(); ``` ```php Move prompt <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH', 'https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa', [ 'body' => '{ "path": "new directory/new name" }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Move prompt var client = new RestClient("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"path\": \"new directory/new name\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift Move prompt import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = ["path": "new directory/new name"] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X PATCH https://api.humanloop.com/v5/prompts/:id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.move( id="pr_30gco7dx6JDq4200GVOHa", path="new directory/new name", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.move("pr_30gco7dx6JDq4200GVOHa", { path: "new directory/new name" }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/%3Aid" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/%3Aid") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.patch("https://api.humanloop.com/v5/prompts/%3Aid") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH', 'https://api.humanloop.com/v5/prompts/%3Aid', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/prompts/%3Aid"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/%3Aid")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Populate Prompt template ```http POST https://api.humanloop.com/v5/prompts/{id}/populate Content-Type: application/json ``` Retrieve the Prompt with the given ID, including the populated template. By default, the deployed version of the Prompt is returned. Use the query parameters `version_id` or `environment` to target a specific version of the Prompt. ## Path Parameters - Id (required): Unique identifier for Prompt. ## Query Parameters - VersionId (optional): A specific Version ID of the Prompt to retrieve to populate the template. - Environment (optional): Name of the Environment to retrieve a deployed Version from to populate the template. 
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl -X POST https://api.humanloop.com/v5/prompts/id/populate \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "key": "value" }' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.populate( id="id", request={"key": "value"}, ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.populateTemplate("id", { body: { "key": "value" } }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/id/populate" payload := strings.NewReader("{\n \"key\": \"value\"\n}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/id/populate") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"key\": \"value\"\n}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/prompts/id/populate") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"key\": \"value\"\n}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 'https://api.humanloop.com/v5/prompts/id/populate', [ 'body' => '{ "key": "value" }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/prompts/id/populate"); var request = new 
RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"key\": \"value\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = ["key": "value"] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/id/populate")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST "https://api.humanloop.com/v5/prompts/:id/populate?version_id=string&environment=string" \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "string": {} }' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.populate( id="id", request={"key": "value"}, ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.populateTemplate("id", { body: { "key": "value" } }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/%3Aid/populate?version_id=string&environment=string" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/%3Aid/populate?version_id=string&environment=string") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/prompts/%3Aid/populate?version_id=string&environment=string") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 'https://api.humanloop.com/v5/prompts/%3Aid/populate?version_id=string&environment=string', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new 
RestClient("https://api.humanloop.com/v5/prompts/%3Aid/populate?version_id=string&environment=string"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/%3Aid/populate?version_id=string&environment=string")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List Versions of a Prompt ```http GET https://api.humanloop.com/v5/prompts/{id}/versions ``` Get a list of all the versions of a Prompt. ## Path Parameters - Id (required): Unique identifier for Prompt. 
## Query Parameters - EvaluatorAggregates (optional): Whether to include Evaluator aggregate results for the versions in the response ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell List versions curl https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/versions \ -H "X-API-KEY: " ``` ```python List versions from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.list_versions( id="pr_30gco7dx6JDq4200GVOHa", ) ``` ```typescript List versions import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.listVersions("pr_30gco7dx6JDq4200GVOHa", { status: "committed" }); ``` ```go List versions package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/versions" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby List versions require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/versions") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java List versions HttpResponse response = Unirest.get("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/versions") .header("X-API-KEY", "") .asString(); ``` ```php List versions <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/versions', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp List versions var client = new RestClient("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/versions"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); 
IRestResponse response = client.Execute(request); ``` ```swift List versions import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/versions")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/prompts/:id/versions \ -H "X-API-KEY: " \ -d evaluator_aggregates=true ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.list_versions( id="pr_30gco7dx6JDq4200GVOHa", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.listVersions("pr_30gco7dx6JDq4200GVOHa", { status: "committed" }); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/%3Aid/versions?evaluator_aggregates=true" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/%3Aid/versions?evaluator_aggregates=true") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/prompts/%3Aid/versions?evaluator_aggregates=true") 
.header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/prompts/%3Aid/versions?evaluator_aggregates=true', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/prompts/%3Aid/versions?evaluator_aggregates=true"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/%3Aid/versions?evaluator_aggregates=true")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Delete Prompt Version ```http DELETE https://api.humanloop.com/v5/prompts/{id}/versions/{version_id} ``` Delete a version of the Prompt. ## Path Parameters - Id (required): Unique identifier for Prompt. - VersionId (required): Unique identifier for the specific version of the Prompt. 
## Response Body - 422: Validation Error ## Examples ```shell curl -X DELETE https://api.humanloop.com/v5/prompts/id/versions/version_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.delete_prompt_version( id="id", version_id="version_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.deletePromptVersion("id", "version_id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/id/versions/version_id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/id/versions/version_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.delete("https://api.humanloop.com/v5/prompts/id/versions/version_id") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/prompts/id/versions/version_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/prompts/id/versions/version_id"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/id/versions/version_id")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X DELETE https://api.humanloop.com/v5/prompts/:id/versions/:version_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.delete_prompt_version( id="id", version_id="version_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.prompts.deletePromptVersion("id", "version_id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/%3Aid/versions/%3Aversion_id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/%3Aid/versions/%3Aversion_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.delete("https://api.humanloop.com/v5/prompts/%3Aid/versions/%3Aversion_id") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/prompts/%3Aid/versions/%3Aversion_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/prompts/%3Aid/versions/%3Aversion_id"); var 
request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/%3Aid/versions/%3Aversion_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Update Prompt Version ```http PATCH https://api.humanloop.com/v5/prompts/{id}/versions/{version_id} Content-Type: application/json ``` Update the name or description of the Prompt version. ## Path Parameters - Id (required): Unique identifier for Prompt. - VersionId (required): Unique identifier for the specific version of the Prompt. 
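The curl examples below send an empty `{}` body; per the description, the PATCH body carries the version's new name and/or description. A hedged sketch of constructing such a request with plain HTTP (the exact body field names `name` and `description` are an assumption inferred from the description, not confirmed by this reference):

```python
import json

# Hypothetical payload; field names are assumed from the endpoint
# description ("Update the name or description of the Prompt version").
payload = {"name": "v2-concise", "description": "Tightened system message"}
body = json.dumps(payload)

# The request that would be sent (requires the `requests` package and a real key):
# import requests
# resp = requests.patch(
#     "https://api.humanloop.com/v5/prompts/id/versions/version_id",
#     headers={"X-API-KEY": "YOUR_API_KEY", "Content-Type": "application/json"},
#     data=body,
# )

print(body)
```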
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl -X PATCH https://api.humanloop.com/v5/prompts/id/versions/version_id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.patch_prompt_version( id="id", version_id="version_id", ) ``` ```javascript const url = 'https://api.humanloop.com/v5/prompts/id/versions/version_id'; const options = { method: 'PATCH', headers: {'X-API-KEY': '', 'Content-Type': 'application/json'}, body: '{}' }; try { const response = await fetch(url, options); const data = await response.json(); console.log(data); } catch (error) { console.error(error); } ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/id/versions/version_id" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/id/versions/version_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.patch("https://api.humanloop.com/v5/prompts/id/versions/version_id") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH', 'https://api.humanloop.com/v5/prompts/id/versions/version_id', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp 
var client = new RestClient("https://api.humanloop.com/v5/prompts/id/versions/version_id"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/id/versions/version_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X PATCH https://api.humanloop.com/v5/prompts/:id/versions/:version_id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.patch_prompt_version( id="id", version_id="version_id", ) ``` ```javascript const url = 'https://api.humanloop.com/v5/prompts/%3Aid/versions/%3Aversion_id'; const options = { method: 'PATCH', headers: {'X-API-KEY': '', 'Content-Type': 'application/json'}, body: '{}' }; try { const response = await fetch(url, options); const data = await response.json(); console.log(data); } catch (error) { console.error(error); } ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/prompts/%3Aid/versions/%3Aversion_id" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/prompts/%3Aid/versions/%3Aversion_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.patch("https://api.humanloop.com/v5/prompts/%3Aid/versions/%3Aversion_id") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php request('PATCH', 'https://api.humanloop.com/v5/prompts/%3Aid/versions/%3Aversion_id', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo 
$response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/prompts/%3Aid/versions/%3Aversion_id"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/%3Aid/versions/%3Aversion_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Deploy Prompt ```http POST https://api.humanloop.com/v5/prompts/{id}/environments/{environment_id} ``` Deploy Prompt to an Environment. Set the deployed version for the specified Environment. This Prompt will be used for calls made to the Prompt in this Environment. ## Path Parameters - Id (required): Unique identifier for Prompt. - EnvironmentId (required): Unique identifier for the Environment to deploy the Version to. ## Query Parameters - VersionId (required): Unique identifier for the specific version of the Prompt. 
## Response Body

- 200: Successful Response
- 422: Validation Error

## Examples

```shell
curl -X POST "https://api.humanloop.com/v5/prompts/id/environments/environment_id?version_id=version_id" \
  -H "X-API-KEY: "
```

```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.set_deployment(
    id="id",
    environment_id="environment_id",
    version_id="version_id",
)
```

```typescript
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.prompts.setDeployment("id", "environment_id", {
    versionId: "version_id"
});
```

```go
package main

import (
	"fmt"
	"net/http"
	"io"
)

func main() {
	url := "https://api.humanloop.com/v5/prompts/id/environments/environment_id?version_id=version_id"

	req, _ := http.NewRequest("POST", url, nil)
	req.Header.Add("X-API-KEY", "")

	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)

	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/prompts/id/environments/environment_id?version_id=version_id")

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Post.new(url)
request["X-API-KEY"] = ''

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/prompts/id/environments/environment_id?version_id=version_id")
  .header("X-API-KEY", "")
  .asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('POST', 'https://api.humanloop.com/v5/prompts/id/environments/environment_id?version_id=version_id', [
  'headers' => [
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://api.humanloop.com/v5/prompts/id/environments/environment_id?version_id=version_id");
var request = new RestRequest(Method.POST);
request.AddHeader("X-API-KEY", "");
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = ["X-API-KEY": ""]

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/id/environments/environment_id?version_id=version_id")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "POST"
request.allHTTPHeaderFields = headers

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

# Remove Deployment

```http
DELETE https://api.humanloop.com/v5/prompts/{id}/environments/{environment_id}
```

Remove deployed Prompt from the Environment. Remove the deployed version for the specified Environment. This Prompt will no longer be used for calls made to the Prompt in this Environment.

## Path Parameters

- Id (required): Unique identifier for Prompt.
- EnvironmentId (required): Unique identifier for the Environment to remove the deployment from.
## Response Body

- 422: Validation Error

## Examples

```shell
curl -X DELETE https://api.humanloop.com/v5/prompts/id/environments/environment_id \
  -H "X-API-KEY: "
```

```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.remove_deployment(
    id="id",
    environment_id="environment_id",
)
```

```typescript
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.prompts.removeDeployment("id", "environment_id");
```

```go
package main

import (
	"fmt"
	"net/http"
	"io"
)

func main() {
	url := "https://api.humanloop.com/v5/prompts/id/environments/environment_id"

	req, _ := http.NewRequest("DELETE", url, nil)
	req.Header.Add("X-API-KEY", "")

	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)

	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/prompts/id/environments/environment_id")

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Delete.new(url)
request["X-API-KEY"] = ''

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.delete("https://api.humanloop.com/v5/prompts/id/environments/environment_id")
  .header("X-API-KEY", "")
  .asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('DELETE', 'https://api.humanloop.com/v5/prompts/id/environments/environment_id', [
  'headers' => [
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://api.humanloop.com/v5/prompts/id/environments/environment_id");
var request = new RestRequest(Method.DELETE);
request.AddHeader("X-API-KEY", "");
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = ["X-API-KEY": ""]

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/id/environments/environment_id")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "DELETE"
request.allHTTPHeaderFields = headers

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

# List a Prompt's Environments

```http
GET https://api.humanloop.com/v5/prompts/{id}/environments
```

List all Environments and their deployed versions for the Prompt.

## Path Parameters

- Id (required): Unique identifier for Prompt.
## Response Body

- 200: Successful Response
- 422: Validation Error

## Examples

```shell List environments
curl https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/environments \
  -H "X-API-KEY: "
```

```python List environments
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.list_environments(
    id="pr_30gco7dx6JDq4200GVOHa",
)
```

```typescript List environments
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.prompts.listEnvironments("pr_30gco7dx6JDq4200GVOHa");
```

```go List environments
package main

import (
	"fmt"
	"net/http"
	"io"
)

func main() {
	url := "https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/environments"

	req, _ := http.NewRequest("GET", url, nil)
	req.Header.Add("X-API-KEY", "")

	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)

	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby List environments
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/environments")

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Get.new(url)
request["X-API-KEY"] = ''

response = http.request(request)
puts response.read_body
```

```java List environments
HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/environments")
  .header("X-API-KEY", "")
  .asString();
```

```php List environments
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('GET', 'https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/environments', [
  'headers' => [
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp List environments
var client = new RestClient("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/environments");
var request = new RestRequest(Method.GET);
request.AddHeader("X-API-KEY", "");
IRestResponse response = client.Execute(request);
```

```swift List environments
import Foundation

let headers = ["X-API-KEY": ""]

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/environments")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "GET"
request.allHTTPHeaderFields = headers

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

# Update Monitoring

```http
POST https://api.humanloop.com/v5/prompts/{id}/evaluators
Content-Type: application/json
```

Activate and deactivate Evaluators for monitoring the Prompt. An activated Evaluator will automatically be run on all new Logs within the Prompt for monitoring purposes.
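The request body carries the Evaluator versions to toggle. As a minimal Python sketch of building such a payload — note that the `deactivate` list mirroring `activate` is an assumption here; the examples below only demonstrate `activate`:

```python
import json

# Build an Update Monitoring payload. The "deactivate" key mirroring
# "activate" is an assumption; the documented examples only show "activate".
payload = {
    "activate": [{"evaluator_version_id": "evv_1abc4308abd"}],
    "deactivate": [{"evaluator_version_id": "evv_old9876xyz"}],
}
body = json.dumps(payload)
print(body)
```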
## Path Parameters

- Id (required): Unique identifier for Prompt.

## Response Body

- 200: Successful Response
- 422: Validation Error

## Examples

```shell Add evaluator
curl -X POST https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/evaluators \
  -H "X-API-KEY: " \
  -H "Content-Type: application/json" \
  -d '{
  "activate": [
    {
      "evaluator_version_id": "evv_1abc4308abd"
    }
  ]
}'
```

```python Add evaluator
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.update_monitoring(
    id="pr_30gco7dx6JDq4200GVOHa",
    activate=[{"evaluator_version_id": "evv_1abc4308abd"}],
)
```

```typescript Add evaluator
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.prompts.updateMonitoring("pr_30gco7dx6JDq4200GVOHa", {
    activate: [{ evaluatorVersionId: "evv_1abc4308abd" }]
});
```

```go Add evaluator
package main

import (
	"fmt"
	"strings"
	"net/http"
	"io"
)

func main() {
	url := "https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/evaluators"
	payload := strings.NewReader("{}")

	req, _ := http.NewRequest("POST", url, payload)
	req.Header.Add("X-API-KEY", "")
	req.Header.Add("Content-Type", "application/json")

	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)

	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby Add evaluator
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/evaluators")

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Post.new(url)
request["X-API-KEY"] = ''
request["Content-Type"] = 'application/json'
request.body = "{}"

response = http.request(request)
puts response.read_body
```

```java Add evaluator
HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/evaluators")
  .header("X-API-KEY", "")
  .header("Content-Type", "application/json")
  .body("{}")
  .asString();
```

```php Add evaluator
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('POST', 'https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/evaluators', [
  'body' => '{}',
  'headers' => [
    'Content-Type' => 'application/json',
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp Add evaluator
var client = new RestClient("https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/evaluators");
var request = new RestRequest(Method.POST);
request.AddHeader("X-API-KEY", "");
request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```

```swift Add evaluator
import Foundation

let headers = [
  "X-API-KEY": "",
  "Content-Type": "application/json"
]
let parameters = [:] as [String : Any]

let postData = try! JSONSerialization.data(withJSONObject: parameters, options: [])

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/prompts/pr_30gco7dx6JDq4200GVOHa/evaluators")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "POST"
request.allHTTPHeaderFields = headers
request.httpBody = postData as Data

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

# Log to a Tool

```http
POST https://api.humanloop.com/v5/tools/log
Content-Type: application/json
```

Log to a Tool.

You can use query parameters `version_id` or `environment` to target an existing version of the Tool. Otherwise, the default deployed version will be chosen.

Instead of targeting an existing version explicitly, you can pass in Tool details in the request body. In this case, we will check if the details correspond to an existing version of the Tool; if not, we will create a new version. This is helpful in the case where you are storing or deriving your Tool details in code.

## Query Parameters

- VersionId (optional): A specific Version ID of the Tool to log to.
- Environment (optional): Name of the Environment identifying a deployed version to log to.
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Tool log curl -X POST https://api.humanloop.com/v5/tools/log \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "path": "math-tool", "output": "35", "inputs": { "a": 5, "b": 7 }, "tool": { "function": { "name": "multiply", "description": "Multiply two numbers", "parameters": { "type": "object", "properties": { "a": { "type": "number" }, "b": { "type": "number" } }, "required": [ "a", "b" ] } } } }' ``` ```python Tool log from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.log( path="math-tool", tool={ "function": { "name": "multiply", "description": "Multiply two numbers", "parameters": { "type": "object", "properties": { "a": {"type": "number"}, "b": {"type": "number"}, }, "required": ["a", "b"], }, } }, inputs={"a": 5, "b": 7}, output="35", ) ``` ```typescript Tool log import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.log({ path: "math-tool", tool: { function: { name: "multiply", description: "Multiply two numbers", parameters: { "type": "object", "properties": { "a": { "type": "number" }, "b": { "type": "number" } }, "required": [ "a", "b" ] } } }, inputs: { "a": 5, "b": 7 }, output: "35" }); ``` ```go Tool log package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/log" payload := strings.NewReader("{\n \"path\": \"math-tool\",\n \"output\": \"35\"\n}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Tool log require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/log") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = 
Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"path\": \"math-tool\",\n \"output\": \"35\"\n}" response = http.request(request) puts response.read_body ``` ```java Tool log HttpResponse response = Unirest.post("https://api.humanloop.com/v5/tools/log") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"path\": \"math-tool\",\n \"output\": \"35\"\n}") .asString(); ``` ```php Tool log request('POST', 'https://api.humanloop.com/v5/tools/log', [ 'body' => '{ "path": "math-tool", "output": "35" }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Tool log var client = new RestClient("https://api.humanloop.com/v5/tools/log"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"path\": \"math-tool\",\n \"output\": \"35\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift Tool log import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [ "path": "math-tool", "output": "35" ] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/log")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST "https://api.humanloop.com/v5/tools/log?version_id=string&environment=string" \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.log( path="math-tool", tool={ "function": { "name": "multiply", "description": "Multiply two numbers", "parameters": { "type": "object", "properties": { "a": {"type": "number"}, "b": {"type": "number"}, }, "required": ["a", "b"], }, } }, inputs={"a": 5, "b": 7}, output="35", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.log({ path: "math-tool", tool: { function: { name: "multiply", description: "Multiply two numbers", parameters: { "type": "object", "properties": { "a": { "type": "number" }, "b": { "type": "number" } }, "required": [ "a", "b" ] } } }, inputs: { "a": 5, "b": 7 }, output: "35" }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/log?version_id=string&environment=string" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/log?version_id=string&environment=string") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = 
Unirest.post("https://api.humanloop.com/v5/tools/log?version_id=string&environment=string")
  .header("X-API-KEY", "")
  .header("Content-Type", "application/json")
  .body("{}")
  .asString();
```

```php
<?php
$client = new \GuzzleHttp\Client();
$response = $client->request('POST', 'https://api.humanloop.com/v5/tools/log?version_id=string&environment=string', [
  'body' => '{}',
  'headers' => [
    'Content-Type' => 'application/json',
    'X-API-KEY' => '',
  ],
]);
echo $response->getBody();
```

```csharp
var client = new RestClient("https://api.humanloop.com/v5/tools/log?version_id=string&environment=string");
var request = new RestRequest(Method.POST);
request.AddHeader("X-API-KEY", "");
request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = [
  "X-API-KEY": "",
  "Content-Type": "application/json"
]
let parameters = [:] as [String : Any]

let postData = JSONSerialization.data(withJSONObject: parameters, options: [])

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/log?version_id=string&environment=string")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "POST"
request.allHTTPHeaderFields = headers
request.httpBody = postData as Data

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

# Update Tool Log

```http
PATCH https://api.humanloop.com/v5/tools/{id}/log/{log_id}
Content-Type: application/json
```

Update a Log.

Update the details of a Log with the given ID.

## Path Parameters

- Id (required): Unique identifier for the Tool.
- LogId (required): Unique identifier for the Log.
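To make the route concrete, here is a tiny illustrative helper in plain Python (the function name `update_log_url` is ours, not part of the SDK) that interpolates the two path parameters. The body fields accepted by this endpoint are not enumerated in this section, so only the URL construction is shown.

```python
def update_log_url(tool_id: str, log_id: str) -> str:
    # Interpolate the two required path parameters into the
    # PATCH /v5/tools/{id}/log/{log_id} route shown above.
    return f"https://api.humanloop.com/v5/tools/{tool_id}/log/{log_id}"
```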
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl -X PATCH https://api.humanloop.com/v5/tools/id/log/log_id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.update( id="id", log_id="log_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.update("id", "log_id"); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/id/log/log_id" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/id/log/log_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.patch("https://api.humanloop.com/v5/tools/id/log/log_id") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php request('PATCH', 'https://api.humanloop.com/v5/tools/id/log/log_id', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/tools/id/log/log_id"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", 
"{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/id/log/log_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X PATCH https://api.humanloop.com/v5/tools/:id/log/:log_id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.update( id="id", log_id="log_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.update("id", "log_id"); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/%3Aid/log/%3Alog_id" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/%3Aid/log/%3Alog_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) 
request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.patch("https://api.humanloop.com/v5/tools/%3Aid/log/%3Alog_id") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php request('PATCH', 'https://api.humanloop.com/v5/tools/%3Aid/log/%3Alog_id', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/tools/%3Aid/log/%3Alog_id"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/%3Aid/log/%3Alog_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List Tools ```http GET https://api.humanloop.com/v5/tools ``` Get a list of all Tools. ## Query Parameters - Page (optional): Page offset for pagination. - Size (optional): Page size for pagination. Number of Tools to fetch. 
- Name (optional): Case-insensitive filter for Tool name.
- UserFilter (optional): Case-insensitive filter for users in the Tool. This filter matches against both email address and name of users.
- SortBy (optional): Field to sort Tools by.
- Order (optional): Direction to sort by.

## Response Body

- 200: Successful Response
- 422: Validation Error

## Examples

```shell List tools
curl -G https://api.humanloop.com/v5/tools \
     -H "X-API-KEY: " \
     -d size=1
```

```python List tools
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.tools.list(
    size=1,
)
for item in response:
    print(item)

# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)
```

```typescript List tools
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
const response = await client.tools.list({ size: 1 });
for await (const item of response) {
  console.log(item);
}

// Or you can manually iterate page-by-page
let page = await client.tools.list({ size: 1 });
while (page.hasNextPage()) {
  page = await page.getNextPage();
}
```

```go List tools
package main

import (
  "fmt"
  "net/http"
  "io"
)

func main() {
  url := "https://api.humanloop.com/v5/tools?size=1"
  req, _ := http.NewRequest("GET", url, nil)
  req.Header.Add("X-API-KEY", "")
  res, _ := http.DefaultClient.Do(req)
  defer res.Body.Close()
  body, _ := io.ReadAll(res.Body)
  fmt.Println(res)
  fmt.Println(string(body))
}
```

```ruby List tools
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/tools?size=1")
http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true
request = Net::HTTP::Get.new(url)
request["X-API-KEY"] = ''
response = http.request(request)
puts response.read_body
```

```java List tools
HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/tools?size=1")
  .header("X-API-KEY", "")
  .asString();
```

```php List tools
<?php
$client = new \GuzzleHttp\Client();
$response = $client->request('GET', 'https://api.humanloop.com/v5/tools?size=1', [
  'headers'
=> [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp List tools var client = new RestClient("https://api.humanloop.com/v5/tools?size=1"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift List tools import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools?size=1")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/tools \ -H "X-API-KEY: " \ -d page=0 \ -d size=0 ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) response = client.tools.list( size=1, ) for item in response: yield item # alternatively, you can paginate page-by-page for page in response.iter_pages(): yield page ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); const response = await client.tools.list({ size: 1 }); for await (const item of response) { console.log(item); } // Or you can manually iterate page-by-page const page = await client.tools.list({ size: 1 }); while (page.hasNextPage()) { page = page.getNextPage(); } ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools?page=0&size=0" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' 
require 'net/http' url = URI("https://api.humanloop.com/v5/tools?page=0&size=0") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/tools?page=0&size=0") .header("X-API-KEY", "") .asString(); ``` ```php request('GET', 'https://api.humanloop.com/v5/tools?page=0&size=0', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/tools?page=0&size=0"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools?page=0&size=0")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Upsert Tool ```http POST https://api.humanloop.com/v5/tools Content-Type: application/json ``` Create a Tool or update it with a new version if it already exists. Tools are identified by the `ID` or their `path`. The name, description and parameters determine the versions of the Tool. You can provide `version_name` and `version_description` to identify and describe your versions. Version names must be unique within a Tool - attempting to create a version with a name that already exists will result in a 409 Conflict error. 
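Because reusing a `version_name` within a Tool yields a 409 Conflict, as described above, a client may want to assemble the upsert body deterministically before sending it. Below is a minimal sketch in plain Python (the helper `build_upsert_body` is ours, not part of the SDK) of the JSON body for `POST /v5/tools`, including the optional `version_name` and `version_description` fields:

```python
import json

def build_upsert_body(path, function, version_name=None, version_description=None):
    """Assemble the JSON body for POST /v5/tools.

    `version_name` must be unique within a Tool; reusing an existing
    name results in a 409 Conflict, per the endpoint description.
    """
    body = {"path": path, "function": function}
    if version_name is not None:
        body["version_name"] = version_name
    if version_description is not None:
        body["version_description"] = version_description
    return json.dumps(body)

payload = build_upsert_body(
    "math-tool",
    {"name": "multiply", "description": "Multiply two numbers"},
    version_name="v1",
)
```

Omitting both optional fields reproduces the minimal `{"path": ..., "function": ...}` body used in the curl example.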
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Create tool curl -X POST https://api.humanloop.com/v5/tools \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "path": "math-tool", "function": { "name": "multiply", "description": "Multiply two numbers", "parameters": { "type": "object", "properties": { "a": { "type": "number" }, "b": { "type": "number" } }, "required": [ "a", "b" ] } } }' ``` ```python Create tool from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.upsert( path="math-tool", function={ "name": "multiply", "description": "Multiply two numbers", "parameters": { "type": "object", "properties": {"a": {"type": "number"}, "b": {"type": "number"}}, "required": ["a", "b"], }, }, ) ``` ```typescript Create tool import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.upsert({ path: "math-tool", function: { name: "multiply", description: "Multiply two numbers", parameters: { "type": "object", "properties": { "a": { "type": "number" }, "b": { "type": "number" } }, "required": [ "a", "b" ] } }, commitMessage: "Initial commit" }); ``` ```go Create tool package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools" payload := strings.NewReader("{\n \"path\": \"math-tool\"\n}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Create tool require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"path\": 
\"math-tool\"\n}" response = http.request(request) puts response.read_body ``` ```java Create tool HttpResponse response = Unirest.post("https://api.humanloop.com/v5/tools") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"path\": \"math-tool\"\n}") .asString(); ``` ```php Create tool request('POST', 'https://api.humanloop.com/v5/tools', [ 'body' => '{ "path": "math-tool" }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Create tool var client = new RestClient("https://api.humanloop.com/v5/tools"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"path\": \"math-tool\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift Create tool import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = ["path": "math-tool"] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST https://api.humanloop.com/v5/tools \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.upsert( path="math-tool", function={ "name": "multiply", "description": "Multiply two numbers", "parameters": { "type": "object", "properties": {"a": {"type": "number"}, "b": {"type": "number"}}, "required": ["a", "b"], }, }, ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.upsert({ path: "math-tool", function: { name: "multiply", description: "Multiply two numbers", parameters: { "type": "object", "properties": { "a": { "type": "number" }, "b": { "type": "number" } }, "required": [ "a", "b" ] } }, commitMessage: "Initial commit" }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/tools") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php request('POST', 'https://api.humanloop.com/v5/tools', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 
'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/tools"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Get Tool ```http GET https://api.humanloop.com/v5/tools/{id} ``` Retrieve the Tool with the given ID. By default, the deployed version of the Tool is returned. Use the query parameters `version_id` or `environment` to target a specific version of the Tool. ## Path Parameters - Id (required): Unique identifier for Tool. ## Query Parameters - VersionId (optional): A specific Version ID of the Tool to retrieve. - Environment (optional): Name of the Environment to retrieve a deployed Version from. 
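The `version_id` and `environment` query parameters target a specific version; when neither is sent, the deployed version is returned. A small illustrative helper (ours, not part of the SDK) showing how the retrieval URL is formed:

```python
from urllib.parse import urlencode

BASE = "https://api.humanloop.com/v5/tools"

def get_tool_url(tool_id, version_id=None, environment=None):
    # With no query parameters, the API returns the deployed
    # version of the Tool by default.
    params = {}
    if version_id is not None:
        params["version_id"] = version_id
    if environment is not None:
        params["environment"] = environment
    url = f"{BASE}/{tool_id}"
    return url + ("?" + urlencode(params) if params else "")
```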
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Get specific tool curl https://api.humanloop.com/v5/tools/tl_789ghi \ -H "X-API-KEY: " ``` ```python Get specific tool from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.get( id="tl_789ghi", ) ``` ```typescript Get specific tool import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.get("tl_789ghi"); ``` ```go Get specific tool package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/tl_789ghi" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Get specific tool require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/tl_789ghi") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java Get specific tool HttpResponse response = Unirest.get("https://api.humanloop.com/v5/tools/tl_789ghi") .header("X-API-KEY", "") .asString(); ``` ```php Get specific tool request('GET', 'https://api.humanloop.com/v5/tools/tl_789ghi', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Get specific tool var client = new RestClient("https://api.humanloop.com/v5/tools/tl_789ghi"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift Get specific tool import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/tl_789ghi")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/tools/:id \ -H "X-API-KEY: " \ -d version_id=string \ -d environment=string ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.get( id="tl_789ghi", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.get("tl_789ghi"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/%3Aid?version_id=string&environment=string" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/%3Aid?version_id=string&environment=string") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/tools/%3Aid?version_id=string&environment=string") .header("X-API-KEY", "") .asString(); ``` ```php request('GET', 'https://api.humanloop.com/v5/tools/%3Aid?version_id=string&environment=string', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/tools/%3Aid?version_id=string&environment=string"); var 
request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/%3Aid?version_id=string&environment=string")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Delete Tool ```http DELETE https://api.humanloop.com/v5/tools/{id} ``` Delete the Tool with the given ID. ## Path Parameters - Id (required): Unique identifier for Tool. ## Response Body - 422: Validation Error ## Examples ```shell Delete tool curl -X DELETE https://api.humanloop.com/v5/tools/tl_789ghi \ -H "X-API-KEY: " ``` ```python Delete tool from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.delete( id="tl_789ghi", ) ``` ```typescript Delete tool import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.delete("tl_789ghi"); ``` ```go Delete tool package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/tl_789ghi" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Delete tool require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/tl_789ghi") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = 
http.request(request) puts response.read_body ``` ```java Delete tool HttpResponse response = Unirest.delete("https://api.humanloop.com/v5/tools/tl_789ghi") .header("X-API-KEY", "") .asString(); ``` ```php Delete tool request('DELETE', 'https://api.humanloop.com/v5/tools/tl_789ghi', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Delete tool var client = new RestClient("https://api.humanloop.com/v5/tools/tl_789ghi"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift Delete tool import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/tl_789ghi")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X DELETE https://api.humanloop.com/v5/tools/:id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.delete( id="tl_789ghi", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.delete("tl_789ghi"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/%3Aid" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/%3Aid") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.delete("https://api.humanloop.com/v5/tools/%3Aid") .header("X-API-KEY", "") .asString(); ``` ```php request('DELETE', 'https://api.humanloop.com/v5/tools/%3Aid', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/tools/%3Aid"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/%3Aid")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Move Tool ```http PATCH https://api.humanloop.com/v5/tools/{id} Content-Type: application/json ``` Move the Tool to a different path or change the name. ## Path Parameters - Id (required): Unique identifier for Tool. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Move tool curl -X PATCH https://api.humanloop.com/v5/tools/tl_789ghi \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "path": "new directory/new name" }' ``` ```python Move tool from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.move( id="tl_789ghi", path="new directory/new name", ) ``` ```typescript Move tool import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.move("tl_789ghi", { path: "new directory/new name" }); ``` ```go Move tool package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/tl_789ghi" payload := strings.NewReader("{\n \"path\": \"new directory/new name\"\n}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Move tool require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/tl_789ghi") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) 
request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"path\": \"new directory/new name\"\n}" response = http.request(request) puts response.read_body ``` ```java Move tool HttpResponse response = Unirest.patch("https://api.humanloop.com/v5/tools/tl_789ghi") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"path\": \"new directory/new name\"\n}") .asString(); ``` ```php Move tool request('PATCH', 'https://api.humanloop.com/v5/tools/tl_789ghi', [ 'body' => '{ "path": "new directory/new name" }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Move tool var client = new RestClient("https://api.humanloop.com/v5/tools/tl_789ghi"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"path\": \"new directory/new name\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift Move tool import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = ["path": "new directory/new name"] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/tl_789ghi")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X PATCH https://api.humanloop.com/v5/tools/:id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.move( id="tl_789ghi", path="new directory/new name", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.move("tl_789ghi", { path: "new directory/new name" }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/%3Aid" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/%3Aid") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.patch("https://api.humanloop.com/v5/tools/%3Aid") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH', 'https://api.humanloop.com/v5/tools/%3Aid', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/tools/%3Aid"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}",
ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/%3Aid")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List Versions of a Tool ```http GET https://api.humanloop.com/v5/tools/{id}/versions ``` Get a list of all the versions of a Tool. ## Path Parameters - Id (required): Unique identifier for the Tool.
## Query Parameters - EvaluatorAggregates (optional): Whether to include Evaluator aggregate results for the versions in the response ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell List versions curl https://api.humanloop.com/v5/tools/tl_789ghi/versions \ -H "X-API-KEY: " ``` ```python List versions from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.list_versions( id="tl_789ghi", ) ``` ```typescript List versions import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.listVersions("tl_789ghi"); ``` ```go List versions package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/tl_789ghi/versions" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby List versions require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/tl_789ghi/versions") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java List versions HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/tools/tl_789ghi/versions") .header("X-API-KEY", "") .asString(); ``` ```php List versions <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/tools/tl_789ghi/versions', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp List versions var client = new RestClient("https://api.humanloop.com/v5/tools/tl_789ghi/versions"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift List versions import Foundation let headers = ["X-API-KEY": ""] let request =
NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/tl_789ghi/versions")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/tools/:id/versions \ -H "X-API-KEY: " \ -d evaluator_aggregates=true ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.list_versions( id="tl_789ghi", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.listVersions("tl_789ghi", { evaluatorAggregates: true }); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/%3Aid/versions?evaluator_aggregates=true" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/%3Aid/versions?evaluator_aggregates=true") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/tools/%3Aid/versions?evaluator_aggregates=true") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/tools/%3Aid/versions?evaluator_aggregates=true', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` 
```csharp var client = new RestClient("https://api.humanloop.com/v5/tools/%3Aid/versions?evaluator_aggregates=true"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/%3Aid/versions?evaluator_aggregates=true")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Delete Tool Version ```http DELETE https://api.humanloop.com/v5/tools/{id}/versions/{version_id} ``` Delete a version of the Tool. ## Path Parameters - Id (required): Unique identifier for Tool. - VersionId (required): Unique identifier for the specific version of the Tool. 
## Response Body - 422: Validation Error ## Examples ```shell curl -X DELETE https://api.humanloop.com/v5/tools/id/versions/version_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.delete_tool_version( id="id", version_id="version_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.deleteToolVersion("id", "version_id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/id/versions/version_id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/id/versions/version_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.delete("https://api.humanloop.com/v5/tools/id/versions/version_id") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/tools/id/versions/version_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/tools/id/versions/version_id"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/id/versions/version_id")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X DELETE https://api.humanloop.com/v5/tools/:id/versions/:version_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.delete_tool_version( id="id", version_id="version_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.deleteToolVersion("id", "version_id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/%3Aid/versions/%3Aversion_id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/%3Aid/versions/%3Aversion_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.delete("https://api.humanloop.com/v5/tools/%3Aid/versions/%3Aversion_id") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/tools/%3Aid/versions/%3Aversion_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/tools/%3Aid/versions/%3Aversion_id"); var request = new 
RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/%3Aid/versions/%3Aversion_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Update Tool Version ```http PATCH https://api.humanloop.com/v5/tools/{id}/versions/{version_id} Content-Type: application/json ``` Update the name or description of the Tool version. ## Path Parameters - Id (required): Unique identifier for Tool. - VersionId (required): Unique identifier for the specific version of the Tool. 
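The examples for this endpoint send an empty JSON body; in practice the body carries the metadata to change. A minimal sketch of constructing that body (the `name` and `description` field names are assumptions based on this endpoint's description, not confirmed schema fields — check the request schema before relying on them):

```python
import json

# Build the PATCH body for updating a Tool version's metadata.
# NOTE: the "name" and "description" keys are assumed from the
# endpoint description, not confirmed field names.
payload = {
    "name": "geocoder-v2",
    "description": "Adds retry logic for rate-limited requests",
}
body = json.dumps(payload)
print(body)
```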
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl -X PATCH https://api.humanloop.com/v5/tools/id/versions/version_id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.update_tool_version( id="id", version_id="version_id", ) ``` ```javascript const url = 'https://api.humanloop.com/v5/tools/id/versions/version_id'; const options = { method: 'PATCH', headers: {'X-API-KEY': '', 'Content-Type': 'application/json'}, body: '{}' }; try { const response = await fetch(url, options); const data = await response.json(); console.log(data); } catch (error) { console.error(error); } ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/id/versions/version_id" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/id/versions/version_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.patch("https://api.humanloop.com/v5/tools/id/versions/version_id") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH', 'https://api.humanloop.com/v5/tools/id/versions/version_id', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = 
new RestClient("https://api.humanloop.com/v5/tools/id/versions/version_id"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/id/versions/version_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X PATCH https://api.humanloop.com/v5/tools/:id/versions/:version_id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.update_tool_version( id="id", version_id="version_id", ) ``` ```javascript const url = 'https://api.humanloop.com/v5/tools/%3Aid/versions/%3Aversion_id'; const options = { method: 'PATCH', headers: {'X-API-KEY': '', 'Content-Type': 'application/json'}, body: '{}' }; try { const response = await fetch(url, options); const data = await response.json(); console.log(data); } catch (error) { console.error(error); } ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/%3Aid/versions/%3Aversion_id" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/%3Aid/versions/%3Aversion_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.patch("https://api.humanloop.com/v5/tools/%3Aid/versions/%3Aversion_id") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH', 'https://api.humanloop.com/v5/tools/%3Aid/versions/%3Aversion_id', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp 
var client = new RestClient("https://api.humanloop.com/v5/tools/%3Aid/versions/%3Aversion_id"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/%3Aid/versions/%3Aversion_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Deploy Tool ```http POST https://api.humanloop.com/v5/tools/{id}/environments/{environment_id} ``` Deploy Tool to an Environment. Set the deployed version for the specified Environment. This version will be used for calls made to the Tool in this Environment. ## Path Parameters - Id (required): Unique identifier for Tool. - EnvironmentId (required): Unique identifier for the Environment to deploy the Version to. ## Query Parameters - VersionId (required): Unique identifier for the specific version of the Tool. 
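Note that `version_id` is passed as a query parameter rather than in a request body. A quick sketch of how the full request URL is assembled (the IDs are placeholders):

```python
from urllib.parse import urlencode

# Deploy Tool: the target version is selected via the version_id
# query parameter; the path identifies the Tool and Environment.
path = "https://api.humanloop.com/v5/tools/{id}/environments/{environment_id}"
url = path.format(id="tl_789ghi", environment_id="staging")
url += "?" + urlencode({"version_id": "tv_012jkl"})
print(url)
```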
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Deploy curl -X POST "https://api.humanloop.com/v5/tools/tl_789ghi/environments/staging?version_id=tv_012jkl" \ -H "X-API-KEY: " ``` ```python Deploy from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.set_deployment( id="tl_789ghi", environment_id="staging", version_id="tv_012jkl", ) ``` ```typescript Deploy import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.setDeployment("tl_789ghi", "staging", { versionId: "tv_012jkl" }); ``` ```go Deploy package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/tl_789ghi/environments/staging?version_id=tv_012jkl" req, _ := http.NewRequest("POST", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Deploy require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/tl_789ghi/environments/staging?version_id=tv_012jkl") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java Deploy HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/tools/tl_789ghi/environments/staging?version_id=tv_012jkl") .header("X-API-KEY", "") .asString(); ``` ```php Deploy <?php $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 'https://api.humanloop.com/v5/tools/tl_789ghi/environments/staging?version_id=tv_012jkl', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Deploy var client = new RestClient("https://api.humanloop.com/v5/tools/tl_789ghi/environments/staging?version_id=tv_012jkl"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); 
``` ```swift Deploy import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/tl_789ghi/environments/staging?version_id=tv_012jkl")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST "https://api.humanloop.com/v5/tools/:id/environments/:environment_id?version_id=string" \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.set_deployment( id="tl_789ghi", environment_id="staging", version_id="tv_012jkl", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.setDeployment("tl_789ghi", "staging", { versionId: "tv_012jkl" }); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/%3Aid/environments/%3Aenvironment_id?version_id=string" req, _ := http.NewRequest("POST", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/%3Aid/environments/%3Aenvironment_id?version_id=string") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = 
Unirest.post("https://api.humanloop.com/v5/tools/%3Aid/environments/%3Aenvironment_id?version_id=string") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 'https://api.humanloop.com/v5/tools/%3Aid/environments/%3Aenvironment_id?version_id=string', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/tools/%3Aid/environments/%3Aenvironment_id?version_id=string"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/%3Aid/environments/%3Aenvironment_id?version_id=string")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Remove Deployment ```http DELETE https://api.humanloop.com/v5/tools/{id}/environments/{environment_id} ``` Remove deployed Tool from the Environment. Remove the deployed version for the specified Environment. This version will no longer be used for calls made to the Tool in this Environment. ## Path Parameters - Id (required): Unique identifier for Tool. - EnvironmentId (required): Unique identifier for the Environment to remove the deployment from. 
## Response Body - 422: Validation Error ## Examples ```shell Delete environment curl -X DELETE https://api.humanloop.com/v5/tools/tl_789ghi/environments/staging \ -H "X-API-KEY: " ``` ```python Delete environment from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.remove_deployment( id="tl_789ghi", environment_id="staging", ) ``` ```typescript Delete environment import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.removeDeployment("tl_789ghi", "staging"); ``` ```go Delete environment package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/tl_789ghi/environments/staging" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Delete environment require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/tl_789ghi/environments/staging") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java Delete environment HttpResponse<String> response = Unirest.delete("https://api.humanloop.com/v5/tools/tl_789ghi/environments/staging") .header("X-API-KEY", "") .asString(); ``` ```php Delete environment <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/tools/tl_789ghi/environments/staging', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Delete environment var client = new RestClient("https://api.humanloop.com/v5/tools/tl_789ghi/environments/staging"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift Delete environment import Foundation let headers = ["X-API-KEY": ""] let request 
= NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/tl_789ghi/environments/staging")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X DELETE https://api.humanloop.com/v5/tools/:id/environments/:environment_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.remove_deployment( id="tl_789ghi", environment_id="staging", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.removeDeployment("tl_789ghi", "staging"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/%3Aid/environments/%3Aenvironment_id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/%3Aid/environments/%3Aenvironment_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.delete("https://api.humanloop.com/v5/tools/%3Aid/environments/%3Aenvironment_id") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/tools/%3Aid/environments/%3Aenvironment_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); 
echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/tools/%3Aid/environments/%3Aenvironment_id"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/%3Aid/environments/%3Aenvironment_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List a Tool's Environments ```http GET https://api.humanloop.com/v5/tools/{id}/environments ``` List all Environments and their deployed versions for the Tool. ## Path Parameters - Id (required): Unique identifier for Tool. 
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell List environments curl https://api.humanloop.com/v5/tools/tl_789ghi/environments \ -H "X-API-KEY: " ``` ```python List environments from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.list_environments( id="tl_789ghi", ) ``` ```typescript List environments import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.listEnvironments("tl_789ghi"); ``` ```go List environments package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/tl_789ghi/environments" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby List environments require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/tl_789ghi/environments") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java List environments HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/tools/tl_789ghi/environments") .header("X-API-KEY", "") .asString(); ``` ```php List environments <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/tools/tl_789ghi/environments', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp List environments var client = new RestClient("https://api.humanloop.com/v5/tools/tl_789ghi/environments"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift List environments import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: 
"https://api.humanloop.com/v5/tools/tl_789ghi/environments")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl https://api.humanloop.com/v5/tools/:id/environments \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.list_environments( id="tl_789ghi", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.listEnvironments("tl_789ghi"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/%3Aid/environments" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/%3Aid/environments") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/tools/%3Aid/environments") .header("X-API-KEY", "") .asString(); ``` ```php request('GET', 'https://api.humanloop.com/v5/tools/%3Aid/environments', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/tools/%3Aid/environments"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", 
""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/%3Aid/environments")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Update Monitoring ```http POST https://api.humanloop.com/v5/tools/{id}/evaluators Content-Type: application/json ``` Activate and deactivate Evaluators for monitoring the Tool. An activated Evaluator will automatically be run on all new Logs within the Tool for monitoring purposes. ## Path Parameters - Id (required) ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Update monitoring curl -X POST https://api.humanloop.com/v5/tools/tl_789ghi/evaluators \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "activate": [ { "evaluator_version_id": "evv_1abc4308abd" } ] }' ``` ```python Update monitoring from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.update_monitoring( id="tl_789ghi", activate=[{"evaluator_version_id": "evv_1abc4308abd"}], ) ``` ```typescript Update monitoring import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.updateMonitoring("tl_789ghi", { activate: [{ evaluatorVersionId: "evv_1abc4308abd" }] }); ``` ```go Update monitoring package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/tl_789ghi/evaluators" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) 
req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Update monitoring require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/tl_789ghi/evaluators") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java Update monitoring HttpResponse response = Unirest.post("https://api.humanloop.com/v5/tools/tl_789ghi/evaluators") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php Update monitoring request('POST', 'https://api.humanloop.com/v5/tools/tl_789ghi/evaluators', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Update monitoring var client = new RestClient("https://api.humanloop.com/v5/tools/tl_789ghi/evaluators"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift Update monitoring import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/tl_789ghi/evaluators")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST https://api.humanloop.com/v5/tools/:id/evaluators \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.update_monitoring( id="tl_789ghi", activate=[{"evaluator_version_id": "evv_1abc4308abd"}], ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.tools.updateMonitoring("tl_789ghi", { activate: [{ evaluatorVersionId: "evv_1abc4308abd" }] }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/tools/%3Aid/evaluators" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/tools/%3Aid/evaluators") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/tools/%3Aid/evaluators") .header("X-API-KEY", "") .header("Content-Type", 
"application/json") .body("{}") .asString(); ``` ```php request('POST', 'https://api.humanloop.com/v5/tools/%3Aid/evaluators', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/tools/%3Aid/evaluators"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/tools/%3Aid/evaluators")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List Datasets ```http GET https://api.humanloop.com/v5/datasets ``` List all Datasets. ## Query Parameters - Page (optional): Page offset for pagination. - Size (optional): Page size for pagination. Number of Datasets to fetch. - Name (optional): Case-insensitive filter for Dataset name. - UserFilter (optional): Case-insensitive filter for users in the Dataset. This filter matches against both email address and name of users. - SortBy (optional): Field to sort Datasets by - Order (optional): Direction to sort by. 
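For intuition, the query parameters above compose roughly as in the following local sketch (plain Python over hypothetical records, not the Humanloop SDK; the real filtering, sorting, and pagination happen server-side):

```python
# Sketch of the List Datasets query semantics: a case-insensitive `name`
# filter, `sort_by`/`order` sorting, and `page`/`size` offset pagination.
# The dataset records below are hypothetical.

def list_datasets(datasets, name=None, sort_by="created_at", order="asc", page=0, size=50):
    """Mimic the documented query parameters on a local list of dicts."""
    results = datasets
    if name is not None:
        # Case-insensitive substring match on the Dataset name.
        results = [d for d in results if name.lower() in d["name"].lower()]
    results = sorted(results, key=lambda d: d[sort_by], reverse=(order == "desc"))
    # `page` is the page offset, `size` the number of Datasets per page.
    start = page * size
    return results[start : start + size]

datasets = [
    {"name": "Support-Queries", "created_at": "2024-03-01"},
    {"name": "test-questions", "created_at": "2024-01-15"},
    {"name": "Legal-Evals", "created_at": "2024-02-10"},
]

print(list_datasets(datasets, name="quer", sort_by="created_at", order="desc"))
```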
## Response Body

- 200: Successful Response
- 422: Validation Error

## Examples

```shell List datasets
curl -G https://api.humanloop.com/v5/datasets \
     -H "X-API-KEY: " \
     -d size=1
```

```python List datasets
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.datasets.list(
    size=1,
)
for item in response:
    print(item)

# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)
```

```typescript List datasets
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
const response = await client.datasets.list({ size: 1 });
for await (const item of response) {
  console.log(item);
}

// Or you can manually iterate page-by-page
let page = await client.datasets.list({ size: 1 });
while (page.hasNextPage()) {
  page = await page.getNextPage();
}
```

```go List datasets
package main

import (
	"fmt"
	"net/http"
	"io"
)

func main() {
	url := "https://api.humanloop.com/v5/datasets?size=1"
	req, _ := http.NewRequest("GET", url, nil)
	req.Header.Add("X-API-KEY", "")
	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)
	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby List datasets
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/datasets?size=1")
http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true
request = Net::HTTP::Get.new(url)
request["X-API-KEY"] = ''
response = http.request(request)
puts response.read_body
```

```java List datasets
HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/datasets?size=1")
  .header("X-API-KEY", "")
  .asString();
```

```php List datasets
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('GET', 'https://api.humanloop.com/v5/datasets?size=1', [
  'headers' => [
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp List datasets
var client = new RestClient("https://api.humanloop.com/v5/datasets?size=1");
var request = new RestRequest(Method.GET);
request.AddHeader("X-API-KEY",
"");
IRestResponse response = client.Execute(request);
```

```swift List datasets
import Foundation

let headers = ["X-API-KEY": ""]

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets?size=1")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "GET"
request.allHTTPHeaderFields = headers

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

```shell
curl -G https://api.humanloop.com/v5/datasets \
     -H "X-API-KEY: " \
     -d page=0 \
     -d size=0
```

```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.datasets.list(
    size=1,
)
for item in response:
    print(item)

# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)
```

```typescript
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
const response = await client.datasets.list({ size: 1 });
for await (const item of response) {
  console.log(item);
}

// Or you can manually iterate page-by-page
let page = await client.datasets.list({ size: 1 });
while (page.hasNextPage()) {
  page = await page.getNextPage();
}
```

```go
package main

import (
	"fmt"
	"net/http"
	"io"
)

func main() {
	url := "https://api.humanloop.com/v5/datasets?page=0&size=0"
	req, _ := http.NewRequest("GET", url, nil)
	req.Header.Add("X-API-KEY", "")
	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)
	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/datasets?page=0&size=0")
http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true
request = Net::HTTP::Get.new(url)
request["X-API-KEY"] = ''
response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/datasets?page=0&size=0")
  .header("X-API-KEY", "")
  .asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('GET', 'https://api.humanloop.com/v5/datasets?page=0&size=0', [
  'headers' => [
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://api.humanloop.com/v5/datasets?page=0&size=0");
var request = new RestRequest(Method.GET);
request.AddHeader("X-API-KEY", "");
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = ["X-API-KEY": ""]

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets?page=0&size=0")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "GET"
request.allHTTPHeaderFields = headers

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

# Upsert Dataset

```http
POST https://api.humanloop.com/v5/datasets
Content-Type: application/json
```

Create a Dataset or update it with a new version if it already exists. Datasets are identified by their `id` or `path`. The datapoints determine the versions of the Dataset.

By default, the new Dataset version will be set to the list of Datapoints provided in the request. You can also create a new version by adding or removing Datapoints from an existing version by specifying `action` as `add` or `remove` respectively. In this case, you may specify the `version_id` or `environment` query parameters to identify the existing version to base the new version on. If neither is provided, the latest created version will be used.
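The `action` semantics can be sketched locally (plain Python over simplified datapoint dicts; a hypothetical helper, not the SDK — the real versioning happens server-side):

```python
# Sketch of how `action` shapes the new Dataset version: "set" replaces the
# datapoints outright, while "add"/"remove" edit the base version. Humanloop
# deduplicates, so adding an existing Datapoint is ignored.

def new_version(base_datapoints, request_datapoints, action="set"):
    if action == "set":
        # Default: the new version is exactly the datapoints in the request.
        return list(request_datapoints)
    if action == "add":
        merged = list(base_datapoints)
        for dp in request_datapoints:
            if dp not in merged:  # duplicates are ignored
                merged.append(dp)
        return merged
    if action == "remove":
        return [dp for dp in base_datapoints if dp not in request_datapoints]
    raise ValueError(f"unknown action: {action}")

base = [{"inputs": {"question": "What is the capital of France?"}, "target": {"answer": "Paris"}}]
added = new_version(
    base,
    [{"inputs": {"question": "Who wrote Hamlet?"}, "target": {"answer": "William Shakespeare"}}],
    action="add",
)
print(len(added))  # 2
```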
You can provide `version_name` and `version_description` to identify and describe your versions. Version names must be unique within a Dataset - attempting to create a version with a name that already exists will result in a 409 Conflict error. Humanloop also deduplicates Datapoints. If you try to add a Datapoint that already exists, it will be ignored. If you intentionally want to add a duplicate Datapoint, you can add a unique identifier to the Datapoint's inputs such as `{_dedupe_id: }`. ## Query Parameters - VersionId (optional): ID of the specific Dataset version to base the created Version on. Only used when `action` is `"add"` or `"remove"`. - Environment (optional): Name of the Environment identifying a deployed Version to base the created Version on. Only used when `action` is `"add"` or `"remove"`. - IncludeDatapoints (optional): If set to `true`, include all Datapoints in the response. Defaults to `false`. Consider using the paginated List Datapoints endpoint instead. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell AddToDataset curl -X POST https://api.humanloop.com/v5/datasets \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "datapoints": [ { "inputs": { "question": "What is the capital of France?" }, "target": { "answer": "Paris" } }, { "inputs": { "question": "Who wrote Hamlet?" 
}, "target": { "answer": "William Shakespeare" } } ], "path": "test-questions", "action": "set" }' ``` ```python AddToDataset from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.upsert( path="test-questions", datapoints=[ { "inputs": {"question": "What is the capital of France?"}, "target": {"answer": "Paris"}, }, { "inputs": {"question": "Who wrote Hamlet?"}, "target": {"answer": "William Shakespeare"}, }, ], action="set", ) ``` ```typescript AddToDataset import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.datasets.upsert({ path: "test-questions", datapoints: [{ inputs: { "question": "What is the capital of France?" }, target: { "answer": "Paris" } }, { inputs: { "question": "Who wrote Hamlet?" }, target: { "answer": "William Shakespeare" } }], action: "set", commitMessage: "Add two new questions and answers" }); ``` ```go AddToDataset package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/datasets" payload := strings.NewReader("{\n \"path\": \"test-questions\",\n \"action\": \"set\"\n}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby AddToDataset require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/datasets") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"path\": \"test-questions\",\n \"action\": \"set\"\n}" response = http.request(request) puts response.read_body ``` ```java AddToDataset HttpResponse response = Unirest.post("https://api.humanloop.com/v5/datasets") .header("X-API-KEY", "") .header("Content-Type", 
"application/json")
  .body("{\n  \"path\": \"test-questions\",\n  \"action\": \"set\"\n}")
  .asString();
```

```php AddToDataset
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('POST', 'https://api.humanloop.com/v5/datasets', [
  'body' => '{
  "path": "test-questions",
  "action": "set"
}',
  'headers' => [
    'Content-Type' => 'application/json',
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp AddToDataset
var client = new RestClient("https://api.humanloop.com/v5/datasets");
var request = new RestRequest(Method.POST);
request.AddHeader("X-API-KEY", "");
request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{\n  \"path\": \"test-questions\",\n  \"action\": \"set\"\n}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```

```swift AddToDataset
import Foundation

let headers = [
  "X-API-KEY": "",
  "Content-Type": "application/json"
]
let parameters = [
  "path": "test-questions",
  "action": "set"
] as [String : Any]

let postData = JSONSerialization.data(withJSONObject: parameters, options: [])

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "POST"
request.allHTTPHeaderFields = headers
request.httpBody = postData as Data

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

```shell CreateSupportDataset
curl -X POST https://api.humanloop.com/v5/datasets \
     -H "X-API-KEY: " \
     -H "Content-Type: application/json" \
     -d '{
  "datapoints": [
    {
      "messages": [
        {
          "role": "user",
          "content": "How do i manage my organizations API keys?\n"
        }
      ],
      "target": {
        "response": "Hey, thanks for your questions. Here are steps for how to achieve: 1. Log in to the Humanloop Dashboard \n\n2.
Click on \"Organization Settings.\"\n If you do not see this option, you might need to contact your organization admin to gain the necessary permissions.\n\n3. Within the settings or organization settings, select the option labeled \"API Keys\" on the left. Here you will be able to view and manage your API keys.\n\n4. You will see a list of existing API keys. You can perform various actions, such as:\n - **Generate New API Key:** Click on the \"Generate New Key\" button if you need a new API key.\n - **Revoke an API Key:** If you need to disable an existing key, find the key in the list and click the \"Revoke\" or \"Delete\" button.\n - **Copy an API Key:** If you need to use an existing key, you can copy it to your clipboard by clicking the \"Copy\" button next to the key.\n\n5. **Save and Secure API Keys:** Make sure to securely store any new or existing API keys you are using. Treat them like passwords and do not share them publicly.\n\nIf you encounter any issues or need further assistance, it might be helpful to engage with an engineer or your IT department to ensure you have the necessary permissions and support.\n\nWould you need help with anything else?" } }, { "messages": [ { "role": "user", "content": "Hey, can do I use my code evaluator for monitoring my legal-copilot prompt?" } ], "target": { "response": "Hey, thanks for your questions. Here are steps for how to achieve: 1. Navigate to your Prompt dashboard. \n 2. Select the `Monitoring` button on the top right of the Prompt dashboard \n 3. Within the model select the Version of the Evaluator you want to turn on for monitoring. \n\nWould you need help with anything else?" 
} } ], "path": "datasets/support-queries" }' ``` ```python CreateSupportDataset from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.upsert( path="datasets/support-queries", datapoints=[ { "messages": [ { "role": "user", "content": "How do i manage my organizations API keys?\n", } ], "target": { "response": 'Hey, thanks for your questions. Here are steps for how to achieve: 1. Log in to the Humanloop Dashboard \n\n2. Click on "Organization Settings."\n If you do not see this option, you might need to contact your organization admin to gain the necessary permissions.\n\n3. Within the settings or organization settings, select the option labeled "API Keys" on the left. Here you will be able to view and manage your API keys.\n\n4. You will see a list of existing API keys. You can perform various actions, such as:\n - **Generate New API Key:** Click on the "Generate New Key" button if you need a new API key.\n - **Revoke an API Key:** If you need to disable an existing key, find the key in the list and click the "Revoke" or "Delete" button.\n - **Copy an API Key:** If you need to use an existing key, you can copy it to your clipboard by clicking the "Copy" button next to the key.\n\n5. **Save and Secure API Keys:** Make sure to securely store any new or existing API keys you are using. Treat them like passwords and do not share them publicly.\n\nIf you encounter any issues or need further assistance, it might be helpful to engage with an engineer or your IT department to ensure you have the necessary permissions and support.\n\nWould you need help with anything else?' }, }, { "messages": [ { "role": "user", "content": "Hey, can do I use my code evaluator for monitoring my legal-copilot prompt?", } ], "target": { "response": "Hey, thanks for your questions. Here are steps for how to achieve: 1. Navigate to your Prompt dashboard. \n 2. Select the `Monitoring` button on the top right of the Prompt dashboard \n 3. 
Within the model select the Version of the Evaluator you want to turn on for monitoring. \n\nWould you need help with anything else?" }, }, ], ) ``` ```typescript CreateSupportDataset import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.datasets.upsert({ path: "datasets/support-queries", datapoints: [{ messages: [{ role: "user", content: "How do i manage my organizations API keys?\n" }], target: { "response": "Hey, thanks for your questions. Here are steps for how to achieve: 1. Log in to the Humanloop Dashboard \n\n2. Click on \"Organization Settings.\"\n If you do not see this option, you might need to contact your organization admin to gain the necessary permissions.\n\n3. Within the settings or organization settings, select the option labeled \"API Keys\" on the left. Here you will be able to view and manage your API keys.\n\n4. You will see a list of existing API keys. You can perform various actions, such as:\n - **Generate New API Key:** Click on the \"Generate New Key\" button if you need a new API key.\n - **Revoke an API Key:** If you need to disable an existing key, find the key in the list and click the \"Revoke\" or \"Delete\" button.\n - **Copy an API Key:** If you need to use an existing key, you can copy it to your clipboard by clicking the \"Copy\" button next to the key.\n\n5. **Save and Secure API Keys:** Make sure to securely store any new or existing API keys you are using. Treat them like passwords and do not share them publicly.\n\nIf you encounter any issues or need further assistance, it might be helpful to engage with an engineer or your IT department to ensure you have the necessary permissions and support.\n\nWould you need help with anything else?" } }, { messages: [{ role: "user", content: "Hey, can do I use my code evaluator for monitoring my legal-copilot prompt?" }], target: { "response": "Hey, thanks for your questions. Here are steps for how to achieve: 1. 
Navigate to your Prompt dashboard. \n 2. Select the `Monitoring` button on the top right of the Prompt dashboard \n 3. Within the model select the Version of the Evaluator you want to turn on for monitoring. \n\nWould you need help with anything else?" } }], commitMessage: "Add two new questions and answers" }); ``` ```go CreateSupportDataset package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/datasets" payload := strings.NewReader("{\n \"path\": \"datasets/support-queries\"\n}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby CreateSupportDataset require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/datasets") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"path\": \"datasets/support-queries\"\n}" response = http.request(request) puts response.read_body ``` ```java CreateSupportDataset HttpResponse response = Unirest.post("https://api.humanloop.com/v5/datasets") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"path\": \"datasets/support-queries\"\n}") .asString(); ``` ```php CreateSupportDataset request('POST', 'https://api.humanloop.com/v5/datasets', [ 'body' => '{ "path": "datasets/support-queries" }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp CreateSupportDataset var client = new RestClient("https://api.humanloop.com/v5/datasets"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n 
\"path\": \"datasets/support-queries\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift CreateSupportDataset import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = ["path": "datasets/support-queries"] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST "https://api.humanloop.com/v5/datasets?version_id=string&environment=string" \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "datapoints": [ {} ] }' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.upsert( path="test-questions", datapoints=[ { "inputs": {"question": "What is the capital of France?"}, "target": {"answer": "Paris"}, }, { "inputs": {"question": "Who wrote Hamlet?"}, "target": {"answer": "William Shakespeare"}, }, ], action="set", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.datasets.upsert({ path: "test-questions", datapoints: [{ inputs: { "question": "What is the capital of France?" }, target: { "answer": "Paris" } }, { inputs: { "question": "Who wrote Hamlet?" 
}, target: { "answer": "William Shakespeare" } }], action: "set", commitMessage: "Add two new questions and answers" }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/datasets?version_id=string&environment=string" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/datasets?version_id=string&environment=string") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/datasets?version_id=string&environment=string") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php request('POST', 'https://api.humanloop.com/v5/datasets?version_id=string&environment=string', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/datasets?version_id=string&environment=string"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = JSONSerialization.data(withJSONObject: 
parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets?version_id=string&environment=string")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Get Dataset ```http GET https://api.humanloop.com/v5/datasets/{id} ``` Retrieve the Dataset with the given ID. Unless `include_datapoints` is set to `true`, the response will not include the Datapoints. Use the List Datapoints endpoint (`GET /{id}/datapoints`) to efficiently retrieve Datapoints for a large Dataset. By default, the deployed version of the Dataset is returned. Use the query parameters `version_id` or `environment` to target a specific version of the Dataset. ## Path Parameters - Id (required): Unique identifier for Dataset. ## Query Parameters - VersionId (optional): A specific Version ID of the Dataset to retrieve. - Environment (optional): Name of the Environment to retrieve a deployed Version from. - IncludeDatapoints (optional): If set to `true`, include all Datapoints in the response. Defaults to `false`. Consider using the paginated List Datapoints endpoint instead. 
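The choice above maps directly onto request URLs. As a standalone sketch (nothing is sent; the dataset ID is the illustrative one from the examples below, and the `page`/`size` values are arbitrary), the one-shot retrieval and the first paginated List Datapoints call look like this:

```python
from urllib.parse import urlencode

BASE = "https://api.humanloop.com/v5/datasets"
dataset_id = "ds_b0baF1ca7652"  # illustrative ID, as in the examples below

# Small Dataset: one call, with Datapoints inlined in the response.
one_shot = f"{BASE}/{dataset_id}?" + urlencode({"include_datapoints": "true"})

# Large Dataset: page through the dedicated List Datapoints endpoint instead.
def datapoints_page(page: int, size: int = 100) -> str:
    return f"{BASE}/{dataset_id}/datapoints?" + urlencode({"page": page, "size": size})

print(one_shot)
print(datapoints_page(page=1))
```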
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Get dataset curl -G https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652 \ -H "X-API-KEY: " \ -d version_id=dsv_6L78pqrdFi2xa \ -d include_datapoints=true ``` ```python Get dataset from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.get( id="ds_b0baF1ca7652", version_id="dsv_6L78pqrdFi2xa", include_datapoints=True, ) ``` ```typescript Get dataset import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.datasets.get("ds_b0baF1ca7652", { versionId: "dsv_6L78pqrdFi2xa", includeDatapoints: true }); ``` ```go Get dataset package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652?version_id=dsv_6L78pqrdFi2xa&include_datapoints=true" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Get dataset require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652?version_id=dsv_6L78pqrdFi2xa&include_datapoints=true") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java Get dataset HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652?version_id=dsv_6L78pqrdFi2xa&include_datapoints=true") .header("X-API-KEY", "") .asString(); ``` ```php Get dataset <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652?version_id=dsv_6L78pqrdFi2xa&include_datapoints=true', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Get dataset var client = new
RestClient("https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652?version_id=dsv_6L78pqrdFi2xa&include_datapoints=true"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift Get dataset import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652?version_id=dsv_6L78pqrdFi2xa&include_datapoints=true")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/datasets/:id \ -H "X-API-KEY: " \ -d version_id=string \ -d environment=string ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.get( id="ds_b0baF1ca7652", version_id="dsv_6L78pqrdFi2xa", include_datapoints=True, ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.datasets.get("ds_b0baF1ca7652", { versionId: "dsv_6L78pqrdFi2xa", includeDatapoints: true }); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/datasets/%3Aid?version_id=string&environment=string" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/datasets/%3Aid?version_id=string&environment=string") http = 
Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/datasets/%3Aid?version_id=string&environment=string") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/datasets/%3Aid?version_id=string&environment=string', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/datasets/%3Aid?version_id=string&environment=string"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/%3Aid?version_id=string&environment=string")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Delete Dataset ```http DELETE https://api.humanloop.com/v5/datasets/{id} ``` Delete the Dataset with the given ID. ## Path Parameters - Id (required): Unique identifier for Dataset.
## Response Body - 422: Validation Error ## Examples ```shell curl -X DELETE https://api.humanloop.com/v5/datasets/id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.delete( id="id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.datasets.delete("id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/datasets/id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/datasets/id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.delete("https://api.humanloop.com/v5/datasets/id") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/datasets/id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/datasets/id"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/id")!
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X DELETE https://api.humanloop.com/v5/datasets/:id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.delete( id="id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.datasets.delete("id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/datasets/%3Aid" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/datasets/%3Aid") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.delete("https://api.humanloop.com/v5/datasets/%3Aid") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/datasets/%3Aid', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/datasets/%3Aid"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let
request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/%3Aid")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Move Dataset ```http PATCH https://api.humanloop.com/v5/datasets/{id} Content-Type: application/json ``` Move the Dataset to a different path or change the name. ## Path Parameters - Id (required): Unique identifier for Dataset. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl -X PATCH https://api.humanloop.com/v5/datasets/id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.move( id="id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.datasets.move("id"); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/datasets/id" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/datasets/id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = 
http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.patch("https://api.humanloop.com/v5/datasets/id") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH', 'https://api.humanloop.com/v5/datasets/id', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/datasets/id"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as?
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X PATCH https://api.humanloop.com/v5/datasets/:id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.move( id="id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.datasets.move("id"); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/datasets/%3Aid" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/datasets/%3Aid") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.patch("https://api.humanloop.com/v5/datasets/%3Aid") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH', 'https://api.humanloop.com/v5/datasets/%3Aid', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/datasets/%3Aid"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response =
client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/%3Aid")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List Datapoints ```http GET https://api.humanloop.com/v5/datasets/{id}/datapoints ``` List all Datapoints for the Dataset with the given ID. ## Path Parameters - Id (required): Unique identifier for Dataset. ## Query Parameters - VersionId (optional): A specific Version ID of the Dataset to retrieve. - Environment (optional): Name of the Environment to retrieve a deployed Version from. - Page (optional): Page number for pagination. - Size (optional): Page size for pagination; the number of Datapoints returned per page.
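The `page` and `size` parameters drive the client-side loops in the SDK examples below. This self-contained sketch illustrates the two iteration patterns with a local stand-in class (`FakePagedResponse` is not an SDK type, and no API call is made):

```python
class FakePagedResponse:
    """Local stand-in for a paged Datapoints response."""

    def __init__(self, datapoints, size):
        # Split the datapoints into pages of at most `size` items.
        self.pages = [datapoints[i:i + size] for i in range(0, len(datapoints), size)]

    def __iter__(self):
        # Item-by-item: yield every Datapoint across all pages.
        for page in self.pages:
            yield from page

    def iter_pages(self):
        # Page-by-page: yield one page of Datapoints at a time.
        yield from self.pages

response = FakePagedResponse([{"id": f"dp_{n}"} for n in range(5)], size=2)
items = [dp["id"] for dp in response]                  # 5 Datapoints, in order
page_sizes = [len(p) for p in response.iter_pages()]   # pages of 2, 2 and 1
```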
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell List datapoints curl -G https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/datapoints \ -H "X-API-KEY: " \ -d size=1 ``` ```python List datapoints from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) response = client.datasets.list_datapoints( id="ds_b0baF1ca7652", size=1, ) for item in response: print(item) # alternatively, you can paginate page-by-page for page in response.iter_pages(): print(page) ``` ```typescript List datapoints import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); const response = await client.datasets.listDatapoints("ds_b0baF1ca7652", { size: 1 }); for await (const item of response) { console.log(item); } // Or you can manually iterate page-by-page let page = await client.datasets.listDatapoints("ds_b0baF1ca7652", { size: 1 }); while (page.hasNextPage()) { page = await page.getNextPage(); } ``` ```go List datapoints package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/datapoints?size=1" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby List datapoints require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/datapoints?size=1") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java List datapoints HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/datapoints?size=1") .header("X-API-KEY", "") .asString(); ``` ```php List datapoints <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/datapoints?size=1', [
'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp List datapoints var client = new RestClient("https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/datapoints?size=1"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift List datapoints import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/datapoints?size=1")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/datasets/:id/datapoints \ -H "X-API-KEY: " \ -d version_id=string \ -d environment=string ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) response = client.datasets.list_datapoints( id="ds_b0baF1ca7652", size=1, ) for item in response: print(item) # alternatively, you can paginate page-by-page for page in response.iter_pages(): print(page) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); const response = await client.datasets.listDatapoints("ds_b0baF1ca7652", { size: 1 }); for await (const item of response) { console.log(item); } // Or you can manually iterate page-by-page let page = await client.datasets.listDatapoints("ds_b0baF1ca7652", { size: 1 }); while (page.hasNextPage()) { page = await page.getNextPage(); } ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url :=
"https://api.humanloop.com/v5/datasets/%3Aid/datapoints?version_id=string&environment=string" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/datasets/%3Aid/datapoints?version_id=string&environment=string") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/datasets/%3Aid/datapoints?version_id=string&environment=string") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/datasets/%3Aid/datapoints?version_id=string&environment=string', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/datasets/%3Aid/datapoints?version_id=string&environment=string"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/%3Aid/datapoints?version_id=string&environment=string")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as?
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List Versions of a Dataset ```http GET https://api.humanloop.com/v5/datasets/{id}/versions ``` Get a list of the versions for a Dataset. ## Path Parameters - Id (required): Unique identifier for Dataset. ## Query Parameters - IncludeDatapoints (optional): If set to 'latest_saved', include datapoints for the latest saved version. Alternatively, 'latest_committed' (deprecated) includes datapoints for the latest committed version only. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell List versions curl https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/versions \ -H "X-API-KEY: " ``` ```python List versions from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.list_versions( id="ds_b0baF1ca7652", ) ``` ```typescript List versions import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.datasets.listVersions("ds_b0baF1ca7652"); ``` ```go List versions package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/versions" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby List versions require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/versions") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java List versions HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/versions") .header("X-API-KEY", "") .asString(); ``` ```php List versions <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET',
'https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/versions', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp List versions var client = new RestClient("https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/versions"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift List versions import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/versions")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/datasets/:id/versions \ -H "X-API-KEY: " \ -d include_datapoints=latest_committed ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.list_versions( id="ds_b0baF1ca7652", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.datasets.listVersions("ds_b0baF1ca7652"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/datasets/%3Aid/versions?include_datapoints=latest_committed" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url =
URI("https://api.humanloop.com/v5/datasets/%3Aid/versions?include_datapoints=latest_committed") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/datasets/%3Aid/versions?include_datapoints=latest_committed") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/datasets/%3Aid/versions?include_datapoints=latest_committed', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/datasets/%3Aid/versions?include_datapoints=latest_committed"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/%3Aid/versions?include_datapoints=latest_committed")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Delete Dataset Version ```http DELETE https://api.humanloop.com/v5/datasets/{id}/versions/{version_id} ``` Delete a version of the Dataset. ## Path Parameters - Id (required): Unique identifier for Dataset. - VersionId (required): Unique identifier for the specific version of the Dataset.
## Response Body - 422: Validation Error ## Examples ```shell curl -X DELETE https://api.humanloop.com/v5/datasets/id/versions/version_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.delete_dataset_version( id="id", version_id="version_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.datasets.deleteDatasetVersion("id", "version_id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/datasets/id/versions/version_id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/datasets/id/versions/version_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.delete("https://api.humanloop.com/v5/datasets/id/versions/version_id") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/datasets/id/versions/version_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/datasets/id/versions/version_id"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/id/versions/version_id")!
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X DELETE https://api.humanloop.com/v5/datasets/:id/versions/:version_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.delete_dataset_version( id="id", version_id="version_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.datasets.deleteDatasetVersion("id", "version_id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/datasets/%3Aid/versions/%3Aversion_id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/datasets/%3Aid/versions/%3Aversion_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.delete("https://api.humanloop.com/v5/datasets/%3Aid/versions/%3Aversion_id") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/datasets/%3Aid/versions/%3Aversion_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new
RestClient("https://api.humanloop.com/v5/datasets/%3Aid/versions/%3Aversion_id"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/%3Aid/versions/%3Aversion_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Update Dataset Version ```http PATCH https://api.humanloop.com/v5/datasets/{id}/versions/{version_id} Content-Type: application/json ``` Update the name or description of the Dataset version. ## Path Parameters - Id (required): Unique identifier for Dataset. - VersionId (required): Unique identifier for the specific version of the Dataset. 
## Response Body

- 200: Successful Response
- 422: Validation Error

## Examples

```shell
curl -X PATCH https://api.humanloop.com/v5/datasets/id/versions/version_id \
     -H "X-API-KEY: " \
     -H "Content-Type: application/json" \
     -d '{}'
```

```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.datasets.update_dataset_version(
    id="id",
    version_id="version_id",
)
```

```javascript
const url = 'https://api.humanloop.com/v5/datasets/id/versions/version_id';
const options = {
  method: 'PATCH',
  headers: {'X-API-KEY': '', 'Content-Type': 'application/json'},
  body: '{}'
};

try {
  const response = await fetch(url, options);
  const data = await response.json();
  console.log(data);
} catch (error) {
  console.error(error);
}
```

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	url := "https://api.humanloop.com/v5/datasets/id/versions/version_id"

	payload := strings.NewReader("{}")

	req, _ := http.NewRequest("PATCH", url, payload)
	req.Header.Add("X-API-KEY", "")
	req.Header.Add("Content-Type", "application/json")

	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)

	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/datasets/id/versions/version_id")

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Patch.new(url)
request["X-API-KEY"] = ''
request["Content-Type"] = 'application/json'
request.body = "{}"

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.patch("https://api.humanloop.com/v5/datasets/id/versions/version_id")
  .header("X-API-KEY", "")
  .header("Content-Type", "application/json")
  .body("{}")
  .asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('PATCH', 'https://api.humanloop.com/v5/datasets/id/versions/version_id', [
  'body' => '{}',
  'headers' => [
    'Content-Type' => 'application/json',
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://api.humanloop.com/v5/datasets/id/versions/version_id");
var request = new RestRequest(Method.PATCH);
request.AddHeader("X-API-KEY", "");
request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = [
  "X-API-KEY": "",
  "Content-Type": "application/json"
]
let parameters = [String: Any]()

let postData = try! JSONSerialization.data(withJSONObject: parameters, options: [])

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/id/versions/version_id")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "PATCH"
request.allHTTPHeaderFields = headers
request.httpBody = postData

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

# Upload CSV

```http
POST https://api.humanloop.com/v5/datasets/{id}/datapoints/csv
Content-Type: multipart/form-data
```

Add Datapoints from a CSV file to a Dataset.

This will create a new version of the Dataset with the Datapoints from the CSV file.

If either `version_id` or `environment` is provided, the new version will be based on the specified version, with the Datapoints from the CSV file added to the existing Datapoints in the version. If neither `version_id` nor `environment` is provided, the new version will be based on the version of the Dataset that is deployed to the default Environment.

You can optionally provide a name and description for the new version using `version_name` and `version_description` parameters.
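As a sketch of the versioning behavior described above, the following Python snippet builds a small CSV of datapoints in memory and composes the upload call. The keyword arguments `file`, `version_id`, and `version_name` are assumptions inferred from the query parameters documented for this endpoint, not confirmed SDK signatures; check the generated examples below for the exact call shape.

```python
import csv
import io

# Build a small CSV of datapoints in memory. The column headers should map
# onto the datapoint fields your Prompts and Evaluators expect.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["question", "target"])
writer.writerow(["What is the capital of France?", "Paris"])
writer.writerow(["What is 2 + 2?", "4"])
csv_bytes = buffer.getvalue().encode("utf-8")

# from humanloop import Humanloop
# client = Humanloop(api_key="YOUR_API_KEY")
#
# Passing version_id bases the new version on that existing version, so the
# rows above are appended to its datapoints; omitting both version_id and
# environment bases the new version on whatever is deployed to the default
# Environment. All IDs below are hypothetical.
# client.datasets.upload_csv(
#     id="ds_b0baF1ca7652",            # Dataset to extend
#     file=csv_bytes,                  # assumed parameter name
#     version_id="dsv_6L78pqrdFi2xa",  # version to base the new version on
#     version_name="with-extra-rows",  # optional name for the new version
# )
```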
## Path Parameters

- Id (required): Unique identifier for the Dataset

## Query Parameters

- VersionId (optional): ID of the specific Dataset version to base the created Version on.
- Environment (optional): Name of the Environment identifying a deployed Version to base the created Version on.

## Response Body

- 200: Successful Response
- 422: Validation Error

## Examples

```shell
curl -X POST https://api.humanloop.com/v5/datasets/id/datapoints/csv \
     -H "X-API-KEY: " \
     -H "Content-Type: multipart/form-data" \
     -F file=@
```

```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.datasets.upload_csv(
    id="id",
)
```

```typescript
import { HumanloopClient } from "humanloop";
import * as fs from "fs";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.datasets.uploadCsv(fs.createReadStream("/path/to/your/file"), "id", {
    commitMessage: "commit_message"
});
```

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	url := "https://api.humanloop.com/v5/datasets/id/datapoints/csv"

	payload := strings.NewReader("-----011000010111000001101001\r\nContent-Disposition: form-data; name=\"file\"; filename=\"\"\r\nContent-Type: application/octet-stream\r\n\r\n\r\n-----011000010111000001101001--\r\n")

	req, _ := http.NewRequest("POST", url, payload)
	req.Header.Add("X-API-KEY", "")
	req.Header.Add("Content-Type", "multipart/form-data; boundary=---011000010111000001101001")

	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)

	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/datasets/id/datapoints/csv")

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Post.new(url)
request["X-API-KEY"] = ''
request["Content-Type"] = 'multipart/form-data; boundary=---011000010111000001101001'
request.body = "-----011000010111000001101001\r\nContent-Disposition: form-data; name=\"file\"; filename=\"\"\r\nContent-Type: application/octet-stream\r\n\r\n\r\n-----011000010111000001101001--\r\n"

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/datasets/id/datapoints/csv")
  .header("X-API-KEY", "")
  .header("Content-Type", "multipart/form-data; boundary=---011000010111000001101001")
  .body("-----011000010111000001101001\r\nContent-Disposition: form-data; name=\"file\"; filename=\"\"\r\nContent-Type: application/octet-stream\r\n\r\n\r\n-----011000010111000001101001--\r\n")
  .asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('POST', 'https://api.humanloop.com/v5/datasets/id/datapoints/csv', [
  'multipart' => [
    [
      'name' => 'file',
      'filename' => '',
      'contents' => null
    ]
  ],
  'headers' => [
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://api.humanloop.com/v5/datasets/id/datapoints/csv");
var request = new RestRequest(Method.POST);
request.AddHeader("X-API-KEY", "");
request.AddParameter("multipart/form-data; boundary=---011000010111000001101001", "-----011000010111000001101001\r\nContent-Disposition: form-data; name=\"file\"; filename=\"\"\r\nContent-Type: application/octet-stream\r\n\r\n\r\n-----011000010111000001101001--\r\n", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = [
  "X-API-KEY": "",
  "Content-Type": "multipart/form-data; boundary=---011000010111000001101001"
]
let parameters = [
  [
    "name": "file",
    "fileName": ""
  ]
]

let boundary = "---011000010111000001101001"

var body = ""
for param in parameters {
  let paramName = param["name"]!
  body += "--\(boundary)\r\n"
  body += "Content-Disposition:form-data; name=\"\(paramName)\""
  if let filename = param["fileName"] {
    let contentType = param["content-type"] ?? "application/octet-stream"
    let fileContent = (try? String(contentsOfFile: filename, encoding: .utf8)) ?? ""
    body += "; filename=\"\(filename)\"\r\n"
    body += "Content-Type: \(contentType)\r\n\r\n"
    body += fileContent
  } else if let paramValue = param["value"] {
    body += "\r\n\r\n\(paramValue)"
  }
}
body += "\r\n--\(boundary)--\r\n"
let postData = body.data(using: .utf8)!

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/id/datapoints/csv")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "POST"
request.allHTTPHeaderFields = headers
request.httpBody = postData

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

# Deploy Dataset

```http
POST https://api.humanloop.com/v5/datasets/{id}/environments/{environment_id}
```

Deploy Dataset to Environment.

Set the deployed version for the specified Environment.

## Path Parameters

- Id (required): Unique identifier for Dataset.
- EnvironmentId (required): Unique identifier for the Environment to deploy the Version to.
## Query Parameters

- VersionId (required): Unique identifier for the specific version of the Dataset.

## Response Body

- 200: Successful Response
- 422: Validation Error

## Examples

```shell Deploy
curl -X POST "https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/environments/staging?version_id=dsv_6L78pqrdFi2xa" \
     -H "X-API-KEY: "
```

```python Deploy
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.datasets.set_deployment(
    id="ds_b0baF1ca7652",
    environment_id="staging",
    version_id="dsv_6L78pqrdFi2xa",
)
```

```typescript Deploy
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.datasets.setDeployment("ds_b0baF1ca7652", "staging", {
    versionId: "dsv_6L78pqrdFi2xa"
});
```

```go Deploy
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	url := "https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/environments/staging?version_id=dsv_6L78pqrdFi2xa"

	req, _ := http.NewRequest("POST", url, nil)
	req.Header.Add("X-API-KEY", "")

	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)

	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby Deploy
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/environments/staging?version_id=dsv_6L78pqrdFi2xa")

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Post.new(url)
request["X-API-KEY"] = ''

response = http.request(request)
puts response.read_body
```

```java Deploy
HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/environments/staging?version_id=dsv_6L78pqrdFi2xa")
  .header("X-API-KEY", "")
  .asString();
```

```php Deploy
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('POST', 'https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/environments/staging?version_id=dsv_6L78pqrdFi2xa', [
  'headers' => [
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp Deploy
var client = new RestClient("https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/environments/staging?version_id=dsv_6L78pqrdFi2xa");
var request = new RestRequest(Method.POST);
request.AddHeader("X-API-KEY", "");
IRestResponse response = client.Execute(request);
```

```swift Deploy
import Foundation

let headers = ["X-API-KEY": ""]

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/environments/staging?version_id=dsv_6L78pqrdFi2xa")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "POST"
request.allHTTPHeaderFields = headers

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

# Remove Deployment

```http
DELETE https://api.humanloop.com/v5/datasets/{id}/environments/{environment_id}
```

Remove deployed Dataset from Environment.

Remove the deployed version for the specified Environment.

## Path Parameters

- Id (required): Unique identifier for Dataset.
- EnvironmentId (required): Unique identifier for the Environment to remove the deployment from.
## Response Body

- 422: Validation Error

## Examples

```shell Delete environment
curl -X DELETE https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/environments/staging \
     -H "X-API-KEY: "
```

```python Delete environment
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.datasets.remove_deployment(
    id="ds_b0baF1ca7652",
    environment_id="staging",
)
```

```typescript Delete environment
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.datasets.removeDeployment("ds_b0baF1ca7652", "staging");
```

```go Delete environment
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	url := "https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/environments/staging"

	req, _ := http.NewRequest("DELETE", url, nil)
	req.Header.Add("X-API-KEY", "")

	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)

	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby Delete environment
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/environments/staging")

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Delete.new(url)
request["X-API-KEY"] = ''

response = http.request(request)
puts response.read_body
```

```java Delete environment
HttpResponse<String> response = Unirest.delete("https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/environments/staging")
  .header("X-API-KEY", "")
  .asString();
```

```php Delete environment
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('DELETE', 'https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/environments/staging', [
  'headers' => [
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp Delete environment
var client = new RestClient("https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/environments/staging");
var request = new RestRequest(Method.DELETE);
request.AddHeader("X-API-KEY", "");
IRestResponse response = client.Execute(request);
```

```swift Delete environment
import Foundation

let headers = ["X-API-KEY": ""]

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/ds_b0baF1ca7652/environments/staging")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "DELETE"
request.allHTTPHeaderFields = headers

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

# List a Dataset's Environments

```http
GET https://api.humanloop.com/v5/datasets/{id}/environments
```

List all Environments and their deployed versions for the Dataset.

## Path Parameters

- Id (required): Unique identifier for Dataset.
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl https://api.humanloop.com/v5/datasets/id/environments \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.list_environments( id="id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.datasets.listEnvironments("id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/datasets/id/environments" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/datasets/id/environments") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/datasets/id/environments") .header("X-API-KEY", "") .asString(); ``` ```php request('GET', 'https://api.humanloop.com/v5/datasets/id/environments', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/datasets/id/environments"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/id/environments")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl https://api.humanloop.com/v5/datasets/:id/environments \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.list_environments( id="id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.datasets.listEnvironments("id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/datasets/%3Aid/environments" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/datasets/%3Aid/environments") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/datasets/%3Aid/environments") .header("X-API-KEY", "") .asString(); ``` ```php request('GET', 'https://api.humanloop.com/v5/datasets/%3Aid/environments', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/datasets/%3Aid/environments"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = 
client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/datasets/%3Aid/environments")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Submit Evaluator Judgment ```http POST https://api.humanloop.com/v5/evaluators/log Content-Type: application/json ``` Submit Evaluator judgment for an existing Log. Creates a new Log. The evaluated Log will be set as the parent of the created Log. ## Query Parameters - VersionId (optional): ID of the Evaluator version to log against. - Environment (optional): Name of the Environment identifying a deployed version to log to. 
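Before the SDK and raw-HTTP examples, here is an illustrative sketch of how a request to this endpoint is assembled from the pieces above. The optional query parameters mirror the list just given; the `judgment` body field and the IDs are assumptions for illustration only, not the full request schema.

```python
# Sketch only: assemble the request for POST /v5/evaluators/log.
# The optional query parameters mirror the list above; the "judgment"
# body field is an assumption here - check the full request schema.
import json
from urllib.parse import urlencode

def build_judgment_request(parent_id, body_fields=None,
                           version_id=None, environment=None):
    """Return (url, body) for submitting a judgment against a Log.

    The created Log will have `parent_id` as its parent, per the
    endpoint description above.
    """
    params = {k: v for k, v in
              {"version_id": version_id, "environment": environment}.items()
              if v is not None}
    url = "https://api.humanloop.com/v5/evaluators/log"
    if params:
        url += "?" + urlencode(params)
    body = {"parent_id": parent_id, **(body_fields or {})}
    return url, json.dumps(body)

# Hypothetical values for illustration:
url, body = build_judgment_request("parent_id", {"judgment": True},
                                   environment="production")
```

In practice you would hand `url` and `body` to your HTTP client of choice, as the raw examples below do.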
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl -X POST https://api.humanloop.com/v5/evaluators/log \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "parent_id": "parent_id" }' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.log( parent_id="parent_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.log({ parentId: "parent_id" }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/log" payload := strings.NewReader("{\n \"parent_id\": \"parent_id\"\n}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/log") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"parent_id\": \"parent_id\"\n}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/evaluators/log") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"parent_id\": \"parent_id\"\n}") .asString(); ``` ```php request('POST', 'https://api.humanloop.com/v5/evaluators/log', [ 'body' => '{ "parent_id": "parent_id" }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluators/log"); var request = new RestRequest(Method.POST); 
request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"parent_id\": \"parent_id\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = ["parent_id": "parent_id"] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/log")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST "https://api.humanloop.com/v5/evaluators/log?version_id=string&environment=string" \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "parent_id": "string" }' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.log( parent_id="parent_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.log({ parentId: "parent_id" }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/log?version_id=string&environment=string" payload := strings.NewReader("{\n \"parent_id\": \"string\"\n}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() 
body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/log?version_id=string&environment=string") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"parent_id\": \"string\"\n}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/evaluators/log?version_id=string&environment=string") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"parent_id\": \"string\"\n}") .asString(); ``` ```php request('POST', 'https://api.humanloop.com/v5/evaluators/log?version_id=string&environment=string', [ 'body' => '{ "parent_id": "string" }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluators/log?version_id=string&environment=string"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"parent_id\": \"string\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = ["parent_id": "string"] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/log?version_id=string&environment=string")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List Evaluators ```http GET https://api.humanloop.com/v5/evaluators ``` Get a list of all Evaluators. ## Query Parameters - Page (optional): Page offset for pagination. - Size (optional): Page size for pagination. Number of Evaluators to fetch. - Name (optional): Case-insensitive filter for Evaluator name. - UserFilter (optional): Case-insensitive filter for users in the Evaluator. This filter matches against both email address and name of users. - SortBy (optional): Field to sort Evaluators by - Order (optional): Direction to sort by. 
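The SDK examples below iterate pages for you. As a rough sketch of what that loop does with the `page` and `size` parameters above, with `fetch_page` standing in for the HTTP call:

```python
# Sketch: collect every Evaluator by walking the `page`/`size` parameters.
# `fetch_page(page=..., size=...)` stands in for the GET request; the real
# SDK iterators shown in this section handle this loop internally.
def list_all(fetch_page, size=50):
    items, page = [], 0
    while True:
        batch = fetch_page(page=page, size=size)
        items.extend(batch)
        if len(batch) < size:  # a short page means we've reached the end
            return items
        page += 1
```

A short final page is taken as the end of the collection here; a production client should follow whatever pagination metadata the response actually carries.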
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell List evaluators curl -G https://api.humanloop.com/v5/evaluators \ -H "X-API-KEY: " \ -d size=1 ``` ```python List evaluators from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) response = client.evaluators.list( size=1, ) for item in response: print(item) # alternatively, you can paginate page-by-page for page in response.iter_pages(): print(page) ``` ```typescript List evaluators import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); const response = await client.evaluators.list({ size: 1 }); for await (const item of response) { console.log(item); } // Or you can manually iterate page-by-page let page = await client.evaluators.list({ size: 1 }); while (page.hasNextPage()) { page = await page.getNextPage(); } ``` ```go List evaluators package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators?size=1" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby List evaluators require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators?size=1") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java List evaluators HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/evaluators?size=1") .header("X-API-KEY", "") .asString(); ``` ```php List evaluators <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/evaluators?size=1', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp List evaluators var client = new RestClient("https://api.humanloop.com/v5/evaluators?size=1"); var request = new 
RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift List evaluators import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators?size=1")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/evaluators \ -H "X-API-KEY: " \ -d page=0 \ -d size=0 ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) response = client.evaluators.list( size=1, ) for item in response: print(item) # alternatively, you can paginate page-by-page for page in response.iter_pages(): print(page) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); const response = await client.evaluators.list({ size: 1 }); for await (const item of response) { console.log(item); } // Or you can manually iterate page-by-page let page = await client.evaluators.list({ size: 1 }); while (page.hasNextPage()) { page = await page.getNextPage(); } ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators?page=0&size=0" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators?page=0&size=0") http = Net::HTTP.new(url.host, url.port) http.use_ssl = 
true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/evaluators?page=0&size=0") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/evaluators?page=0&size=0', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluators?page=0&size=0"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators?page=0&size=0")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Upsert Evaluator ```http POST https://api.humanloop.com/v5/evaluators Content-Type: application/json ``` Create an Evaluator or update it with a new version if it already exists. Evaluators are identified by the `ID` or their `path`. The spec provided determines the version of the Evaluator. You can provide `version_name` and `version_description` to identify and describe your versions. Version names must be unique within an Evaluator - attempting to create a version with a name that already exists will result in a 409 Conflict error. 
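Since version names must be unique, a client can catch the 409 and retry with a suffixed name. A minimal sketch, with `upsert` standing in for `client.evaluators.upsert` and `ConflictError` standing in for whatever exception the SDK raises on a 409:

```python
# Sketch: retry an upsert whose version_name collides (409 Conflict).
# ConflictError is a stand-in; the real SDK raises its own error type.
class ConflictError(Exception):
    pass

def upsert_with_unique_name(upsert, path, spec, base_name, max_tries=5):
    for i in range(max_tries):
        # First try the requested name, then suffixed variants.
        name = base_name if i == 0 else f"{base_name}-{i}"
        try:
            return upsert(path=path, spec=spec, version_name=name)
        except ConflictError:
            continue
    raise RuntimeError(f"no free version name after {max_tries} tries")
```

Whether retrying or surfacing the conflict to the user is appropriate depends on your workflow; the point is that a name collision is an expected, recoverable error.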
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Create evaluator curl -X POST https://api.humanloop.com/v5/evaluators \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "spec": { "arguments_type": "target_required", "return_type": "number", "evaluator_type": "python", "code": "def evaluate(answer, target):\n return 0.5" }, "path": "Shared Evaluators/Accuracy Evaluator" }' ``` ```python Create evaluator from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.upsert( path="Shared Evaluators/Accuracy Evaluator", spec={ "arguments_type": "target_required", "return_type": "number", "evaluator_type": "python", "code": "def evaluate(answer, target):\n return 0.5", }, ) ``` ```typescript Create evaluator import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.upsert({ path: "Shared Evaluators/Accuracy Evaluator", spec: { argumentsType: "target_required", returnType: "number", evaluatorType: "python", code: "def evaluate(answer, target):\n return 0.5" }, commitMessage: "Initial commit" }); ``` ```go Create evaluator package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators" payload := strings.NewReader("{\n \"path\": \"Shared Evaluators/Accuracy Evaluator\"\n}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Create evaluator require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"path\": \"Shared 
Evaluators/Accuracy Evaluator\"\n}" response = http.request(request) puts response.read_body ``` ```java Create evaluator HttpResponse response = Unirest.post("https://api.humanloop.com/v5/evaluators") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"path\": \"Shared Evaluators/Accuracy Evaluator\"\n}") .asString(); ``` ```php Create evaluator request('POST', 'https://api.humanloop.com/v5/evaluators', [ 'body' => '{ "path": "Shared Evaluators/Accuracy Evaluator" }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Create evaluator var client = new RestClient("https://api.humanloop.com/v5/evaluators"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"path\": \"Shared Evaluators/Accuracy Evaluator\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift Create evaluator import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = ["path": "Shared Evaluators/Accuracy Evaluator"] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST https://api.humanloop.com/v5/evaluators \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "spec": { "arguments_type": "target_free", "return_type": "boolean", "evaluator_type": "llm" } }' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.upsert( path="Shared Evaluators/Accuracy Evaluator", spec={ "arguments_type": "target_required", "return_type": "number", "evaluator_type": "python", "code": "def evaluate(answer, target):\n return 0.5", }, ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.upsert({ path: "Shared Evaluators/Accuracy Evaluator", spec: { argumentsType: "target_required", returnType: "number", evaluatorType: "python", code: "def evaluate(answer, target):\n return 0.5" }, commitMessage: "Initial commit" }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/evaluators") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php request('POST', 
'https://api.humanloop.com/v5/evaluators', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluators"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Get Evaluator ```http GET https://api.humanloop.com/v5/evaluators/{id} ``` Retrieve the Evaluator with the given ID. By default, the deployed version of the Evaluator is returned. Use the query parameters `version_id` or `environment` to target a specific version of the Evaluator. ## Path Parameters - Id (required): Unique identifier for Evaluator. ## Query Parameters - VersionId (optional): A specific Version ID of the Evaluator to retrieve. - Environment (optional): Name of the Environment to retrieve a deployed Version from. 
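The two query parameters above select which version of the Evaluator is returned; with neither, you get the deployed version. A small client-side sketch of that selection logic (illustrative only; the parameters are treated as mutually exclusive here for clarity, and the API may define its own precedence):

```python
# Sketch: choose the query parameters for GET /v5/evaluators/{id}.
# With neither argument, the deployed version is returned, per above.
def version_query(version_id=None, environment=None):
    if version_id is not None and environment is not None:
        raise ValueError("pass version_id or environment, not both")
    if version_id is not None:
        return {"version_id": version_id}
    if environment is not None:
        return {"environment": environment}
    return {}  # default: the deployed version
```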
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Get specific evaluator curl https://api.humanloop.com/v5/evaluators/ev_890bcd \ -H "X-API-KEY: " ``` ```python Get specific evaluator from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.get( id="ev_890bcd", ) ``` ```typescript Get specific evaluator import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.get("ev_890bcd"); ``` ```go Get specific evaluator package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/ev_890bcd" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Get specific evaluator require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/ev_890bcd") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java Get specific evaluator HttpResponse response = Unirest.get("https://api.humanloop.com/v5/evaluators/ev_890bcd") .header("X-API-KEY", "") .asString(); ``` ```php Get specific evaluator request('GET', 'https://api.humanloop.com/v5/evaluators/ev_890bcd', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Get specific evaluator var client = new RestClient("https://api.humanloop.com/v5/evaluators/ev_890bcd"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift Get specific evaluator import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/ev_890bcd")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/evaluators/:id \ -H "X-API-KEY: " \ -d version_id=string \ -d environment=string ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.get( id="ev_890bcd", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.get("ev_890bcd"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/%3Aid?version_id=string&environment=string" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/%3Aid?version_id=string&environment=string") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/evaluators/%3Aid?version_id=string&environment=string") .header("X-API-KEY", "") .asString(); ``` ```php request('GET', 'https://api.humanloop.com/v5/evaluators/%3Aid?version_id=string&environment=string', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new 
RestClient("https://api.humanloop.com/v5/evaluators/%3Aid?version_id=string&environment=string"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/%3Aid?version_id=string&environment=string")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Delete Evaluator ```http DELETE https://api.humanloop.com/v5/evaluators/{id} ``` Delete the Evaluator with the given ID. ## Path Parameters - Id (required): Unique identifier for Evaluator. 
## Response Body - 422: Validation Error ## Examples ```shell Delete evaluator curl -X DELETE https://api.humanloop.com/v5/evaluators/ev_890bcd \ -H "X-API-KEY: " ``` ```python Delete evaluator from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.delete( id="ev_890bcd", ) ``` ```typescript Delete evaluator import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.delete("ev_890bcd"); ``` ```go Delete evaluator package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/ev_890bcd" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Delete evaluator require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/ev_890bcd") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java Delete evaluator HttpResponse response = Unirest.delete("https://api.humanloop.com/v5/evaluators/ev_890bcd") .header("X-API-KEY", "") .asString(); ``` ```php Delete evaluator request('DELETE', 'https://api.humanloop.com/v5/evaluators/ev_890bcd', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Delete evaluator var client = new RestClient("https://api.humanloop.com/v5/evaluators/ev_890bcd"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift Delete evaluator import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/ev_890bcd")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X DELETE https://api.humanloop.com/v5/evaluators/:id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.delete( id="ev_890bcd", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.delete("ev_890bcd"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/%3Aid" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/%3Aid") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.delete("https://api.humanloop.com/v5/evaluators/%3Aid") .header("X-API-KEY", "") .asString(); ``` ```php request('DELETE', 'https://api.humanloop.com/v5/evaluators/%3Aid', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluators/%3Aid"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let 
headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/%3Aid")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Move Evaluator ```http PATCH https://api.humanloop.com/v5/evaluators/{id} Content-Type: application/json ``` Move the Evaluator to a different path or change the name. ## Path Parameters - Id (required): Unique identifier for Evaluator. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Move evaluator curl -X PATCH https://api.humanloop.com/v5/evaluators/ev_890bcd \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "path": "new directory/new name" }' ``` ```python Move evaluator from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.move( id="ev_890bcd", path="new directory/new name", ) ``` ```typescript Move evaluator import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.move("ev_890bcd", { path: "new directory/new name" }); ``` ```go Move evaluator package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/ev_890bcd" payload := strings.NewReader("{\n \"path\": \"new directory/new name\"\n}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Move evaluator 
require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/ev_890bcd") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"path\": \"new directory/new name\"\n}" response = http.request(request) puts response.read_body ``` ```java Move evaluator HttpResponse response = Unirest.patch("https://api.humanloop.com/v5/evaluators/ev_890bcd") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"path\": \"new directory/new name\"\n}") .asString(); ``` ```php Move evaluator <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH', 'https://api.humanloop.com/v5/evaluators/ev_890bcd', [ 'body' => '{ "path": "new directory/new name" }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Move evaluator var client = new RestClient("https://api.humanloop.com/v5/evaluators/ev_890bcd"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"path\": \"new directory/new name\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift Move evaluator import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = ["path": "new directory/new name"] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/ev_890bcd")!
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X PATCH https://api.humanloop.com/v5/evaluators/:id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.move( id="ev_890bcd", path="new directory/new name", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.move("ev_890bcd", { path: "new directory/new name" }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/%3Aid" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/%3Aid") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.patch("https://api.humanloop.com/v5/evaluators/%3Aid") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH',
'https://api.humanloop.com/v5/evaluators/%3Aid', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluators/%3Aid"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/%3Aid")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List Versions of an Evaluator ```http GET https://api.humanloop.com/v5/evaluators/{id}/versions ``` Get a list of all the versions of an Evaluator. ## Path Parameters - Id (required): Unique identifier for the Evaluator.
## Query Parameters - EvaluatorAggregates (optional): Whether to include Evaluator aggregate results for the versions in the response ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell List versions curl https://api.humanloop.com/v5/evaluators/ev_890bcd/versions \ -H "X-API-KEY: " ``` ```python List versions from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.list_versions( id="ev_890bcd", ) ``` ```typescript List versions import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.listVersions("ev_890bcd"); ``` ```go List versions package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/ev_890bcd/versions" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby List versions require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/ev_890bcd/versions") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java List versions HttpResponse response = Unirest.get("https://api.humanloop.com/v5/evaluators/ev_890bcd/versions") .header("X-API-KEY", "") .asString(); ``` ```php List versions <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/evaluators/ev_890bcd/versions', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp List versions var client = new RestClient("https://api.humanloop.com/v5/evaluators/ev_890bcd/versions"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift List versions import Foundation let headers = ["X-API-KEY":
""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/ev_890bcd/versions")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/evaluators/:id/versions \ -H "X-API-KEY: " \ -d evaluator_aggregates=true ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.list_versions( id="ev_890bcd", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.listVersions("ev_890bcd"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/%3Aid/versions?evaluator_aggregates=true" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/%3Aid/versions?evaluator_aggregates=true") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/evaluators/%3Aid/versions?evaluator_aggregates=true") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/evaluators/%3Aid/versions?evaluator_aggregates=true', [ 'headers' => [ 'X-API-KEY' => '', ], ]);
echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluators/%3Aid/versions?evaluator_aggregates=true"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/%3Aid/versions?evaluator_aggregates=true")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Delete Evaluator Version ```http DELETE https://api.humanloop.com/v5/evaluators/{id}/versions/{version_id} ``` Delete a version of the Evaluator. ## Path Parameters - Id (required): Unique identifier for Evaluator. - VersionId (required): Unique identifier for the specific version of the Evaluator. 
## Response Body - 422: Validation Error ## Examples ```shell curl -X DELETE https://api.humanloop.com/v5/evaluators/id/versions/version_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.delete_evaluator_version( id="id", version_id="version_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.deleteEvaluatorVersion("id", "version_id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/id/versions/version_id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/id/versions/version_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.delete("https://api.humanloop.com/v5/evaluators/id/versions/version_id") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/evaluators/id/versions/version_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluators/id/versions/version_id"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/id/versions/version_id")!
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X DELETE https://api.humanloop.com/v5/evaluators/:id/versions/:version_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.delete_evaluator_version( id="id", version_id="version_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.deleteEvaluatorVersion("id", "version_id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/%3Aid/versions/%3Aversion_id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/%3Aid/versions/%3Aversion_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.delete("https://api.humanloop.com/v5/evaluators/%3Aid/versions/%3Aversion_id") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/evaluators/%3Aid/versions/%3Aversion_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new
RestClient("https://api.humanloop.com/v5/evaluators/%3Aid/versions/%3Aversion_id"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/%3Aid/versions/%3Aversion_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Update Evaluator Version ```http PATCH https://api.humanloop.com/v5/evaluators/{id}/versions/{version_id} Content-Type: application/json ``` Update the name or description of the Evaluator version. ## Path Parameters - Id (required): Unique identifier for Evaluator. - VersionId (required): Unique identifier for the specific version of the Evaluator. 
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl -X PATCH https://api.humanloop.com/v5/evaluators/id/versions/version_id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.update_evaluator_version( id="id", version_id="version_id", ) ``` ```javascript const url = 'https://api.humanloop.com/v5/evaluators/id/versions/version_id'; const options = { method: 'PATCH', headers: {'X-API-KEY': '', 'Content-Type': 'application/json'}, body: '{}' }; try { const response = await fetch(url, options); const data = await response.json(); console.log(data); } catch (error) { console.error(error); } ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/id/versions/version_id" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/id/versions/version_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.patch("https://api.humanloop.com/v5/evaluators/id/versions/version_id") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH', 'https://api.humanloop.com/v5/evaluators/id/versions/version_id', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo
$response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluators/id/versions/version_id"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/id/versions/version_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as?
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X PATCH https://api.humanloop.com/v5/evaluators/:id/versions/:version_id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.update_evaluator_version( id="id", version_id="version_id", ) ``` ```javascript const url = 'https://api.humanloop.com/v5/evaluators/%3Aid/versions/%3Aversion_id'; const options = { method: 'PATCH', headers: {'X-API-KEY': '', 'Content-Type': 'application/json'}, body: '{}' }; try { const response = await fetch(url, options); const data = await response.json(); console.log(data); } catch (error) { console.error(error); } ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/%3Aid/versions/%3Aversion_id" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/%3Aid/versions/%3Aversion_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.patch("https://api.humanloop.com/v5/evaluators/%3Aid/versions/%3Aversion_id") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH', 'https://api.humanloop.com/v5/evaluators/%3Aid/versions/%3Aversion_id', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]);
echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluators/%3Aid/versions/%3Aversion_id"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/%3Aid/versions/%3Aversion_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Deploy Evaluator ```http POST https://api.humanloop.com/v5/evaluators/{id}/environments/{environment_id} ``` Deploy Evaluator to an Environment. Set the deployed version for the specified Environment. This Evaluator will be used for calls made to the Evaluator in this Environment. ## Path Parameters - Id (required): Unique identifier for Evaluator. - EnvironmentId (required): Unique identifier for the Environment to deploy the Version to. ## Query Parameters - VersionId (required): Unique identifier for the specific version of the Evaluator.
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Deploy curl -X POST "https://api.humanloop.com/v5/evaluators/ev_890bcd/environments/staging?version_id=evv_012def" \ -H "X-API-KEY: " ``` ```python Deploy from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.set_deployment( id="ev_890bcd", environment_id="staging", version_id="evv_012def", ) ``` ```typescript Deploy import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.setDeployment("ev_890bcd", "staging", { versionId: "evv_012def" }); ``` ```go Deploy package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/ev_890bcd/environments/staging?version_id=evv_012def" req, _ := http.NewRequest("POST", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Deploy require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/ev_890bcd/environments/staging?version_id=evv_012def") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java Deploy HttpResponse response = Unirest.post("https://api.humanloop.com/v5/evaluators/ev_890bcd/environments/staging?version_id=evv_012def") .header("X-API-KEY", "") .asString(); ``` ```php Deploy <?php $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 'https://api.humanloop.com/v5/evaluators/ev_890bcd/environments/staging?version_id=evv_012def', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Deploy var client = new RestClient("https://api.humanloop.com/v5/evaluators/ev_890bcd/environments/staging?version_id=evv_012def"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", "");
IRestResponse response = client.Execute(request); ``` ```swift Deploy import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/ev_890bcd/environments/staging?version_id=evv_012def")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST "https://api.humanloop.com/v5/evaluators/:id/environments/:environment_id?version_id=string" \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.set_deployment( id="ev_890bcd", environment_id="staging", version_id="evv_012def", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.setDeployment("ev_890bcd", "staging", { versionId: "evv_012def" }); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/%3Aid/environments/%3Aenvironment_id?version_id=string" req, _ := http.NewRequest("POST", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/%3Aid/environments/%3Aenvironment_id?version_id=string") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response 
= Unirest.post("https://api.humanloop.com/v5/evaluators/%3Aid/environments/%3Aenvironment_id?version_id=string") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 'https://api.humanloop.com/v5/evaluators/%3Aid/environments/%3Aenvironment_id?version_id=string', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluators/%3Aid/environments/%3Aenvironment_id?version_id=string"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/%3Aid/environments/%3Aenvironment_id?version_id=string")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Remove Deployment ```http DELETE https://api.humanloop.com/v5/evaluators/{id}/environments/{environment_id} ``` Remove deployed Evaluator from the Environment. Remove the deployed version for the specified Environment. This Evaluator will no longer be used for calls made to the Evaluator in this Environment. ## Path Parameters - Id (required): Unique identifier for Evaluator. - EnvironmentId (required): Unique identifier for the Environment to remove the deployment from.
## Response Body - 422: Validation Error ## Examples ```shell Delete environment curl -X DELETE https://api.humanloop.com/v5/evaluators/ev_890bcd/environments/staging \ -H "X-API-KEY: " ``` ```python Delete environment from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.remove_deployment( id="ev_890bcd", environment_id="staging", ) ``` ```typescript Delete environment import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.removeDeployment("ev_890bcd", "staging"); ``` ```go Delete environment package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/ev_890bcd/environments/staging" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Delete environment require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/ev_890bcd/environments/staging") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java Delete environment HttpResponse response = Unirest.delete("https://api.humanloop.com/v5/evaluators/ev_890bcd/environments/staging") .header("X-API-KEY", "") .asString(); ``` ```php Delete environment <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/evaluators/ev_890bcd/environments/staging', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Delete environment var client = new RestClient("https://api.humanloop.com/v5/evaluators/ev_890bcd/environments/staging"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift Delete environment import Foundation let
headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/ev_890bcd/environments/staging")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X DELETE https://api.humanloop.com/v5/evaluators/:id/environments/:environment_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.remove_deployment( id="ev_890bcd", environment_id="staging", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.removeDeployment("ev_890bcd", "staging"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/%3Aid/environments/%3Aenvironment_id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/%3Aid/environments/%3Aenvironment_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.delete("https://api.humanloop.com/v5/evaluators/%3Aid/environments/%3Aenvironment_id") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE',
'https://api.humanloop.com/v5/evaluators/%3Aid/environments/%3Aenvironment_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluators/%3Aid/environments/%3Aenvironment_id"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/%3Aid/environments/%3Aenvironment_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List an Evaluator's Environments ```http GET https://api.humanloop.com/v5/evaluators/{id}/environments ``` List all Environments and their deployed versions for the Evaluator. ## Path Parameters - Id (required): Unique identifier for Evaluator. 
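As a usage sketch, you will typically want to turn the returned list of Environments into a lookup from environment name to deployed version. The response field names used below (`name` and a nested deployed `version` with a `version_id`) are assumptions for illustration; verify them against the 200 response schema for this endpoint.

```python
# Sketch: index an Evaluator's Environments by name. The "name" and nested
# "version"/"version_id" keys are assumed response fields, not confirmed by
# this reference; adjust to the actual 200 response schema.

def index_environments(environments: list[dict]) -> dict:
    """Map each environment name to its deployed version ID (or None)."""
    return {
        env["name"]: (env.get("version") or {}).get("version_id")
        for env in environments
    }

# Stubbed response standing in for client.evaluators.list_environments(id=...)
stub = [
    {"name": "production", "version": {"version_id": "evv_123"}},
    {"name": "staging", "version": None},
]
print(index_environments(stub))
```

An environment with nothing deployed maps to `None`, which makes it easy to spot undeployed environments at a glance.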
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell List environments curl https://api.humanloop.com/v5/evaluators/ev_890bcd/environments \ -H "X-API-KEY: " ``` ```python List environments from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.list_environments( id="ev_890bcd", ) ``` ```typescript List environments import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.listEnvironments("ev_890bcd"); ``` ```go List environments package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/ev_890bcd/environments" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby List environments require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/ev_890bcd/environments") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java List environments HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/evaluators/ev_890bcd/environments") .header("X-API-KEY", "") .asString(); ``` ```php List environments <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/evaluators/ev_890bcd/environments', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp List environments var client = new RestClient("https://api.humanloop.com/v5/evaluators/ev_890bcd/environments"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift List environments import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string:
"https://api.humanloop.com/v5/evaluators/ev_890bcd/environments")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl https://api.humanloop.com/v5/evaluators/:id/environments \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.list_environments( id="ev_890bcd", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.listEnvironments("ev_890bcd"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/%3Aid/environments" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/%3Aid/environments") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/evaluators/%3Aid/environments") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/evaluators/%3Aid/environments', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluators/%3Aid/environments"); var request = new
RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/%3Aid/environments")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Update Monitoring ```http POST https://api.humanloop.com/v5/evaluators/{id}/evaluators Content-Type: application/json ``` Activate and deactivate Evaluators for monitoring the Evaluator. An activated Evaluator will automatically be run on all new Logs within the Evaluator for monitoring purposes. 
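The monitoring update takes a JSON body naming which Evaluator versions to attach or detach. A minimal sketch of building that body, assuming it uses `activate`/`deactivate` lists of `evaluator_version_id` entries (this shape is an assumption mirroring the analogous Prompt monitoring endpoint; verify it against the request schema below before relying on it):

```python
import json

# Sketch: build an update-monitoring request body. The "activate"/"deactivate"
# field names and the "evaluator_version_id" key are ASSUMED for illustration;
# check them against this endpoint's request schema.

def monitoring_payload(activate_ids=(), deactivate_ids=()) -> str:
    return json.dumps({
        "activate": [{"evaluator_version_id": v} for v in activate_ids],
        "deactivate": [{"evaluator_version_id": v} for v in deactivate_ids],
    })

# Attach one Evaluator version for monitoring (hypothetical version ID).
body = monitoring_payload(activate_ids=["evv_012def"])
print(body)
```

The resulting string is what would be sent as the POST body in place of the empty `{}` shown in the examples below.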
## Path Parameters - Id (required): Unique identifier for Evaluator. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl -X POST https://api.humanloop.com/v5/evaluators/id/evaluators \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.update_monitoring( id="id", ) ``` ```typescript import { HumanloopClient, Humanloop } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.updateMonitoring("id", {}); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/id/evaluators" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluators/id/evaluators") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/evaluators/id/evaluators") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 'https://api.humanloop.com/v5/evaluators/id/evaluators', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluators/id/evaluators"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", "");
request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/id/evaluators")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST https://api.humanloop.com/v5/evaluators/:id/evaluators \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.update_monitoring( id="id", ) ``` ```typescript import { HumanloopClient, Humanloop } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluators.updateMonitoring("id", {}); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluators/%3Aid/evaluators" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url =
URI("https://api.humanloop.com/v5/evaluators/%3Aid/evaluators") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/evaluators/%3Aid/evaluators") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 'https://api.humanloop.com/v5/evaluators/%3Aid/evaluators', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluators/%3Aid/evaluators"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluators/%3Aid/evaluators")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Log to a Flow ```http POST https://api.humanloop.com/v5/flows/log Content-Type: application/json ``` Log to a Flow.
You can use the query parameters `version_id` or `environment` to target an existing version of the Flow. Otherwise, the default deployed version will be chosen. If you create the Flow Log with a `log_status` of `incomplete`, you should later update it to `complete` in order to trigger Evaluators. ## Query Parameters - VersionId (optional): A specific Version ID of the Flow to log to. - Environment (optional): Name of the Environment identifying a deployed version to log to. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Log flow curl -X POST https://api.humanloop.com/v5/flows/log \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "id": "fl_6o701g4jmcanPVHxdqD0O", "start_time": "2024-07-08T22:40:35", "end_time": "2024-07-08T22:40:39", "output": "The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.", "inputs": { "question": "Patient with a history of diabetes and hypertension presents with chest pain and shortness of breath." }, "log_status": "incomplete", "flow": { "attributes": { "prompt": { "template": "You are a helpful assistant helping with medical anamnesis", "model": "gpt-4o", "temperature": 0.8 }, "tool": { "name": "retrieval_tool_v3", "description": "Retrieval tool for MedQA.", "source_code": "def retrieval_tool(question: str) -> str:\n pass\n" } } } }' ``` ```python Log flow import datetime from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.log( id="fl_6o701g4jmcanPVHxdqD0O", flow={ "attributes": { "prompt": { "template": "You are a helpful assistant helping with medical anamnesis", "model": "gpt-4o", "temperature": 0.8, }, "tool": { "name": "retrieval_tool_v3", "description": "Retrieval tool for MedQA.", "source_code": "def retrieval_tool(question: str) -> str:\n pass\n", }, } }, inputs={ "question": "Patient with a history of diabetes and hypertension presents with chest pain and shortness of breath."
}, output="The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.", log_status="incomplete", start_time=datetime.datetime.fromisoformat( "2024-07-08 22:40:35+00:00", ), end_time=datetime.datetime.fromisoformat( "2024-07-08 22:40:39+00:00", ), ) ``` ```typescript Log flow import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.log({ id: "fl_6o701g4jmcanPVHxdqD0O", flow: { attributes: { "prompt": { "template": "You are a helpful assistant helping with medical anamnesis", "model": "gpt-4o", "temperature": 0.8 }, "tool": { "name": "retrieval_tool_v3", "description": "Retrieval tool for MedQA.", "source_code": "def retrieval_tool(question: str) -> str:\n pass\n" } } }, inputs: { "question": "Patient with a history of diabetes and hypertension presents with chest pain and shortness of breath." }, output: "The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.", logStatus: "incomplete", startTime: "2024-07-08T22:40:35", endTime: "2024-07-08T22:40:39" }); ``` ```go Log flow package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/log" payload := strings.NewReader("{\n \"id\": \"fl_6o701g4jmcanPVHxdqD0O\",\n \"start_time\": \"2024-07-08T22:40:35\",\n \"end_time\": \"2024-07-08T22:40:39\",\n \"output\": \"The patient is likely experiencing a myocardial infarction. 
Immediate medical attention is required.\",\n \"log_status\": \"incomplete\"\n}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Log flow require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/log") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"id\": \"fl_6o701g4jmcanPVHxdqD0O\",\n \"start_time\": \"2024-07-08T22:40:35\",\n \"end_time\": \"2024-07-08T22:40:39\",\n \"output\": \"The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.\",\n \"log_status\": \"incomplete\"\n}" response = http.request(request) puts response.read_body ``` ```java Log flow HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/flows/log") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"id\": \"fl_6o701g4jmcanPVHxdqD0O\",\n \"start_time\": \"2024-07-08T22:40:35\",\n \"end_time\": \"2024-07-08T22:40:39\",\n \"output\": \"The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.\",\n \"log_status\": \"incomplete\"\n}") .asString(); ``` ```php Log flow <?php $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 'https://api.humanloop.com/v5/flows/log', [ 'body' => '{ "id": "fl_6o701g4jmcanPVHxdqD0O", "start_time": "2024-07-08T22:40:35", "end_time": "2024-07-08T22:40:39", "output": "The patient is likely experiencing a myocardial infarction.
Immediate medical attention is required.", "log_status": "incomplete" }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Log flow var client = new RestClient("https://api.humanloop.com/v5/flows/log"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"id\": \"fl_6o701g4jmcanPVHxdqD0O\",\n \"start_time\": \"2024-07-08T22:40:35\",\n \"end_time\": \"2024-07-08T22:40:39\",\n \"output\": \"The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.\",\n \"log_status\": \"incomplete\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift Log flow import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [ "id": "fl_6o701g4jmcanPVHxdqD0O", "start_time": "2024-07-08T22:40:35", "end_time": "2024-07-08T22:40:39", "output": "The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.", "log_status": "incomplete" ] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/log")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as?
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST "https://api.humanloop.com/v5/flows/log?version_id=string&environment=string" \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python import datetime from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.log( id="fl_6o701g4jmcanPVHxdqD0O", flow={ "attributes": { "prompt": { "template": "You are a helpful assistant helping with medical anamnesis", "model": "gpt-4o", "temperature": 0.8, }, "tool": { "name": "retrieval_tool_v3", "description": "Retrieval tool for MedQA.", "source_code": "def retrieval_tool(question: str) -> str:\n pass\n", }, } }, inputs={ "question": "Patient with a history of diabetes and hypertension presents with chest pain and shortness of breath." }, output="The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.", log_status="incomplete", start_time=datetime.datetime.fromisoformat( "2024-07-08 22:40:35+00:00", ), end_time=datetime.datetime.fromisoformat( "2024-07-08 22:40:39+00:00", ), ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.log({ id: "fl_6o701g4jmcanPVHxdqD0O", flow: { attributes: { "prompt": { "template": "You are a helpful assistant helping with medical anamnesis", "model": "gpt-4o", "temperature": 0.8 }, "tool": { "name": "retrieval_tool_v3", "description": "Retrieval tool for MedQA.", "source_code": "def retrieval_tool(question: str) -> str:\n pass\n" } } }, inputs: { "question": "Patient with a history of diabetes and hypertension presents with chest pain and shortness of breath." }, output: "The patient is likely experiencing a myocardial infarction. 
Immediate medical attention is required.", logStatus: "incomplete", startTime: "2024-07-08T22:40:35", endTime: "2024-07-08T22:40:39" }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/log?version_id=string&environment=string" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/log?version_id=string&environment=string") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/flows/log?version_id=string&environment=string") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 'https://api.humanloop.com/v5/flows/log?version_id=string&environment=string', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/flows/log?version_id=string&environment=string"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try!
JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/log?version_id=string&environment=string")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Update Flow Log ```http PATCH https://api.humanloop.com/v5/flows/logs/{log_id} Content-Type: application/json ``` Update the status, inputs, output of a Flow Log. Marking a Flow Log as complete will trigger any monitoring Evaluators to run. Inputs and output (or error) must be provided in order to mark it as complete. The end_time log attribute will be set to match the time the log is marked as complete. ## Path Parameters - LogId (required): Unique identifier of the Flow Log. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Update log curl -X PATCH https://api.humanloop.com/v5/flows/logs/medqa_experiment_0001 \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "inputs": { "question": "Patient with a history of diabetes and normal tension presents with chest pain and shortness of breath." }, "output": "The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.", "error": null, "log_status": "complete" }' ``` ```python Update log from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.update_log( log_id="medqa_experiment_0001", inputs={ "question": "Patient with a history of diabetes and normal tension presents with chest pain and shortness of breath." 
}, output="The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.", log_status="complete", ) ``` ```typescript Update log import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.updateLog("medqa_experiment_0001", { inputs: { "question": "Patient with a history of diabetes and normal tension presents with chest pain and shortness of breath." }, output: "The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.", logStatus: "complete", error: undefined }); ``` ```go Update log package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/logs/medqa_experiment_0001" payload := strings.NewReader("{\n \"output\": \"The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.\",\n \"error\": null,\n \"log_status\": \"complete\"\n}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Update log require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/logs/medqa_experiment_0001") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"output\": \"The patient is likely experiencing a myocardial infarction. 
Immediate medical attention is required.\",\n \"error\": null,\n \"log_status\": \"complete\"\n}" response = http.request(request) puts response.read_body ``` ```java Update log HttpResponse<String> response = Unirest.patch("https://api.humanloop.com/v5/flows/logs/medqa_experiment_0001") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"output\": \"The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.\",\n \"error\": null,\n \"log_status\": \"complete\"\n}") .asString(); ``` ```php Update log <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH', 'https://api.humanloop.com/v5/flows/logs/medqa_experiment_0001', [ 'body' => '{ "output": "The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.", "error": null, "log_status": "complete" }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Update log var client = new RestClient("https://api.humanloop.com/v5/flows/logs/medqa_experiment_0001"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"output\": \"The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.\",\n \"error\": null,\n \"log_status\": \"complete\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift Update log import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [ "output": "The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.", "error": NSNull(), "log_status": "complete" ] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/logs/medqa_experiment_0001")!
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X PATCH https://api.humanloop.com/v5/flows/logs/:log_id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.update_log( log_id="medqa_experiment_0001", inputs={ "question": "Patient with a history of diabetes and normal tension presents with chest pain and shortness of breath." }, output="The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.", log_status="complete", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.updateLog("medqa_experiment_0001", { inputs: { "question": "Patient with a history of diabetes and normal tension presents with chest pain and shortness of breath." }, output: "The patient is likely experiencing a myocardial infarction. 
Immediate medical attention is required.", logStatus: "complete", error: undefined }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/logs/%3Alog_id" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/logs/%3Alog_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.patch("https://api.humanloop.com/v5/flows/logs/%3Alog_id") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH', 'https://api.humanloop.com/v5/flows/logs/%3Alog_id', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/flows/logs/%3Alog_id"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/logs/%3Alog_id")!
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Get Flow ```http GET https://api.humanloop.com/v5/flows/{id} ``` Retrieve the Flow with the given ID. By default, the deployed version of the Flow is returned. Use the query parameters `version_id` or `environment` to target a specific version of the Flow. ## Path Parameters - Id (required): Unique identifier for Flow. ## Query Parameters - VersionId (optional): A specific Version ID of the Flow to retrieve. - Environment (optional): Name of the Environment to retrieve a deployed Version from. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Get specific flow curl https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O \ -H "X-API-KEY: " ``` ```python Get specific flow from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.get( id="fl_6o701g4jmcanPVHxdqD0O", ) ``` ```typescript Get specific flow import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.get("fl_6o701g4jmcanPVHxdqD0O"); ``` ```go Get specific flow package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Get specific flow require 'uri' require 'net/http' url = 
URI("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java Get specific flow HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O") .header("X-API-KEY", "") .asString(); ``` ```php Get specific flow <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Get specific flow var client = new RestClient("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift Get specific flow import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as?
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/flows/:id \ -H "X-API-KEY: " \ -d version_id=string \ -d environment=string ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.get( id="fl_6o701g4jmcanPVHxdqD0O", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.get("fl_6o701g4jmcanPVHxdqD0O"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/%3Aid?version_id=string&environment=string" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/%3Aid?version_id=string&environment=string") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/flows/%3Aid?version_id=string&environment=string") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/flows/%3Aid?version_id=string&environment=string', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/flows/%3Aid?version_id=string&environment=string"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/%3Aid?version_id=string&environment=string")!
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Delete Flow ```http DELETE https://api.humanloop.com/v5/flows/{id} ``` Delete the Flow with the given ID. ## Path Parameters - Id (required): Unique identifier for Flow. ## Response Body - 422: Validation Error ## Examples ```shell Delete flow curl -X DELETE https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O \ -H "X-API-KEY: " ``` ```python Delete flow from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.delete( id="fl_6o701g4jmcanPVHxdqD0O", ) ``` ```typescript Delete flow import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.delete("fl_6o701g4jmcanPVHxdqD0O"); ``` ```go Delete flow package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Delete flow require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java Delete flow HttpResponse<String> response = Unirest.delete("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O") .header("X-API-KEY", "") .asString(); ``` ```php Delete flow
<?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Delete flow var client = new RestClient("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift Delete flow import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X DELETE https://api.humanloop.com/v5/flows/:id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.delete( id="fl_6o701g4jmcanPVHxdqD0O", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.delete("fl_6o701g4jmcanPVHxdqD0O"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/%3Aid" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/%3Aid") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response
= http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.delete("https://api.humanloop.com/v5/flows/%3Aid") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/flows/%3Aid', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/flows/%3Aid"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/%3Aid")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Move Flow ```http PATCH https://api.humanloop.com/v5/flows/{id} Content-Type: application/json ``` Move the Flow to a different path or change its name. ## Path Parameters - Id (required): Unique identifier for Flow.
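The request body is a JSON object carrying the new `path`, as the examples below show. As a minimal sketch using only the Python standard library (the `build_move_flow_request` helper is illustrative, not part of the Humanloop SDK):

```python
import json
import urllib.request

def build_move_flow_request(flow_id: str, new_path: str, api_key: str) -> urllib.request.Request:
    """Compose the PATCH request that moves/renames a Flow.

    The "directory/name" semantics of `path` follow the curl example
    in this section; this helper itself is hypothetical.
    """
    return urllib.request.Request(
        url=f"https://api.humanloop.com/v5/flows/{flow_id}",
        data=json.dumps({"path": new_path}).encode("utf-8"),
        method="PATCH",
        headers={"X-API-KEY": api_key, "Content-Type": "application/json"},
    )

req = build_move_flow_request(
    "fl_6o701g4jmcanPVHxdqD0O", "new directory/new name", "YOUR_API_KEY"
)
# Send with urllib.request.urlopen(req) once a real API key is set.
```
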
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Move flow curl -X PATCH https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "path": "new directory/new name" }' ``` ```python Move flow from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.move( id="fl_6o701g4jmcanPVHxdqD0O", path="new directory/new name", ) ``` ```typescript Move flow import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.move("fl_6o701g4jmcanPVHxdqD0O", { path: "new directory/new name" }); ``` ```go Move flow package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O" payload := strings.NewReader("{\n \"path\": \"new directory/new name\"\n}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Move flow require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"path\": \"new directory/new name\"\n}" response = http.request(request) puts response.read_body ``` ```java Move flow HttpResponse<String> response = Unirest.patch("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"path\": \"new directory/new name\"\n}") .asString(); ``` ```php Move flow <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH', 'https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O', [ 'body' => '{ "path": "new directory/new name" }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Move flow var client = new RestClient("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"path\": \"new directory/new name\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift Move flow import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = ["path": "new directory/new name"] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as?
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X PATCH https://api.humanloop.com/v5/flows/:id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.move( id="fl_6o701g4jmcanPVHxdqD0O", path="new directory/new name", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.move("fl_6o701g4jmcanPVHxdqD0O", { path: "new directory/new name" }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/%3Aid" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/%3Aid") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.patch("https://api.humanloop.com/v5/flows/%3Aid") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH', 'https://api.humanloop.com/v5/flows/%3Aid', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/flows/%3Aid"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/%3Aid")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List Flows ```http GET https://api.humanloop.com/v5/flows ``` Get a list of Flows. ## Query Parameters - Page (optional): Page number for pagination. - Size (optional): Page size for pagination. Number of Flows to fetch. - Name (optional): Case-insensitive filter for Flow name. - UserFilter (optional): Case-insensitive filter for users in the Flow. This filter matches against both the email address and name of users. - SortBy (optional): Field to sort Flows by. - Order (optional): Direction to sort by.
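These query parameters can be combined in a single request. As a minimal sketch using only the Python standard library (the `build_list_flows_url` helper and its keyword names are illustrative, mirroring the snake_case query parameters above; it is not part of the SDK):

```python
from urllib.parse import urlencode

BASE_URL = "https://api.humanloop.com/v5/flows"

def build_list_flows_url(page=None, size=None, name=None,
                         user_filter=None, sort_by=None, order=None):
    """Compose a List Flows URL from the documented query parameters."""
    params = {
        "page": page,
        "size": size,
        "name": name,
        "user_filter": user_filter,
        "sort_by": sort_by,
        "order": order,
    }
    # Drop unset parameters so the query string stays minimal.
    query = urlencode({k: v for k, v in params.items() if v is not None})
    return f"{BASE_URL}?{query}" if query else BASE_URL

# e.g. fetch up to 10 Flows whose name contains "MedQA"
url = build_list_flows_url(size=10, name="MedQA")
```

The resulting URL can then be requested with any HTTP client, passing the `X-API-KEY` header as in the curl examples.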
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl -G https://api.humanloop.com/v5/flows \ -H "X-API-KEY: " \ -d size=1 ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) response = client.flows.list( size=1, ) for item in response: print(item) # alternatively, you can paginate page-by-page for page in response.iter_pages(): print(page) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); const response = await client.flows.list({ size: 1 }); for await (const item of response) { console.log(item); } // Or you can manually iterate page-by-page let page = await client.flows.list({ size: 1 }); while (page.hasNextPage()) { page = await page.getNextPage(); } ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows?size=1" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows?size=1") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/flows?size=1") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/flows?size=1', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/flows?size=1"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request =
NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows?size=1")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/flows \ -H "X-API-KEY: " \ -d page=0 \ -d size=0 ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) response = client.flows.list( size=1, ) for item in response: print(item) # alternatively, you can paginate page-by-page for page in response.iter_pages(): print(page) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); const response = await client.flows.list({ size: 1 }); for await (const item of response) { console.log(item); } // Or you can manually iterate page-by-page let page = await client.flows.list({ size: 1 }); while (page.hasNextPage()) { page = await page.getNextPage(); } ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows?page=0&size=0" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows?page=0&size=0") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/flows?page=0&size=0")
.header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/flows?page=0&size=0', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/flows?page=0&size=0"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows?page=0&size=0")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Upsert Flow ```http POST https://api.humanloop.com/v5/flows Content-Type: application/json ``` Create or update a Flow. Flows can be identified by `ID` or by their `path`. You can provide `version_name` and `version_description` to identify and describe your versions. Version names must be unique within a Flow; attempting to create a version with a name that already exists will result in a 409 Conflict error. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Upsert flow curl -X POST https://api.humanloop.com/v5/flows \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "attributes": { "prompt": { "template": "You are a helpful medical assistant helping with medical anamnesis.
Answer {{question}}", "model": "gpt-4o", "temperature": 0.8 }, "tool": { "name": "retrieval_tool_v3", "description": "Retrieval tool for MedQA.", "source_code": "def retrieval_tool(question: str) -> str:\n pass\n" } }, "path": "Personal Projects/MedQA Flow" }' ``` ```python Upsert flow from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.upsert( path="Personal Projects/MedQA Flow", attributes={ "prompt": { "template": "You are a helpful medical assistant helping with medical anamnesis. Answer {{question}}", "model": "gpt-4o", "temperature": 0.8, }, "tool": { "name": "retrieval_tool_v3", "description": "Retrieval tool for MedQA.", "source_code": "def retrieval_tool(question: str) -> str:\n pass\n", }, }, ) ``` ```typescript Upsert flow import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.upsert({ path: "Personal Projects/MedQA Flow", attributes: { "prompt": { "template": "You are a helpful medical assistant helping with medical anamnesis. 
Answer {{question}}", "model": "gpt-4o", "temperature": 0.8 }, "tool": { "name": "retrieval_tool_v3", "description": "Retrieval tool for MedQA.", "source_code": "def retrieval_tool(question: str) -> str:\n pass\n" }, "commit_message": "Initial commit" } }); ``` ```go Upsert flow package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows" payload := strings.NewReader("{\n \"path\": \"Personal Projects/MedQA Flow\"\n}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Upsert flow require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{\n \"path\": \"Personal Projects/MedQA Flow\"\n}" response = http.request(request) puts response.read_body ``` ```java Upsert flow HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/flows") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{\n \"path\": \"Personal Projects/MedQA Flow\"\n}") .asString(); ``` ```php Upsert flow <?php $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 'https://api.humanloop.com/v5/flows', [ 'body' => '{ "path": "Personal Projects/MedQA Flow" }', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Upsert flow var client = new RestClient("https://api.humanloop.com/v5/flows"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"path\": \"Personal Projects/MedQA Flow\"\n}", ParameterType.RequestBody); IRestResponse response =
client.Execute(request); ``` ```swift Upsert flow import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = ["path": "Personal Projects/MedQA Flow"] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST https://api.humanloop.com/v5/flows \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "attributes": { "string": {} } }' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.upsert( path="Personal Projects/MedQA Flow", attributes={ "prompt": { "template": "You are a helpful medical assistant helping with medical anamnesis. Answer {{question}}", "model": "gpt-4o", "temperature": 0.8, }, "tool": { "name": "retrieval_tool_v3", "description": "Retrieval tool for MedQA.", "source_code": "def retrieval_tool(question: str) -> str:\n pass\n", }, }, ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.upsert({ path: "Personal Projects/MedQA Flow", attributes: { "prompt": { "template": "You are a helpful medical assistant helping with medical anamnesis. 
Answer {{question}}", "model": "gpt-4o", "temperature": 0.8 }, "tool": { "name": "retrieval_tool_v3", "description": "Retrieval tool for MedQA.", "source_code": "def retrieval_tool(question: str) -> str:\n pass\n" }, "commit_message": "Initial commit" } }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/flows") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 'https://api.humanloop.com/v5/flows', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/flows"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url:
NSURL(string: "https://api.humanloop.com/v5/flows")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List Versions of a Flow ```http GET https://api.humanloop.com/v5/flows/{id}/versions ``` Get a list of all the versions of a Flow. ## Path Parameters - Id (required): Unique identifier for Flow. ## Query Parameters - EvaluatorAggregates (optional): Whether to include Evaluator aggregate results for the versions in the response ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell List versions curl https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/versions \ -H "X-API-KEY: " ``` ```python List versions from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.list_versions( id="fl_6o701g4jmcanPVHxdqD0O", ) ``` ```typescript List versions import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.listVersions("fl_6o701g4jmcanPVHxdqD0O", { status: "committed" }); ``` ```go List versions package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/versions" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby List versions require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/versions") http = Net::HTTP.new(url.host, url.port) 
http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java List versions HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/versions") .header("X-API-KEY", "") .asString(); ``` ```php List versions <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/versions', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp List versions var client = new RestClient("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/versions"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift List versions import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/versions")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as?
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/flows/:id/versions \ -H "X-API-KEY: " \ -d evaluator_aggregates=true ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.list_versions( id="fl_6o701g4jmcanPVHxdqD0O", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.listVersions("fl_6o701g4jmcanPVHxdqD0O", { status: "committed" }); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/%3Aid/versions?evaluator_aggregates=true" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/%3Aid/versions?evaluator_aggregates=true") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/flows/%3Aid/versions?evaluator_aggregates=true") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/flows/%3Aid/versions?evaluator_aggregates=true', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/flows/%3Aid/versions?evaluator_aggregates=true"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string:
"https://api.humanloop.com/v5/flows/%3Aid/versions?evaluator_aggregates=true")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Delete Flow Version ```http DELETE https://api.humanloop.com/v5/flows/{id}/versions/{version_id} ``` Delete a version of the Flow. ## Path Parameters - Id (required): Unique identifier for Flow. - VersionId (required): Unique identifier for the specific version of the Flow. ## Response Body - 422: Validation Error ## Examples ```shell curl -X DELETE https://api.humanloop.com/v5/flows/id/versions/version_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.delete_flow_version( id="id", version_id="version_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.deleteFlowVersion("id", "version_id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/id/versions/version_id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/id/versions/version_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = 
Unirest.delete("https://api.humanloop.com/v5/flows/id/versions/version_id") .header("X-API-KEY", "") .asString(); ``` ```php request('DELETE', 'https://api.humanloop.com/v5/flows/id/versions/version_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/flows/id/versions/version_id"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/id/versions/version_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X DELETE https://api.humanloop.com/v5/flows/:id/versions/:version_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.delete_flow_version( id="id", version_id="version_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.deleteFlowVersion("id", "version_id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/%3Aid/versions/%3Aversion_id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/%3Aid/versions/%3Aversion_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.delete("https://api.humanloop.com/v5/flows/%3Aid/versions/%3Aversion_id") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/flows/%3Aid/versions/%3Aversion_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/flows/%3Aid/versions/%3Aversion_id"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/%3Aid/versions/%3Aversion_id")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Update Flow Version ```http PATCH https://api.humanloop.com/v5/flows/{id}/versions/{version_id} Content-Type: application/json ``` Update the name or description of the Flow version. ## Path Parameters - Id (required): Unique identifier for Flow. - VersionId (required): Unique identifier for the specific version of the Flow. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl -X PATCH https://api.humanloop.com/v5/flows/id/versions/version_id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.update_flow_version( id="id", version_id="version_id", ) ``` ```javascript const url = 'https://api.humanloop.com/v5/flows/id/versions/version_id'; const options = { method: 'PATCH', headers: {'X-API-KEY': '', 'Content-Type': 'application/json'}, body: '{}' }; try { const response = await fetch(url, options); const data = await response.json(); console.log(data); } catch (error) { console.error(error); } ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/id/versions/version_id" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url 
= URI("https://api.humanloop.com/v5/flows/id/versions/version_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.patch("https://api.humanloop.com/v5/flows/id/versions/version_id") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php request('PATCH', 'https://api.humanloop.com/v5/flows/id/versions/version_id', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/flows/id/versions/version_id"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/id/versions/version_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X PATCH https://api.humanloop.com/v5/flows/:id/versions/:version_id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.update_flow_version( id="id", version_id="version_id", ) ``` ```javascript const url = 'https://api.humanloop.com/v5/flows/%3Aid/versions/%3Aversion_id'; const options = { method: 'PATCH', headers: {'X-API-KEY': '', 'Content-Type': 'application/json'}, body: '{}' }; try { const response = await fetch(url, options); const data = await response.json(); console.log(data); } catch (error) { console.error(error); } ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/%3Aid/versions/%3Aversion_id" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/%3Aid/versions/%3Aversion_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.patch("https://api.humanloop.com/v5/flows/%3Aid/versions/%3Aversion_id") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('PATCH', 'https://api.humanloop.com/v5/flows/%3Aid/versions/%3Aversion_id', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp 
var client = new RestClient("https://api.humanloop.com/v5/flows/%3Aid/versions/%3Aversion_id"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/%3Aid/versions/%3Aversion_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Deploy Flow ```http POST https://api.humanloop.com/v5/flows/{id}/environments/{environment_id} ``` Deploy Flow to an Environment. Set the deployed version for the specified Environment. This Flow will be used for calls made to the Flow in this Environment. ## Path Parameters - Id (required): Unique identifier for Flow. - EnvironmentId (required): Unique identifier for the Environment to deploy the Version to. ## Query Parameters - VersionId (required): Unique identifier for the specific version of the Flow. 
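The Deploy Flow endpoint combines two path parameters with a required `version_id` query parameter. As a minimal sketch of how such a request URL is assembled (the helper name is ours, not part of the SDK, which handles this for you):

```python
from urllib.parse import quote, urlencode

BASE = "https://api.humanloop.com/v5"

def deploy_flow_url(flow_id: str, environment_id: str, version_id: str) -> str:
    """Build the Deploy Flow URL: POST /flows/{id}/environments/{environment_id}?version_id=..."""
    path = f"{BASE}/flows/{quote(flow_id, safe='')}/environments/{quote(environment_id, safe='')}"
    return f"{path}?{urlencode({'version_id': version_id})}"

print(deploy_flow_url("fl_6o701g4jmcanPVHxdqD0O", "staging", "flv_6o701g4jmcanPVHxdqD0O"))
```

The output matches the URL used in the curl example below.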
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Deploy curl -X POST "https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments/staging?version_id=flv_6o701g4jmcanPVHxdqD0O" \ -H "X-API-KEY: " ``` ```python Deploy from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.set_deployment( id="fl_6o701g4jmcanPVHxdqD0O", environment_id="staging", version_id="flv_6o701g4jmcanPVHxdqD0O", ) ``` ```typescript Deploy import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.setDeployment("fl_6o701g4jmcanPVHxdqD0O", "staging", { versionId: "flv_6o701g4jmcanPVHxdqD0O" }); ``` ```go Deploy package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments/staging?version_id=flv_6o701g4jmcanPVHxdqD0O" req, _ := http.NewRequest("POST", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Deploy require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments/staging?version_id=flv_6o701g4jmcanPVHxdqD0O") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java Deploy HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments/staging?version_id=flv_6o701g4jmcanPVHxdqD0O") .header("X-API-KEY", "") .asString(); ``` ```php Deploy <?php $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 'https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments/staging?version_id=flv_6o701g4jmcanPVHxdqD0O', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Deploy var client = new 
RestClient("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments/staging?version_id=flv_6o701g4jmcanPVHxdqD0O"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift Deploy import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments/staging?version_id=flv_6o701g4jmcanPVHxdqD0O")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST "https://api.humanloop.com/v5/flows/:id/environments/:environment_id?version_id=string" \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.set_deployment( id="fl_6o701g4jmcanPVHxdqD0O", environment_id="staging", version_id="flv_6o701g4jmcanPVHxdqD0O", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.setDeployment("fl_6o701g4jmcanPVHxdqD0O", "staging", { versionId: "flv_6o701g4jmcanPVHxdqD0O" }); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/%3Aid/environments/%3Aenvironment_id?version_id=string" req, _ := http.NewRequest("POST", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = 
URI("https://api.humanloop.com/v5/flows/%3Aid/environments/%3Aenvironment_id?version_id=string") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/flows/%3Aid/environments/%3Aenvironment_id?version_id=string") .header("X-API-KEY", "") .asString(); ``` ```php request('POST', 'https://api.humanloop.com/v5/flows/%3Aid/environments/%3Aenvironment_id?version_id=string', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/flows/%3Aid/environments/%3Aenvironment_id?version_id=string"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/%3Aid/environments/%3Aenvironment_id?version_id=string")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Remove Deployment ```http DELETE https://api.humanloop.com/v5/flows/{id}/environments/{environment_id} ``` Remove deployed Flow from the Environment. Remove the deployed version for the specified Environment. This Flow will no longer be used for calls made to the Flow in this Environment. ## Path Parameters - Id (required): Unique identifier for Flow. 
- EnvironmentId (required): Unique identifier for the Environment to remove the deployment from. ## Response Body - 422: Validation Error ## Examples ```shell Delete environment curl -X DELETE https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments/staging \ -H "X-API-KEY: " ``` ```python Delete environment from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.remove_deployment( id="fl_6o701g4jmcanPVHxdqD0O", environment_id="staging", ) ``` ```typescript Delete environment import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.removeDeployment("fl_6o701g4jmcanPVHxdqD0O", "staging"); ``` ```go Delete environment package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments/staging" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Delete environment require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments/staging") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java Delete environment HttpResponse<String> response = Unirest.delete("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments/staging") .header("X-API-KEY", "") .asString(); ``` ```php Delete environment <?php $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments/staging', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Delete environment var client = new RestClient("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments/staging"); var 
request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift Delete environment import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments/staging")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X DELETE https://api.humanloop.com/v5/flows/:id/environments/:environment_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.remove_deployment( id="fl_6o701g4jmcanPVHxdqD0O", environment_id="staging", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.removeDeployment("fl_6o701g4jmcanPVHxdqD0O", "staging"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/%3Aid/environments/%3Aenvironment_id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/%3Aid/environments/%3Aenvironment_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = 
Unirest.delete("https://api.humanloop.com/v5/flows/%3Aid/environments/%3Aenvironment_id") .header("X-API-KEY", "") .asString(); ``` ```php request('DELETE', 'https://api.humanloop.com/v5/flows/%3Aid/environments/%3Aenvironment_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/flows/%3Aid/environments/%3Aenvironment_id"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/%3Aid/environments/%3Aenvironment_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List a Flow's Environments ```http GET https://api.humanloop.com/v5/flows/{id}/environments ``` List all Environments and their deployed versions for the Flow. ## Path Parameters - Id (required): Unique identifier for Flow. 
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell List environments curl https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments \ -H "X-API-KEY: " ``` ```python List environments from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.list_environments( id="fl_6o701g4jmcanPVHxdqD0O", ) ``` ```typescript List environments import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.listEnvironments("fl_6o701g4jmcanPVHxdqD0O"); ``` ```go List environments package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby List environments require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java List environments HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments") .header("X-API-KEY", "") .asString(); ``` ```php List environments <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp List environments var client = new RestClient("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift List environments import Foundation let headers = 
["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/environments")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl https://api.humanloop.com/v5/flows/:id/environments \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.list_environments( id="fl_6o701g4jmcanPVHxdqD0O", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.listEnvironments("fl_6o701g4jmcanPVHxdqD0O"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/%3Aid/environments" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/%3Aid/environments") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/flows/%3Aid/environments") .header("X-API-KEY", "") .asString(); ``` ```php request('GET', 'https://api.humanloop.com/v5/flows/%3Aid/environments', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new 
RestClient("https://api.humanloop.com/v5/flows/%3Aid/environments"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/%3Aid/environments")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Update Monitoring ```http POST https://api.humanloop.com/v5/flows/{id}/evaluators Content-Type: application/json ``` Activate and deactivate Evaluators for monitoring the Flow. An activated Evaluator will automatically be run on all new "completed" Logs within the Flow for monitoring purposes. 
## Path Parameters - Id (required) ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Add evaluator curl -X POST https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/evaluators \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "activate": [ { "evaluator_version_id": "evv_1abc4308abd" } ] }' ``` ```python Add evaluator from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.update_monitoring( id="fl_6o701g4jmcanPVHxdqD0O", activate=[{"evaluator_version_id": "evv_1abc4308abd"}], ) ``` ```typescript Add evaluator import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.updateMonitoring("fl_6o701g4jmcanPVHxdqD0O", { activate: [{ evaluatorVersionId: "evv_1abc4308abd" }] }); ``` ```go Add evaluator package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/evaluators" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Add evaluator require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/evaluators") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java Add evaluator HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/evaluators") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php Add evaluator <?php $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 
'https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/evaluators', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Add evaluator var client = new RestClient("https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/evaluators"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift Add evaluator import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/fl_6o701g4jmcanPVHxdqD0O/evaluators")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST https://api.humanloop.com/v5/flows/:id/evaluators \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.flows.update_monitoring( id="fl_6o701g4jmcanPVHxdqD0O", activate=[{"evaluator_version_id": "evv_1abc4308abd"}], ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.flows.updateMonitoring("fl_6o701g4jmcanPVHxdqD0O", { activate: [{ evaluatorVersionId: "evv_1abc4308abd" }] }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/flows/%3Aid/evaluators" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/flows/%3Aid/evaluators") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/flows/%3Aid/evaluators") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php request('POST', 'https://api.humanloop.com/v5/flows/%3Aid/evaluators', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/flows/%3Aid/evaluators"); var request = new 
RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/flows/%3Aid/evaluators")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List ```http GET https://api.humanloop.com/v5/directories ``` Retrieve a list of all Directories. 
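Clients often turn the flat List response into a lookup table keyed by id. A minimal client-side sketch in pure Python (the `id` field and its `dir_` prefix are taken from the Get endpoint below; the function assumes plain dicts, as returned by the raw HTTP API):

```python
def index_directories(directories):
    """Build an id -> Directory lookup from a List response.

    Directories are addressed by ids with a `dir_` prefix, so an
    id-keyed dict is a convenient client-side index.
    """
    return {d["id"]: d for d in directories}

# Live usage against the API (requires an API key) would pass the
# decoded JSON array from GET /v5/directories to index_directories.
```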
## Response Body

- 200: Successful Response
- 422: Validation Error

## Examples

```shell
curl https://api.humanloop.com/v5/directories \
     -H "X-API-KEY: "
```

```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.list()
```

```typescript
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.directories.list();
```

```go
package main

import (
	"fmt"
	"net/http"
	"io"
)

func main() {
	url := "https://api.humanloop.com/v5/directories"

	req, _ := http.NewRequest("GET", url, nil)
	req.Header.Add("X-API-KEY", "")

	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)

	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/directories")

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Get.new(url)
request["X-API-KEY"] = ''

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/directories")
  .header("X-API-KEY", "")
  .asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('GET', 'https://api.humanloop.com/v5/directories', [
  'headers' => [
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://api.humanloop.com/v5/directories");
var request = new RestRequest(Method.GET);
request.AddHeader("X-API-KEY", "");
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = ["X-API-KEY": ""]

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/directories")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "GET"
request.allHTTPHeaderFields = headers

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})
dataTask.resume()
```

# Create

```http
POST https://api.humanloop.com/v5/directories
Content-Type: application/json
```

Creates a Directory.

## Response Body

- 201: Successful Response
- 422: Validation Error

## Examples

```shell
curl -X POST https://api.humanloop.com/v5/directories \
     -H "X-API-KEY: " \
     -H "Content-Type: application/json" \
     -d '{}'
```

```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.create()
```

```typescript
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.directories.create();
```

```go
package main

import (
	"fmt"
	"strings"
	"net/http"
	"io"
)

func main() {
	url := "https://api.humanloop.com/v5/directories"

	payload := strings.NewReader("{}")

	req, _ := http.NewRequest("POST", url, payload)
	req.Header.Add("X-API-KEY", "")
	req.Header.Add("Content-Type", "application/json")

	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)

	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/directories")

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Post.new(url)
request["X-API-KEY"] = ''
request["Content-Type"] = 'application/json'
request.body = "{}"

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/directories")
  .header("X-API-KEY", "")
  .header("Content-Type", "application/json")
  .body("{}")
  .asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('POST', 'https://api.humanloop.com/v5/directories', [
  'body' => '{}',
  'headers' => [
    'Content-Type' => 'application/json',
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://api.humanloop.com/v5/directories");
var request = new RestRequest(Method.POST);
request.AddHeader("X-API-KEY", "");
request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = [
  "X-API-KEY": "",
  "Content-Type": "application/json"
]
let parameters = [:] as [String : Any]

let postData = try! JSONSerialization.data(withJSONObject: parameters, options: [])

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/directories")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "POST"
request.allHTTPHeaderFields = headers
request.httpBody = postData as Data

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})
dataTask.resume()
```

# Get

```http
GET https://api.humanloop.com/v5/directories/{id}
```

Fetches a directory by ID.

## Path Parameters

- Id (required): String ID of directory. Starts with `dir_`.

## Response Body

- 200: Successful Response
- 422: Validation Error

## Examples

```shell
curl https://api.humanloop.com/v5/directories/id \
     -H "X-API-KEY: "
```

```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.get(
    id="id",
)
```

```typescript
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.directories.get("id");
```

```go
package main

import (
	"fmt"
	"net/http"
	"io"
)

func main() {
	url := "https://api.humanloop.com/v5/directories/id"

	req, _ := http.NewRequest("GET", url, nil)
	req.Header.Add("X-API-KEY", "")

	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)

	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/directories/id")

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Get.new(url)
request["X-API-KEY"] = ''

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/directories/id")
  .header("X-API-KEY", "")
  .asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('GET', 'https://api.humanloop.com/v5/directories/id', [
  'headers' => [
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://api.humanloop.com/v5/directories/id");
var request = new RestRequest(Method.GET);
request.AddHeader("X-API-KEY", "");
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = ["X-API-KEY": ""]

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/directories/id")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "GET"
request.allHTTPHeaderFields = headers

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})
dataTask.resume()
```

# Delete

```http
DELETE https://api.humanloop.com/v5/directories/{id}
```

Delete the Directory with the given ID. The Directory must be empty (i.e. contain no Directories or Files).

## Path Parameters

- Id (required): Unique identifier for Directory. Starts with `dir_`.

## Response Body

- 422: Validation Error

## Examples

```shell
curl -X DELETE https://api.humanloop.com/v5/directories/id \
     -H "X-API-KEY: "
```

```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.delete(
    id="id",
)
```

```typescript
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.directories.delete("id");
```

```go
package main

import (
	"fmt"
	"net/http"
	"io"
)

func main() {
	url := "https://api.humanloop.com/v5/directories/id"

	req, _ := http.NewRequest("DELETE", url, nil)
	req.Header.Add("X-API-KEY", "")

	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)

	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/directories/id")

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Delete.new(url)
request["X-API-KEY"] = ''

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.delete("https://api.humanloop.com/v5/directories/id")
  .header("X-API-KEY", "")
  .asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('DELETE', 'https://api.humanloop.com/v5/directories/id', [
  'headers' => [
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://api.humanloop.com/v5/directories/id");
var request = new RestRequest(Method.DELETE);
request.AddHeader("X-API-KEY", "");
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = ["X-API-KEY": ""]

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/directories/id")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "DELETE"
request.allHTTPHeaderFields = headers

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})
dataTask.resume()
```

# Update

```http
PATCH https://api.humanloop.com/v5/directories/{id}
Content-Type: application/json
```

Update the Directory with the given ID.

## Path Parameters

- Id (required): Unique identifier for Directory. Starts with `dir_`.
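Get, Delete and Update all address a Directory by an id that starts with `dir_`, and a malformed id comes back as a 422 Validation Error. A small client-side guard (a sketch, not part of the SDK) fails fast before the round trip:

```python
def validate_directory_id(directory_id: str) -> str:
    # Directory ids are documented as starting with "dir_";
    # reject anything else locally instead of waiting for the
    # API's 422 Validation Error.
    if not directory_id.startswith("dir_"):
        raise ValueError(f"not a Directory id: {directory_id!r}")
    return directory_id

# e.g. guard an update call:
#   client.directories.update(id=validate_directory_id(some_id))
```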
## Response Body

- 200: Successful Response
- 422: Validation Error

## Examples

```shell
curl -X PATCH https://api.humanloop.com/v5/directories/id \
     -H "X-API-KEY: " \
     -H "Content-Type: application/json" \
     -d '{}'
```

```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.update(
    id="id",
)
```

```typescript
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.directories.update("id");
```

```go
package main

import (
	"fmt"
	"strings"
	"net/http"
	"io"
)

func main() {
	url := "https://api.humanloop.com/v5/directories/id"

	payload := strings.NewReader("{}")

	req, _ := http.NewRequest("PATCH", url, payload)
	req.Header.Add("X-API-KEY", "")
	req.Header.Add("Content-Type", "application/json")

	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)

	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/directories/id")

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Patch.new(url)
request["X-API-KEY"] = ''
request["Content-Type"] = 'application/json'
request.body = "{}"

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.patch("https://api.humanloop.com/v5/directories/id")
  .header("X-API-KEY", "")
  .header("Content-Type", "application/json")
  .body("{}")
  .asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('PATCH', 'https://api.humanloop.com/v5/directories/id', [
  'body' => '{}',
  'headers' => [
    'Content-Type' => 'application/json',
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://api.humanloop.com/v5/directories/id");
var request = new RestRequest(Method.PATCH);
request.AddHeader("X-API-KEY", "");
request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = [
  "X-API-KEY": "",
  "Content-Type": "application/json"
]
let parameters = [:] as [String : Any]

let postData = try! JSONSerialization.data(withJSONObject: parameters, options: [])

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/directories/id")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "PATCH"
request.allHTTPHeaderFields = headers
request.httpBody = postData as Data

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})
dataTask.resume()
```

# List Files

```http
GET https://api.humanloop.com/v5/files
```

Get a paginated list of files.

## Query Parameters

- Page (optional): Page offset for pagination.
- Size (optional): Page size for pagination. Number of files to fetch.
- Name (optional): Case-insensitive filter for file name.
- Template (optional): Filter to include only template files.
- Type (optional): List of file types to filter for.
- Environment (optional): Case-sensitive filter for files with a deployment in the specified environment. Requires the environment name.
- SortBy (optional): Field to sort files by.
- Order (optional): Direction to sort by.

## Response Body

- 200: Successful Response
- 422: Validation Error

## Examples

```shell
curl https://api.humanloop.com/v5/files \
     -H "X-API-KEY: "
```

```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.files.list_files()
```

```typescript
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.files.listFiles();
```

```go
package main

import (
	"fmt"
	"net/http"
	"io"
)

func main() {
	url := "https://api.humanloop.com/v5/files"

	req, _ := http.NewRequest("GET", url, nil)
	req.Header.Add("X-API-KEY", "")

	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)

	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/files")

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Get.new(url)
request["X-API-KEY"] = ''

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/files")
  .header("X-API-KEY", "")
  .asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('GET', 'https://api.humanloop.com/v5/files', [
  'headers' => [
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://api.humanloop.com/v5/files");
var request = new RestRequest(Method.GET);
request.AddHeader("X-API-KEY", "");
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = ["X-API-KEY": ""]

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/files")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "GET"
request.allHTTPHeaderFields = headers

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})
dataTask.resume()
```

# Retrieve by path

```http
POST https://api.humanloop.com/v5/files/retrieve-by-path
Content-Type: application/json
```

Retrieve a File by path.

## Query Parameters

- Environment (optional): Name of the Environment to retrieve a deployed Version from.

## Response Body

- 200: Successful Response
- 422: Validation Error

## Examples

```shell
curl -X POST https://api.humanloop.com/v5/files/retrieve-by-path \
     -H "X-API-KEY: " \
     -H "Content-Type: application/json" \
     -d '{
  "path": "path"
}'
```

```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.files.retrieve_by_path(
    path="path",
)
```

```typescript
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.files.retrieveByPath({ path: "path" });
```

```go
package main

import (
	"fmt"
	"strings"
	"net/http"
	"io"
)

func main() {
	url := "https://api.humanloop.com/v5/files/retrieve-by-path"

	payload := strings.NewReader("{\n  \"path\": \"path\"\n}")

	req, _ := http.NewRequest("POST", url, payload)
	req.Header.Add("X-API-KEY", "")
	req.Header.Add("Content-Type", "application/json")

	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)

	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/files/retrieve-by-path")

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Post.new(url)
request["X-API-KEY"] = ''
request["Content-Type"] = 'application/json'
request.body = "{\n  \"path\": \"path\"\n}"

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/files/retrieve-by-path")
  .header("X-API-KEY", "")
  .header("Content-Type", "application/json")
  .body("{\n  \"path\": \"path\"\n}")
  .asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('POST', 'https://api.humanloop.com/v5/files/retrieve-by-path', [
  'body' => '{
  "path": "path"
}',
  'headers' => [
    'Content-Type' => 'application/json',
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://api.humanloop.com/v5/files/retrieve-by-path");
var request = new RestRequest(Method.POST);
request.AddHeader("X-API-KEY", "");
request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{\n  \"path\": \"path\"\n}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = [
  "X-API-KEY": "",
  "Content-Type": "application/json"
]
let parameters = ["path": "path"] as [String : Any]

let postData = try! JSONSerialization.data(withJSONObject: parameters, options: [])

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/files/retrieve-by-path")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "POST"
request.allHTTPHeaderFields = headers
request.httpBody = postData as Data

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})
dataTask.resume()
```

```shell
curl -X POST "https://api.humanloop.com/v5/files/retrieve-by-path?environment=string" \
     -H "X-API-KEY: " \
     -H "Content-Type: application/json" \
     -d '{
  "path": "string"
}'
```

```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.files.retrieve_by_path(
    path="path",
)
```

```typescript
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.files.retrieveByPath({ path: "path" });
```

```go
package main

import (
	"fmt"
	"strings"
	"net/http"
	"io"
)

func main() {
	url := "https://api.humanloop.com/v5/files/retrieve-by-path?environment=string"

	payload := strings.NewReader("{\n  \"path\": \"string\"\n}")

	req, _ := http.NewRequest("POST", url, payload)
	req.Header.Add("X-API-KEY", "")
	req.Header.Add("Content-Type", "application/json")

	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)

	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/files/retrieve-by-path?environment=string")

http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Post.new(url)
request["X-API-KEY"] = ''
request["Content-Type"] = 'application/json'
request.body = "{\n  \"path\": \"string\"\n}"

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/files/retrieve-by-path?environment=string")
  .header("X-API-KEY", "")
  .header("Content-Type", "application/json")
  .body("{\n  \"path\": \"string\"\n}")
  .asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();

$response = $client->request('POST', 'https://api.humanloop.com/v5/files/retrieve-by-path?environment=string', [
  'body' => '{
  "path": "string"
}',
  'headers' => [
    'Content-Type' => 'application/json',
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp
var client = new
RestClient("https://api.humanloop.com/v5/files/retrieve-by-path?environment=string"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{\n \"path\": \"string\"\n}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = ["path": "string"] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/files/retrieve-by-path?environment=string")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List Evaluations ```http GET https://api.humanloop.com/v5/evaluations ``` Retrieve a list of Evaluations for the specified File. ## Query Parameters - FileId (required): Filter by File ID. Only Evaluations for the specified File will be returned. - Page (optional): Page number for pagination. - Size (optional): Page size for pagination. Number of Evaluations to fetch. 
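The `file_id`, `page`, and `size` query parameters above can be combined into a small page-walking helper. A minimal sketch using only Python's standard library; the starting page number and the response field name (`records`) are assumptions, so adjust them to the actual response shape:

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "https://api.humanloop.com/v5/evaluations"


def page_url(file_id: str, page: int, size: int) -> str:
    """Build the URL for one page of the List Evaluations endpoint."""
    query = urllib.parse.urlencode({"file_id": file_id, "page": page, "size": size})
    return f"{BASE_URL}?{query}"


def list_all_evaluations(file_id: str, api_key: str, size: int = 50):
    """Yield every Evaluation for a File, walking pages until one comes
    back empty (assumed termination condition and 1-based page numbering)."""
    page = 1
    while True:
        req = urllib.request.Request(
            page_url(file_id, page, size), headers={"X-API-KEY": api_key}
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        records = body.get("records", [])  # assumed response field name
        if not records:
            return
        yield from records
        page += 1
```

In practice the official SDKs handle this iteration for you (see the `iter_pages` / `for await` examples below); the sketch only shows how the raw query parameters compose.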
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell List evaluations for file curl -G https://api.humanloop.com/v5/evaluations \ -H "X-API-KEY: " \ -d file_id=pr_30gco7dx6JDq4200GVOHa \ -d size=1 ``` ```python List evaluations for file from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) response = client.evaluations.list( file_id="pr_30gco7dx6JDq4200GVOHa", size=1, ) for item in response: print(item) # alternatively, you can paginate page-by-page for page in response.iter_pages(): print(page) ``` ```typescript List evaluations for file import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); const response = await client.evaluations.list({ fileId: "pr_30gco7dx6JDq4200GVOHa", size: 1 }); for await (const item of response) { console.log(item); } // Or you can manually iterate page-by-page let page = await client.evaluations.list({ fileId: "pr_30gco7dx6JDq4200GVOHa", size: 1 }); while (page.hasNextPage()) { page = await page.getNextPage(); } ``` ```go List evaluations for file package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations?file_id=pr_30gco7dx6JDq4200GVOHa&size=1" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby List evaluations for file require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations?file_id=pr_30gco7dx6JDq4200GVOHa&size=1") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java List evaluations for file HttpResponse response = Unirest.get("https://api.humanloop.com/v5/evaluations?file_id=pr_30gco7dx6JDq4200GVOHa&size=1") .header("X-API-KEY", "") .asString();
``` ```php List evaluations for file request('GET', 'https://api.humanloop.com/v5/evaluations?file_id=pr_30gco7dx6JDq4200GVOHa&size=1', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp List evaluations for file var client = new RestClient("https://api.humanloop.com/v5/evaluations?file_id=pr_30gco7dx6JDq4200GVOHa&size=1"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift List evaluations for file import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations?file_id=pr_30gco7dx6JDq4200GVOHa&size=1")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/evaluations \ -H "X-API-KEY: " \ -d file_id=string \ -d page=0 \ -d size=0 ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) response = client.evaluations.list( file_id="pr_30gco7dx6JDq4200GVOHa", size=1, ) for item in response: print(item) # alternatively, you can paginate page-by-page for page in response.iter_pages(): print(page) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); const response = await client.evaluations.list({ fileId: "pr_30gco7dx6JDq4200GVOHa", size: 1 }); for await (const item of response) { console.log(item); } // Or you can manually iterate page-by-page let page = await client.evaluations.list({ fileId: "pr_30gco7dx6JDq4200GVOHa", size: 1 }); while (page.hasNextPage()) { page = await page.getNextPage(); } ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations?file_id=string&page=0&size=0" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations?file_id=string&page=0&size=0") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/evaluations?file_id=string&page=0&size=0") .header("X-API-KEY", "") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/evaluations?file_id=string&page=0&size=0', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new
RestClient("https://api.humanloop.com/v5/evaluations?file_id=string&page=0&size=0"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations?file_id=string&page=0&size=0")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Create Evaluation ```http POST https://api.humanloop.com/v5/evaluations Content-Type: application/json ``` Create an Evaluation. Create a new Evaluation by specifying the File to evaluate, and a name for the Evaluation. You can then add Runs to this Evaluation using the `POST /evaluations/{id}/runs` endpoint. 
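The create-then-add-Runs flow described above can be sketched with Python's standard library. The request body field names (`file` and `name`) are assumptions based on the endpoint description, not the confirmed schema:

```python
import json
import urllib.request

API_ROOT = "https://api.humanloop.com/v5"


def create_evaluation_request(api_key: str, file_id: str, name: str) -> urllib.request.Request:
    """Build the POST /evaluations request described above.
    Body field names are assumptions; check the request schema."""
    body = json.dumps({"file": {"id": file_id}, "name": name}).encode()
    return urllib.request.Request(
        f"{API_ROOT}/evaluations",
        data=body,
        headers={"X-API-KEY": api_key, "Content-Type": "application/json"},
        method="POST",
    )


def add_runs_url(evaluation_id: str) -> str:
    """Runs are then added with POST /evaluations/{id}/runs, as noted above."""
    return f"{API_ROOT}/evaluations/{evaluation_id}/runs"
```

Sending the request is then `urllib.request.urlopen(create_evaluation_request(...))`; the SDK examples below wrap the same call.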
## Response Body - 201: Successful Response - 422: Validation Error ## Examples ```shell curl -X POST https://api.humanloop.com/v5/evaluations \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "evaluators": [ { "version_id": "version_id" } ] }' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.create( evaluators=[{"version_id": "version_id"}], ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.create({ evaluators: [{ versionId: "version_id" }] }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/evaluations") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 'https://api.humanloop.com/v5/evaluations', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type",
"application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST https://api.humanloop.com/v5/evaluations \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "evaluators": [ { "version_id": "string" } ] }' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.create( evaluators=[{"version_id": "version_id"}], ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.create({ evaluators: [{ versionId: "version_id" }] }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url =
URI("https://api.humanloop.com/v5/evaluations") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/evaluations") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php request('POST', 'https://api.humanloop.com/v5/evaluations', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Add Evaluators ```http POST https://api.humanloop.com/v5/evaluations/{id}/evaluators Content-Type: application/json ``` Add Evaluators to an Evaluation. 
The Evaluators will be run on the Logs generated for the Evaluation. ## Path Parameters - Id (required): Unique identifier for Evaluation. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl -X POST https://api.humanloop.com/v5/evaluations/id/evaluators \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "evaluators": [ { "version_id": "version_id" } ] }' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.add_evaluators( id="id", evaluators=[{"version_id": "version_id"}], ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.addEvaluators("id", { evaluators: [{ versionId: "version_id" }] }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/id/evaluators" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/id/evaluators") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/evaluations/id/evaluators") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 'https://api.humanloop.com/v5/evaluations/id/evaluators', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ],
]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/id/evaluators"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [:] as [String : Any] let postData = try! JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/id/evaluators")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as?
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST https://api.humanloop.com/v5/evaluations/:id/evaluators \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "evaluators": [ { "version_id": "string" } ] }' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.add_evaluators( id="id", evaluators=[{"version_id": "version_id"}], ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.addEvaluators("id", { evaluators: [{ versionId: "version_id" }] }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/%3Aid/evaluators" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/%3Aid/evaluators") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/evaluations/%3Aid/evaluators") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php request('POST', 'https://api.humanloop.com/v5/evaluations/%3Aid/evaluators', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/%3Aid/evaluators"); var request = 
new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/%3Aid/evaluators")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Remove Evaluator ```http DELETE https://api.humanloop.com/v5/evaluations/{id}/evaluators/{evaluator_version_id} ``` Remove an Evaluator from an Evaluation. The Evaluator will no longer be run on the Logs in the Evaluation. ## Path Parameters - Id (required): Unique identifier for Evaluation. - EvaluatorVersionId (required): Unique identifier for Evaluator Version. 
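The two path parameters above slot directly into the URL. A small sketch using Python's standard library; `urllib.parse.quote` guards against characters that would otherwise break the path:

```python
from urllib.parse import quote

API_ROOT = "https://api.humanloop.com/v5"


def remove_evaluator_url(evaluation_id: str, evaluator_version_id: str) -> str:
    """Build the DELETE /evaluations/{id}/evaluators/{evaluator_version_id} URL."""
    return (
        f"{API_ROOT}/evaluations/{quote(evaluation_id, safe='')}"
        f"/evaluators/{quote(evaluator_version_id, safe='')}"
    )
```

Issue the request with the HTTP client of your choice and the `X-API-KEY` header, as the examples below do.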
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl -X DELETE https://api.humanloop.com/v5/evaluations/id/evaluators/evaluator_version_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.remove_evaluator( id="id", evaluator_version_id="evaluator_version_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.removeEvaluator("id", "evaluator_version_id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/id/evaluators/evaluator_version_id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/id/evaluators/evaluator_version_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.delete("https://api.humanloop.com/v5/evaluations/id/evaluators/evaluator_version_id") .header("X-API-KEY", "") .asString(); ``` ```php request('DELETE', 'https://api.humanloop.com/v5/evaluations/id/evaluators/evaluator_version_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/id/evaluators/evaluator_version_id"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: 
"https://api.humanloop.com/v5/evaluations/id/evaluators/evaluator_version_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X DELETE https://api.humanloop.com/v5/evaluations/:id/evaluators/:evaluator_version_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.remove_evaluator( id="id", evaluator_version_id="evaluator_version_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.removeEvaluator("id", "evaluator_version_id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/%3Aid/evaluators/%3Aevaluator_version_id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/%3Aid/evaluators/%3Aevaluator_version_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.delete("https://api.humanloop.com/v5/evaluations/%3Aid/evaluators/%3Aevaluator_version_id") .header("X-API-KEY", "") .asString(); ``` ```php request('DELETE', 'https://api.humanloop.com/v5/evaluations/%3Aid/evaluators/%3Aevaluator_version_id', 
[ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/%3Aid/evaluators/%3Aevaluator_version_id"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/%3Aid/evaluators/%3Aevaluator_version_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Get Evaluation ```http GET https://api.humanloop.com/v5/evaluations/{id} ``` Get an Evaluation. This includes the Evaluators associated with the Evaluation and metadata about the Evaluation, such as its name. To get the Runs associated with the Evaluation, use the `GET /evaluations/{id}/runs` endpoint. To retrieve stats for the Evaluation, use the `GET /evaluations/{id}/stats` endpoint. ## Path Parameters - Id (required): Unique identifier for Evaluation. 
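The description above names two companion endpoints (`GET /evaluations/{id}/runs` and `GET /evaluations/{id}/stats`) alongside the Evaluation itself. A minimal sketch that builds and fetches all three; the JSON parsing assumes each endpoint returns a JSON body:

```python
import json
import urllib.request

API_ROOT = "https://api.humanloop.com/v5"


def evaluation_urls(evaluation_id: str) -> dict:
    """URLs for an Evaluation, its Runs, and its stats,
    per the endpoints named in the description above."""
    base = f"{API_ROOT}/evaluations/{evaluation_id}"
    return {"evaluation": base, "runs": f"{base}/runs", "stats": f"{base}/stats"}


def fetch(url: str, api_key: str):
    """GET one of the URLs with the X-API-KEY header and parse the JSON body."""
    req = urllib.request.Request(url, headers={"X-API-KEY": api_key})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `fetch(evaluation_urls("ev_567yza")["stats"], api_key)` retrieves the stats payload.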
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Get evaluation curl https://api.humanloop.com/v5/evaluations/ev_567yza \ -H "X-API-KEY: " ``` ```python Get evaluation from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.get( id="ev_567yza", ) ``` ```typescript Get evaluation import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.get("ev_567yza"); ``` ```go Get evaluation package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/ev_567yza" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Get evaluation require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/ev_567yza") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java Get evaluation HttpResponse response = Unirest.get("https://api.humanloop.com/v5/evaluations/ev_567yza") .header("X-API-KEY", "") .asString(); ``` ```php Get evaluation request('GET', 'https://api.humanloop.com/v5/evaluations/ev_567yza', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Get evaluation var client = new RestClient("https://api.humanloop.com/v5/evaluations/ev_567yza"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift Get evaluation import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/ev_567yza")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl https://api.humanloop.com/v5/evaluations/:id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.get( id="ev_567yza", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.get("ev_567yza"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/%3Aid" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/%3Aid") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/evaluations/%3Aid") .header("X-API-KEY", "") .asString(); ``` ```php request('GET', 'https://api.humanloop.com/v5/evaluations/%3Aid', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/%3Aid"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] 
let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/%3Aid")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "GET"
request.allHTTPHeaderFields = headers

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

# Delete Evaluation

```http
DELETE https://api.humanloop.com/v5/evaluations/{id}
```

Delete an Evaluation. The Runs and Evaluators in the Evaluation will not be deleted.

## Path Parameters

- Id (required): Unique identifier for Evaluation.

## Response Body

- 422: Validation Error

## Examples

```shell Delete evaluation
curl -X DELETE https://api.humanloop.com/v5/evaluations/ev_567yza \
  -H "X-API-KEY: "
```

```python Delete evaluation
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.delete(
    id="ev_567yza",
)
```

```typescript Delete evaluation
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.evaluations.delete("ev_567yza");
```

```go Delete evaluation
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	url := "https://api.humanloop.com/v5/evaluations/ev_567yza"
	req, _ := http.NewRequest("DELETE", url, nil)
	req.Header.Add("X-API-KEY", "")
	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)
	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby Delete evaluation
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/evaluations/ev_567yza")
http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Delete.new(url)
request["X-API-KEY"] = ''

response = http.request(request)
puts response.read_body
```

```java Delete evaluation
HttpResponse<String> response = Unirest.delete("https://api.humanloop.com/v5/evaluations/ev_567yza")
  .header("X-API-KEY", "")
  .asString();
```

```php Delete evaluation
<?php

$client = new \GuzzleHttp\Client();
$response = $client->request('DELETE', 'https://api.humanloop.com/v5/evaluations/ev_567yza', [
  'headers' => [
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp Delete evaluation
var client = new RestClient("https://api.humanloop.com/v5/evaluations/ev_567yza");
var request = new RestRequest(Method.DELETE);
request.AddHeader("X-API-KEY", "");
IRestResponse response = client.Execute(request);
```

```swift Delete evaluation
import Foundation

let headers = ["X-API-KEY": ""]

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/ev_567yza")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "DELETE"
request.allHTTPHeaderFields = headers

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as?
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X DELETE https://api.humanloop.com/v5/evaluations/:id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.delete( id="ev_567yza", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.delete("ev_567yza"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/%3Aid" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/%3Aid") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.delete("https://api.humanloop.com/v5/evaluations/%3Aid") .header("X-API-KEY", "") .asString(); ``` ```php request('DELETE', 'https://api.humanloop.com/v5/evaluations/%3Aid', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/%3Aid"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/%3Aid")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List Runs for Evaluation ```http GET https://api.humanloop.com/v5/evaluations/{id}/runs ``` List all Runs for an Evaluation. ## Path Parameters - Id (required): Unique identifier for Evaluation. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl https://api.humanloop.com/v5/evaluations/id/runs \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.list_runs_for_evaluation( id="id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.listRunsForEvaluation("id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/id/runs" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/id/runs") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/evaluations/id/runs") .header("X-API-KEY", "") .asString(); ``` ```php request('GET', 'https://api.humanloop.com/v5/evaluations/id/runs', [ 'headers' => [ 'X-API-KEY' => '', ], 
]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/id/runs"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/id/runs")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl https://api.humanloop.com/v5/evaluations/:id/runs \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.list_runs_for_evaluation( id="id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.listRunsForEvaluation("id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/%3Aid/runs" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/%3Aid/runs") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/evaluations/%3Aid/runs") 
  .header("X-API-KEY", "")
  .asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();
$response = $client->request('GET', 'https://api.humanloop.com/v5/evaluations/%3Aid/runs', [
  'headers' => [
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://api.humanloop.com/v5/evaluations/%3Aid/runs");
var request = new RestRequest(Method.GET);
request.AddHeader("X-API-KEY", "");
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = ["X-API-KEY": ""]

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/%3Aid/runs")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "GET"
request.allHTTPHeaderFields = headers

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

# Create Run

```http
POST https://api.humanloop.com/v5/evaluations/{id}/runs
Content-Type: application/json
```

Create an Evaluation Run. Optionally specify the Dataset and version to be evaluated.

Humanloop will automatically start generating Logs and running Evaluators where `orchestrated=true`. If you are generating Logs yourself, you can set `orchestrated=false` and then generate and submit the required Logs via the API.

If `dataset` and `version` are provided, you can set `use_existing_logs=True` to reuse existing Logs and avoid generating new Logs unnecessarily. Logs that are associated with the specified Version and have a `source_datapoint_id` referencing a datapoint in the specified Dataset will be associated with the Run.

To track the progress of the Run, poll it using the `GET /evaluations/{id}/runs` endpoint and check its status.

## Path Parameters

- Id (required): Unique identifier for Evaluation.
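The polling workflow described above can be sketched with a small helper that repeatedly fetches the Run's status until it reaches a terminal state. This is an illustrative sketch, not part of the SDK: the `fetch_status` callable and the terminal status names are assumptions you would adapt to your own wrapper around `GET /evaluations/{id}/runs`.

```python
import time


def poll_run_status(fetch_status, interval=5.0, timeout=600.0,
                    terminal=frozenset({"completed", "cancelled", "failed"})):
    """Poll `fetch_status()` until it returns a terminal status.

    `fetch_status` is a hypothetical zero-argument callable that returns the
    Run's current status string -- e.g. a wrapper that calls the
    GET /evaluations/{id}/runs endpoint and reads the Run's status field.
    The terminal status names here are assumptions, not documented values.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_status()
        if status in terminal:
            return status
        if time.monotonic() > deadline:
            raise TimeoutError(f"Run still '{status}' after {timeout}s")
        time.sleep(interval)
```

For example, passing a `fetch_status` that wraps the SDK's `list_runs_for_evaluation` call and returns the relevant Run's status would block until that Run finishes, is cancelled, or fails.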
## Response Body

- 200: Successful Response
- 422: Validation Error

## Examples

```shell
curl -X POST https://api.humanloop.com/v5/evaluations/id/runs \
  -H "X-API-KEY: " \
  -H "Content-Type: application/json" \
  -d '{}'
```

```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.create_run(
    id="id",
)
```

```typescript
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.evaluations.createRun("id");
```

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	url := "https://api.humanloop.com/v5/evaluations/id/runs"
	payload := strings.NewReader("{}")
	req, _ := http.NewRequest("POST", url, payload)
	req.Header.Add("X-API-KEY", "")
	req.Header.Add("Content-Type", "application/json")
	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)
	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/evaluations/id/runs")
http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Post.new(url)
request["X-API-KEY"] = ''
request["Content-Type"] = 'application/json'
request.body = "{}"

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/evaluations/id/runs")
  .header("X-API-KEY", "")
  .header("Content-Type", "application/json")
  .body("{}")
  .asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();
$response = $client->request('POST', 'https://api.humanloop.com/v5/evaluations/id/runs', [
  'body' => '{}',
  'headers' => [
    'Content-Type' => 'application/json',
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://api.humanloop.com/v5/evaluations/id/runs");
var request = new RestRequest(Method.POST);
request.AddHeader("X-API-KEY", "");
request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = [
  "X-API-KEY": "",
  "Content-Type": "application/json"
]
let parameters = [:] as [String : Any]

let postData = try! JSONSerialization.data(withJSONObject: parameters, options: [])

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/id/runs")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "POST"
request.allHTTPHeaderFields = headers
request.httpBody = postData

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

```shell
curl -X POST https://api.humanloop.com/v5/evaluations/:id/runs \
  -H "X-API-KEY: " \
  -H "Content-Type: application/json" \
  -d '{}'
```

```python
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.create_run(
    id="id",
)
```

```typescript
import { HumanloopClient } from "humanloop";

const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" });
await client.evaluations.createRun("id");
```

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	url := "https://api.humanloop.com/v5/evaluations/%3Aid/runs"
	payload := strings.NewReader("{}")
	req, _ := http.NewRequest("POST", url, payload)
	req.Header.Add("X-API-KEY", "")
	req.Header.Add("Content-Type", "application/json")
	res, _ := http.DefaultClient.Do(req)
	defer res.Body.Close()
	body, _ := io.ReadAll(res.Body)
	fmt.Println(res)
	fmt.Println(string(body))
}
```

```ruby
require 'uri'
require 'net/http'

url = URI("https://api.humanloop.com/v5/evaluations/%3Aid/runs")
http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true

request = Net::HTTP::Post.new(url)
request["X-API-KEY"] = ''
request["Content-Type"] = 'application/json'
request.body = "{}"

response = http.request(request)
puts response.read_body
```

```java
HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/evaluations/%3Aid/runs")
  .header("X-API-KEY", "")
  .header("Content-Type", "application/json")
  .body("{}")
  .asString();
```

```php
<?php

$client = new \GuzzleHttp\Client();
$response = $client->request('POST', 'https://api.humanloop.com/v5/evaluations/%3Aid/runs', [
  'body' => '{}',
  'headers' => [
    'Content-Type' => 'application/json',
    'X-API-KEY' => '',
  ],
]);

echo $response->getBody();
```

```csharp
var client = new RestClient("https://api.humanloop.com/v5/evaluations/%3Aid/runs");
var request = new RestRequest(Method.POST);
request.AddHeader("X-API-KEY", "");
request.AddHeader("Content-Type", "application/json");
request.AddParameter("application/json", "{}", ParameterType.RequestBody);
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = [
  "X-API-KEY": "",
  "Content-Type": "application/json"
]
let parameters = [:] as [String : Any]

let postData = try! JSONSerialization.data(withJSONObject: parameters, options: [])

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/%3Aid/runs")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "POST"
request.allHTTPHeaderFields = headers
request.httpBody = postData

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

# Add Existing Run

```http
POST https://api.humanloop.com/v5/evaluations/{id}/runs/{run_id}
```

Add an existing Run to the specified Evaluation. This is useful if you want to compare the Runs in this Evaluation with a Run that already exists in another Evaluation.
## Path Parameters - Id (required): Unique identifier for Evaluation. - RunId (required): Unique identifier for Run. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl -X POST https://api.humanloop.com/v5/evaluations/id/runs/run_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.add_existing_run( id="id", run_id="run_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.addExistingRun("id", "run_id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/id/runs/run_id" req, _ := http.NewRequest("POST", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/id/runs/run_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/evaluations/id/runs/run_id") .header("X-API-KEY", "") .asString(); ``` ```php request('POST', 'https://api.humanloop.com/v5/evaluations/id/runs/run_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/id/runs/run_id"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/id/runs/run_id")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST https://api.humanloop.com/v5/evaluations/:id/runs/:run_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.add_existing_run( id="id", run_id="run_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.addExistingRun("id", "run_id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id" req, _ := http.NewRequest("POST", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id") .header("X-API-KEY", "") .asString(); ``` ```php request('POST', 'https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id"); var request = new RestRequest(Method.POST); 
request.AddHeader("X-API-KEY", "");
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = ["X-API-KEY": ""]

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "POST"
request.allHTTPHeaderFields = headers

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

# Remove Run

```http
DELETE https://api.humanloop.com/v5/evaluations/{id}/runs/{run_id}
```

Remove a Run from an Evaluation. The Logs and Versions used in the Run will not be deleted. If the Run is used in any other Evaluations, it remains available in those Evaluations.

## Path Parameters

- Id (required): Unique identifier for Evaluation.
- RunId (required): Unique identifier for Run.
## Response Body - 422: Validation Error ## Examples ```shell curl -X DELETE https://api.humanloop.com/v5/evaluations/id/runs/run_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.remove_run( id="id", run_id="run_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.removeRun("id", "run_id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/id/runs/run_id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/id/runs/run_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.delete("https://api.humanloop.com/v5/evaluations/id/runs/run_id") .header("X-API-KEY", "") .asString(); ``` ```php request('DELETE', 'https://api.humanloop.com/v5/evaluations/id/runs/run_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/id/runs/run_id"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/id/runs/run_id")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X DELETE https://api.humanloop.com/v5/evaluations/:id/runs/:run_id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.remove_run( id="id", run_id="run_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.removeRun("id", "run_id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.delete("https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id") .header("X-API-KEY", "") .asString(); ``` ```php request('DELETE', 'https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id"); var request = new RestRequest(Method.DELETE); 
request.AddHeader("X-API-KEY", "");
IRestResponse response = client.Execute(request);
```

```swift
import Foundation

let headers = ["X-API-KEY": ""]

let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id")! as URL,
                                  cachePolicy: .useProtocolCachePolicy,
                                  timeoutInterval: 10.0)
request.httpMethod = "DELETE"
request.allHTTPHeaderFields = headers

let session = URLSession.shared
let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in
  if (error != nil) {
    print(error as Any)
  } else {
    let httpResponse = response as? HTTPURLResponse
    print(httpResponse)
  }
})

dataTask.resume()
```

# Update Evaluation Run

```http
PATCH https://api.humanloop.com/v5/evaluations/{id}/runs/{run_id}
Content-Type: application/json
```

Update an Evaluation Run.

Specify `control=true` to use this Run as the control Run for the Evaluation. You can cancel a running or pending Run, or mark a Run that uses external or human Evaluators as completed.

## Path Parameters

- Id (required): Unique identifier for Evaluation.
- RunId (required): Unique identifier for Run.
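Since the examples for this endpoint send an empty `{}` body, it may help to see how a non-empty PATCH body could be assembled. This is a hedged sketch only: the field names `control` and `status`, and the accepted status values, are assumptions inferred from the endpoint description above, not a documented schema.

```python
def build_run_update(control=None, status=None):
    """Build a PATCH body for /evaluations/{id}/runs/{run_id}.

    Sketch under assumptions: `control` and `status` are inferred field
    names, and the accepted statuses are guessed from the description
    ("cancel a running or pending Run" / "mark ... as completed").
    An empty body, as in the examples, performs no update.
    """
    allowed_statuses = {"cancelled", "completed"}  # assumed values
    payload = {}
    if control is not None:
        payload["control"] = bool(control)
    if status is not None:
        if status not in allowed_statuses:
            raise ValueError(f"unsupported status: {status!r}")
        payload["status"] = status
    return payload
```

For instance, `build_run_update(control=True)` would produce `{"control": true}` as the JSON body for the PATCH request shown in the examples.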
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl -X PATCH https://api.humanloop.com/v5/evaluations/id/runs/run_id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.update_evaluation_run( id="id", run_id="run_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.updateEvaluationRun("id", "run_id"); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/id/runs/run_id" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/id/runs/run_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.patch("https://api.humanloop.com/v5/evaluations/id/runs/run_id") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php request('PATCH', 'https://api.humanloop.com/v5/evaluations/id/runs/run_id', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/id/runs/run_id"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); 
request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/id/runs/run_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X PATCH https://api.humanloop.com/v5/evaluations/:id/runs/:run_id \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{}' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.update_evaluation_run( id="id", run_id="run_id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.updateEvaluationRun("id", "run_id"); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id" payload := strings.NewReader("{}") req, _ := http.NewRequest("PATCH", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = 
URI("https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Patch.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.patch("https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php request('PATCH', 'https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id"); var request = new RestRequest(Method.PATCH); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "PATCH" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Add Logs to Run ```http POST https://api.humanloop.com/v5/evaluations/{id}/runs/{run_id}/logs Content-Type: application/json ``` Add the specified Logs to a Run. ## Path Parameters - Id (required): Unique identifier for Evaluation. - RunId (required): Unique identifier for Run. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl -X POST https://api.humanloop.com/v5/evaluations/id/runs/run_id/logs \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "log_ids": [ "log_ids" ] }' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.add_logs_to_run( id="id", run_id="run_id", log_ids=["log_ids"], ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.addLogsToRun("id", "run_id", { logIds: ["log_ids"] }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/id/runs/run_id/logs" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/id/runs/run_id/logs") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.post("https://api.humanloop.com/v5/evaluations/id/runs/run_id/logs") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") 
.asString(); ``` ```php request('POST', 'https://api.humanloop.com/v5/evaluations/id/runs/run_id/logs', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/id/runs/run_id/logs"); var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/id/runs/run_id/logs")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X POST https://api.humanloop.com/v5/evaluations/:id/runs/:run_id/logs \ -H "X-API-KEY: " \ -H "Content-Type: application/json" \ -d '{ "log_ids": [ "string" ] }' ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.add_logs_to_run( id="id", run_id="run_id", log_ids=["log_ids"], ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.addLogsToRun("id", "run_id", { logIds: ["log_ids"] }); ``` ```go package main import ( "fmt" "strings" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id/logs" payload := strings.NewReader("{}") req, _ := http.NewRequest("POST", url, payload) req.Header.Add("X-API-KEY", "") req.Header.Add("Content-Type", "application/json") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id/logs") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Post.new(url) request["X-API-KEY"] = '' request["Content-Type"] = 'application/json' request.body = "{}" response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.post("https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id/logs") .header("X-API-KEY", "") .header("Content-Type", "application/json") .body("{}") .asString(); ``` ```php <?php require 'vendor/autoload.php'; $client = new \GuzzleHttp\Client(); $response = $client->request('POST', 'https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id/logs', [ 'body' => '{}', 'headers' => [ 'Content-Type' => 'application/json', 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id/logs"); 
var request = new RestRequest(Method.POST); request.AddHeader("X-API-KEY", ""); request.AddHeader("Content-Type", "application/json"); request.AddParameter("application/json", "{}", ParameterType.RequestBody); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = [ "X-API-KEY": "", "Content-Type": "application/json" ] let parameters = [] as [String : Any] let postData = JSONSerialization.data(withJSONObject: parameters, options: []) let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/%3Aid/runs/%3Arun_id/logs")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "POST" request.allHTTPHeaderFields = headers request.httpBody = postData as Data let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Get Evaluation Stats ```http GET https://api.humanloop.com/v5/evaluations/{id}/stats ``` Get Evaluation Stats. Retrieve aggregate stats for the specified Evaluation. This includes the number of generated Logs for each Run and the corresponding Evaluator statistics (such as the mean and percentiles). ## Path Parameters - Id (required): Unique identifier for Evaluation. 
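
The stats described above aggregate Evaluator scores per Run into summary statistics such as the mean and percentiles. As a rough illustration of what those aggregates represent (this is a local, pure-Python sketch — the actual response schema is defined by the API, not by this snippet):

```python
# Local illustration of the aggregate statistics the stats endpoint
# reports over Evaluator scores: the mean and nearest-rank percentiles.
from statistics import mean

def percentile(scores: list[float], p: float) -> float:
    """Nearest-rank percentile of `scores` (0 < p <= 100)."""
    ordered = sorted(scores)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical Evaluator scores for the Logs in one Run.
scores = [0.2, 0.5, 0.5, 0.8, 0.9, 1.0]
print(round(mean(scores), 2))   # 0.65
print(percentile(scores, 50))   # 0.5
print(percentile(scores, 90))   # 0.9
```

In practice you would fetch these precomputed values with `client.evaluations.get_stats(id=...)` as shown in the Examples rather than computing them yourself.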
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl https://api.humanloop.com/v5/evaluations/id/stats \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.get_stats( id="id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.getStats("id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/id/stats" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/id/stats") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/evaluations/id/stats") .header("X-API-KEY", "") .asString(); ``` ```php <?php require 'vendor/autoload.php'; $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/evaluations/id/stats', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/id/stats"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/id/stats")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl https://api.humanloop.com/v5/evaluations/:id/stats \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.get_stats( id="id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.getStats("id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/%3Aid/stats" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/%3Aid/stats") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/evaluations/%3Aid/stats") .header("X-API-KEY", "") .asString(); ``` ```php request('GET', 'https://api.humanloop.com/v5/evaluations/%3Aid/stats', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/%3Aid/stats"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation 
let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/%3Aid/stats")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Get Logs for Evaluation ```http GET https://api.humanloop.com/v5/evaluations/{id}/logs ``` Get the Logs associated with a specific Evaluation. This returns the Logs associated with all Runs within the Evaluation. ## Path Parameters - Id (required): String ID of evaluation. Starts with `ev_` or `evr_`. ## Query Parameters - Page (optional): Page number for pagination. - Size (optional): Page size for pagination. Number of Logs to fetch. - RunId (optional): Filter by Run IDs. Only Logs for the specified Runs will be returned. 
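
The Page and Size parameters above follow a conventional pagination loop: request successive pages until an empty one comes back. A minimal local sketch, where the hypothetical `fetch_page` stands in for the HTTP call (in real code this would be `client.evaluations.get_logs(...)`, and the API's own page indexing should be checked):

```python
# Generic pagination loop over page/size query parameters.
# `fetch_page` is a hypothetical stand-in for the API call; it slices
# an in-memory list the way a paginated endpoint would.
ALL_LOGS = [f"log_{i}" for i in range(7)]  # pretend server-side data

def fetch_page(page: int, size: int) -> list[str]:
    start = (page - 1) * size  # assumes 1-indexed pages for illustration
    return ALL_LOGS[start : start + size]

def iter_all_logs(size: int = 3):
    page = 1
    while True:
        batch = fetch_page(page, size)
        if not batch:  # an empty page signals the end
            break
        yield from batch
        page += 1

print(list(iter_all_logs()))  # all 7 logs, fetched 3 at a time
```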
## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell curl https://api.humanloop.com/v5/evaluations/id/logs \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.get_logs( id="id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.getLogs("id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/id/logs" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/id/logs") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/evaluations/id/logs") .header("X-API-KEY", "") .asString(); ``` ```php request('GET', 'https://api.humanloop.com/v5/evaluations/id/logs', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/id/logs"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/id/logs")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/evaluations/:id/logs \ -H "X-API-KEY: " \ -d page=0 \ -d size=0 ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluations.get_logs( id="id", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.evaluations.getLogs("id"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/evaluations/%3Aid/logs?page=0&size=0" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/evaluations/%3Aid/logs?page=0&size=0") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.get("https://api.humanloop.com/v5/evaluations/%3Aid/logs?page=0&size=0") .header("X-API-KEY", "") .asString(); ``` ```php request('GET', 'https://api.humanloop.com/v5/evaluations/%3Aid/logs?page=0&size=0', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/evaluations/%3Aid/logs?page=0&size=0"); var request = new RestRequest(Method.GET); 
request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/evaluations/%3Aid/logs?page=0&size=0")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # List Logs ```http GET https://api.humanloop.com/v5/logs ``` List all Logs for the given filter criteria. ## Query Parameters - FileId (required): Unique identifier for the File to list Logs for. - Page (optional): Page number for pagination. - Size (optional): Page size for pagination. Number of Logs to fetch. - VersionId (optional): If provided, only Logs belonging to the specified Version will be returned. - VersionStatus (optional): If provided, only Logs belonging to Versions with the specified status will be returned. - Id (optional): If provided, returns Logs whose IDs contain any of the specified values as substrings. - Search (optional): If provided, only Logs that contain the provided string in its inputs and output will be returned. - MetadataSearch (optional): If provided, only Logs that contain the provided string in its metadata will be returned. - StartDate (optional): If provided, only Logs created after the specified date will be returned. - EndDate (optional): If provided, only Logs created before the specified date will be returned. - IncludeParent (optional): If true, include the full parent Log in the response. Only applicable when retrieving Evaluator Logs. - InTraceFilter (optional): If true, return Logs that are associated to a Trace. 
If false, return Logs that are not associated with a Trace. - Sample (optional): If provided, limit the response to a random subset of logs from the filtered results. (This will be an approximate sample, not a strict limit.) - IncludeTraceChildren (optional): If true, populate `trace_children` for the retrieved Logs. Only applicable when retrieving Flow Logs. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell List logs curl -G https://api.humanloop.com/v5/logs \ -H "X-API-KEY: " \ -d file_id=file_123abc \ -d size=1 ``` ```python List logs from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) response = client.logs.list( file_id="file_123abc", size=1, ) for item in response: print(item) # alternatively, you can paginate page-by-page for page in response.iter_pages(): print(page) ``` ```typescript List logs import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); const response = await client.logs.list({ fileId: "file_123abc", size: 1 }); for await (const item of response) { console.log(item); } // Or you can manually iterate page-by-page let page = await client.logs.list({ fileId: "file_123abc", size: 1 }); while (page.hasNextPage()) { page = await page.getNextPage(); } ``` ```go List logs package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/logs?file_id=file_123abc&size=1" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby List logs require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/logs?file_id=file_123abc&size=1") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java List logs HttpResponse<String> 
response = Unirest.get("https://api.humanloop.com/v5/logs?file_id=file_123abc&size=1") .header("X-API-KEY", "") .asString(); ``` ```php List logs request('GET', 'https://api.humanloop.com/v5/logs?file_id=file_123abc&size=1', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp List logs var client = new RestClient("https://api.humanloop.com/v5/logs?file_id=file_123abc&size=1"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift List logs import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/logs?file_id=file_123abc&size=1")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -G https://api.humanloop.com/v5/logs \ -H "X-API-KEY: " \ -d file_id=string \ -d page=0 \ -d size=0 ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) response = client.logs.list( file_id="file_123abc", size=1, ) for item in response: print(item) # alternatively, you can paginate page-by-page for page in response.iter_pages(): print(page) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); const response = await client.logs.list({ fileId: "file_123abc", size: 1 }); for await (const item of response) { console.log(item); } // Or you can manually iterate page-by-page let page = await client.logs.list({ fileId: "file_123abc", size: 1 }); while (page.hasNextPage()) { page = await page.getNextPage(); } ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/logs?file_id=string&page=0&size=0" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/logs?file_id=string&page=0&size=0") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/logs?file_id=string&page=0&size=0") .header("X-API-KEY", "") .asString(); ``` ```php <?php require 'vendor/autoload.php'; $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/logs?file_id=string&page=0&size=0', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/logs?file_id=string&page=0&size=0"); var request = new 
RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/logs?file_id=string&page=0&size=0")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Delete Logs ```http DELETE https://api.humanloop.com/v5/logs ``` Delete Logs with the given IDs. ## Query Parameters - Id (optional): Unique identifiers for the Logs to delete. ## Response Body - 422: Validation Error ## Examples ```shell Delete logs curl -X DELETE "https://api.humanloop.com/v5/logs?id=prv_Wu6zx1lAWJRqOyL8nWuZk" \ -H "X-API-KEY: " ``` ```python Delete logs from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.logs.delete( id="prv_Wu6zx1lAWJRqOyL8nWuZk", ) ``` ```typescript Delete logs import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.logs.delete({ id: "prv_Wu6zx1lAWJRqOyL8nWuZk" }); ``` ```go Delete logs package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/logs?id=prv_Wu6zx1lAWJRqOyL8nWuZk" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Delete logs require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/logs?id=prv_Wu6zx1lAWJRqOyL8nWuZk") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true 
request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java Delete logs HttpResponse<String> response = Unirest.delete("https://api.humanloop.com/v5/logs?id=prv_Wu6zx1lAWJRqOyL8nWuZk") .header("X-API-KEY", "") .asString(); ``` ```php Delete logs <?php require 'vendor/autoload.php'; $client = new \GuzzleHttp\Client(); $response = $client->request('DELETE', 'https://api.humanloop.com/v5/logs?id=prv_Wu6zx1lAWJRqOyL8nWuZk', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Delete logs var client = new RestClient("https://api.humanloop.com/v5/logs?id=prv_Wu6zx1lAWJRqOyL8nWuZk"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift Delete logs import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/logs?id=prv_Wu6zx1lAWJRqOyL8nWuZk")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? 
HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl -X DELETE "https://api.humanloop.com/v5/logs?id=string" \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.logs.delete( id="prv_Wu6zx1lAWJRqOyL8nWuZk", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.logs.delete({ id: "prv_Wu6zx1lAWJRqOyL8nWuZk" }); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/logs?id=string" req, _ := http.NewRequest("DELETE", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/logs?id=string") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Delete.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java HttpResponse response = Unirest.delete("https://api.humanloop.com/v5/logs?id=string") .header("X-API-KEY", "") .asString(); ``` ```php request('DELETE', 'https://api.humanloop.com/v5/logs?id=string', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/logs?id=string"); var request = new RestRequest(Method.DELETE); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/logs?id=string")! 
as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "DELETE" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # Get Log ```http GET https://api.humanloop.com/v5/logs/{id} ``` Retrieve the Log with the given ID. ## Path Parameters - Id (required): Unique identifier for Log. ## Response Body - 200: Successful Response - 422: Validation Error ## Examples ```shell Get log curl https://api.humanloop.com/v5/logs/prv_Wu6zx1lAWJRqOyL8nWuZk \ -H "X-API-KEY: " ``` ```python Get log from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.logs.get( id="prv_Wu6zx1lAWJRqOyL8nWuZk", ) ``` ```typescript Get log import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.logs.get("prv_Wu6zx1lAWJRqOyL8nWuZk"); ``` ```go Get log package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/logs/prv_Wu6zx1lAWJRqOyL8nWuZk" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby Get log require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/logs/prv_Wu6zx1lAWJRqOyL8nWuZk") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` ```java Get log HttpResponse response = Unirest.get("https://api.humanloop.com/v5/logs/prv_Wu6zx1lAWJRqOyL8nWuZk") .header("X-API-KEY", "") .asString(); ``` ```php Get log request('GET', 
'https://api.humanloop.com/v5/logs/prv_Wu6zx1lAWJRqOyL8nWuZk', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp Get log var client = new RestClient("https://api.humanloop.com/v5/logs/prv_Wu6zx1lAWJRqOyL8nWuZk"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift Get log import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/logs/prv_Wu6zx1lAWJRqOyL8nWuZk")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` ```shell curl https://api.humanloop.com/v5/logs/:id \ -H "X-API-KEY: " ``` ```python from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.logs.get( id="prv_Wu6zx1lAWJRqOyL8nWuZk", ) ``` ```typescript import { HumanloopClient } from "humanloop"; const client = new HumanloopClient({ apiKey: "YOUR_API_KEY" }); await client.logs.get("prv_Wu6zx1lAWJRqOyL8nWuZk"); ``` ```go package main import ( "fmt" "net/http" "io" ) func main() { url := "https://api.humanloop.com/v5/logs/%3Aid" req, _ := http.NewRequest("GET", url, nil) req.Header.Add("X-API-KEY", "") res, _ := http.DefaultClient.Do(req) defer res.Body.Close() body, _ := io.ReadAll(res.Body) fmt.Println(res) fmt.Println(string(body)) } ``` ```ruby require 'uri' require 'net/http' url = URI("https://api.humanloop.com/v5/logs/%3Aid") http = Net::HTTP.new(url.host, url.port) http.use_ssl = true request = Net::HTTP::Get.new(url) request["X-API-KEY"] = '' response = http.request(request) puts response.read_body ``` 
```java HttpResponse<String> response = Unirest.get("https://api.humanloop.com/v5/logs/%3Aid") .header("X-API-KEY", "") .asString(); ``` ```php <?php require 'vendor/autoload.php'; $client = new \GuzzleHttp\Client(); $response = $client->request('GET', 'https://api.humanloop.com/v5/logs/%3Aid', [ 'headers' => [ 'X-API-KEY' => '', ], ]); echo $response->getBody(); ``` ```csharp var client = new RestClient("https://api.humanloop.com/v5/logs/%3Aid"); var request = new RestRequest(Method.GET); request.AddHeader("X-API-KEY", ""); IRestResponse response = client.Execute(request); ``` ```swift import Foundation let headers = ["X-API-KEY": ""] let request = NSMutableURLRequest(url: NSURL(string: "https://api.humanloop.com/v5/logs/%3Aid")! as URL, cachePolicy: .useProtocolCachePolicy, timeoutInterval: 10.0) request.httpMethod = "GET" request.allHTTPHeaderFields = headers let session = URLSession.shared let dataTask = session.dataTask(with: request as URLRequest, completionHandler: { (data, response, error) -> Void in if (error != nil) { print(error as Any) } else { let httpResponse = response as? HTTPURLResponse print(httpResponse) } }) dataTask.resume() ``` # April ## o3 and o4-mini Models *April 17th, 2025* OpenAI's new reasoning models, o3 and o4-mini, are now available on Humanloop. These models excel at complex reasoning tasks including coding, mathematics, and scientific analysis: * **o3**: OpenAI's most powerful reasoning model, delivering exceptional performance across coding, math, science, and vision tasks * **o4-mini**: A faster, cost-efficient alternative that maintains strong capabilities while being more accessible * **Enhanced Vision**: Both models feature advanced image understanding capabilities * **Cost-Effective**: Competitive pricing with cached input support for optimized costs Read more [here](https://openai.com/index/introducing-o3-and-o4-mini/). ## GPT-4.1 Model Family *April 14th, 2025* OpenAI's GPT-4.1 model family is now available on Humanloop, including GPT-4.1, GPT-4.1 Mini, and GPT-4.1 Nano variants. 
![Screenshot showing GPT-4.1 models](file:d893e289-9e32-4a49-8fe6-16686da8720d)

The new GPT-4.1 family brings significant improvements across all variants:

* **Massive Context Window**: All models support up to 1 million tokens
* **Enhanced Reliability**: Better at following instructions and reduced hallucinations
* **Improved Efficiency**: 26% more cost-effective than previous versions
* **Specialized Variants**: From the full GPT-4.1 to the lightweight Nano version for different use cases

Learn more about the capabilities of these new models on [OpenAI's website](https://openai.com/index/gpt-4-1/).

## Simplified Versioning System: From "Commit" to "Save"

*April 11th, 2025*

We've redesigned our versioning system to make it more intuitive and easier to use.

* **Save vs. Commit**: Files are now simply saved to create new versions, replacing the previous commit-based workflow
* **Optional Version Names**: Versions can now be identified with an optional name
* **Unified Version List**: All versions now appear in a single list, eliminating the "uncommitted" status

> Note: All existing commit messages have been preserved as version descriptions in the new system.
> Previously committed versions have been automatically named v1, v2, etc., to help you identify them in the new unified list.

![Screenshot showing version names in the UI](file:0c5eb8a1-d94a-4773-8264-bb03bc87f1bd)

### Version Details Popover

Hover over any version label to access the new version details popover where you can:

* Edit version name and description
* View key parameters at a glance
* Check the creation date of the version

![Screenshot showing the version details popover](file:d5d3df8e-4c17-4fe5-8da0-01ab70ede26f)

This streamlined approach simplifies version management during evaluations and comparisons.

## Gemini 2.5 Pro preview model

*April 4th, 2025*

Google's Gemini 2.5 Pro preview is now available on Humanloop.
![Screenshot showing Gemini 2.5 Pro preview](file:7ebc3ed2-8413-4768-942d-00413b8679da) Gemini 2.5 Pro Preview delivers state-of-the-art performance in mathematics, science, and code generation tasks, powered by its massive 1 million token context window. Learn more on the [Google DeepMind website](https://deepmind.google/technologies/gemini/pro/). # March ## Gemini 2.5 Pro Experimental model *March 27th, 2025* We've added Google's Gemini 2.5 Pro Experimental model to Humanloop! ![Screenshot showing Gemini 2.5 Pro Experimental](file:54a4bfb8-8888-4142-a567-f44280545fc7) Gemini 2.5 Pro Experimental ranks #1 on [LMArena](https://lmarena.ai/?leaderboard) and features enhanced reasoning capabilities with a 1 million token context window. It excels in complex tasks like mathematics, science, and coding by reasoning through its thoughts before responding. To find out more, read the [Google blog post](https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/). ## Improved Log search *March 7th, 2025* We've improved search on Logs to make it easier to find specific Logs based on their content. You can select the search target in a dropdown beside the search box to search by inputs, output, messages, or metadata. ![Search bar above the Logs table](file:1fb9334c-84cf-41d9-8f5e-519ec28c53e5) Select "All" in the search target dropdown beside the search box to search across all fields (emulating the previous behavior of our search bar). ## Log from Vercel AI SDK via OpenTelemetry *March 7th, 2025*
We've added support for importing OpenTelemetry traces from the Vercel AI SDK. This allows you to see the steps taken by any AI system in Humanloop.

To try it, enable the `experimental_telemetry` flag for each Vercel AI SDK call and configure the OpenTelemetry SDK to instrument your code. Follow our [guide to the Vercel AI SDK](/docs/v5/integrations/vercel-ai-sdk) to learn more.

## New filters in Logs tab

*March 6th, 2025*

We've made filtering in the Logs tab more intuitive and powerful with a revamped UI. You can now filter logs by *metadata key-value pairs*, *evaluator judgments*, and more, making it easier to find exactly what you need.

![Screenshot showing the new filters in the Logs tab](file:55ec704e-2b75-4231-8a40-bb5a238d1b6e)

To check it out, navigate to the Logs tab and click on the **Filter** button to quickly find the logs you need.

# February

## OpenAI GPT-4.5 preview model

*February 27th, 2025*

We've added the OpenAI GPT-4.5 preview model to Humanloop the day it was released!

![Screenshot showing GPT-4.5 Preview](file:3ea6831f-41eb-4eb2-a8f4-e595fe0d5fa2)

GPT-4.5 delivers more natural responses thanks to its broader knowledge base, enhanced ability to follow user intent, and improved emotional intelligence. It excels in tasks like writing enhancement, programming, and problem-solving. To find out more, read the [OpenAI blog post](https://openai.com/index/introducing-gpt-4-5/).

## Claude 3.7 Sonnet model

*February 24th, 2025*

We've added Anthropic's Claude 3.7 Sonnet to Humanloop the day it was released!

![Screenshot showing Claude 3.7 Sonnet](file:7b9c86c9-4535-44bc-ae0e-a1bc6b4ee9d9)

Claude 3.7 Sonnet can produce near-instant responses or extended, step-by-step thinking. Additionally, it shows particularly strong improvements in coding and front-end web development. To find out more, read the [Anthropic blog post](https://www.anthropic.com/news/claude-3-7-sonnet).
## Improved Log Display *February 20th, 2025* To significantly improve readability in your use cases with strict formatting requirements, you can now choose to view your log contents in various formats. Formats such as JSON, YAML, Markdown, and more will be automatically detected and made available for manual selection. ![Screenshot showing the improved Log Display](file:f555ca07-3871-478d-86f7-2c17a79561aa) To try it, simply open up a Log from the Logs or Evals tab and hover over its output content. ## Jinja Support *February 17th, 2025* We've added support for Jinja in the Editor. [Jinja](https://jinja.palletsprojects.com/en/stable/) is a powerful templating engine that allows you to define variables, conditionals and other logic. You can now leverage this directly in the text of your Prompt Templates. To try it out, navigate to the Editor tab and select Jinja as the templating language in the new dropdown in the **Template** section: ![Screenshot showing Jinja in Editor](file:8cd0f90c-64f0-4968-b939-d27b379761b9) We've also added a new API endpoint `prompts/{id}/populate` that takes the `inputs` required by your template and returns the Prompt version along with the populated template. ## Improved Stats Page in Evaluations *February 14th, 2025* We've revamped our Stats page in Evaluations, which shows a summary of the evaluation results across runs and compares them to a control run. 
![Screenshot showing the improved Stats page in Evaluations](file:79f2aded-fa81-4a31-9c9a-c220a0aa19a5)

Some highlights:

* Collapsible sections to view the most important top-line metric for each evaluator at a glance
* New and improved bar charts for categorical and boolean evaluators
* Clear percentage-based comparisons against the control run for each metric
* Click through from metric values and comparisons to a filtered Review page to drill down into datapoints

## Gemini 2.0 Flash is now available

*February 12th, 2025*

We've added support for the latest Google Gemini 2.0 Flash model on Humanloop.

![Screenshot showing Gemini 2.0 Flash in the Editor](file:2338d930-11ad-48a3-94cc-738ab8bd1f36)

Gemini 2.0 Flash is optimal for high-volume, high-frequency tasks. It is highly capable of multimodal reasoning across vast amounts of information, with a context window of 1 million tokens.

## Reasoning effort in Editor

*February 11th, 2025*

You can now specify the reasoning effort for o3-mini and o1 models in Editor.

![Screenshot showing reasoning effort in the Editor](file:50e285be-f88a-4759-9262-e0ced48cd936)

The reasoning effort parameter gives the model guidance on how many reasoning tokens it should generate before creating a response to the prompt. For more complex tasks, you may want to increase the reasoning effort to allow the model to 'think' longer before responding.

## Structured Outputs in Editor

*February 3rd, 2025*

You can now specify the model's response format directly in Editor. This is especially useful when you need the model to respond in a defined structure, such as JSON.

![Screenshot showing structured outputs in the Editor](file:1af838b1-c220-423e-9f34-b95e84eb3f40)

To try it out:

> 1. Select a **Prompt**.
> 2. Open the **Editor** tab.
> 3. In the **Response Format** dropdown at the bottom, select **JSON Schema**.
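To make the steps above concrete, here is a minimal, illustrative response-format schema (the `review_summary` name and its fields are hypothetical) that constrains the model to return a 1-5 rating plus a short explanation, following the JSON Schema response-format conventions:

```json
{
  "name": "review_summary",
  "strict": true,
  "schema": {
    "type": "object",
    "properties": {
      "rating": { "type": "integer", "minimum": 1, "maximum": 5 },
      "explanation": { "type": "string" }
    },
    "required": ["rating", "explanation"],
    "additionalProperties": false
  }
}
```

With a schema like this in place, the model's output should reliably parse as JSON with exactly these fields.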
You can then define your JSON schema manually or use our AI-powered JSON schema generator by clicking the Generate button above the schema editor.

# January

## o3-mini now available

*January 31st, 2025*

o3-mini is now available on Humanloop. The OpenAI 'o' model series is trained with large-scale reinforcement learning to reason using chain of thought, providing advanced reasoning capabilities and improved safety and robustness. The o3-mini model brings state-of-the-art performance on key benchmarks, and excels in STEM areas such as math and coding.

![o3-mini excels in coding](file:4d9ad237-2160-49a0-9262-c37021403ed9)

o3-mini has a 200,000-token context window and supports up to 100,000 output tokens. It can outperform o1 while being 9x cheaper and 4x faster. To try it, update your model parameter to `o3-mini` in the Prompt Editor.

## Filter by Evaluator error in Review tab

*January 31st, 2025*

You can now filter Logs by whether an Evaluator errored in the Review tab of an Evaluation. This feature allows you to quickly retrieve Logs with errored judgments for debugging, or only consider judgments that did not error while reviewing.

![Judgment filters](file:49c8dd8d-44a4-42c7-bf23-1e1c5f5ab7f7)

To filter Logs, click on the **Filter** button on the Review tab to set up your first filter.

## DeepSeek Integration Available

*January 29th, 2025*

We've added support for the DeepSeek API to Humanloop -- now you can use DeepSeek V3 and R1 (non-distilled) directly! Add your API key to your organization's [API Keys page](https://app.humanloop.com/account/api-keys) to get started.

![Screenshot showing DeepSeek API key setup modal](file:46b5381d-6ecd-4285-b420-2e4715fbf8a4)

Note on usage:

* DeepSeek's API ([status](https://status.deepseek.com/)) has suffered from degraded performance over the past few days, so requests may fail or take longer than expected.
* [DeepSeek R1](https://api-docs.deepseek.com/guides/reasoning_model) does not support temperature control, top-p, presence penalty, or frequency penalty. `max_tokens` will default to 4K, and the API supports up to 64K context length.
* This provider's servers are located in China.

## DeepSeek-R1-Distill-Llama-70B on Groq

*January 27th, 2025*

You can now access DeepSeek-R1-Distill-Llama-70B on Humanloop. Note that this model is a preview model on Groq and may be discontinued at short notice. As such, it should not be used in production environments. To try it, update your model parameter to `deepseek-r1-distill-llama-70b`.

![Screenshot showing parameters for the DeepSeek-R1-Distill-Llama-70B model](file:6845a80c-290d-4d2d-9bf8-45f9e1dbbbf8)

The DeepSeek team [recommends](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B#usage-recommendations) using the R1 models in the following way:

> 1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
> 2. Avoid adding a system prompt; all instructions should be contained within the user prompt.
> 3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
> 4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.

## Duplicate Datapoints

*January 23rd, 2025*

You can now duplicate Datapoints from the UI Editor. This is useful if you're creating lots of variations of an existing Datapoint.

![Duplicating a Datapoint](file:9d7cba01-3a9a-4b18-9614-3b524d6a78ea)

Alternatively, you can use the [.csv upload](https://humanloop.com/docs/v5/guides/evals/upload-dataset-csv) feature to create multiple Datapoints at once.

## Aggregate stats for Eval Runs

*January 18th, 2025*

We've added aggregate statistics to the **Runs** table to help you quickly compare performance across different Evaluators.
You can view these statistics in the **Runs** tab of any Evaluation that contains Evaluators. For boolean Evaluators, we show the percentage of `true` judgments. For number Evaluators, we display the average value. For select and multi-select Evaluators, we display a bar chart showing the distribution of the judgments. ![Run stats with a tooltip showing breakdown for an Issues Evaluator](file:79ee4d25-a273-4206-a58f-3d61b4d0d532) Additional icons indicate the status of the Run, relevant to the aggregate stat: * A spinning icon indicates that not all Logs have judgments, and the Run is currently being executed. The displayed aggregate statistic may not be final. * A clock icon shows that not all Logs have judgments, though the Run is not currently being executed * A red warning icon indicates errors when running the Evaluator Hover over these icons or aggregate statistics to view more details in the tooltip, such as the number of judgments and the number of errors (if any). ## Filter Eval Runs *January 14th, 2025* You can now more easily compare your relevant Runs by selecting them in the **Runs** tab. To filter to a subset of Runs, go to the **Runs** tab and select them by clicking the checkbox or by pressing `x` with your cursor on the row. Then, go to the **Stats** or **Review** tab to see the comparison between the selected Runs. Your control Run will always be included in the comparison. ![Selecting Runs on Runs tab](file:3b5d9dc5-a67e-4e17-9782-725cb2dfc1af) ![Select Runs for comparison](file:21f33235-61d5-43ab-bdce-f1780f36f6af) ## Filter by Judgement in Review tab *January 9th, 2025* You can now filter Logs by Evaluator judgments in the Review tab of an Evaluation. This feature allows you to quickly retrieve specific Logs, such as those marked as "Good" or "Bad" by a subject-matter expert, or those with latency below a certain threshold. 
![Judgment filters](file:7bed3810-02e9-44d8-b745-30934c5eaa99)

To filter Logs, click on the **Filter** button on the Review tab to set up your first filter.

## Template Library

*January 4th, 2025*

We've introduced the first version of our Template Library, designed to help you get started with example projects on Humanloop. This new feature allows you to browse and search for relevant templates using tags. You can then clone templates into your workspace to help overcome the cold-start problem.

![File library](file:94f2f356-d367-4600-8d19-854f1561e2de)

This first release focuses on providing useful Evaluator examples alongside a set of curated datasets from Hugging Face. In upcoming releases, we plan to expand the library with additional Agent and RAG templates for a wide range of use cases. Stay tuned for more updates!

# December

## Improved TypeScript SDK Evals

*December 18th, 2024*

We've enhanced our TypeScript SDK with an evaluation utility, similar to our [Python SDK](https://humanloop.com/docs/v5/changelog/2024/10#evaluations-sdk-improvements). The utility can run evaluations on either your runtime or Humanloop's. To use your local runtime, you need to provide:

* A callable function that takes your inputs/messages
* A Dataset of inputs/messages to evaluate the function against
* A set of Evaluators to provide judgments on the outputs of your function

Here's how our [evals in code guide](https://humanloop.com/docs/v5/quickstart/evals-in-code) looks in the new TypeScript SDK:

```typescript maxLines=50
import { Humanloop } from "humanloop";

// Get API key at https://app.humanloop.com/account/api-keys
const hl = new Humanloop({
  apiKey: "",
});

const checks = hl.evaluations.run({
  name: "Initial Test",
  file: {
    path: "Scifi/App",
    callable: (messages: { role: string; content: string }[]) => {
      // Replace with your AI model logic
      const lastMessageContent = messages[messages.length - 1].content.toLowerCase();
      return lastMessageContent === "hal" ?
"I'm sorry, Dave. I'm afraid I can't do that." : "Beep boop!"; }, }, dataset: { path: "Scifi/Tests", // Replace with your own dataset datapoints: [ { messages: [ { role: "system", content: "You are an AI that responds like famous sci-fi AIs." }, { role: "user", content: "HAL" }, ], target: { output: "I'm sorry, Dave. I'm afraid I can't do that.", }, }, { messages: [ { role: "system", content: "You are an AI that responds like famous sci-fi AIs." }, { role: "user", content: "R2D2" }, ], target: { output: "Beep boop beep!", }, }, ], }, // Replace with your own Evaluators evaluators: [ { path: "Example Evaluators/Code/Exact match" }, { path: "Example Evaluators/Code/Latency" }, { path: "Example Evaluators/AI/Semantic similarity" }, ], }); console.log("Evaluation checks:", checks); ``` Check out this [cookbook example](https://github.com/humanloop/humanloop-cookbook/tree/main/node-evaluate-medqa) to learn how to evaluate a RAG pipeline with the new SDK. ## SDK Decorators in Typescript \[beta] *December 17th, 2024* We're excited to announce the beta release of our [TypeScript SDK](https://www.npmjs.com/package/humanloop/v/0.8.9-beta5), aligning with the Python logging utilities we introduced [last month](/v5/changelog/2024/11#logging-with-decorators). The new utilities help you integrate Humanloop with minimal changes to your existing code base. 
Take this basic chat agent instrumented through Humanloop:

```typescript maxLines=50
const callModel = async (traceId: string, messages: MessageType[]) => {
  const response = await openAIClient.chat.completions.create({
    model: "gpt-4o",
    temperature: 0.8,
    messages: messages,
  });
  const output = response.choices[0].message.content || "";
  await humanloop.prompts.log({
    path: "Chat Agent/Call Model",
    prompt: {
      model: "gpt-4o",
      messages: [...messages, { role: "assistant", content: output }],
      temperature: 0.8,
    },
    traceParentId: traceId,
  });
  return output;
};

const chatAgent = async () => {
  const traceId = (
    await humanloop.flows.log({
      path: "Chat Agent/Agent",
    })
  ).id;
  const messages = [{ role: "system", content: "You are a helpful assistant." }];
  while (true) {
    const userMessage = await getCLIInput();
    if (userMessage === "exit") {
      break;
    }
    messages.push({ role: "user", content: userMessage });
    const response = await callModel(traceId, messages);
    messages.push({ role: "assistant", content: response });
  }
  await humanloop.flows.updateLog(traceId, {
    traceStatus: "complete",
    messages: messages,
  });
};
```

Using the new logging utilities, the SDK automatically manages the Files and logging for you, so you can integrate Humanloop into your project with fewer changes to your existing codebase. Calling a function wrapped in a utility creates a Log on Humanloop. Furthermore, the SDK detects changes to the LLM hyperparameters and creates a new version automatically.
The code below is equivalent to the previous example:

```typescript maxLines=50
const callModel = (messages: MessageType[]) =>
  humanloop.prompt({
    path: "Chat Agent/Call Model",
    callable: async (inputs: any, messages: MessageType[]) => {
      const response = await openAIClient.chat.completions.create({
        model: "gpt-4o",
        temperature: 0.8,
        messages: messages,
      });
      return response.choices[0].message.content || "";
    },
  })(undefined, messages);

const chatAgent = () =>
  humanloop.flow({
    path: "Chat Agent/Agent",
    callable: async (inputs: any, _messages: MessageType[]) => {
      const messages = [{ role: "system", content: "You are a helpful assistant." }];
      while (true) {
        const userMessage = await getCLIInput();
        if (userMessage === "exit") {
          break;
        }
        messages.push({ role: "user", content: userMessage });
        const response = await callModel(messages);
        messages.push({ role: "assistant", content: response });
      }
      return messages;
    },
  })(undefined, []);
```

This release introduces three decorators:

* **`flow()`**: Serves as the entry point for your AI features. Use it to call other decorated functions and trace your feature's execution.
* **`prompt()`**: Monitors LLM client library calls to version your Prompt Files. Supports **OpenAI**, **Anthropic**, and **Replicate** clients. Changing the provider or hyperparameters creates a new version in Humanloop.
* **`tool()`**: Versions tools using their source code. Includes a `jsonSchema` decorator to streamline function calling.

Explore our [cookbook example](https://github.com/humanloop/humanloop-cookbook/tree/main/node-instrument-chat-agent) to see a simple chat agent instrumented with the new logging utilities.

## Function-calling AI Evaluators

*December 15th, 2024*

We've updated our AI Evaluators to use function calling by default, improving their reliability and performance. We've also updated the AI Evaluator Editor to support this change.
![AI Evaluator Editor with function calling](file:55c1a425-fbf5-4daa-88f3-d2dea415abdb)

When you create an AI Evaluator in Humanloop, it is now created with a `submit_judgment(judgment, reasoning)` tool that takes `judgment` and `reasoning` as arguments. When you run this Evaluator on a Log, Humanloop forces the model to call the tool. The model then returns an appropriate judgment alongside its reasoning.

You can customize the AI Evaluator in its Editor tab. Here, Humanloop displays a "Parameters" and a "Template" section, similar to the Prompt Editor, allowing you to define the messages and parameters used to call the model. In the "Judgment" section below those, you can customize the function descriptions and disable the `reasoning` argument.

To test the AI Evaluator, you can load Logs from a Prompt with the **Select a Prompt or Dataset** button in the **Debug console** panel. After Logs are loaded, click the **Run** button to run the AI Evaluator on the Logs. The resulting judgments will be shown beside the Logs. If reasoning is enabled, you can view the reasoning by hovering over the judgment or by clicking the **Open in drawer** button next to the judgment.

![AI Evaluator Editor with function calling](file:13a9154d-4227-4236-97b5-1bbcf5fb1914)

## New models: Gemini 2.0 Flash, Llama 3.3 70B

*December 12th, 2024*

To support you in adopting the latest models, we've added support for more new models, including the [latest experimental models for Gemini](https://developers.googleblog.com/en/the-next-chapter-of-the-gemini-era-for-developers/). These include `gemini-2.0-flash-exp`, which offers better performance than Gemini 1.5 Pro along with tool use, and [`gemini-exp-1206`](https://blog.google/feed/gemini-exp-1206/), the latest experimental advanced model.
We've also added support for [Llama 3.3 70B](https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/MODEL_CARD.md) on Groq, Meta's latest model with performance comparable to their largest Llama 3.1 405B model. You can start using these models in your Prompts by going to the Editor and selecting the model from the dropdown. (To use the Gemini models, you need to have a Google API key saved in your Humanloop [account settings](https://app.humanloop.com/hl-demo/account/api-keys).) ![Gemini 2.0 Flash in Prompt Editor](file:3bf7089d-4025-421b-a062-856e56206206) ## Drag and Drop in the Sidebar *December 9th, 2024* You can now drag and drop files into the sidebar to organize your Prompts, Evaluators, Datasets, and Flows into Directories. With this much requested feature, you can easily reorganize your workspace hierarchy without having to use the 'Move...' modals. This improvement makes it easier to maintain a clean and organized workspace. We recommend using a Directory per project to group together related files. ## Logs with user-defined IDs *December 6th, 2024* We’ve added the ability to create Logs with your own unique ID, which you can then use to reference the Log when making API calls to Humanloop. ```python highlight={19} my_id = "my_very_own_and_unique_id" # create Log with "my_very_own_and_unique_id" id humanloop.prompts.call( path="path_to_the_prompt", prompt={ "model": "gpt-4", "template": [ { "role": "system", "content": "You are a helpful assistant. 
Tell the truth, the whole truth, and nothing but the truth", }, ], }, log_id=my_id, messages=[{"role": "user", "content": "Is it acceptable to put pineapples on pizza?"}], ) # add evaluator judgment to this Log using your own id humanloop.evaluators.log( parent_id=my_id, path="path_to_my_evaluator", judgment="good", spec={ "arguments_type": "target_free", "return_type": "select", "evaluator_type": "human", "options": [{"name": "bad", "valence": "negative"}, {"name": "good", "valence": "positive"}] }) ```

This is particularly useful for providing judgments on Logs without requiring you to store Humanloop-generated IDs in your application.

## Flow Trace in Review View

*December 3rd, 2024*

We've added the ability to see the full Flow trace directly in the Review view. This is useful for getting the full context of what was called during the execution of a Flow. To open the Log drawer side panel, click on the Log ID above the Log output in the Review view.

# November

## Logging with Decorators

*November 16th, 2024*

We've released a new version of our [Python SDK](https://pypi.org/project/humanloop/0.8.14b5/) in beta that includes a suite of decorators that allow you to more seamlessly add Humanloop logging to your existing AI features. By adding the new decorators like `@flow` or `@prompt` to your existing functions, the next time your code runs, Humanloop will start to version and monitor your application.

In this release, we're introducing decorators for Prompts, Tools, and Flows:

* `@prompt`: Automatically creates a Prompt on Humanloop and tracks your LLM provider calls, with details like the provider and any hyperparameters. This decorator supports **OpenAI**, **Anthropic**, **Replicate**, **Cohere**, and **Bedrock** clients. Changing the LLM provider, or the hyperparameters used, will automatically bump the Prompt version on Humanloop.
* `@tool`: Uses the function's signature and docstring to create and version a Tool.
Changing the code of the function will create a new version of the Tool and any calls to the Tool will be logged appropriately. * `@flow`: Designed for the main entry point of your LLM feature to capture all the steps within. Any other decorated functions called within the `@flow` are automatically logged into its trace. You can also explicitly pass values for decorator arguments; including `attributes` and `metadata`. Values passed explicitly to the decorator will override any inference made by the SDK when logging to Humanloop. Here's an example of how to instrument a basic chat agent. Each conversation creates a Log under a `Flow`, with `Prompt` and `Tool` Logs then captured for each interaction. ```python import random import os import json from openai import OpenAI from humanloop import Humanloop PROMPT_TEMPLATE = ( "You are a helpful assistant knowledgeable on the " "following topics: {topics}. When you reply you " "should use the following tone of voice: {tone}" ) client = OpenAI(api_key=os.getenv("OPENAI_KEY")) hl = Humanloop(api_key=os.getenv("HUMANLOOP_KEY")) @hl.tool(path="Science Chatbot/Calculator") def calculator(operation: str, num1: int, num2: int) -> str: """Do arithmetic operations on two numbers.""" if operation == "add": return num1 + num2 elif operation == "subtract": return num1 - num2 elif operation == "multiply": return num1 * num2 elif operation == "divide": return num1 / num2 else: raise NotImplementedError("Invalid operation") @hl.tool(path="Science Chatbot/Random Number") def pick_random_number(): """Pick a random number between 1 and 100.""" return random.randint(1, 100) @hl.prompt( path="Science Chatbot/Agent Prompt", template=PROMPT_TEMPLATE, tools=[ pick_random_number.json_schema, calculator.json_schema, ], ) def call_agent(messages: list[dict[str, str]]) -> str: output = client.chat.completions.create( model="gpt-4o", messages=messages, tools=[ { "type": "function", # @tool decorated functions have a json_schema defined "function": 
calculator.json_schema, }, { "type": "function", "function": pick_random_number.json_schema, }, ], temperature=0.8, ) # Check if tool calls are present in the output if output.choices[0].message.tool_calls: for tool_call in output.choices[0].message.tool_calls: arguments = json.loads(tool_call.function.arguments) if tool_call.function.name == "calculator": result = calculator(**arguments) elif tool_call.function.name == "pick_random_number": result = pick_random_number(**arguments) else: raise NotImplementedError("Invalid tool call") return f"[TOOL CALL] {result}" return output.choices[0].message.content @hl.flow(path="Science Chatbot/Agent Flow", attributes={"version": "0.0.1"}) def chat(): messages = [ { "role": "system", "content": hl.prompts.populate_template( template=PROMPT_TEMPLATE, inputs={ "topics": "science", "tone": "cool surfer dude", } ), }, ] input_output_pairs = [] while True: user_input = input("You: ") input_output = [user_input] if user_input == "exit": break messages.append({"role": "user", "content": user_input}) response = call_agent(messages=messages) messages.append({"role": "assistant", "content": str(response)}) input_output.append(str(response)) print(f"Agent: {response}") input_output_pairs.append(input_output) return json.dumps(input_output_pairs) if __name__ == "__main__": chat() ``` After running this code, the full trace of the agent will be visible on Humanloop immediately: ![Decorators Example](file:855b9a7a-2619-4d2d-bfd7-1b594f661ef4) This decorator logging also works natively with our existing [offline eval workflows](https://humanloop.com/docs/v5/quickstart/evals-in-code). If you first instrument your AI feature with decorators and then subsequently want to run evals against it, you can just pass it in as the callable to `hl.evaluations.run(...)`. The logs generated using the decorators will automatically be picked up by the Eval Run. Similar functionality coming soon to TypeScript! 
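To give a feel for what the `@tool` decorator works from, the sketch below derives a minimal OpenAI-style function schema from a function's signature and docstring. This is illustrative only: `make_json_schema` is a hypothetical helper, not the SDK's actual implementation, and the real `json_schema` attribute may differ in shape and detail.

```python
import inspect

def make_json_schema(fn):
    """Build a minimal OpenAI-style function schema from a Python
    function's signature and docstring (illustrative only -- not the
    Humanloop SDK's real implementation)."""
    type_map = {int: "integer", float: "number", str: "string", bool: "boolean"}
    # Map each parameter's type annotation to a JSON Schema type,
    # defaulting to "string" for unannotated or unknown types.
    properties = {
        name: {"type": type_map.get(param.annotation, "string")}
        for name, param in inspect.signature(fn).parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": list(properties),
        },
    }

def calculator(operation: str, num1: int, num2: int) -> float:
    """Do arithmetic operations on two numbers."""
    ops = {"add": num1 + num2, "subtract": num1 - num2,
           "multiply": num1 * num2, "divide": num1 / num2}
    return ops[operation]

schema = make_json_schema(calculator)
print(schema["name"])                    # calculator
print(schema["parameters"]["required"])  # ['operation', 'num1', 'num2']
```

Because the schema is derived from the source, changing the function's signature or docstring naturally produces a new schema, which is the same property that lets the decorators version Tools automatically.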
## New App Layout

*November 14th, 2024*

We've launched a major redesign of our application interface, focusing on giving you a clearer app structure and more consistent navigation. The new design features refined sidebars, tabs, and side panels that create a more cohesive experience.

The primary views **Dashboard**, **Editor**, **Logs** and **Evaluations** are now located in the top navigation bar as consistent tabs for all files. The sidebar no longer expands to show these views under each file, which gives you a more stable sense of where you are in the app. The new layout also speeds up navigation between files through smarter prefetching of content, and the default view when opening a file is now the **Editor**. These changes lay the foundation for further improvements to come, such as consistent ways to slice and dice the data on different versions through each view.

![The New Layout](file:56f20a21-ecf2-44d5-a285-277f1562d4bc)
*The new layout with the top navigation bar*

![](file:145d42cd-5630-4302-9ed5-cdbf390d36f9)
Before

![](file:f5e830ce-24db-4dbe-b91f-0d3f6956bdb1)
After

## Evals Comparison Mode progress bar

*November 13th, 2024*

We've added a progress bar to the comparison view to help you and your Subject Matter Experts (SMEs) track the progress of your human evaluations more easily.

![Progress bar in Comparison Mode](file:7cdc7440-21de-4629-a116-5a6b82eca299)

You can also now mark individual cells as complete without providing a judgment value for the Evaluator. This is particularly useful when the Evaluator is not applicable for the output under review.

## Evals Comparison Mode filters

*November 8th, 2024*

We've added filters to the comparison view to help you and your domain experts provide judgments more efficiently.

![Filters in Comparison Mode](file:b73b0db5-4e53-40f3-91ea-a0248efc8bb7)

While on the Review tab, click the Filters button to open the filters panel.
You can filter the datapoints by full or partial text matches of the input variable values. In future updates, we will add support for filtering by evaluator judgments. This will provide you with more flexibility in how you view and interact with your evaluations. ## Enhanced Eval Runs *November 5th, 2024* We've extended our concept of an Eval Run to make it more versatile and easier to organise. Before now, every Run in an Evaluation had to use the exact same version of your Dataset. With this change: * We now allow you to change your Dataset between Runs if required; this is particularly useful when trying to iterate on and improve your Dataset during the process. * You can now create a Run using existing logs, without first requiring a Dataset at all; this is great for using evals to help spot check production logs, or to more easily leverage your existing logs for evals. * We've also added a new `Runs` tab within an Evaluation to provide a clearer UI around the setup, progress and organisation of different Runs.

### How to create Runs

In the newly-introduced Runs tab, click on the **+ Run** button to start creating a new Run. This will insert a new row in the table where you can select a Version and Dataset, before clicking **Save** to create the Run.

![Evaluation Runs table](file:9d832a6e-f5af-4668-935c-b52ad22f0f02)

To start using Eval Runs in your code, install the latest version of the Humanloop SDK. In Python, you can use the `humanloop.evaluations.run(...)` [utility](https://humanloop.com/docs/v5/quickstart/evals-in-code) to create a Run. Alternatively, when managing API calls directly yourself, you can create a Run by calling `humanloop.evaluations.create_run(...)` and pass the generated `run_id` into your `humanloop.prompts.log(run_id=run_id, ...)` call. This replaces the previous `evaluation_id` and `batch_id` arguments in the `log` method.

To create a Run for existing logs, use the `humanloop.evaluations.create_run(...)` method without specifying a Dataset and then use `humanloop.prompts.log(run_id=run_id, ...)` to associate your Logs with the Run. Furthermore, if the Logs are already on Humanloop, you can add them to the Run by calling `humanloop.evaluations.add_logs_to_run(id=evaluation.id, run_id=run.id, log_ids=log_ids)` with the `log_ids` of the Logs you want to add.

# October

***

## Evals Comparison Mode shortcuts

*October 25th, 2024*

We've added keyboard shortcuts to the side-by-side comparison view to help you and your domain experts work through review tasks more efficiently. While on the Review tab, press the arrow keys ⬆️, ⬇️, ⬅️, or ➡️ to navigate between judgment cells, and press `Enter` to edit the judgment. You can also press keys `J` and `K` to switch between datapoints without having to use your mouse or trackpad.

***

## Onboarding improvements

*October 24th, 2024*

We've improved how we introduce Humanloop to new users, giving more prominence to the common workflow of triggering evals from code for existing AI apps.
![Onboarding wizard with code snippet](file:87c1742a-a947-4930-88d7-74155ff6bafb)

### Evals code example

When you first enter Humanloop, we'll give you a code snippet demonstrating how to run an eval on a simple Prompt. You can skip this step and continue in the UI, and we'll create an example Prompt for you. If you're new to Humanloop, these examples will introduce the key concepts of Prompts and Evals. If you're already on Humanloop, you can find a similar example in our updated [doc quickstarts](https://humanloop.com/docs/quickstart/evals-in-code).

### Example Evaluators

As part of these improvements, Humanloop will now provide new organizations with a set of example Evaluators to showcase the range of use cases and what the different Evaluator types (AI, code, and Human) can be used for.

### SDK utility

We're also continuing to extend the utilities within our SDKs, adding a `humanloop.prompts.populate_template(...)` utility function to the Python SDK to make it easier to use Prompt templates while making your own calls to the model provider. Coming to TypeScript soon.

***

## Claude 3.5 Sonnet

*October 20th, 2024*

We added same-day support for Anthropic's new Claude 3.5 Sonnet model. This latest version is reported to have improved performance across the board over its predecessor, in particular on coding tasks - read more [here](https://www.anthropic.com/news/3-5-models-and-computer-use).

![Claude-3.5-Sonnet-in-Editor](file:386baf07-4b46-4c83-a2a0-4d4e05d34b4b)

We've added support for both our Anthropic and Bedrock provider integrations.

***

## Humanloop Status Page

*October 18th, 2024*

We've published a public [status page](https://status.humanloop.com) for any incidents or maintenance that may affect the Humanloop app, API, or website moving forward. You can use this page to report problems and subscribe to timely updates on the status of our services. This is part of an ongoing initiative to maintain reliability and trust in our services as we continue to scale.

![Humanloop Status Page](file:b722048f-b5c6-4224-b0fb-a4ff1a61e5d0)

***

## Improved Dataset Upload

*October 17th, 2024*

We've added the ability to map your input and target columns to the columns in your .csv on upload. This provides more flexibility to users who predominantly use the UI to manage Datasets. When you upload a CSV via the Dataset Editor you will see a new mapping step that allows you to select which columns will be mapped to which dataset fields.

![](file:bf423bae-0055-462a-bdd2-824ef7ee7494)

To learn more about Datasets on Humanloop you can check out our [Datasets](/docs/explanation/datasets) page.

***

## Evaluate Flow Log contents

*October 16th, 2024*

Flow Logs allow users to represent complex multi-step apps on Humanloop.
Each step can be a Prompt, Tool, or Evaluator Log; or even another Flow Log. Logs can also be nested so that you can represent your app's execution trace. Prior to now, Evaluators could only reference the inputs and outputs of Flow Logs when providing Judgments. We've now added the ability to access the entire contents of a Flow Log in an Evaluator. This allows you to write more complex Evaluators that can inspect the entire execution trace of your app.

### How to use

The contents of the Flow Log are accessible via the new `children` field. Logs within the trace can also have children depending on the level of nesting in your code. For example, if your Flow Log represents a conversation between a user and a chatbot, you can now write an Evaluator that inspects the entire conversation to make a judgment.

Below is a simple example of checking how many steps there were in the conversation:

```python
def count_number_steps(log):
    """Counts the number of steps in the Flow Log."""
    # This assumes there was no subsequent nesting
    return len(log.get('children', []))
```

Or maybe you want to count how many Logs in the trace returned an empty output, where there may have been nesting:

```python
def count_null_output_logs(log):
    """Count the number of logs in the trace where output is null."""

    def count_null_output(log):
        """Helper function for recursively counting."""
        null_count = 1 if log.get('output') is None else 0
        for child in log.get('children', []):
            null_count += count_null_output(child)
        return null_count

    return count_null_output(log)
```

You can access `children` within any of the Evaluator Editors.

![Evaluate Flow Log Contents](file:89095202-2304-4806-ba38-21cbd3d74898)

***

## Export Datasets

*October 14th, 2024*

You can now export your Datasets on Humanloop to .csv directly from within the UI. This allows you to more rapidly iterate on your Dataset in spreadsheet tools before re-uploading.

![Export Datasets Action](file:72112f76-f9dd-42d0-a22b-bf4b332153b2)

You can find this option both within the Dataset Editor and from your existing versions on the Dataset Dashboard.

***

## Evals Comparison Mode Improvements

*October 12th, 2024*

We're continuing to invest in our review UI to make it easier for you and your domain experts to work through review tasks more efficiently.

### More easily access additional context

You can now hover over judgments in the `Review` tab to access additional context. It's especially useful to get information like instructions and additional outputs (such as when an LLM evaluator outputs rationale alongside the final judgment). You can also click to expand to the full drawer view. ![Comparison Mode Improvements - eval log details](file:1a7ec1a7-26e1-4015-b9f7-118740519167)

### Share deep links to specific datapoints

You can now share links to a specific datapoint in the `Review` tab. Simply select the desired datapoint and copy the URL. Team members who open the link will be directed to the same datapoint.

### Navigate to specific datapoints by index

Using the new text box at the top left of the `Review` tab, you can now jump to a specific datapoint by index. This can be helpful when you need to split up the review amongst multiple team members; each can take a different range of datapoints to review.

![Comparison Mode Improvements - navigate by index](file:f8588aff-72db-4c00-9e8d-79c336720619)

***

## Evaluations SDK Improvements

*October 3rd, 2024*

We've added a new `run` method for evaluations to our SDK. This provides a simpler entry point for evaluating your existing pipelines, both in your CICD and experimentation workflows. This is currently available in [Beta](https://pypi.org/project/humanloop/0.8.6b2/) on Python and will soon be added to the major versions of both the Python and TypeScript SDKs.

In order to run an eval via the SDK, you need to provide:

1. A callable function that takes your inputs/messages and returns a string
2. A [Dataset](https://humanloop.com/docs/evaluation/guides/create-dataset) of inputs/messages to evaluate the function against
3. A set of [Evaluators](https://humanloop.com/docs/evaluation/guides/llm-as-a-judge) to use to provide judgments

Here is a toy example using a simple OpenAI call as the function to evaluate.
```python
from humanloop import Humanloop
from openai import OpenAI
from dotenv import load_dotenv
import os

load_dotenv()
hl = Humanloop(api_key=os.getenv("HUMANLOOP_KEY"))
openai = OpenAI(api_key=os.getenv("OPENAI_KEY"))


# First define the app you're evaluating
def call_digital_twin(person: str, messages: list) -> str:
    system_message = {"role": "system", "content": f"You are {person}"}
    chat_completion = openai.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[system_message] + messages,
    )
    answer = chat_completion.choices[0].message.content
    return answer


# Then run an eval specifying the file location on Humanloop
checks = hl.evaluations.run(
    name="Demo eval",
    file={"path": "digital-twin", "callable": call_digital_twin},
    evaluators=[
        {"path": "Latency"},
        {"path": "Cost"},
        {"path": "Correctness-AI"},
    ],
    dataset={
        "path": "questions",
        "datapoints": [
            {
                "inputs": {"person": "Albert Einstein"},
                "messages": [{"role": "user", "content": "What is your most famous theory?"}],
            },
            {
                "inputs": {"person": "Serena Williams"},
                "messages": [{"role": "user", "content": "What trophy did you win most recently?"}],
            },
            {
                "inputs": {"person": "Marie Curie"},
                "messages": [{"role": "user", "content": "What was your greatest scientific achievement?"}],
            },
            {
                "inputs": {"person": "Leonardo da Vinci"},
                "messages": [{"role": "user", "content": "Which of your inventions are you most proud of?"}],
            },
            {
                "inputs": {"person": "Rosa Parks"},
                "messages": [{"role": "user", "content": "What motivated you to refuse giving up your seat?"}],
            },
        ],
    },
)
```

Running this will provide status info and an eval summary in your CLI, and a new eval will appear on Humanloop at the displayed URL. Running it again under the same `name` will add runs to the existing eval.
```
Navigate to your evaluations: https://app.humanloop.com/project/fl_euUV4BHoXqKWqFyZ1YD1o/evaluations/evr_6WhFaHdkbWH8ZaoddzyRD/stats
Running digital-twin flow callable over the Dataset questions using 4 workers
[########################################] 5/5 (100.00%) | DONE

⏳ Evaluation Progress
Total Logs: 10/10
Total Judgments: 30/30

📊 Eval results for digital-twin
+----------------+----------------------------------------+----------------------------------------+
| Version ID     | flv_VIP1eiemqbpWmlsr84BwN (eb37773f39) | flv_VIP1eiemqbpWmlsr84BwN (9de378a165) |
+----------------+----------------------------------------+----------------------------------------+
| Added          | 2024-10-08 03:46:11                    | 2024-10-08 03:51:52                    |
+----------------+----------------------------------------+----------------------------------------+
| Evaluators     |                                        |                                        |
+----------------+----------------------------------------+----------------------------------------+
| Latency        | 0.02                                   | 0.015                                  |
| Correctness-AI | 1.0                                    | 1.0                                    |
+----------------+----------------------------------------+----------------------------------------+
```

It returns a set of checks you can use to determine whether the eval passed or failed.

### Introduce versioning

The only thing distinguishing different eval runs under the same eval `name` so far is the timestamp at which they were run. It can also be helpful to record the configuration of your system when running the eval. You can include arbitrary config within the `version` field of the `file`. If this `version` has been used before, Humanloop will automatically associate it with your run. If the config is new, we will automatically create a new version of your file for future reference.

```python
import inspect

checks = hl.evaluations.run(
    name="Demo eval",
    file={
        "path": "digital-twin",
        "callable": call_digital_twin,
        "version": {
            "version": "0.2.4",
            "code": inspect.getsource(call_digital_twin)
        }
    },
    dataset={...},
    evaluators=[...],
)
```

### Leverage native Prompts

Using `hl.evaluations.run(...)` will by default create a Flow on Humanloop. Flows have the advantage of being able to represent more complex traces, but can't be run natively within the Humanloop Editor. It's also possible to adapt the `run` call to instead evaluate [Prompts](https://humanloop.com/docs/explanation/prompts) by setting the `type` to `prompt` and providing valid Prompt params in the `version` field.

```python
checks = hl.evaluations.run(
    file={
        "path": "digital-twin-prompt",
        "type": "prompt",
        "version": {
            "model": "gpt-4o-mini",
            "template": [{"role": "system", "content": "You are {{person}}"}]
        }
    },
    dataset={...},
    evaluators=[...],
)
```
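To illustrate the `{{person}}` placeholder syntax used in Prompt templates, a minimal population step might look like the following sketch (illustrative only, not the SDK's `populate_template` implementation):

```python
import re

def populate(template: str, inputs: dict) -> str:
    """Replace {{var}} placeholders with values from `inputs` (illustrative sketch)."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(inputs[m.group(1)]), template)
```

For example, `populate("You are {{person}}", {"person": "Marie Curie"})` yields `"You are Marie Curie"`.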

### Add Evaluator thresholds

You can also now provide a threshold value for each of your Evaluators. If provided, the returned `checks` will indicate whether the average performance of the Evaluator met the threshold.

```python
checks = hl.evaluations.run(
    file={...},
    dataset={...},
    evaluators=[
        {"path": "Latency"},
        {"path": "Cost"},
        {"path": "Correctness-AI", "threshold": 0.5}
    ],
)
```

***

## Manage Directories via API

*October 1st, 2024*

You can now manage directories directly using our [API](https://humanloop.com/docs/api-reference/directories/create). This can be helpful for programmatically managing your workspace for bulk changes or dynamically creating folder structures. To learn more about directories on Humanloop you can check out our [Directories](/docs/explanation/directories) page.

# September

## Evaluations Comparison Mode

*September 25th, 2024*

We've added a side-by-side comparison view to evaluations on Humanloop. This new view enables domain experts to view multiple outputs side-by-side and provide judgments with easy-to-use, configurable controls.

![Comparison Mode in Evaluations](file:6cf980b1-f7d7-4774-8a09-3f42668ccf80)

To start using this new view, choose a File and click on the Evaluations submenu. Select the eval you want to view and navigate to the Review tab. This is particularly useful when trying to compare and contrast the outputs from different versions of your AI apps when providing relative judgments.

***

## Bedrock support for Llama models

*September 20th, 2024*

We've added support for Llama models through our AWS Bedrock integration.

![AWS Bedrock Llama models in model selection dropdown in a Prompt Editor on Humanloop](file:3af2e457-d141-404a-b26c-f15d14040371)

You can now select Llama models in the model selection dropdown in the Prompt Editor and start using them in your Prompts. Humanloop supports tool calling for Llama 3.1 models, helping you to build more powerful AI applications.
***

## Evaluation Names

*September 17th, 2024*

You can now name your Evaluations in the UI and via the API. This is helpful for more easily identifying the purpose of your different Evaluations, especially when multiple teams are running different experiments.

![Evaluation with a name](file:f2b044fe-719d-4072-a3a2-6e769e80636d)

In the API, pass in the `name` field when creating your Evaluation to set the name. Note that names must be unique across all Evaluations for a specific file. In the UI, navigate to your Evaluation and you will see an option to rename it in the header.

***

## Introducing Flows

*September 15th, 2024*

We've added a new key building block to our app with the first release of Flows. This release focuses on improving the code-first workflows for evaluating more complex AI applications like RAG and Agent-based apps.

Flows allow you to version your whole AI application on Humanloop (as opposed to just individual Prompts and Tools) and to log and evaluate the full trace of the important processing steps that occur when running your app. See our [cookbook tutorial](https://github.com/humanloop/humanloop-cookbook/blob/main/tutorials/rag/evaluate-rag-flow.ipynb) for examples of how to use Flows in your code.

![Image of a Flow with logs](file:b1de2a3d-2dc4-4935-b717-3083990e123c)

**What's next**

We'll soon be extending support for allowing Evaluators to access all Logs inside a trace. Additionally, we will build on this by adding UI-first visualisations and management of your Flows.

We'll sunset Sessions in favour of Flows in the near future. Reach out to us for guidance on how to migrate your Session-based workflows to Flows.

***

## Bedrock support for Anthropic models

*September 13th, 2024*

We've introduced a Bedrock integration on Humanloop, allowing you to use Anthropic's models via the Bedrock API, leveraging your AWS-managed infrastructure.
![AWS Bedrock Claude models in model selection dropdown in a Prompt Editor on Humanloop](file:3064b0c0-49f7-4574-b689-f68833428678)

To set this up, head to the API Keys tab in your Organization settings [here](https://app.humanloop.com/account/api-keys). Enter your AWS credentials and configuration.

![Bedrock keys dialog in Humanloop app](file:42bd62c7-046e-4441-9ec5-7d3c2aa862eb)

Once you've set up your Bedrock keys, you can select the Anthropic models in the model selection dropdown in the Prompt Editor and start using them in your Prompts.

***

## OpenAI o1

*September 10th, 2024*

We added same-day support for OpenAI's new models, the o1 series. Unlike their predecessors, the o1 models have been designed to spend more time thinking before they respond. In practice, this means that when you call the API, time and tokens are spent doing chain-of-thought reasoning before you receive a response back.

![o1 in the Humanloop Editor](file:d9aef246-b1a6-481e-be3c-e1f389d989b2)

Read more about this new class of models in OpenAI's [release note](https://openai.com/index/introducing-openai-o1-preview/) and their [documentation](https://platform.openai.com/docs/guides/reasoning).

These models are still in Beta and don't yet support streaming or tool use; the temperature has to be set to 1 and there are specific rate limits in place.

***

## Evaluations CICD Improvements

*September 5th, 2024*

We've expanded our [evaluations API](https://humanloop.com/docs/v5/api-reference/evaluations/get-stats) to include new fields that allow you to more easily check on progress and render summaries of your Evaluations directly in your deployment logs.
The stats response now contains a `status` you can poll and `progress` and `report` fields that you can print:

```
⏳ Evaluation Progress
Total Logs: 40/40
Total Judgments: 120/120

📊 Evaluation Results for evals_demo/answer-flow
+------------------------+---------------------------+---------------------------+
| Version id             | flv_xo7ZxnkkvcFcDJ9pwSrA9 | flv_foxO18ZHEgxQmwYJO4bR1 |
+------------------------+---------------------------+---------------------------+
| Created                | 2024-09-01 14:50:28       | 2024-09-02 14:53:24       |
+------------------------+---------------------------+---------------------------+
| Evaluators             |                           |                           |
+------------------------+---------------------------+---------------------------+
| evals_demo/exact_match | 0.8                       | 0.65                      |
| evals_demo/levenshtein | 7.5                       | 33.5                      |
| evals_demo/reasoning   | 0.3                       | 0.05                      |
+------------------------+---------------------------+---------------------------+

Navigate to Evaluation: https://app.humanloop.com/evaluations/evr_vXjRgufGzwuX37UY83Lr8

❌ Latest score [0.05] below the threshold [0.5] for evaluator evals_demo/reasoning.
❌ Regression of [-0.25] for evaluator evals_demo/reasoning
```

See how you can leverage Evaluations as part of your CICD pipeline to test for regressions in your AI apps in our reference [example](https://github.com/humanloop/humanloop-cookbook/blob/main/tutorials/rag/evaluate_rag_cicd.py).

# August

## Get All Deployed Versions via API

*August 30th, 2024*

We've introduced a new Files API in our v5 API resources that lets you query all files simultaneously. This is useful when managing your workflows on Humanloop and you wish to find all files that match specific criteria, such as having a deployment in a specific environment. Some of the supported filters to search with are file name, file type, and deployed environments. If you find there are additional access patterns you'd find useful, please reach out and let us know.
***

## Update Logs API

*August 29th, 2024*

We've introduced the ability to patch Logs for Prompts and Tools. This can come in useful in scenarios where certain characteristics of your Log are not available until later and you want to add them after the fact, such as the output, or if you have a process of redacting inputs that takes time. Note that not all fields support being patched, so start by referring to our [V5 API References](api-reference/prompts). From there, you can submit updates to your previously created logs.

***

## Search files by path

*August 28th, 2024*

We've extended our search interface to include file paths, allowing you to more easily find and navigate to related files that you've grouped under a directory.

![Search dialog showing file paths](file:6cd51714-4b50-491d-8c3e-1c2052d8783f)

Bring up this search dialog by clicking "Search" near the top of the left-hand sidebar, or by pressing `Cmd+K`.

***

## Updated Gemini 1.5 models

*August 24th, 2024*

Humanloop supports the three newly released Gemini 1.5 models. Start using these improved models by specifying one of the following model names in your Prompts:

* `gemini-1.5-pro-exp-0827` The improved Gemini 1.5 Pro model
* `gemini-1.5-flash-exp-0827` The improved Gemini 1.5 Flash model
* `gemini-1.5-flash-8b-exp-0827` The smaller Gemini 1.5 Flash variant

More details on these models can be viewed [here](https://ai.google.dev/gemini-api/docs/models/experimental-models#available-models).

***

## Custom attributes for Files

*August 20th, 2024*

You can now include custom attributes to determine the unique version of your file definitions on Humanloop. This allows you to make the version depend on data custom to your application that Humanloop may not be aware of. For example, if there are feature flags or identifiers that indicate a different configuration of your system that may impact the behaviour of your Prompt or Tool. `attributes` can be submitted via the v5 API endpoints.
When added, the attributes are visible on the Version Drawer and in the Editor. ![Metadata on versions](file:b80d7939-5df8-42fb-b726-5c00edd0f5cd) *** ## Improved popover UI *August 16th, 2024* We've expanded the information shown in the version popover so that it is easier to identify which version you are working with. This is particularly useful in places like the Logs table and within Evaluation reports, where you may be working with multiple versions of a Prompt, Tool, or Evaluator and need to preview the contents. ![Improved version popover](file:b0a4c497-ee6b-4749-88ee-b345cbf8508c) *** ## Evaluate uncommitted versions *August 15th, 2024* You can now evaluate versions without committing them first. This means you can draft a version of a Prompt in the editor and simultaneously evaluate it in the evaluations tab, speeding up your iteration cycle. This is a global change that allows you to load and use uncommitted versions. Uncommitted versions are created automatically when a new version of a Prompt, Tool, or Evaluator is run in their respective editors or called via the API. These versions will now appear in the version pickers underneath all your committed versions. To evaluate an uncommitted version, simply select it by using the hash (known as the "version id") when setting up your evaluation. ![Uncommitted versions in the version picker](file:3cff5649-968f-4cdc-817f-e2915678e060) *** ## Human Evaluator upgrades *August 14th, 2024* We've made significant upgrades to Human Evaluators and related workflows to improve your ability to gather Human judgments (sometimes referred to as "feedback") in assessing the quality of your AI applications. Here are some of the key improvements: * Instead of having to define a limited feedback schema tied to the settings of a specific Prompt, you can now **define your schema with a Human Evaluator file and reuse it across multiple Prompts and Tools** for both monitoring and offline evaluation purposes. 
* You are no longer restricted to the default types of `Rating`, `Actions` and `Issues` when defining your feedback schemas from the UI. We've introduced a **more flexible Editor interface supporting different return types** and valence controls.
* We've extended the scope of Human Evaluators so that they can now **also be used with Tools and other Evaluators** (useful for validating AI judgments) in the same way as with Prompts.
* We've **improved the Logs drawer UI for applying feedback** to Logs. In particular, we've made the buttons more responsive.

To set up a Human Evaluator, create a new file. Within the file creation dialog, click on **Evaluator**, then click on **Human**. This will create a new Human Evaluator file and bring you to its Editor. Here, you can choose a `Return type` for the Evaluator and configure the feedback schema.

![Tone evaluator set up with options and instructions](file:cca90248-e3b1-4059-bb25-3467a5b78316)

You can then reference this Human Evaluator within the `Monitoring` dropdown of Prompts, Tools, and other Evaluators, as well as when configuring reports in their `Evaluations` tab.

We've set up default `Rating` and `Correction` Evaluators that will be automatically attached to all new and existing Prompts. We've migrated all your existing Prompt-specific feedback schemas to Human Evaluator files, and these will continue to work as before with no disruption.

Check out our updated docs for further details on how to use Human Evaluators:

* [Create a Human Evaluator](/docs/evaluation/guides/human-evaluator)
* [Capture End User Feedback](/docs/observability/guides/capture-user-feedback)
* [Run a Human Evaluation](/docs/evaluation/guides/run-human-evaluation)

***

## Evaluations improvements

*August 13th, 2024*

We've made improvements to help you evaluate the components of your AI applications, quickly see issues and explore the full context of each evaluation.

### A clearer Evaluation tab in Logs

We've given the Log drawer's Evaluation tab a facelift. You can now clearly see what the results are for each of the connected Evaluators. This means that it's now easier to debug the judgments applied to a Log, and if necessary, re-run code/AI Evaluators in-line. ![Log drawer's Evaluation tab with the "Run again" menu open](file:b43c1143-aca1-455e-94a2-59d9a4f1928f)

### Ability to re-run Evaluators

We have introduced the ability to re-run your Evaluators against a specific Log. This feature allows you to more easily address and fix issues with previous Evaluator judgments for specific Logs. To request a re-run, open the menu next to the Evaluator and press the "Run Again" option.

### Evaluation popover

If you hover over an evaluation result, you'll now see a popover with more details about the evaluation including any intermediate results or console logs without context switching. ![Evaluation popover](file:a69ba515-4e4e-4e7e-a82d-8b581b5e8573)

### Updated Evaluator Logs table

The Logs table for Evaluators now supports the functionality you would expect from our other Logs tables. This will make it easier to filter and sort your Evaluator judgments.

***

## More Code Evaluator packages

*August 7th, 2024*

We have expanded the packages available in the Evaluator Python environment. The new packages we've added are: `continuous-eval`, `jellyfish`, `langdetect`, `nltk`, `scikit-learn`, `spacy`, `transformers`. The full list of packages can be seen in our [Python environment reference](/docs/reference/python-environment).

We are actively improving our execution environment so if you have additional packages you'd like us to support, please do not hesitate to get in touch.

***

## OpenAI Structured Outputs

*August 5th, 2024*

OpenAI have introduced [Structured Outputs](https://openai.com/index/introducing-structured-outputs-in-the-api/) functionality to their API. This feature allows the model to more reliably adhere to user-defined JSON schemas for use cases like information extraction, data validation, and more.

We've extended our `/chat` (in v4) and `prompt/call` (in v5) endpoints to support this feature. There are two ways to trigger Structured Outputs in the API:

1. **Tool Calling:** When defining a tool as part of your Prompt definition, you can now include a `strict=true` flag. The model will then output JSON data that adheres to the tool `parameters` schema definition.

```python
"""
Example using our v5 API.
""" from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.prompts.call( path="person-extractor", prompt={ "model": "gpt-4o", "template": [ { "role": "system", "content": "You are an information extractor.", }, ], "tools": [ { "name": "extract_person_object", "description": "Extracts a person object from a user message.", # New parameter to enable structured outputs "strict": True, "parameters": { "type": "object", "properties": { "name": { "type": "string", "name": "Full name", "description": "Full name of the person", }, "address": { "type": "string", "name": "Full address", "description": "Full address of the person", }, "job": { "type": "string", "name": "Job", "description": "The job of the person", } }, # These fields need to be defined in strict mode "required": ["name", "address", "job"], "additionalProperties": False, }, } ], }, messages=[ { "role": "user", "content": "Hey! I'm Jacob Martial, I live on 123c Victoria street, Toronto and I'm a software engineer at Humanloop.", }, ], stream=False, ) ``` 2. **Response Format:** We have expanded the `response_format` with option `json_schema` and a request parameter to also include an optional `json_schema` field where you can pass in the schema you wish the model to adhere to. 
```python
client.prompts.call(
    path="person-extractor",
    prompt={
        "model": "gpt-4o",
        "template": [
            {
                "role": "system",
                "content": "You are an information extractor.",
            },
        ],
        # New parameter to enable structured outputs
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "person_object",
                "strict": True,
                "schema": {
                    "type": "object",
                    "properties": {
                        "name": {
                            "type": "string",
                            "name": "Full name",
                            "description": "Full name of the person",
                        },
                        "address": {
                            "type": "string",
                            "name": "Full address",
                            "description": "Full address of the person",
                        },
                        "job": {
                            "type": "string",
                            "name": "Job",
                            "description": "The job of the person",
                        },
                    },
                    "required": ["name", "address", "job"],
                    "additionalProperties": False,
                },
            },
        },
    },
    messages=[
        {
            "role": "user",
            "content": "Hey! I'm Jacob Martial, I live on 123c Victoria street, Toronto and I'm a software engineer at Humanloop.",
        },
    ],
    stream=False,
)
```

This new response format functionality is only supported by the latest OpenAI model snapshots `gpt-4o-2024-08-06` and `gpt-4o-mini-2024-07-18`. We will also be exposing this functionality in our Editor UI soon!

***

## Improved Code Evaluator Debugging

*August 1st, 2024*

We've added the ability to view the Standard Output (Stdout) for your Code Evaluators. You can now use `print(...)` statements within your code to output intermediate results to aid with debugging. The Stdout is available within the Debug console as you iterate on your Code Evaluator:

![DebugConsole](file:df5dc667-1477-4f9a-8d99-f2d05717a5f8)

Additionally, it is stored against the Evaluator Log for future reference:

![EvaluatorLog](file:925cdff4-ef85-4d66-9a20-172e7928e5dd)

# July

## Select multiple Versions when creating an Evaluation

*July 30th, 2024*

Our Evaluations feature allows you to benchmark Versions of the same File. We've made the form for creating new Evaluations simpler by allowing the selection of multiple Versions in the picker dialog. Columns will be filled or inserted as needed.
As an added bonus, we've made adding and removing columns feel smoother with animations. The form will also scroll to newly-added columns.

![](file:2954d0ac-61e3-4db4-86a1-f70b2650b636)

***

## Faster log queries

*July 19th, 2024*

You should notice that queries against your logs load faster and the tables render more quickly. We're still making more enhancements, so keep an eye out for more speed-ups coming soon!

***

## gpt-4o-mini support

*July 18th, 2024*

OpenAI's latest model, GPT-4o-mini, has been added. It's a smaller version of the GPT-4o model that delivers GPT-4-level performance while being 60% cheaper than gpt-3.5-turbo.

* Cost: 15 cents per million input tokens, 60 cents per million output tokens
* Performance: MMLU score of 82%

***

## Enhanced code Evaluators

*July 10th, 2024*

We've introduced several enhancements to our code Evaluator runtime environment to support additional packages, environment variables, and improved runtime output.

### Runtime environment

Our Code Evaluator now logs both `stdout` and `stderr` when executed. Environment variables can now be accessed via the `os.environ` dictionary, allowing you to retrieve values such as `os.environ['HUMANLOOP_API_KEY']` or `os.environ['PROVIDER_KEYS']`.
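As an illustrative sketch, a Code Evaluator might combine `stdout` logging with an environment variable. The evaluator signature and the `MAX_OUTPUT_CHARS` variable name here are our own assumptions for illustration, not a documented contract:

```python
import os

def evaluator(log):
    # Hypothetical judgment: flag overly long outputs.
    # MAX_OUTPUT_CHARS is an illustrative variable name; in the runtime you
    # could similarly read values such as os.environ['PROVIDER_KEYS'].
    limit = int(os.environ.get("MAX_OUTPUT_CHARS", "500"))
    output = log.get("output", "")
    print(f"output length: {len(output)}")  # captured in the Evaluator's stdout
    return len(output) <= limit
```

Anything printed inside the function shows up in the Debug console and is stored against the Evaluator Log.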

### Python packages

Previously, the selection of Python packages we could support was limited. We are now able to accommodate customer-requested packages. If you have specific package requirements for your eval workflows, please let us know!

# June

## Gemini 1.5 Flash support

*June 30th, 2024*

Gemini 1.5 Flash is Google's most efficient model to date, with a long context window and great latency. While it's smaller than 1.5 Pro, it's highly capable of multimodal reasoning with a 1 million token context window. Find out more about Flash's [availability and pricing](https://blog.google/technology/developers/gemini-gemma-developer-updates-may-2024/).

***

## Committing and deploying UX improvements

*June 24th, 2024*

We've made some improvements to the user experience around committing and deploying changes to your evaluators, tools and datasets. Now, each editor has a consistent and reliable loading and saving experience. You can choose prior versions in the dropdown, making it easier to toggle between versions. And, as you commit, you'll also get the option to immediately deploy your changes. This reduces the number of steps needed to get your changes live.

Additional bug fixes:

* Fixed the flickering issue on the datasets editor
* Fixed the issue where the evaluator editor would lose the state of the debug drawer on commit.

***

## Claude 3.5 Sonnet support

*June 20th, 2024*

Claude 3.5 Sonnet is now in Humanloop! Sonnet is the latest and most powerful model from Anthropic: **2x the speed, 1/5th the cost, yet smarter than Claude 3 Opus.**

Anthropic have now enabled streaming of tool calls, which Humanloop also supports. Add your Anthropic key and select Sonnet in the Editor to give it a go.

![Sonnet](file:9e0738cd-f886-41d2-a4ac-f1283dfacc2e)

***

## Prompt and Tool version drawer in Evaluation reports

*June 18th, 2024*

You can now click on the Prompt and Tool version tags within your Evaluation report to open a drawer with details.
This helps provide the additional context needed when reasoning about the results without having to navigate away.

![Prompt drawer in Evaluation report](file:bfa0bc5f-ac26-42ac-a462-66d515df5ff8)

***

## Status of Human Evaluators

*June 16th, 2024*

With Humanloop Evaluation Reports, you can leverage multiple Evaluators for comparing your Prompt and Tool variations. Evaluators can be of different types: code, AI or Human, and the progress of the report depends on collecting all the required judgments. Human judgments generally take longer than the rest and are collected asynchronously by members of your team.

![Human Evaluators](file:c8f58357-7b61-44af-b827-47f0e0b94486)

To better support this workflow, we've improved the UX around monitoring the status of judgments, with a new progress bar. Your Human Evaluators can now also update the status of the report when they're done.

![Human Evaluators](file:8f8961a2-c989-49bc-a9af-801f221fef1b)

We've also added the ability to cancel ongoing Evaluations that are pending or running. Humanloop will then stop generating Logs and running Evaluators for this Evaluation report.

***

## Faster Evaluations

*June 10th, 2024*

Following the recent upgrades around Evaluation reports, we've improved the batching and concurrency for both calling models and getting the evaluation results. This has increased the speed of Evaluation report generation by 10x, and the reports now update as new batches of logs and evaluations are completed to give a sense of intermediary progress.

***

## Evaluation Comparison Reports

*June 4th, 2024*

We've released Evaluation reports, which allow you to easily compare the performance of your different Prompts or Tools across multiple different [Evaluator](/docs/evaluators) criteria. This generalises our previous concept of Evaluation runs, extending it with multiple complementary changes to help you get more from your evals.
All your existing Evaluation runs have been migrated to Evaluation reports with a single evaluated Prompt or Tool. You can easily extend these existing runs to cover additional Evaluators and Prompts/Tools without having to regenerate existing logs.

### Feature breakdown

We've introduced a new **stats comparison view**, including a radar chart that gives you a quick overview of how your versions compare across all Evaluators. Below it, your evaluated versions are shown in columns, forming a grid with a row per Evaluator you've selected. The performance of each version for a given Evaluator is shown in a chart: bar charts are used for boolean results, while box plots are used for numerical results, providing an indication of variance within your Dataset.

Evaluation reports also introduce an **automatic deduplication** feature, which utilizes previous logs to avoid generating new logs for the same inputs. If a log already exists for a given evaluated-version-and-datapoint pair, it will automatically be reused. You can also override this behavior and force the generation of new logs for a report by creating a **New Batch** in the setup panel.
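Conceptually, the deduplication behaves like memoization keyed on the evaluated version and datapoint. The sketch below is a simplified illustration of that idea, not Humanloop's actual implementation:

```python
def get_or_generate_log(cache, version_id, datapoint_id, generate):
    """Reuse an existing log for a (version, datapoint) pair if one exists."""
    key = (version_id, datapoint_id)
    if key not in cache:
        # No prior log for this pair: generate and store a new one.
        # Creating a New Batch corresponds to bypassing this cache.
        cache[key] = generate()
    return cache[key]
```

Running the same report twice would therefore only call `generate` for pairs that have never been evaluated before.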

### How to use Evaluation reports

To get started, head over to the Evaluations tab of the Prompt you'd like to evaluate, and click **Evaluate** in the top right. This will bring you to a page where you can set up your Evaluation, choosing a Dataset, some versions to Evaluate and compare, and the Evaluators you'd like to use. ![](file:e9130e60-1a67-4c49-a53e-5a10c0f0b08c) When you click **Save**, the Evaluation report will be created, and any missing Logs will be generated.

### What's next

We're planning on improving the functionality of Evaluation reports by adding a more comprehensive detailed view, allowing you to get a more in-depth look at the generations produced by your Prompt versions. Together with this, we'll also be improving Human evaluators so you can better annotate and aggregate feedback on your generations.

# May

## Azure Model Updates

*May 28th, 2024*

You can now access the latest versions of GPT-4 and GPT-4o hosted on Azure in the Humanloop Editor and via our Chat endpoints. Once you've configured your Azure key and endpoint in your organization's provider settings, the model versions will show up in the Editor dropdown as follows:

![](file:75aaa3bb-5462-4061-a757-c42b3ed92eb7)

***

## Improved Logs Filtering

*May 20th, 2024*

We've improved the ability to filter logs by time ranges. The API logs filter parameters `start_date` and `end_date` now support querying with more granularity. Previously the filters were limited to dates, such as **2024-05-22**; now you can use hourly ranges as well, such as **2024-05-22 13:45**.

For more detail, please see the [API documentation](https://docs.humanloop.com/reference/logs_list) on our Logs endpoints.

***

## Monitoring with deployed Evaluators

*May 15th, 2024*

You can now connect deployed Evaluator versions for online monitoring of your Prompts and Tools. This enables you to update Evaluators for multiple Prompts or Tools when you deploy a new Evaluator version.

***

## GPT-4o

*May 13th, 2024*

Same-day support for OpenAI's new GPT-4o (Omni) model! You can now use this within the Humanloop Editor and chat APIs. Find out more from OpenAI [here](https://openai.com/index/hello-gpt-4o/).

***

## Logs for Evaluators

*May 12th, 2024*

For AI and Code Evaluators, you can now inspect and reference their logs as with Prompts and Tools.
This provides greater transparency into how they are being used and makes them easier to debug and improve. Further improvements to Human Evaluators are coming very soon...

***

## Improved Evaluator management

*May 8th, 2024*

Evaluators are now first-class citizens alongside Prompts, Tools and Datasets. This allows for easier re-use and version control, and helps with organising your workspace within directories.

You can create a new Evaluator by choosing **Evaluator** in the File creation dialog in the sidebar or on your home page.

Creating a new Evaluator file

### Migration and backwards compatibility

We've migrated all of your Evaluators previously managed within **Prompts > Evaluations > Evaluators** to new Evaluator files. All your existing Evaluation runs will remain unchanged and online Evaluators will continue to work as before. Moving forward, you should use the new Evaluator file to make edits and manage versions.

# April

## Log drawer in Editor

*April 30th, 2024*

You can now open up the Log drawer directly in the Editor. This enables you to see exactly what was sent to the provider as well as the tokens used and cost. You can also conveniently add feedback and run evaluators on that specific Log, or add it to a dataset.

To show the Logs, just click the arrow icon beside each generated message or completion.

***

## Groq support (Beta)

*April 26th, 2024*

We have introduced support for models available on Groq to Humanloop. You can now try out the blazingly fast generations made with the open-source models (such as Llama 3 and Mixtral 8x7B) hosted on Groq within our Prompt Editor.

Groq achieves [faster throughput](https://artificialanalysis.ai/models/llama-3-instruct-70b/providers) using specialized hardware, their LPU Inference Engine. More information is available in their [FAQ](https://wow.groq.com/why-groq/) and on their website.
Note that their API service, GroqCloud, is still in beta and low rate limits are enforced.

***

## Llama 3

*April 23rd, 2024*

[Llama 3](https://llama.meta.com/llama3/), Meta AI's latest openly-accessible model, can now be used in the Humanloop Prompt Editor.

Llama 3 comes in two variants: an 8-billion parameter model that performs similarly to their previous 70-billion parameter Llama 2 model, and a new 70-billion parameter model. Both of these variants have an expanded context window of 8192 tokens. More details and benchmarks against other models can be found on their [blog post](https://ai.meta.com/blog/meta-llama-3/) and [model card](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md).

Humanloop supports Llama 3 on the Replicate model provider, and on the newly-introduced Groq model provider.

***

## Anthropic tool support (Beta)

*April 18th, 2024*

Our Editor and deployed endpoints now support tool use with Anthropic's Claude 3 models. Tool calling with Anthropic is still in Beta, so streaming is not supported. In order to use tool calling for Claude in Editor, you therefore need to first turn off streaming mode in the menu dropdown to the right of the load button.

***

## Cost, Tokens and Latency

*April 16th, 2024*

We now compute Cost, Tokens and Latency for all Prompt logs by default across all model providers. These values will now appear automatically as graphs in your Dashboard, as columns in your logs table, and will be displayed in our Version and Log drawers.

***

## Cohere Command-r

*April 13th, 2024*

We've expanded the Cohere models with the latest command-r suite. You can now use these models in our Editor and via our APIs once you have set your Cohere API key. More details can be found on their [blog post](https://cohere.com/blog/command-r-plus-microsoft-azure).
***

## Dataset Files & Versions

*April 5th, 2024*

In our recent release, we promoted **Datasets** from being attributes managed within the context of a single Prompt to a **first-class Humanloop file type** alongside Prompts and Tools.

This means you can curate Datasets and share them for use across any of the Prompts in your organization. It also means you get the full power of our **file versioning system**, allowing you to **track and commit every change** you make to Datasets and their Datapoints, with attribution and commit messages inspired by Git.

It's now easy to understand which version of a Dataset was used in a given Evaluation run, and whether the most recent edits to the Dataset were included or not. Read more on how to get started with datasets [here](/docs/datasets).

This change lays the foundation for lots more improvements we have coming to Evaluations in the coming weeks. Stay tuned!

# March

## Mixtral 8x7B

*March 25th, 2024*

Keeping you up to date with the latest open models, we've added support for Mixtral 8x7B to our Editor with a [Replicate integration](https://replicate.com/).

Mixtral 8x7B outperforms LLaMA 2 70B (already supported in Editor) with faster inference, with performance comparable to that of GPT-3.5. More details are available in its [release announcement](https://mistral.ai/news/mixtral-of-experts/).

***

## Additional Replicate models support via API

*March 18th, 2024*

Through the Replicate model provider, additional open models can be used by specifying a model name via the API. The model name should be of a similar form as the ref used when calling `replicate.run(ref)` with [Replicate's Python SDK](https://github.com/replicate/replicate-python).
For example, Vicuna, an open-source chatbot model based on finetuning LLaMA, can be used with the following model name alongside `provider: "replicate"` in your Prompt version:
`replicate/vicuna-13b:6282abe6a492de4145d7bb601023762212f9ddbbe78278bd6771c8b3b2f2a13b`

***

## Surfacing uncommitted Versions

*March 18th, 2024*

We now provide the ability to access your uncommitted Prompt Versions and associated Logs. Adding to our recent changes around the [Commit flow for Versions](https://docs.humanloop.com/changelog/2024/02#committing-prompt-versions), we've added the ability to view any uncommitted versions in your Versions and Logs tables. This can be useful if you need to recover or compare to a previous state during your Prompt engineering and Evaluation workflows.

Uncommitted Versions are created when you make generations in our Editor without first committing what you are working on. In future, it will also be possible to create uncommitted versions when logging or generating using the API.

We've added new filter tabs to the Versions and Logs tables to enable this:

* New **All** and **From Committed Versions** filter tabs on the Logs table.
* New **Committed** and **Uncommitted** tabs on the Versions table of your Prompt dashboard.

***

## Improved navigation & sidebar

*March 7th, 2024*

We've introduced a sidebar for easier navigation between your Prompts and Tools.

As new language models unlock more complex use cases, you'll be setting up and connecting Prompts, Tools, and Evaluators. The new layout better reflects these emerging patterns, and switching between your files is now seamless with the directory tree in the sidebar.

![](file:d8ad4ecd-4713-42ca-9c55-cef001ec3879)

You can also bring up the search dialog with **Cmd+K** and switch to another file using only your keyboard.

***

## Claude 3

*March 6th, 2024*

Introducing same-day support for Claude 3, Anthropic's new industry-leading models.
Read more about the release [here](https://www.anthropic.com/news/claude-3-family). The release contains three models in ascending order of capability: *Haiku*, *Sonnet*, and *Opus*. This suite provides users with different options to balance intelligence, speed, and cost for their specific use-cases.

**Key takeaways:**

1. **Performance** - a new leader. The largest of the 3 models, Opus, is claimed to outperform GPT-4 and Gemini Ultra on key benchmarks such as MMLU and Hellaswag. It even reached 84.9% on the HumanEval coding test set (vs GPT-4's 67%) 🤯
2. **200k context window** with near-perfect recall on selected benchmarks. Opus reports 99% accuracy on the NIAH test, which measures how accurately a model can recall information given to it in a large corpus of data.
3. **Opus has vision**. Anthropic claim that performance here is on par with that of other leading models (i.e. GPT-4 and Gemini). They say it's most useful for inputting graphs, slides etc. in an enterprise setting.
4. **Pricing** - as compared to OpenAI:
   * Opus - $75 (2.5x GPT-4 Turbo)
   * Sonnet - $15 (50% of GPT-4 Turbo)
   * Haiku - \$1.25 (1.6x GPT-3.5)
5. **How you can use it**: The Claude 3 family is now available on Humanloop. Bring your API key to test, evaluate and deploy the publicly available models - Opus and Sonnet.

# February

## New Tool creation flow

*February 26th, 2024*

You can now create Tools in the same way as you create Prompts and Directories. This is helpful as it makes it easier to discover Tools and quicker to create new ones.

![](file:2cc0866c-a4e0-4a1d-bb36-0aa5823a4d6b)

To create a new Tool, simply press the New button from the directory of your choice and select one of our supported Tools, such as the JSON Schema tool for function calling or our Pinecone tool to integrate with your RAG pipelines.

***

## Tool editor and deployments

*February 26th, 2024*

You can now manage and edit your Tools in our new Tool Editor.
This is found in each Tool file and lets you create and iterate on your tools. We have also introduced deployments for Tools, so you can better control which versions of a tool are used within your Prompts.

![](file:f610033e-1e77-41ed-afbb-a2c852f3626d)

### Tool Editor

This replaces the previous Tools section, which has been removed. The editor lets you edit any of the tool types that Humanloop supports (JSON Schema, Google, Pinecone, Snippet, Get API) and commit new Versions.

![](file:f54d905c-5f64-4d19-8498-14cf7c4f6a4d)

### Deployment

Tools can now be deployed. You can pick a version of your Tool and deploy it; when deployed, it can be used and referenced in a Prompt editor.

As an example, suppose you have a version of a Snippet tool with the signature `snippet(key)` and a key/value pair of "*helpful*"/"*You are a helpful assistant*". If you decide you would rather change the value to say "*You are a funny assistant*", you can commit a new version of the Tool with the updated value. This won't affect any of your prompts that reference the Snippet tool until you deploy the second version, after which each prompt will automatically start using the funny assistant prompt.

***

## Prompt labels and hover cards

*February 26th, 2024*

We've rolled out a unified label for our Prompt Versions to allow you to quickly identify your Prompt Versions throughout our UI. As we're rolling out these labels across the app, you'll have a consistent way of interacting with and identifying your Prompt Versions.

Label and hover card for a deployed Prompt Version

The labels show the deployed status and short ID of the Prompt Version. When you hover over these labels, you will see a card that displays the commit message and authorship of the committed version. You'll be able to find these labels in many places across the app, such as in your Prompt's deployment settings, in the Logs drawer, and in the Editor.

The Prompt Version label and hover card in a Prompt Editor

As a quick tip, the color of the checkmark in the label indicates whether the version has been deployed: if the Prompt Version has not been deployed, the checkmark will be black.

A Prompt Version that has not been deployed

***

## Committing Prompt Versions

*February 26th, 2024*

Building on our terminology improvements from Project -> Prompt, we've now updated Model Configs -> Prompt Versions to improve consistency in our UI.
This is part of a larger suite of changes to improve the workflows around how entities are managed on Humanloop and to make them easier to work with and understand. We will also be following up soon with a new and improved major version of our API that encapsulates all of our terminology improvements.

In addition to the terminology update, we've improved our Prompt versioning functionality to now use `commits` that can take `commit messages`, where you can describe how you've been iterating on your Prompts. We've removed the need for names (and our auto-generated placeholder names) in favour of using explicit commit messages.

We'll continue to improve the version control and file types support over the coming weeks. Let us know if you have any questions around these changes!

***

## Online evaluators for monitoring Tools

*February 14th, 2024*

You can now use your online evaluators for monitoring the logs sent to your Tools. The results of this can be seen in the graphs on the Tool dashboard as well as on the Logs tab of the Tool.

![](file:d5af2f67-43e9-4f30-a723-9a189ed5cdb7)

To enable Online Evaluations, follow the steps in our [Evaluate models online](/docs/guides/evaluate-models-online) guide.

***

## Logging token usage

*February 14th, 2024*

We're now computing and storing the number of tokens used in both the requests to and responses from the model. This information is available in the logs table UI and as part of the [log response](/docs/api-reference/logs/get) in the API. Furthermore, you can use the token counts as inputs to your code and LLM based evaluators.

The number of tokens used in the request is called `prompt_tokens` and the number of tokens used in the response is called `output_tokens`. This works consistently across all model providers and whether or not you are streaming the responses (OpenAI, for example, do not return token usage stats when in streaming mode).
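For instance, a code evaluator could use the stored token counts to enforce a budget. The function name, dict-style field access, and the threshold below are illustrative assumptions, not a prescribed interface:

```python
def within_token_budget(log, max_total=1000):
    # prompt_tokens and output_tokens are computed for every Prompt log,
    # so a simple sum gives the total tokens consumed by the request.
    total = log["prompt_tokens"] + log["output_tokens"]
    return total <= max_total
```

This kind of check can flag logs where a prompt change has unexpectedly blown up token usage.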
***

## Prompt Version authorship

*February 13th, 2024*

You can now view who authored a Prompt Version.

Prompt Version authorship in the Prompt Version slideover

We've also introduced a popover showing more Prompt Version details that shows when you mouse over a Prompt Version's ID.

Prompt Version popover in the Logs slideover

Keep an eye out as we'll be introducing this in more places across the app.

***

## Filterable and sortable evaluations overview

*February 9th, 2024*

We've made improvements to the evaluations runs overview page to make it easier for your team to find interesting or important runs.

![](file:8996f6e5-4cba-4849-a40e-3e8ece2d6dbb)

The charts have been updated to show a single datapoint per run. Each chart represents a single evaluator and shows the performance of the prompt tested in that run, so you can see at a glance how the performance of your prompt versions has evolved through time, and visually spot the outliers. Datapoints are color-coded by the dataset used for the run.

The table is now paginated and does not load your entire project's list of evaluation runs in a single page load, so the page should load faster for teams with a large number of runs. The columns in the table are now filterable and sortable, allowing you to - for example - filter just for the completed runs which test two specific prompt versions on a specific dataset, sorted by their performance under a particular evaluator.

Here, we've filtered the table on completed runs that tested three specific prompt versions of interest, and sorted to show those with the worst performance on the Valid JSON evaluator.

***

## Projects rename and file creation flow

*February 8th, 2024*

We've renamed `Projects` to `Prompts` and `Tools` as part of our move towards managing `Prompts`, `Tools`, `Evaluators` and `Datasets` as special-cased and strictly versioned files in your Humanloop directories. This is a purely cosmetic change for now.
Your Projects (now Prompts and Tools) will continue to behave exactly the same. This is the first step in a whole host of app layout, navigation and API improvements we have planned in the coming weeks. If you are curious, please reach out to learn more.

**New creation flow**

We've also updated our file creation flow UI. When you go to create projects, you'll notice they are called Prompts now.

![](file:de5895d2-0b56-4c1f-a543-07867a2679f8)

***

## Control logging level

*February 2nd, 2024*

We've added a `save` flag to all of our endpoints that generate logs on Humanloop so that you can control whether the request and response payloads that may contain sensitive information are persisted on our servers or not.

If `save` is set to `false`, then no `inputs`, `messages` or `outputs` of any kind (including the raw provider request and responses) are stored on our servers. This can be helpful for sensitive use cases where you can't, for example, risk PII leaving your system. Details of the model configuration and any metadata you send are still stored. Therefore you can still benefit from certain types of evaluators such as human feedback, latency and cost, as well as still track important metadata over time that may not contain sensitive information.

This includes all our [chat](/docs/api-reference/chats/create) and [completion](/docs/api-reference/completions/create) endpoint variations, as well as our explicit [log](/docs/api-reference/logs/log) endpoint.

```python
from humanloop import Humanloop

# You need to initialize the Humanloop SDK with your API Keys
humanloop = Humanloop(api_key="")

# humanloop.complete_deployed(...) will call the active model config on your project.
# You can optionally set the save flag to False
complete_response = humanloop.complete_deployed(
    save=False,
    project="",
    inputs={"question": "I have an inquiry about my life insurance policy. Can you help?"},
)

# You can still retrieve the data_id and output as normal
data_id = complete_response.data[0].id
output = complete_response.data[0].output

# And log end user feedback that will still be stored
humanloop.feedback(data_id=data_id, type="rating", value="good")
```

***

## Logging provider request

*February 2nd, 2024*

We're now capturing the raw provider request body alongside the existing provider response for all logs generated from our [deployed endpoints](/docs/guides/chat-using-the-sdk). This provides more transparency into how we map our provider-agnostic requests to specific providers. It can also be effective for helping troubleshoot cases where we return well-handled provider errors from our API.

# January

## Add Evaluators to existing runs

*January 30th, 2024*

You can now add an evaluator to any existing evaluation run. This is helpful in situations where you have no need to regenerate logs across a dataset, but simply want to run new evaluators across the existing run. By doing this instead of launching a fresh run, you can save significant time & costs associated with unnecessarily regenerating logs, especially when working with large datasets.

Use the **Add Evaluator** button to run more evaluators across the logs in an existing evaluation run. This can be done on any run, including those still running or already completed.

***

## Improved Evaluation Debug Console

*January 30th, 2024*

We've enhanced the usability of the debug console when creating and modifying evaluators. Now you can more easily inspect the data you are working with, and understand the root causes of errors to make debugging quicker and more intuitive.

![](file:4f25785f-7928-43cd-bfbc-54d898a74241)

On any row in the debug console, click the arrow next to a testcase to inspect the full entity in a slideover panel.
After clicking **Run** to generate a log from a testcase, you can inspect the full log right from the debug console, giving you clearer access to error messages or the model-generated content, as in the example below. ![](file:2ec00370-c1f6-4cf7-b3ac-09c511aea2f0) *** ## LLM Evaluators *January 30th, 2024* We expect this feature to be most useful in the case of creating and debugging LLM evaluators. You can now inspect the log of the LLM evaluation itself right from the debug console, along with the original testcase and model-generated log, as described above. After clicking **Run** on a testcase in the debug console, you'll see the **LLM Evaluation Log** column populated with a button that opens a full drawer. ![](file:d902e918-6d69-44ed-9d3e-869114b5e21f) This is particularly helpful to verify that your evaluation prompt was correctly populated with data from the underlying log and testcase, and to help understand why the LLM's evaluation output may not have been parsed correctly into the output values. ![](file:cc6932a4-0268-45aa-b453-63b4fc234690) *** ## Tool projects *January 30th, 2024* We have upgraded projects to now also work for tools. Tool projects are automatically created for tools you define as part of your model config [in the Editor](/docs/guides/create-a-tool-in-the-editor) as well as tools [managed at organization level](/docs/guides/link-a-jsonschema-tool). It is now easier to access the logs from your tools and manage different versions like you currently do for your prompts. ![](file:7b88a6b1-f909-4c14-bafb-b8ad19869c7f)

### Tool versioning

In the dashboard view, you can see the different versions of your tools. This will soon be expanded to link you to the source config and provide a more comprehensive view of your tool's usage.

### Logs

Any logs submitted via the SDK that relate to these tools will now appear in the Logs view of these projects. You can see this by following our [sessions guide](https://dash.readme.com/project/humanloop/v4.0/docs/logging-session-traces) and logging a new tool via the SDK. This also works natively with online Evaluators, so you can start to layer in observability for the individual non-LLM components of your session.

### Offline Evaluations via SDK

You can trigger evaluations on your tool projects similar to how you would for an LLM project with model configs. This can be done by logging to the tool project, creating a dataset, and triggering an evaluation run. A good place to start would be the [Set up evaluations using API](/docs/guides/evaluations-using-api) guide.

***

## Support for new OpenAI Models

*January 30th, 2024*

Following [OpenAI's latest model releases](https://openai.com/blog/new-embedding-models-and-api-updates), you will find support for all the latest models in our **Playground** and **Editor**.

### GPT-3.5-Turbo and GPT-4-Turbo

If your API key has access to the models, you'll see the new releases `gpt-4-0125-preview` and `gpt-3.5-turbo-0125` available when working in the Playground and Editor. These models are more capable and cheaper than their predecessors - see the OpenAI release linked above for full details.

![](file:632a7eb3-ea5b-478a-84dd-34f716143aa7)

We also support the new `gpt-4-turbo-preview` model alias, which points to the latest `gpt-4-turbo` model without pinning a specific version.

### Embedding Models

Finally, the new embedding models - `text-embedding-3-small` and `text-embedding-3-large` - are also available for use via Humanloop. The `small` model is 5x cheaper than the previous-generation `ada-002` embedding model, while the larger model significantly improves performance and maps to a much larger embedding space.

***

## Improved evaluation run launcher

*January 19th, 2024*

We've made some usability enhancements to the launch experience when setting up batch generation & evaluation runs. It's now clearer which model configs, datasets and evaluators you've selected. It's also now possible to specify whether you want the logs to be generated in the Humanloop runtime, or if you're going to post the logs from your own infrastructure via the API.

![](file:80ea4aa0-916f-4799-9f4f-ab95f2e2586d)

### Cancellable evaluation runs

Occasionally, you may launch an evaluation run and then realise that you didn't configure it quite the way you wanted. Perhaps you want to use a different model config or dataset, or would like to halt its progress for some other reason. Evaluation runs are now cancellable from the UI - see the screenshot below. This is especially helpful if you're running evaluations over large datasets, where you don't want to unnecessarily consume provider credits.

Cancellation button in the evaluation run page.

***

## Faster offline evaluations

*January 12th, 2024*

We've introduced batching to our offline Evaluations to significantly speed up runtime performance, and we've improved robustness when things go wrong mid-run. In addition to our recent [enhancements to the Evaluations API](changelog:evaluation-api-enhancements), we've also made some significant improvements to our underlying orchestration framework, which should mean your evaluation runs are now faster and more reliable. In particular, we now **batch generations** across the run - by default in groups of five, to be conscious of potential rate limit errors (though this will soon be configurable). Each batch runs its generations concurrently, so you should see much faster completion times - especially in runs across larger datasets.

***

## Evaluation API enhancements

*January 11th, 2024*

We've started the year by enhancing our evaluations API to give you more flexibility for self-hosting whichever aspects of the evaluation workflow you need to run in your own infrastructure - while leaving the rest to us!

### Mixing and matching the Humanloop-runtime with self-hosting

Conceptually, evaluation runs have two components:

1. Generation of logs for the datapoints using the version of the model you are evaluating.
2. Evaluating those logs using Evaluators.

Using the Evaluations API, Humanloop now offers the ability to generate logs either within the Humanloop runtime or self-hosted (see our [guide on external generations for evaluations](/docs/guides/evaluating-externally-generated-logs)). Similarly, evaluation of the logs can be performed in the Humanloop runtime (using evaluators that you can define in-app) or self-hosted (see our [guide on self-hosted evaluations](/docs/guides/self-hosted-evaluations)).

It is now possible to mix and match self-hosted and Humanloop-runtime logs and evaluations in any combination you wish. When creating an Evaluation (via the improved UI dialogue or via the API), you can set the new `hl_generated` flag to `False` to indicate that you are posting the logs from your own infrastructure. You can then also include an evaluator of type `External` to indicate that you will post evaluation results from your own infrastructure. You can now also include multiple evaluators on any run, and these can include a combination of `External` (i.e. self-hosted) and Humanloop-runtime evaluators.

# December

## Human Evaluators

*December 22nd, 2023*

We've introduced a new special type of 'Human' Evaluator to complement our existing code and AI based Evaluators. There are many important evaluation use cases that require input from your internal domain experts or product teams. Typically this is where you would like a gold-standard judgement of how your LLM app is performing. Our new Human Evaluator allows you to trigger a batch evaluation run as normal (from our UI as part of your prompt engineering process, or using our SDK as part of your CI/CD pipeline) and then queues the results ready for a human to provide feedback.
Once completed, the feedback is aggregated to give a top-line summary of how the model is performing. It can also be combined with automatic code and AI evaluators in a single run. Set up your first Human Evaluator run by following [our guide](/docs/guides/evaluating-with-human-feedback).

***

## Return inputs flag

*December 22nd, 2023*

We've introduced a `return_inputs` flag on our chat and completion endpoints to improve performance for larger payloads. As model context windows get increasingly large, for example [Claude with 200k tokens](https://www.anthropic.com/index/claude-2-1), it's important to make sure our APIs remain performant. A contributor to response times is the size of the response payload being sent over the wire. When you set this new flag to false, our responses will no longer contain the `inputs` that were sent to the model and so can be significantly smaller. This is the first in a sequence of changes to add more control to the caller around API behaviour. As always, we welcome suggestions, requests, and feedback should you have any.

***

## Gemini

*December 22nd, 2023*

You can now use Google's latest LLMs, [Gemini](https://blog.google/technology/ai/google-gemini-ai/), in Humanloop.

### Setup

To use Gemini, first go to [https://makersuite.google.com/app/apikey](https://makersuite.google.com/app/apikey) and generate an API key. Then, save this under the "Google" provider on [your API keys page](https://app.humanloop.com/account/api-keys). Head over to the playground, and you should see `gemini-pro` and `gemini-pro-vision` in your list of models. You can also now use Gemini through the Humanloop API's `/chat` endpoints.
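As a sketch, a chat request body selecting Gemini might look like the following. The field names follow the usual chat-request shape and the project name is hypothetical; treat this as an illustration rather than exact API documentation:

```python
# Illustrative /chat request body for Gemini; the project name and exact
# field names are assumptions.
chat_request = {
    "project": "gemini-playground",  # hypothetical project name
    "model_config": {"model": "gemini-pro", "temperature": 0.7},
    "messages": [{"role": "user", "content": "Hello, Gemini!"}],
}
```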

### Features

Gemini offers support for multi-turn chats, tool calling, and multi-modality. However, note that while `gemini-pro` supports multi-turn chats and tool calling, it does not support multi-modality. On the other hand, `gemini-pro-vision` supports multi-modality but not multi-turn chats or tool calling. Refer to [Gemini's docs](https://ai.google.dev/models/gemini) for more details. When providing images to Gemini, we've maintained compatibility with OpenAI's API. This means that when using Humanloop, you can provide images either via an HTTP URL or with a base64-encoded data URL.

***

## Chat sessions in Editor

*December 21st, 2023*

Your chat messages in Editor are now recorded as part of a session so you can more easily keep track of conversations. After chatting with a saved prompt, go to the sessions tab and your messages will be grouped together. If you want to do this with the API, it can be as simple as setting the `session_reference_id` - see [docs on sessions](/docs/guides/logging-session-traces).

***

## Environment logs

*December 13th, 2023*

Logs for your deployed prompts will now be tagged with the corresponding [environment](/docs/guides/deploy-to-an-environment). In your logs table, you can now filter your logs based on environment. You can also pass an `environment` tag when using the explicit [/log](/docs/api-reference/logs/log) endpoint; helpful for use cases such as [orchestrating your own models](/docs/guides/use-your-own-model-provider).

***

## Improved Evaluator UI

*December 12th, 2023*

We've improved the experience of creating and debugging your evaluators. Now that you can [access any property of the objects you're testing](#llm-evals---improved-data-access), we've cleaned up the debug panel to make it easier to view the testcases that you load from a dataset or from your projects. We've also clarified what return types are expected as you create your evaluators.
***

## Prompt diffs

*December 12th, 2023*

Following our recent [introduction of the .prompt file](/docs/guides/prompt-file-format), you can now compare model configs within a project with our new 'diff' view.

![](file:ebd20753-2ccf-48af-b644-432bd427a83e)

As you modify and improve upon your model configs, you might want to remind yourself of the changes that were made between different versions. To do so, you can now select two model configs in your project dashboard and click **Compare** to bring up a side-by-side comparison between them. Alternatively, open the actions menu and click **Compare to deployed**. This diff compares the .prompt files representing the two model configs, and will highlight any differences such as in the model, hyperparameters, or prompt template.

***

## LLM evals - improved data access

*December 12th, 2023*

To help you write better LLM evaluator prompts, you now have finer-grained access to the objects you are evaluating. It's now possible to access any part of the `log` and `testcase` objects using familiar syntax like `log.messages[0].content`. Use the debug console to help understand what the objects look like when writing your prompts.

![](file:441f64d7-a2da-48cb-9e13-afc71667d338)

***

## Tool linking

*December 5th, 2023*

It's now possible to manage tool definitions globally for your organization and re-use them across multiple projects by linking them to your model configs. Prior to this change, if you wanted to re-use the same tool definition across multiple model configs, you had to copy and paste the JSON schema snippet defining the name, description and parameters into your Editor for each case. And if you wanted to make changes to this tool, you would have to recall which model configs it was saved to and update them inline one by one. You can achieve this tool re-use by first defining an instance of our new `JsonSchema` tool, available as another option in your global `Tools` tab.
Here you can define a tool once, such as `get_current_weather(location: string, unit: 'celsius' | 'fahrenheit')`, and then link it to as many model configs as you need within the Editor, as shown below. Importantly, updates to the `get_current_weather` `JsonSchema` tool defined here will then propagate automatically to all the model configs you've linked it to, without having to publish new versions of the prompt. The old behaviour of defining the tool inline as part of your model config definition is still available for the cases where you do want changes in the tool's definition to lead to new versions of the model config.

### Set up the tool

Navigate to the [tools tab](https://app.humanloop.com/hl-test/tools) in your organisation and select the JsonSchema tool card.

![](file:31238313-523e-434e-ae09-f9dd7fb48a39)

With the dialog open, define your tool with `name`, `description`, and `parameters` values. Our guide to using [OpenAI Function Calling in the playground](/docs/guides/create-a-tool-in-the-editor) can be a useful reference in this case.
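For the `get_current_weather` example mentioned above, the tool definition might look like the following, using OpenAI's function-calling JSON schema conventions (the description text is illustrative):

```json
{
  "name": "get_current_weather",
  "description": "Get the current weather for a given location.",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "The city and state, e.g. Tucson, AZ"
      },
      "unit": {
        "type": "string",
        "enum": ["celsius", "fahrenheit"]
      }
    },
    "required": ["location"]
  }
}
```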

### Using the tool

In the Editor of your target project, link the tool by pressing the **Add Tool** button and selecting your `get_current_weather` tool.

![](file:5d68bc89-c6a4-4a23-b137-27a87cf781f2)

***

## Improved log table UI

*December 4th, 2023*

We've updated how we show logs and datapoints in their respective tables. You can now see the stack of inputs and messages in a cleaner interface rather than having them spread into separate columns.

Part of the updated Log Table. Inputs are now stacked with a more consistent and less-busy UI.

There will be more updates soon to improve how logs and prompts are shown in tables and drawers, so if you have ideas for improvements please let us know.

***

## Introducing .prompt files

*December 4th, 2023*

We're introducing a .prompt file format for representing model configs in a format that's both human-readable and easy to work with. For certain use cases it can be helpful for engineers to store their prompts alongside their app's source code in their favourite version control system. The .prompt file is the appropriate artefact for this. These .prompt files can be retrieved through both the API and the Humanloop app.

### Exporting via API

To fetch a .prompt file via the API, make a `POST` request to `https://api.humanloop.com/v4/model-configs/{id}/export`, where `{id}` is the ID of the model config (beginning with `config_`).
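A minimal sketch of that request using only the Python standard library; the `X-API-KEY` header is an assumption about how the API key is supplied:

```python
from urllib import request

def export_url(config_id: str) -> str:
    # Build the export endpoint URL; model config IDs begin with `config_`
    return f"https://api.humanloop.com/v4/model-configs/{config_id}/export"

# Uncomment with a real config ID and API key to fetch the .prompt file:
# req = request.Request(
#     export_url("config_YOUR_CONFIG_ID"),
#     method="POST",
#     headers={"X-API-KEY": "YOUR_API_KEY"},
# )
# with request.urlopen(req) as resp:
#     print(resp.read().decode())  # the .prompt file contents
```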

### Export from Humanloop

You can also export an existing model config as a .prompt file from the app. Find the model config within the project dashboard's table of model configs and open the actions menu by clicking the three dots. Then click **Export .prompt**. (You can also find this button within the drawer that opens after clicking on a model config's row.)

### Editor

Additionally, we've added the ability to view and edit your model configs in a .prompt file format in the Editor. Press **Cmd-Shift-E** in the Editor to swap over to a view of your .prompt file. More details on our .prompt file format are available [here](/docs/guides/prompt-file-format). We'll be building on this and making it more powerful. Stay tuned.

# November

## Improved RBACs

*November 28th, 2023*

We've introduced more levels to our role-based access controls (RBACs). We now distinguish between different roles to help you better manage your organization's access levels and permissions on Humanloop. This is the first in a sequence of upgrades we are making around RBACs.

### Organization roles

Everyone invited to the organization can currently access all projects (controls for per-project access are coming soon). A user can have one of the following roles:

* **Admin:** The highest level of control. They can manage, modify, and oversee the organization's settings and have full functionality across all projects.
* **Developer:** (Enterprise tier only) Can deploy prompts, manage environments, and create and add API keys, but lacks the ability to access billing or invite others.
* **Member:** (Enterprise tier only) The basic level of access. Can create and save prompts and run evaluations, but cannot deploy. Cannot see any org-wide API keys.

### RBACs summary

Here is the full breakdown of roles and access:

| Action                         | Member | Developer | Admin |
| :----------------------------- | :----- | :-------- | :---- |
| Create and manage Prompts      | ✔️     | ✔️        | ✔️    |
| Inspect logs and feedback      | ✔️     | ✔️        | ✔️    |
| Create and manage evaluators   | ✔️     | ✔️        | ✔️    |
| Run evaluations                | ✔️     | ✔️        | ✔️    |
| Create and manage datasets     | ✔️     | ✔️        | ✔️    |
| Create and manage API keys     |        | ✔️        | ✔️    |
| Manage prompt deployments      |        | ✔️        | ✔️    |
| Create and manage environments |        | ✔️        | ✔️    |
| Send invites                   |        |           | ✔️    |
| Set user roles                 |        |           | ✔️    |
| Manage billing                 |        |           | ✔️    |
| Change organization settings   |        |           | ✔️    |

***

## Self hosted evaluations

*November 28th, 2023*

We've added support for managing [evaluations](/docs/guides/evaluate-your-model) outside of Humanloop in your own code. There are certain use cases where you may wish to run your evaluation process outside of Humanloop, where the evaluator itself is defined in your code as opposed to using the Humanloop runtime - for example, an evaluator that uses your own custom model, or one that has to interact with multiple systems. In these cases, it can be difficult to define the evaluator as a simple code or [LLM evaluator](/docs/guides/use-llms-to-evaluate-logs) within your Humanloop project. With this kind of setup, our users have found it very beneficial to leverage the datasets they have curated on Humanloop, as well as consolidate all of the results alongside the prompts stored on Humanloop.

To better support this setting, we're releasing additional API endpoints and SDK utilities. We've added endpoints that allow you to:

* Retrieve your curated datasets
* Trigger evaluation runs
* Send evaluation results for your datasets generated using your custom evaluators

Below is a code snippet showing how you can use the latest version of the Python SDK to log an evaluation run to a Humanloop project.
For a full explanation, see our [guide](/docs/guides/self-hosted-evaluations) on self-hosted evaluations.

```python
from humanloop import Humanloop

API_KEY = ...
humanloop = Humanloop(api_key=API_KEY)

# 1. Retrieve a dataset
DATASET_ID = ...
datapoints = humanloop.datasets.list_datapoints(DATASET_ID).records

# 2. Create an external evaluator
evaluator = humanloop.evaluators.create(
    name="My External Evaluator",
    description="An evaluator that runs outside of Humanloop runtime.",
    type="external",
    arguments_type="target_required",
    return_type="boolean",
)
# Or, retrieve an existing one:
# evaluator = humanloop.evaluators.get(EVALUATOR_ID)

# 3. Retrieve a model config
CONFIG_ID = ...
model_config = humanloop.model_configs.get(CONFIG_ID)

# 4. Create the evaluation run
PROJECT_ID = ...
evaluation_run = humanloop.evaluations.create(
    project_id=PROJECT_ID,
    config_id=CONFIG_ID,
    evaluator_ids=[evaluator.id],
    dataset_id=DATASET_ID,
)

# 5. Iterate the datapoints and trigger generations
logs = []
for datapoint in datapoints:
    log = humanloop.chat_model_config(
        project_id=PROJECT_ID,
        model_config_id=model_config.id,
        inputs=datapoint.inputs,
        messages=[
            {key: value for key, value in dict(message).items() if value is not None}
            for message in datapoint.messages
        ],
        source_datapoint_id=datapoint.id,
    ).data[0]
    logs.append((log, datapoint))

# 6. Evaluate the results.
# In this example, we use an extremely simple evaluation, checking for an exact
# match between the target and the model's actual output.
for (log, datapoint) in logs:
    # The datapoint target tells us the correct answer.
    target = str(datapoint.target["answer"])
    # The log output is what the model said.
    model_output = log.output
    # The evaluation is a boolean, indicating whether the model was correct.
    result = target == model_output
    # Post the result back to Humanloop.
    evaluation_result_log = humanloop.evaluations.log_result(
        log_id=log.id,
        evaluator_id=evaluator.id,
        evaluation_run_external_id=evaluation_run.id,
        result=result,
    )

# 7. Complete the evaluation run.
humanloop.evaluations.update_status(id=evaluation_run.id, status="completed")
```

## Chat response

We've updated the response models of all of our [/chat](/docs/api-reference/chats/create) API endpoints to include an output message object. Up to this point, our `chat` and `completion` endpoints had a unified response model, where the `content` of the assistant message returned by OpenAI models was provided in the common `output` field for each returned sample, and any tool calls made were provided in the separate `tool_calls` field. When making subsequent chat calls, the caller of the API had to use these fields to create a message object to append to the history of messages. To improve this experience, we've now added an `output_message` field to the chat response. This is additive and does not represent a breaking change.

**Before:**

```json
{
  "project_id": "pr_GWx6n0lv6xUu3HNRjY8UA",
  "data": [
    {
      "id": "data_Vdy9ZoiFv2B7iYLIh15Jj",
      "index": 0,
      "output": "Well, I gotta say, ...",
      "raw_output": "Well, I gotta say...",
      "finish_reason": "length",
      "model_config_id": "config_VZAPd51sJH7i3ZsjauG2Q",
      "messages": [
        {
          "content": "what's your best guess...",
          "role": "user"
        }
      ],
      "tool_calls": null
    }
  ],
  ...
}
```

**After:**

```json
{
  "project_id": "pr_GWx6n0lv6xUu3HNRjY8UA",
  "data": [
    {
      "id": "data_Vdy9ZoiFv2B7iYLIh15Jj",
      "output_message": {
        "content": "Well, I gotta say, ...",
        "name": null,
        "role": "assistant",
        "tool_calls": null
      },
      "index": 0,
      "output": "Well, I gotta say, ...",
      "raw_output": "Well, I gotta say...",
      "finish_reason": "length",
      "model_config_id": "config_VZAPd51sJH7i3ZsjauG2Q",
      "messages": [
        {
          "content": "what's your best guess...",
          "role": "user"
        }
      ],
      "tool_calls": null
    }
  ],
  ...
}
```

***

## Snippet tool

*November 28th, 2023*

We've added support for managing common text 'snippets' (or 'passages', or 'chunks') that you want to reuse across your different prompts. This functionality is provided by our new *Snippet tool*. A Snippet tool acts as a simple key/value store, where the key is the name of the common re-usable text snippet and the value is the corresponding text. For example, you may have some common persona descriptions that you found to be effective across a range of your LLM features, or some specific formatting instructions that you find yourself re-using again and again in your prompts. Before now, you would have to copy and paste between your editor sessions and keep track of which projects you edited. Now you can instead inject the text into your prompt using the Snippet tool.

### Set up the tool

Navigate to the [tools tab](https://app.humanloop.com/hl-test/tools) in your organisation and select the Snippet tool card.

![](file:3a3c08c8-ec4b-4bc7-9e16-99c07ad17fa6)

When the dialog opens, start adding your key/value pairs. In the example below, we've defined an Assistants snippet tool that can be used to manage some common persona descriptions we feed to the LLM. You can have up to 10 key/value snippets in a single snippet tool. The **name** field is how you'll access this tool in the editor. By setting the value to *assistant* below, you'll be able to access this specific tool in the editor using the syntax `{{ assistant(key) }}`. The **key** is how you'll access the snippet later, so it's recommended to choose something short and memorable. The **value** is the passage of text that will be included in your prompt when it is sent to the model.

![](file:22ffee95-1993-4c43-816f-aff3b186929e)

### Use the tool

Now your snippets are set up, you can use the tool to populate strings in your prompt templates across your projects. Double curly bracket syntax is used to call a tool in the template.

![](file:273f7006-031b-4709-9eb7-689edd1d5232)

The tool requires an input value to be provided for the key. In our [editor environment](https://app.humanloop.com/playground), the result of the tool will be shown populated top right above the chat. Above we created an Assistants tool, so in the editor you'd call it with `{{ assistant(key) }}`. When adding that, an inputs field appears where you can specify your `key`; in the screenshot above we used the `helpful` key to access the `You are a helpful assistant. You like to tell jokes and if anyone asks your name is Sam.` string. This input field can be used to experiment with different key/value pairs to find the best one to suit your prompt. If you want to see the snippet corresponding to a key, you need to first run the conversation to fetch the string and see it in the preview. If you have a specific key you would like to hardcode in the prompt, you can define it using a literal key value: `{{ ("key") }}`, so in this case it would be `{{ assistant("helpful") }}`.

![](file:1c9dea2e-cab5-44ab-8ee6-68d7458388fc)

This is particularly useful because you can define passages of text once in a snippet tool and reuse them across multiple prompts, without needing to copy/paste them and manually keep them all in sync.
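Putting this together, a prompt template that hardcodes the `helpful` persona from above might start like this (the `{{ topic }}` input variable is purely illustrative):

```
{{ assistant("helpful") }}

Answer the user's question about {{ topic }}.
```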

### What's next

Explore our other tools such as the Google or Pinecone Search. If you have other ideas for helpful integrations, please reach out and let us know.

***

## Quality-of-life app improvements

*November 22nd, 2023*

We've been shipping some quality-of-life "little big things" to improve your everyday usage of the platform.

### Project switcher throughout the app

We've added the project switcher throughout the app so it's easier to jump between projects from anywhere. The project switcher is now available everywhere.

### We've tidied up the Editor

With all the new capabilities and changes (tools, images and more), we need to keep a tight ship to stop things from becoming too busy. We're unifying how we show all your logged generations - in the Editor, and in the logs and sessions. We've also changed the font to Inter to be legible at small font sizes. The Editor and other places have had a clean-up to support the new capabilities of tool calling and vision.

### No more accidental blank messages

We've also fixed issues where empty messages would get appended to the chat.

### We've improved keyboard navigation

The keyboard shortcuts have been updated so it's now easier to navigate the log tables (up/down keys) and to run generations in Editor (cmd/ctrl + enter). **Thanks for all your requests and tips. Please keep the feedback coming!**

***

## Claude 2.1

*November 21st, 2023*

Today, Anthropic released its latest model, **Claude 2.1**, and we've added support for it in the Humanloop app. The new model boasts a 200K context window and a reported 2x decrease in hallucination rates. Additionally, this model introduces tool use to the line-up of Anthropic models. The feature is presently in beta preview, and we'll be adding support for it to Humanloop in the coming days. Read more about Claude 2.1 in the [official release notes](https://www.anthropic.com/index/claude-2-1).

***

## Parallel tool calling

*November 20th, 2023*

We've added support for parallel tool calls in our Editor and API. With the release of the latest OpenAI turbo models, the model can choose to respond with more than one tool call for a given query; this is referred to as [parallel tool calling](https://platform.openai.com/docs/guides/function-calling/parallel-function-calling).

### Editor updates

You can now experiment with this new feature in our Editor:

* Select one of the [new turbo models](/changelog/2023/11#new-openai-turbos) in the model dropdown.
* Specify a tool in your model config on the left hand side.
* Make a request that would require multiple calls to answer correctly.
* As shown here for a weather example, the model will respond with multiple tool calls in the same message.

### API implications

We've added an additional field `tool_calls` to our chat endpoints' response model that contains the array of tool calls returned by the model. The pre-existing `tool_call` parameter remains but is now marked as deprecated. Each element in the `tool_calls` array has an id associated with it. When providing the tool response back to the model for one of the tool calls, the `tool_call_id` must be provided, along with `role=tool` and the `content` containing the tool response.

```python
from humanloop import Humanloop

# Initialize the Humanloop SDK with your API keys
humanloop = Humanloop(api_key="")

# Form of message when providing the tool response to the model
chat_response = humanloop.chat_deployed(
    project_id="",
    messages=[
        {
            "role": "tool",
            "content": "Horribly wet",
            "tool_call_id": "call_dwWd231Dsdw12efoOwdd",
        }
    ],
)
```

***

## Python SDK improvements

*November 20th, 2023*

We've improved the response models of our [Python SDK](https://github.com/humanloop/humanloop-python#raw-http-response) and now give users better control over HTTP timeout settings.

### Improved response model types

As of **versions >= 0.6.0**, our Python SDK methods now return [Pydantic](https://docs.pydantic.dev/latest/) models instead of typed dicts. This improves developer ergonomics around typing and validations.

* Previously, you had to use the `[...]` syntax to access response values:

```python
chat_response = humanloop.chat(
    # parameters
)
print(chat_response["project_id"])
```

* With Pydantic-based response values, you can now use the `.` syntax to access response values. To access the existing response model from versions < 0.6.0, you can still use the `.raw` namespace as specified in the [Raw HTTP Response section](https://github.com/humanloop/humanloop-python#raw-http-response).

```python
chat_response = humanloop.chat(
    # parameters
)
print(chat_response.project_id)
```

> 🚧 Breaking change
>
> Moving to >= 0.6.0 does represent a breaking change in the SDK. The underlying API remains unchanged.

### Support for timeout parameter

The default timeout used by [aiohttp](https://docs.aiohttp.org/en/stable/), which our SDK uses, is 300 seconds. For very large prompts and the latest models, this can cause timeout errors to occur. In the latest version of the Python SDK, we've increased the default timeout value to 600 seconds, and you can update this configuration if you are still experiencing timeout issues by passing the new `timeout` argument to any of the SDK methods. For example, passing `timeout=1000` will override the timeout to 1000 seconds.

## Multi-modal models

*November 20th, 2023*

We've introduced support for multi-modal models that can take both text and images as inputs! We've laid the foundations for multi-modal model support as part of our Editor and API. The first model we've configured is OpenAI's [GPT-4 with Vision (GPT-4V)](https://platform.openai.com/docs/guides/vision/vision). You can now select `gpt-4-vision-preview` in the models dropdown and add images to your chat messages via the API. Let us know what other multi-modal models you would like to see added next!

### Editor quick start

To get started with GPT-4V, go to the Playground, or the Editor within your project:

* Select `gpt-4-vision-preview` in the models dropdown.
* Click the **Add images** button within a user's chat message.
* To add an image, either type a URL into the Image URL textbox or select "Upload image" to upload an image from your computer. If you upload an image, it will be converted to a Base64-encoded data URL that represents the image.
* Note that you can add multiple images.

To view the images within a log, find the log within the logs table and click on it to open it in a drawer. The images in each chat message can be viewed within this drawer.

### API quick start

Assuming you have deployed your `gpt-4-vision-preview` based model config, you can now also include images in messages via the API.

```python
from humanloop import Humanloop

# Initialize the Humanloop SDK with your API keys
humanloop = Humanloop(api_key="")

# humanloop.chat_deployed(...) will call the active model config on your project.
chat_response = humanloop.chat_deployed(
    project_id="",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {
                        "detail": "high",
                        "url": "https://www.acomaanimalclinictucson.com/wp-content/uploads/2020/04/AdobeStock_288690671-scaled.jpeg",
                    },
                }
            ],
        }
    ],
)
```

Any generations made will also be viewable from within your project's logs table.

### Limitations

There are some known limitations with the current preview iteration of OpenAI's GPT-4V model to be aware of:

* Image messages are only supported by the `gpt-4-vision-preview` model in chat mode.
* The GPT-4V model does not support tool calling or JSON mode.
* You cannot add images to the first `system` message.

***

## JSON mode and seed parameters

*November 20th, 2023*

We've introduced new model config parameters for **JSON mode** and **Seed** in our Editor and API. With the introduction of the new [OpenAI turbo models](/docs/changelog/2024/01#support-for-new-openai-models), you can now set additional properties that impact the behaviour of the model: `response_format` and `seed`.

> See further guidance from OpenAI on the JSON response format [here](https://platform.openai.com/docs/guides/text-generation/json-mode) and on reproducing outputs using the seed parameter [here](https://platform.openai.com/docs/guides/text-generation/reproducible-outputs).

These new parameters can now optionally contribute to your model config in our Editor and API. Updated values for `response_format` or `seed` will constitute new versions of your model config on Humanloop. When using JSON mode with the new turbo models, you should still include formatting instructions in your prompt. In fact, if you do not include the word 'json' anywhere in your prompt, OpenAI will currently return a validation error.

***

## LLM Evaluators

*November 17th, 2023*

Until now, it's been possible to trigger LLM-based evaluations by writing Python code that uses the Humanloop API to trigger the LLM generations. Today, to make this increasingly important workflow simpler and more intuitive, we're releasing **LLM Evaluators**, which require no Python configuration. From the Evaluations page, click **New Evaluator** and select LLM Evaluator. You can now choose between the existing Python Evaluators and our new LLM Evaluators.
Instead of a code editor, the right-hand side of the page is now a prompt editor for defining instructions to the LLM Evaluator. Underneath the prompt, you can configure the parameters of the Evaluator (model, temperature, etc.) just like any normal model config.

LLM Evaluator Editor.

In the prompt editor, you have access to a variety of variables that correspond to data from the underlying Log that you are trying to evaluate. These use the usual `{{ variable }}` syntax, and include:

* `log_inputs` - the input variables that were passed in to the prompt template when the Log was generated
* `log_prompt` - the fully populated prompt (if it was a completion mode generation)
* `log_messages` - a JSON representation of the messages array (if it was a chat mode generation)
* `log_output` - the output produced by the model
* `log_error` - if the underlying Log was an unsuccessful generation, the error that was produced
* `testcase` - when in offline mode, the testcase that was used for the evaluation

Take a look at some of the presets we've provided on the left-hand side of the page for inspiration.

LLM Evaluator presets.

You'll likely need to tweak these to fit your use case. At the bottom of the page you can expand the debug console, which can be used to verify that your Evaluator is working as intended. We've got further enhancements coming to this part of the Evaluator editor very soon.

Since an LLM Evaluator is just another model config managed within Humanloop, it gets its own project. When you create an LLM Evaluator, you'll see that a new project is created in your organisation with the same name as the Evaluator. Every time the Evaluator produces a Log as part of its evaluation activity, that output will be visible in the Logs tab of that project.
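For intuition, here is a minimal local sketch of how this `{{ variable }}` substitution works. The template and helper below are purely illustrative; the actual rendering happens inside Humanloop.

```python
import re

def render_evaluator_prompt(template: str, log: dict) -> str:
    # Replace each {{ variable }} with the matching value from the Log dict.
    # Illustrative only: Humanloop performs this substitution server-side.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(log.get(m.group(1), "")),
        template,
    )

template = (
    "You are assessing whether an answer is polite.\n"
    "Answer: {{ log_output }}\n"
    "Reply with True or False."
)
log = {"log_output": "I'm sorry, Dave. I'm afraid I can't do that."}
print(render_evaluator_prompt(template, log))
```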
***

## Improved evaluator editor

*November 17th, 2023*

Given our current focus on delivering a best-in-class evaluations experience, we've promoted the Evaluator editor to a full-page screen in the app. ![](file:d7cc1b61-7809-4f32-be3b-95e29ae08adf)

In the left-hand pane, you'll find drop-downs to:

* Select the mode of the Evaluator - either Online or Offline, depending on whether the Evaluator is intended to run against pre-defined testcases or against live production Logs
* Select the return type of the Evaluator - either boolean or number

Underneath that configuration you'll find a collection of presets.

Preset selector.

***

## Evaluation comparison charts

*November 10th, 2023*

We've added comparison charts to the evaluation runs page to help you better compare your evaluation results. These can be found in the evaluations run tab for each of your projects. ![](file:1a98b753-084b-4f73-9fa4-c95659fc6e52)

### Comparing runs

You can use this to compare specific evaluation runs by selecting them in the runs table. If you don't select any specific rows, the charts show an averaged view of all the previous runs for all the evaluators. ![](file:5ecd0a84-9e67-43f6-84ba-b0832bcd1f68)

### Hiding a chart

To hide the chart for a specific evaluator, hide its column in the runs table; the corresponding chart will be hidden as well. ![](file:811b4f4e-9052-4bcb-b9d9-5e359e04d068)

## Comparison mode in Editor

*November 9th, 2023*

You can now compare generations across Model Configs and inputs in Editor! ![](file:d6fbacc4-4293-4767-b0bd-cd7f6fabc305)

### Quick start

To enter comparison mode, click **New panel** in the dropdown menu; this adds a new blank panel to the right. **Duplicate panel** adds a new panel containing the same information as your current panel.

Clicking **New panel** in the dropdown menu will open a new panel to the right.

Each panel is split into two sections: a Model Config section at the top and an Inputs & Chat section at the bottom. These can be collapsed and resized to suit your experimentation. If you've made changes in one panel, you can copy them using the **Copy** button in the subsection's header and paste them into the target panel using its corresponding **Paste** button. The **Copy** button on the left panel copies the new chat template, and the **Paste** button on the right panel then updates its chat template.

### Other changes

Our recently introduced local history has also been upgraded to save your full session even when you have multiple panels open. The toggle to completion mode and the button to open history have now been moved into the new dropdown menu.

***

## Improved evaluation runs

*November 8th, 2023*

You can now trigger runs against multiple model configs simultaneously. This improves your ability to compare and evaluate changes across your prompts.

We've also removed the summary cards. In their place, we've added a table that supports sorting and rearranging of columns to help you better interrogate results.

### Multiple model configs

To run evaluations against multiple model configs, select the target model configs in the run dialog, similar to before; multiple choices are now supported. This will trigger multiple evaluation runs at once, each with one of the selected model configs as its target. ![](file:93c6ae70-c1a7-4041-89bc-c549f77c3d7c)

### Evaluation table

We've updated our evaluation runs with a table to help you view the outcomes of runs in a more condensed form. It also allows you to sort results and trigger re-runs more easily. As new evaluators are included, a column will be added to the table automatically. ![](file:88fed500-f79e-4d08-90ff-54842ae656c0)

### Re-run previous evaluations

We've exposed the re-run option in the table to allow you to quickly trigger runs again, or to use older runs to preload the dialog and change parameters such as the target dataset or model config. ![](file:ef2abd44-a674-4640-8331-3f160c1e13d6)

## New OpenAI turbos

Off the back of OpenAI's [dev day](https://devday.openai.com/) we've added support for the new turbo [models](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) that were announced:

* **gpt-4-1106-preview**
* **gpt-3.5-turbo-1106**

Both of these models add some nice capabilities:

* Better instruction following performance
* JSON mode that forces the model to return valid JSON
* The ability to call multiple tools at once
* A seed parameter for reproducible outputs

You can now access these in your Humanloop Editor and via the API.

***

## Improved logs drawer

*November 1st, 2023*

You can now resize the message section in the Logs and Session drawers, allowing you to review your logs more easily. ![](file:fbb1b743-be55-483c-951e-d9927aefe4c7)

To resize the message section, drag the new resize bar up or down to give yourself the space needed. To reset the layout back to the default, double-click the bar.

# October

## Local editor history

*October 30th, 2023*

The Humanloop playground and editor now save history locally as you make edits, giving you complete peace of mind that your precisely-crafted prompts will not be lost due to an accidental page reload or navigating away. ![](file:7bafe41f-dcfe-462f-83ee-d945e199724b)

Local history entries will be saved as you use the playground (e.g. as you modify your model config, make generations, or add messages). These will be visible under the **Local** tab within the history side panel. Local history is saved to your browser and is only visible to you.

Our shared history feature, where all playground generations are saved, has now been moved under the **Shared** tab in the history side panel.
***

## Project folders

*October 17th, 2023*

You can now organize your projects into folders! Logging in to Humanloop will bring you to the new page, where you can start arranging your projects. ![](file:da22ab08-5a4b-4481-83d1-b6f6b71c028c)

Navigate into folders and open projects by clicking on the row. To go back to a parent folder, click on the displayed breadcrumbs (e.g. "Projects" or "Development" in the above screenshot).

***

### Search

Searching will give you a list of directories and projects with a matching name. ![](file:5efdcb89-c29c-45a7-a6e2-3d160c51e82e)

### Moving multiple projects

You can move a group of projects and directories by selecting them and moving them together.

1. Select the projects you want to move. Tip: put your cursor on a project row and press \[x] to select the row.
2. To move the selected projects into a folder, drag and drop them into the desired folder. ![](file:0d815007-93ad-4abb-b1f6-e2c510e77827)

To move projects out of a folder and into a parent folder, you can drag and drop them onto the parent folder breadcrumbs: ![](file:cb5c83b0-19bb-4e50-a617-25454f60fe61)

To move projects into deeply nested folders, it might be easier to select your target directory manually. To do so, select the projects you wish to move, click the blue **Actions** button, and then click **Move ...** to bring up a dialog allowing you to move the selected projects. ![](file:1fd075b9-c455-4acd-91af-11392a6befba) ![](file:82a48bdc-5f4d-459c-8c71-ef15d9d0ec20)

***

If you prefer the old view, we've kept it around for now. Let us know what you're missing from the new view so we can improve it. The \[Go to old layout] button will take you to the previous view without folders.

***

## Datasets

*October 16th, 2023*

We've introduced **Datasets** to Humanloop. Datasets are collections of **Datapoints**, which represent input-output pairs for an LLM call.

We recently released **Datasets** in our Evaluations beta under the name **Evaluation Testsets**. We're now promoting the concept to a first-class citizen within your projects. If you've previously been using testsets in the evaluations beta, you'll see that your testsets have now automatically migrated to datasets.

Datasets can be created via CSV upload, by converting from existing Logs in your project, or via API requests. See our [guides on datasets](/docs/guides/datasets), which show how to upload from CSV and perform a batch generation across the whole dataset.

A single dataset that has been added to a project, with 9 datapoints.

Clicking into a dataset, you can explore its datapoints.
Datapoints are pre-defined input-output pairs. A datapoint contains a collection of prompt variable **inputs** (the dynamic values which are interpolated into your model config's prompt template at generation time), a collection of **messages** forming the chat history, and a **target** output representing what we expect the model to produce when it runs on those inputs.

Datasets are useful for evaluating the behaviour of your model configs across a well-defined collection of test cases. You can use datasets to check for regressions as you iterate on your model configs, knowing that you are checking behaviour against a deterministic collection of known important examples. Datasets can also be used as collections of input data for **fine-tuning** jobs.

***

## GET API tool

*October 10th, 2023*

We've added support for a tool that can make GET calls to an external API. This can be used to dynamically retrieve context for your prompts. For example, you may wish to get additional information about a user from your system based on their ID, or look up additional information based on a query from a user.

To set up the tool you need to provide the following details for your API:

| Tool parameter   | Description                                                                | Example                                                              |
| ---------------- | -------------------------------------------------------------------------- | -------------------------------------------------------------------- |
| Name             | A unique tool name to reference as a call signature in your prompts        | `get_api_tool`                                                       |
| URL              | The URL for your API endpoint                                              | [https://your-api.your-domain.com](https://your-api.your-domain.com) |
| API Key Header   | The authentication header required by your endpoint                        | `X-API-KEY`                                                          |
| API Key          | The API key value to use in the authentication header                      | `sk_1234567891011121314`                                             |
| Query parameters | A comma-delimited list of the query parameters to set when making requests | user\_query, client\_id                                              |
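As a rough mental model, the configuration from the table above amounts to a record like the following. The field names in this dict are hypothetical and do not mirror the Humanloop API; the values are the examples from the table.

```python
# Illustrative only: the GET API tool configuration from the table above,
# expressed as a plain dict. Field names are hypothetical.
get_api_tool_config = {
    "name": "get_api_tool",
    "url": "https://your-api.your-domain.com",
    "api_key_header": "X-API-KEY",
    "api_key": "sk_1234567891011121314",
    "query_parameters": ["user_query", "client_id"],
}
print(get_api_tool_config["name"])
```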

### Define your API

First you will need to define your API. For demo purposes, we will create a [mock endpoint in Postman](https://learning.postman.com/docs/designing-and-developing-your-api/mocking-data/setting-up-mock/). Our [mock endpoint](https://www.postman.com/humanloop/workspace/humanloop/request/12831443-9c48e591-b7b2-4a17-b56a-8050a133e1b5) simply returns details about a mock user given their `user_id`.

A call to our mock API in Python is as follows; note the query parameter `user_id`:

```python
import requests

url = "https://01a02b84-08c5-4e53-b283-a8c2beef331c.mock.pstmn.io/users?user_id=01234567891011"
headers = {
    'X-API-KEY': ''
}
response = requests.request("GET", url, headers=headers)
print(response.text)
```

And it returns the response:

```json
{
  "user_id": "012345678910",
  "name": "Albert",
  "company": "Humanloop",
  "role": "Engineer"
}
```

We can now use this tool to inject information for a given user into our prompts.

### Set up the tool

Navigate to the [tools tab](https://app.humanloop.com/hl-test/tools) in your organisation and select the `Get API Call` tool card:

Configure the tool with your API details:

### Use the tool

Now that your API tool is set up, you can use it to populate input variables in your prompt templates. Double curly bracket syntax is used to call a tool in the template. The call signature is the unique tool name with arguments for the query parameters defined when the tool was set up. In our mock example, the signature will be `get_user_api(user_id)`.

An example prompt template using this tool is:

```shell
You are a helpful assistant. Please draft an example job role summary for the following user:

User details: {{ get_user_api(user_id) }}

Keep it short and concise.
```

The tool requires an input value to be provided for `user_id`. In our [playground environment](https://app.humanloop.com/playground) the result of the tool will be shown populated in the top right, above the chat:
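Under the hood, the tool assembles a GET request from its configuration and the supplied input. Here is a stdlib-only sketch of that assembly; the mock URL and header are the ones from the section above, and the actual request is made by Humanloop at generation time.

```python
from urllib.parse import urlencode

# Endpoint and auth header from the tool configuration above.
base_url = "https://01a02b84-08c5-4e53-b283-a8c2beef331c.mock.pstmn.io/users"
headers = {"X-API-KEY": ""}

# The value supplied for the user_id input in the playground.
params = {"user_id": "01234567891011"}
request_url = f"{base_url}?{urlencode(params)}"
print(request_url)
```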

### What's next

Explore more complex examples of context stuffing, such as defining your own custom RAG service.

# September

## Evaluations improvements

*September 15th, 2023*

We've released a couple of minor usability improvements in the evaluations workflow.

### Summary statistics for evaluation runs

When reviewing past runs of evaluations, you can now see summary statistics for each evaluator before clicking into the detail view, allowing for easier comparison between runs. ![](file:e577aa7b-670d-422b-9139-c2e03bf0fb75)

### Re-running evaluations

To enable easier re-running of past evaluations, you can now click the **Re-run** button in the top-right of the evaluation detail view. ![](file:dde0647e-9a75-47a2-a4bb-15fa7629eefb)

***

## Editor - copy tools

*September 15th, 2023*

Our Editor environment lets users incorporate [OpenAI function calling](https://openai.com/blog/function-calling-and-other-api-updates) into their prompt engineering workflows by defining tools. Tools are made available to the model as functions to call, using the same universal JSON schema format. As part of this process it can be helpful to copy the full JSON definition of the tool, whether for quickly iterating on new versions or for pasting into code. You can now do this directly from the tool definition in Editor.

Selecting the Copy button adds the full JSON definition of the tool to your clipboard:

```json
{
  "name": "get_current_weather",
  "description": "Get the current weather in a given location",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "name": "Location",
        "description": "The city and state, e.g. San Francisco, CA"
      },
      "unit": {
        "type": "string",
        "name": "Unit",
        "enum": ["celsius", "fahrenheit"]
      }
    },
    "required": ["location"]
  }
}
```

***

## Single sign-on (SSO)

*September 15th, 2023*

We've added support for SSO to our signup, login and invite flows. By default, users can now use their Gmail accounts to access Humanloop. For our enterprise customers, this also unlocks the ability for us to more easily support their SAML-based single sign-on (SSO) setups.

***

## Organization slug in URLs

*September 13th, 2023*

We have altered routes specific to your organization to include the organization slug. The organization slug is a unique value that was derived from your organization name when your organization was created. For project paths we've dropped the `projects` label in favour of a more specific `project` label.
An example of what this looks like can be seen below: When a request is made to one of the legacy URL paths, we'll redirect it to the corresponding new path. Although the legacy routes are still supported, we encourage you to update your links and bookmarks to adopt the new naming scheme.

### Updating your organization slug

The organization slug can be updated by organization administrators. This can be done by navigating to the [general settings](https://app.humanloop.com/account/organization) page. Please exercise caution when changing this, as it will affect the URLs across the organization. ![](file:b4cf0782-00a8-4ec1-97e3-645c789f356d) # August ## Allow trusted email domains *August 31st, 2023* You can now add **trusted email domains** to your organization. Adding trusted email domains allows new users, when creating an account with a matching email, to join your organization without requiring an invite.

### Managing trusted domains

Adding and removing trusted email domains is controlled from your organization's [General settings](https://app.humanloop.com/account/organization) page. Only Admins can manage trusted domains for an organization.

To add a new trusted domain, press the **Add domain** button and enter the domains trusted by your organization. The domains added here will be checked against new users signing up to Humanloop, and if there is a match those users will be given the option to join your organization.

### Signup for new users

New users signing up to Humanloop will see the following screen when they sign up with an email that matches an organization's trusted email domain. By pressing Join, they will be added to the matching organization.

***

## Editor - insert new message within existing chat

*August 21st, 2023*

You can now insert a new message within an existing chat in our Editor. Click the plus button that appears between the rows.

***

## Claude Instant 1.2

*August 15th, 2023*

We've added support for Anthropic's latest model, Claude Instant 1.2! Claude Instant is the faster and lower-priced yet still very capable model from Anthropic, great for use cases where low latency and high throughput are required. You can use Claude Instant 1.2 directly within the Humanloop playground and deployment workflows.

Read more about the latest Claude Instant model [here](https://www.anthropic.com/index/releasing-claude-instant-1-2).

***

## Offline evaluations with testsets

*August 14th, 2023*

We're continuing to build and release more functionality for Humanloop's evaluations framework!

Our first release provided the ability to run **online evaluators** in your projects. Online evaluators allow you to monitor the performance of your live deployments by defining functions which evaluate all new datapoints in real time as they get logged to the project.

Today, to augment online evaluators, we are releasing **offline evaluators** as the second part of our evaluations framework. Offline evaluators provide the ability to test your prompt engineering efforts rigorously in development and CI. They test the performance of your model configs against a pre-defined suite of **testcases** - much like unit testing in traditional programming. With this framework, you can use test-driven development practices to iterate on and improve your model configs, while monitoring for regressions in CI.
To learn more about how to use online and offline evaluators, check out the [Evaluate your model](/docs/guides/evaluate-your-model) section of our guides. ![](file:c4258aaa-fc78-4b01-bbfc-cf4ba42178da)

# July

## Improved error handling

*July 30th, 2023*

We've unified how errors returned by model providers are handled and enabled error monitoring using [eval functions](/docs/guides/evaluate-your-model).

A common production pain point we see is that hosted SOTA language models can still be flaky at times, especially at real scale. With this release, Humanloop can help users better understand the extent of the problem and guide them towards different model choices to improve reliability.

### Unified errors

Our users integrate the Humanloop `/chat` and `/completion` API endpoints as a unified interface into all the popular model providers, including OpenAI, Anthropic, Azure, Cohere, etc. Their Humanloop projects can then be used to manage model experimentation, versioning, evaluation and deployment.

Errors returned by these endpoints may be raised by the model provider's systems. With this release we've updated our API to map all the error behaviours from different model providers to a unified set of [error response codes](/docs/api-reference/errors#http-error-codes).

We've also extended our error responses to include more details of the error, with fields for `type`, `message`, `code` and `origin`. The `origin` field indicates whether the error originated from one of the integrated model providers' systems, or directly from Humanloop.

For example, for our `/chat` endpoint, where we attempt to call OpenAI with an invalid setting for `max_tokens`, the message returned is that raised by OpenAI and the origin is set to OpenAI:

```json
{
  "type": "unprocessable_entity_error",
  "message": "This model's maximum context length is 4097 tokens. However, you requested 10000012 tokens (12 in the messages, 10000000 in the completion). Please reduce the length of the messages or completion.",
  "code": 422,
  "origin": "OpenAI"
}
```
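Since every error now shares the same shape, client code can branch on these fields generically. A minimal sketch, using the fields documented above; the helper itself is illustrative and not part of the SDK.

```python
def describe_error(error: dict) -> str:
    # Summarise a unified error response. The `origin` field distinguishes
    # failures raised by an integrated model provider from failures raised
    # by Humanloop itself. (Illustrative helper, not part of the SDK.)
    return f"{error.get('type', 'error')} ({error.get('code')}) from {error.get('origin', 'unknown')}"

# The OpenAI max_tokens error from above (message shortened here):
error = {
    "type": "unprocessable_entity_error",
    "message": "This model's maximum context length is 4097 tokens.",
    "code": 422,
    "origin": "OpenAI",
}
print(describe_error(error))  # unprocessable_entity_error (422) from OpenAI
```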

### Monitor model reliability with evals

With this release, all errors returned from the different model providers are now persisted with the corresponding input data as datapoints on Humanloop. Furthermore, this error data is made available for use within [evaluation functions](/docs/guides/evaluate-your-model).

You can now turn on the **Errors** eval function, which tracks overall error rates of the different model variations in your project. Or you can customise this template to track more specific error behaviour.

The Errors evaluation function template is now available.

***

## OpenAI functions in Playground

*July 25th, 2023*

We've added support for [OpenAI functions](https://platform.openai.com/docs/guides/gpt/function-calling) to our playground!

This builds on our [API support](https://humanloop.com/docs/changelog/2023/07#openai-functions-api) and allows you to easily experiment with OpenAI functions within our playground UI.

OpenAI functions are implemented as [tools](https://humanloop.com/docs/guides/set-up-semantic-search) on Humanloop. Tools follow the same universal [json-schema](https://json-schema.org/) definition as OpenAI functions. You can now define tools as part of your model configuration in the playground. These tools are sent as OpenAI functions when running the OpenAI chat models that support function calling.

The model can choose to return a JSON object containing the arguments needed to call a function. This object is displayed as a special assistant message within the playground. You can then provide the result of the call in a message back to the model to consider, which simulates the function calling workflow.

### Use tools in Playground

Take the following steps to use tools for function calling in the playground:

1. **Find tools:** Navigate to the playground and locate the `Tools` section. This is where you'll be able to manage your tool definitions. ![](file:c5b72921-1e44-42b8-a88c-91dd1f2338f5)
2. **Create a new tool:** Click on the "Add Tool" button. There are two options in the dropdown: create a new tool or start with one of our examples. You define your tool using the [json-schema](https://json-schema.org/) syntax. This represents the function definition sent to OpenAI. ![](file:2323029e-e88b-494b-9338-a4bcb7f2749b)
3. **Edit a tool:** To edit an existing tool, simply click on the tool in the Tools section and make the necessary changes to its json-schema definition. This will result in a new model configuration. ![](file:552110ee-08a4-4e25-91ba-a0a2333831d1)
4. **Run a model with tools:** Once you've defined your tools, you can run the model by pressing the "Run" button.
   1. If the model chooses to call a function, an assistant message will be displayed with the corresponding tool name and arguments to use.
   2. A subsequent `Tool` message is then displayed to simulate sending the results of the call back to the model to consider. ![](file:0467449a-d8f7-4a81-87ab-02e300a8a2d5)
5. **Save your model config with tools** by using the **Save** button. Model configs with tools defined can then be deployed to [environments](/docs/guides/deploy-to-an-environment) as normal.

**Coming soon**

Provide the runtime for your tool under the existing pre-defined [Tools section](https://app.humanloop.com/tools) of your organization on Humanloop.

***

## Llama 2

*July 24th, 2023*

We've added support for Llama 2! You can now select `llama70b-v2` from the model dropdown in the Playground and Editor. You don't currently need to provide an API key or any other special configuration to get Llama 2 access via Humanloop. Llama 2 is available in Playground and Editor for all Humanloop users.
Read more about the latest version of Llama [here](https://ai.meta.com/llama/) and in the [original announcement](https://about.fb.com/news/2023/07/llama-2/).

***

## Claude 2

*July 17th, 2023*

We've added support for Anthropic's latest model, Claude 2.0! Read more about the latest Claude [here](https://www.anthropic.com/index/claude-2).

***

## Evaluators

*July 7th, 2023*

We've added **Evaluators** to Humanloop in beta!

Evaluators allow you to quantitatively define what constitutes a good or bad output from your models. Once set up, you can configure an Evaluator to run automatically across all new datapoints as they appear in your project; or, you can simply run it manually on selected datapoints from the **Data** tab.

We're going to be adding lots more functionality to this feature in the coming weeks, so check back for more!

### Create an Evaluator

If you've been given access to the feature, you'll see a new **Evaluations** tab in the Humanloop app. To create your first evaluation function, select **+ New Evaluator**. In the dialog, you'll be presented with a library of example Evaluators, or you can start from scratch.

We offer a library of example Evaluators to get you started. We'll pick **Valid JSON** for this guide.

Evaluator editor.

In the editor, provide details of your function's name, description and return type. In the code editor, you can provide a function which accepts a `datapoint` argument and should return a value of the chosen type. Currently, the available return types for an Evaluator are `number` and `boolean`. You should ensure that your function returns the expected data type - an error will be raised at runtime if not.

### The `Datapoint` argument

The `datapoint` passed into your function will be a Python `dict` with the following structure:

```python
{
    "id": "data_XXXX",        # Datapoint id
    "model_config": {...},    # Model config used to generate the datapoint
    "inputs": {...},          # Model inputs (interpolated into the prompt)
    "output": "...",          # Generated output from the model
    "provider_latency": 0.6,  # Provider latency in seconds
    "metadata": {...},        # Additional metadata attached to the logged datapoint
    "created_at": "...",      # Creation timestamp
    "feedback": [...]         # Array of feedback provided on the datapoint
}
```

To inspect datapoint dictionaries in more detail, click **Random selection** in the debug console at the bottom of the window. This will load a random set of five datapoints from your project, exactly as they will be passed into the evaluation function.

The debug console - load datapoints to inspect the argument passed into Evaluators.

For this demo, we've created a prompt which asks the model to produce valid JSON as its output. The Evaluator uses a simple `json.loads` call to determine whether the output is validly formed JSON - if this call raises an exception, the output is not valid JSON and we return `False`.

```python
import json

def check_valid_json(datapoint):
    try:
        return json.loads(datapoint["output"]) is not None
    except:
        return False
```
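Because the Evaluator is plain Python, you can smoke-test it locally on hand-written datapoint dicts before pasting it into the editor; for example:

```python
import json

def check_valid_json(datapoint):
    # Same Evaluator as above, reproduced so this snippet runs standalone.
    try:
        return json.loads(datapoint["output"]) is not None
    except Exception:
        return False

samples = [
    {"output": '{"name": "Albert"}'},    # well-formed JSON
    {"output": "Sure! Here you go: {"},  # truncated, invalid JSON
]
results = [check_valid_json(d) for d in samples]
print(results)  # [True, False]
```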

### Debugging

Once you have drafted a Python function, try clicking the run button next to one of the debug datapoints in the debug console. You should shortly see the result of executing your function on that datapoint in the table.

A `True` result from executing the **Valid JSON** Evaluator on the datapoint.

If your Evaluator misbehaves - whether by being invalid Python code, raising an unhandled exception or returning the wrong type - an error will appear in the result column. You can hover over this error to see more details about what went wrong; the exception string is displayed in the tooltip.

Once you're happy with your Evaluator, click **Create** in the bottom left of the dialog.

### Activate / Deactivate an Evaluator

Your Evaluators are available across all your projects. When you visit the **Evaluations** tab from a specific project, you'll see all Evaluators available in your organisation. Each Evaluator has a toggle. If you toggle the Evaluator **on**, it will run on every new datapoint that gets logged to **that** project. (Switch to another project and you'll see that the Evaluator is not yet toggled on if you haven't chosen to do so). You can deactivate an Evaluator for a project by toggling it back off at any time.

### Aggregations and Graphs

At the top of the **Dashboard** tab, you'll see new charts for each activated evaluation function. These display aggregated evaluation results through time for datapoints in the project.

At the bottom of the **Dashboard** tab is a table of all the model configs in your project. That table displays a column for each activated Evaluator in the project. The data in each column is an aggregation of all the evaluation results (by model config) for that Evaluator. This allows you to assess the relative performance of your models.

Evaluation results through time, by model config. In this example, one of the model configs is not producing valid JSON outputs, while the other is doing so about 99% of the time.

### Aggregation

For the purposes of both the charts and the model configs table, aggregations work as follows for the different return types of Evaluators:

* `Boolean`: the percentage of evaluated datapoints returning `True`
* `Number`: the average value across all evaluated datapoints
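A minimal sketch of these two aggregation rules (our own helper for illustration, not Humanloop code):

```python
def aggregate(results):
    # Boolean results -> percentage returning True;
    # numeric results -> average value.
    if all(isinstance(r, bool) for r in results):
        return 100 * sum(results) / len(results)
    return sum(results) / len(results)

print(aggregate([True, True, False, True]))  # 75.0 (percent True)
print(aggregate([2.0, 4.0, 6.0]))            # 4.0 (average)
```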

### Data logs

In the **Data** tab, you'll also see that a column is visible for each activated Evaluator, indicating the result of running the function on each datapoint. The **Data** tab for a project, showing the **Valid JSON** Evaluation Results for a set of datapoints. From this tab, you can choose to re-run an Evaluator on a selection of datapoints. Either use the menu at the far right of a single datapoint, or select multiple datapoints and choose **Run evals** from the **Actions** menu in the top right.

### Available Modules

The following Python modules are available to be imported in your evaluation function:

* `math`
* `random`
* `datetime`
* `json` (useful for validating JSON grammar as per the example above)
* `jsonschema` (useful for more fine-grained validation of JSON output - see the in-app example)
* `sqlglot` (useful for validating SQL query grammar)
* `requests` (useful to make further LLM calls as part of your evaluation - see the in-app example for a suggestion of how to get started)

Let us know if you would like to see more modules available.

***

## Chain LLM calls

*July 5th, 2023*

We've introduced sessions to Humanloop, allowing you to link multiple calls together when building a chain or agent. Using sessions with your LLM calls helps you troubleshoot and improve your chains and agents.

Trace of an Agent's steps logged as a session.

### Adding a datapoint to a session

To log your LLM calls to a session, you just need to define a unique identifier for the session and pass it into your Humanloop calls with `session_reference_id`. For example, using `uuid4()` to generate this ID:

```python
import uuid

session_reference_id = str(uuid.uuid4())

response = humanloop.complete(
    project="sessions_example_assistant",
    model_config={
        "prompt_template": "Question: {{user_request}}\nGoogle result: {{google_answer}}\nAnswer:\n",
        "model": "text-davinci-002",
        "temperature": 0,
    },
    inputs={"user_request": user_request, "google_answer": google_answer},
    session_reference_id=session_reference_id,
)
```

Similarly, our other methods such as `humanloop.complete_deployed()`, `humanloop.chat()`, and `humanloop.log()` support `session_reference_id`. If you're using our API directly, you can pass `session_reference_id` within the request body of your `POST /v4/completion` etc. endpoints.

### Further details

For a more detailed walkthrough on how to use `session_reference_id`, check out [our guide](/docs/guides/logging-session-traces) that runs through how to record datapoints to a session in an example script.

***

## Introducing Tools

*July 3rd, 2023*

Today we're announcing Tools as a part of Humanloop. Tools allow you to connect an LLM to any API and to an array of data sources to give it extra capabilities and access to private data. Under your organization settings on Humanloop you can now configure and manage tools in a central place. Read more on [our blog](https://humanloop.com/blog/announcing-tools) and see an example of setting up a [tool for semantic search](/docs/guides/set-up-semantic-search).

***

## OpenAI functions API

*July 3rd, 2023*

We've updated our APIs to support [OpenAI function calling](https://platform.openai.com/docs/guides/gpt/function-calling). OpenAI functions are now supported as tools on Humanloop. This allows you to pass tool definitions as part of the model configuration when calling our `chat` and `log` endpoints. For the latest OpenAI models `gpt-3.5-turbo-0613` and `gpt-4-0613`, the model can then choose to output a JSON object containing arguments to call these tools. This unlocks getting more reliable structured data back from the model and makes it easier to create useful agents.

### Recap on OpenAI functions

As described in the [OpenAI documentation](https://platform.openai.com/docs/guides/gpt/function-calling), the basic steps for using functions are:

1. Call one of the models `gpt-3.5-turbo-0613` or `gpt-4-0613` with a user query and a set of function definitions described using the universal [json-schema](https://json-schema.org/) syntax.
2. The model can then choose to call one of the functions provided. If it does, a stringified JSON object adhering to your json schema definition will be returned.
3. You can then parse the string into JSON in your code and call the chosen function with the provided arguments (**NB:** the model may hallucinate or return invalid JSON; be sure to consider these scenarios in your code).
4. Finally, call the model again by appending the function response as a new message. The model can then use this information to respond to the original user query.

OpenAI have provided a simple example in their docs for a `get_current_weather` function, which we will show how to adapt to use with Humanloop:

```python
import openai
import json

# Example dummy function hard coded to return the same weather
# In production, this could be your backend API or an external API
def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    weather_info = {
        "location": location,
        "temperature": "72",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }
    return json.dumps(weather_info)


def run_conversation():
    # Step 1: send the conversation and available functions to GPT
    messages = [{"role": "user", "content": "What's the weather like in Boston?"}]
    functions = [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    ]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        functions=functions,
        function_call="auto",  # auto is default, but we'll be explicit
    )
    response_message = response["choices"][0]["message"]

    # Step 2: check if GPT wanted to call a function
    if response_message.get("function_call"):
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_current_weather": get_current_weather,
        }  # only one function in this example, but you can have multiple
        function_name = response_message["function_call"]["name"]
        function_to_call = available_functions[function_name]
        function_args = json.loads(response_message["function_call"]["arguments"])
        function_response = function_to_call(
            location=function_args.get("location"),
            unit=function_args.get("unit"),
        )

        # Step 4: send the info on the function call and function response to GPT
        messages.append(response_message)  # extend conversation with assistant's reply
        messages.append(
            {
                "role": "function",
                "name": function_name,
                "content": function_response,
            }
        )  # extend conversation with function response
        second_response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            messages=messages,
        )  # get a new response from GPT where it can see the function response
        return second_response


print(run_conversation())
```
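Step 3's caveat about invalid or hallucinated JSON can be handled with defensive parsing before dispatching to a function. A minimal, self-contained sketch; the helper name `parse_function_args` is ours, not part of either API:

```python
import json

def parse_function_args(raw_args, required):
    """Parse model-returned function arguments, rejecting invalid or incomplete JSON."""
    try:
        args = json.loads(raw_args)
    except json.JSONDecodeError:
        return None  # the model returned malformed JSON
    if not isinstance(args, dict):
        return None  # e.g. a bare string or list instead of an arguments object
    if any(key not in args for key in required):
        return None  # a required argument is missing
    return args

print(parse_function_args('{"location": "Boston, MA"}', ["location"]))  # → {'location': 'Boston, MA'}
print(parse_function_args('{"location": ', ["location"]))               # → None (malformed)
print(parse_function_args('{"unit": "celsius"}', ["location"]))         # → None (missing required key)
```

A `None` result can then be turned into a retry or a fallback message rather than an unhandled exception mid-conversation.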

### Using with Humanloop tools

OpenAI functions are treated as tools on Humanloop. Tools conveniently follow the same universal json-schema definition as OpenAI functions. We've expanded the definition of our model configuration to also include tool definitions. Historically, the model config has been made up of the chat template, the choice of base model, and any hyperparameters that change the behaviour of the model. In the case of OpenAI's `gpt-3.5-turbo-0613` and `gpt-4-0613` models, any tools defined as part of the model config are passed through as functions for the model to use. You can now specify these tools when using the Humanloop chat endpoint (as a replacement for OpenAI's ChatCompletion), or when using the Humanloop log endpoint in addition to the OpenAI calls:

### Chat endpoint

We show here how to update the `run_conversation()` method from the OpenAI example to instead use the Humanloop chat endpoint with tools (reusing the `get_current_weather` function defined above):

```python
import json

from humanloop import Humanloop

hl = Humanloop(
    # get your API key here: https://app.humanloop.com/account/api-keys
    api_key="YOUR_API_KEY",
)


def run_conversation():
    # Step 1: send the conversation and available functions to GPT
    messages = [{"role": "user", "content": "What's the weather like in Boston?"}]
    # functions are referred to as tools on Humanloop, but follow the same schema
    tools = [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    ]
    response = hl.chat(
        project="Assistant",
        model_config={"model": "gpt-3.5-turbo-0613", "tools": tools},
        messages=messages,
    )
    response = response.body.data[0]

    # Step 2: check if GPT wanted to call a tool
    if response.get("tool_call"):
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_current_weather": get_current_weather,
        }  # only one function in this example, but you can have multiple
        function_name = response["tool_call"]["name"]
        function_to_call = available_functions[function_name]
        function_args = json.loads(response["tool_call"]["arguments"])
        function_response = function_to_call(
            location=function_args.get("location"),
            unit=function_args.get("unit"),
        )

        # Step 4: send the response back to the model
        messages.append(response)  # extend conversation with assistant's reply
        messages.append(
            {
                "role": "tool",
                "name": function_name,
                "content": function_response,
            }
        )  # extend conversation with tool response
        second_response = hl.chat(
            project="Assistant",
            model_config={"model": "gpt-3.5-turbo-0613", "tools": tools},
            messages=messages,
        )
        return second_response
```

After running this snippet, the model configuration recorded on your project in Humanloop will now track which tools were provided to the model, and the logged datapoints will record the tool call for inspection:

![](file:47fd03e7-92bb-4131-8e45-363e81b28d35)

### Log endpoint

Alternatively, you can use an explicit Humanloop log call alongside your existing OpenAI calls to achieve the same result:

```python
import json

import openai
from humanloop import Humanloop

hl = Humanloop(
    # get your API key here: https://app.humanloop.com/account/api-keys
    api_key="YOUR_API_KEY",
)


def run_conversation():
    # Step 1: send the conversation and available functions to GPT
    messages = [{"role": "user", "content": "What's the weather like in Boston?"}]
    functions = [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    ]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        functions=functions,
        function_call="auto",  # auto is default, but we'll be explicit
    )
    response_message = response["choices"][0]["message"]
    # log the result to Humanloop (functions are passed as tools)
    log_response = hl.log(
        project="Assistant",
        model_config={
            "model": "gpt-3.5-turbo-0613",
            "tools": functions,
        },
        messages=messages,
        tool_call=response_message.get("function_call"),
    )

    # Step 2: check if GPT wanted to call a function
    if response_message.get("function_call"):
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_current_weather": get_current_weather,
        }  # only one function in this example, but you can have multiple
        function_name = response_message["function_call"]["name"]
        function_to_call = available_functions[function_name]
        function_args = json.loads(response_message["function_call"]["arguments"])
        function_response = function_to_call(
            location=function_args.get("location"),
            unit=function_args.get("unit"),
        )

        # Step 4: send the info on the function call and function response to GPT
        messages.append(response_message)  # extend conversation with assistant's reply
        messages.append(
            {
                "role": "function",
                "name": function_name,
                "content": function_response,
            }
        )  # extend conversation with function response
        second_response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            messages=messages,
        )  # get a new response from GPT where it can see the function response
        log_response = hl.log(
            project="Assistant",
            model_config={
                "model": "gpt-3.5-turbo-0613",
                "tools": functions,
            },
            messages=messages,
            output=second_response["choices"][0]["message"]["content"],
        )
        return second_response


print(run_conversation())
```

### Coming soon

Support for defining tools in the playground!

# June

## Deployment environments

*July 27th, 2023*

We've added support for environments to your deployments in Humanloop! This enables you to deploy your model configurations to specific environments. You'll no longer have to duplicate your projects to manage the deployment workflow between testing and production. With environments, you'll have the control required to manage the full LLM deployment lifecycle.

### Enabling environments for your organisation

Every organisation automatically receives a default production environment. Any of your existing projects that had active deployments defined have been automatically migrated over to use the default environment, with no change in behaviour for the APIs. You can create additional environments with custom names by visiting your organisation's [environments page](https://app.humanloop.com/account/environments).

### Creating an environment

Enter a custom name in the create environment dialog. Names must be unique within an organisation. ![](file:fa7e36ff-d8d7-4df2-aebc-c9df55239f16) The environments you define for your organisation will be available for each project and can be viewed in the project dashboard once created. ![](file:3d2795f1-68e0-4a5a-9da7-2ae1b86c7b5e)

### The default environment

By default, the production environment is marked as the Default environment. This means that all API calls targeting the "Active Deployment," such as [Get Active Config](/docs/v4/api-reference/projects/getactiveconfig) or [Chat Deployed](/docs/v4/api-reference/chats/createdeployed) will use this environment. Renaming environments will take immediate effect, so ensure that this change is planned and does not disrupt your production workflows.

### Using environments

Once created on the environments page, environments can be used for each project and are visible in the respective project dashboards. You can deploy directly to a specific environment by selecting it in the **Deployments** section. ![](file:6be2dda8-0a63-4c4f-925e-3c4ebcef3c12) Alternatively, you can deploy to multiple environments simultaneously by deploying a Model Config from either the Editor or the Model Configs table.

### Using environments via API

![](file:711f48cd-5ef0-4a51-9572-cd228bb7657c)

For v4.0 API endpoints that support Active Deployments, such as [Get Active Config](/docs/v4/api-reference/projects/getactiveconfig) or [Chat Deployed](/docs/v4/api-reference/chats/createdeployed), you can now optionally point to a model configuration deployed in a specific environment by including an optional `environment` field. You can find this information in our v4.0 API Documentation or within the environment card in the Project Dashboard under the "Use API" option. Clicking on the "Use API" option will provide code snippets that demonstrate the usage of the `environment` variable in practice.

![](file:739214e6-160c-417e-99c8-aa35aad99ba0)

***

## Improved Python SDK streaming response

*July 20th, 2023*

We've improved our Python SDK's streaming response to contain the datapoint ID. Using the ID, you can now provide feedback to datapoints created through streaming. The `humanloop.chat_stream()` and `humanloop.complete_stream()` methods now yield a dictionary with `output` and `id`:

```python
{'output': '...', 'id': 'data_...'}
```

Install the updated SDK with:

```shell
pip install --upgrade humanloop
```
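The new `{'output': ..., 'id': ...}` chunks can be consumed by accumulating the output while capturing the datapoint ID for a later feedback call. A minimal sketch, simulating the stream locally with plain dicts rather than a live `chat_stream` call:

```python
def collect_stream(chunks):
    """Accumulate streamed output and capture the datapoint ID for later feedback."""
    output, datapoint_id = [], None
    for chunk in chunks:
        output.append(chunk["output"])
        datapoint_id = chunk["id"]  # the same datapoint ID arrives on every chunk
    return "".join(output), datapoint_id

# Simulated chunks in the {'output': ..., 'id': ...} shape yielded by the SDK
chunks = [
    {"output": "Ah", "id": "data_example123"},
    {"output": ", matey!", "id": "data_example123"},
]
text, datapoint_id = collect_stream(chunks)
print(text)          # → "Ah, matey!"
print(datapoint_id)  # → "data_example123"
```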

### Example snippet

```python
import asyncio

from humanloop import Humanloop

humanloop = Humanloop(
    api_key="YOUR_API_KEY",
    openai_api_key="YOUR_OPENAI_API_KEY",
)


async def main():
    response = await humanloop.chat_stream(
        project="sdk-example",
        messages=[
            {
                "role": "user",
                "content": "Explain asynchronous programming.",
            }
        ],
        model_config={
            "model": "gpt-3.5-turbo",
            "max_tokens": -1,
            "temperature": 0.7,
            "chat_template": [
                {
                    "role": "system",
                    "content": "You are a helpful assistant who replies in the style of {{persona}}.",
                },
            ],
        },
        inputs={
            "persona": "the pirate Blackbeard",
        },
    )
    async for token in response.content:
        print(token)  # E.g. {'output': 'Ah', 'id': 'data_oun7034jMNpb0uBnb9uYx'}


asyncio.run(main())
```

***

## OpenAI Azure support

*July 20th, 2023*

We've just added support for Azure deployments of OpenAI models to Humanloop! This update adds the ability to target Microsoft Azure deployments of OpenAI models in the playground and your projects. To set this up, visit your [organization's settings](https://app.humanloop.com/account/api-keys).

### Enabling Azure OpenAI for your organization

As a prerequisite, you will need to already be set up with Azure OpenAI Service. See the [Azure OpenAI docs](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) for more details. At the time of writing, access is granted by application only. ![](file:263b44ce-8760-4212-9076-a7ef238118d8) Click the Setup button and provide your Azure OpenAI endpoint and API key. Your endpoint can be found in the Keys & Endpoint section when examining your resource from the Azure portal. Alternatively, you can find the value in Azure OpenAI Studio > Playground > Code View. An example endpoint is: docs-test-001.openai.azure.com. Your API keys can also be found in the Keys & Endpoint section. You can use either KEY1 or KEY2.

### Working with Azure OpenAI models

Once you've successfully enabled Azure OpenAI for your organization, you'll be able to access it through the [playground](https://app.humanloop.com/playground) and in your projects in exactly the same way as your existing OpenAI and/or Anthropic models.

### REST API and Python / TypeScript support

As with other model providers, once you've set up an Azure OpenAI-backed model config, you can call it with the Humanloop [REST API or our SDKs](/docs/api-reference/sdks).

```typescript
import { Humanloop } from "humanloop";

const humanloop = new Humanloop({
  apiKey: "API_KEY",
});

const chatResponse = await humanloop.chat({
  project: "project_example",
  messages: [
    {
      role: "user",
      content: "Write me a song",
    },
  ],
  provider_api_keys: {
    openai_azure: OPENAI_AZURE_API_KEY,
    openai_azure_endpoint: OPENAI_AZURE_ENDPOINT,
  },
  model_config: {
    model: "my-azure-deployed-gpt-4",
    temperature: 1,
  },
});

console.log(chatResponse);
```

In the `model_config.model` field, provide the name of the model that you deployed from the Azure portal (see the note below for important naming conventions when setting up your deployment in the Azure portal). The request will use the stored organization level key and endpoint you configured above, unless you override this on a per-request basis by passing both the endpoint and API key in the `provider_api_keys` field, as shown in the example above.

### Note: Naming Model Deployments

When you deploy a model through the Azure portal, you'll have the ability to give your deployment a unique name. For instance, if you choose to deploy an instance of `gpt-35-turbo` in your OpenAI Service, you may choose to give it an arbitrary name like `my-orgs-llm-model`. In order to use all Humanloop features with your Azure model deployment, you must ensure that your deployments are named either with an unmodified base model name like `gpt-35-turbo`, or the base model name with a custom prefix like `my-org-gpt-35-turbo`. If your model deployments use arbitrary names which do not include a base model name, you may find that certain features, such as setting `max_tokens=-1` in your model configs, fail to work as expected.

***

## Project Editor

*July 13th, 2023*

We've introduced an Editor within each project to make it easier to change prompts and bring in project-specific data. The Editor will load up the currently active model config and save its generations in the project's data table. You can now also bring datapoints directly to the Editor. Select any datapoints you want to bring to the Editor (also via the `x` shortcut) and choose to open them in the Editor (or press `e`). Press `e` while selecting a datapoint to bring it into the Editor. We think this significantly improves the workflow of going from an interesting datapoint to an improved model config. As always, let us know if you have other feedback.

# May

## Cohere

*May 23rd, 2023*

We've just added support for Cohere to Humanloop! This update adds Cohere models to the playground and your projects - just add your Cohere API key in your [organization's settings](https://app.humanloop.com/account/api-keys). As with other providers, each user in your organization can also set a personal override API key, stored locally in the browser, for use in Cohere requests from the Playground.

### Enabling Cohere for your organization

Add your Cohere API key to your organization settings to start using Cohere models with Humanloop.

### Working with Cohere models

Once you've successfully enabled Cohere for your organization, you'll be able to access it through the [playground](https://app.humanloop.com/playground) and in your projects, in exactly the same way as your existing OpenAI and/or Anthropic models.

### REST API and Python / TypeScript support

As with other model providers, once you've set up a Cohere-backed model config, you can call it with the Humanloop [REST API or our SDKs](/docs/api-reference/sdks).

```typescript
import { Humanloop } from "humanloop";

const humanloop = new Humanloop({
  apiKey: "API_KEY",
});

const chatResponse = await humanloop.chat({
  project: "project_example",
  messages: [
    {
      role: "user",
      content: "Write me a song",
    },
  ],
  provider_api_keys: {
    cohere: COHERE_API_KEY,
  },
  model_config: {
    model: "command",
    temperature: 1,
  },
});

console.log(chatResponse);
```

If you don't provide a Cohere API key under the `provider_api_keys` field, the request will fall back on the stored organization level key you configured above.

***

## Improved Python SDK

*May 17th, 2023*

We've just released a new version of our Python SDK supporting our v4 API! This brings support for:

* 💬 Chat mode `humanloop.chat(...)`
* 📥 Streaming support `humanloop.chat_stream(...)`
* 🕟 Async methods `humanloop.acomplete(...)`

[https://pypi.org/project/humanloop/](https://pypi.org/project/humanloop/)

### Installation

`pip install --upgrade humanloop`

### Example usage

```python
complete_response = humanloop.complete(
    project="sdk-example",
    inputs={
        "text": "Llamas that are well-socialized and trained to halter and lead after weaning and are very friendly and pleasant to be around. They are extremely curious and most will approach people easily. However, llamas that are bottle-fed or over-socialized and over-handled as youth will become extremely difficult to handle when mature, when they will begin to treat humans as they treat each other, which is characterized by bouts of spitting, kicking and neck wrestling.[33]",
    },
    model_config={
        "model": "gpt-3.5-turbo",
        "max_tokens": -1,
        "temperature": 0.7,
        "prompt_template": "Summarize this for a second-grade student:\n\nText:\n{{text}}\n\nSummary:\n",
    },
    stream=False,
)
pprint(complete_response)
pprint(complete_response.project_id)
pprint(complete_response.data[0])
pprint(complete_response.provider_responses)
```

### Migration from `0.3.x`

For those coming from an older SDK version, this introduces some breaking changes. A brief highlight of the changes:

* The client initialization step of `hl.init(...)` is now `humanloop = Humanloop(...)`.
* Previously `provider_api_keys` could be provided in `hl.init(...)`. They should now be provided when constructing the `Humanloop(...)` client:

  ```python
  humanloop = Humanloop(
      api_key="YOUR_API_KEY",
      openai_api_key="YOUR_OPENAI_API_KEY",
      anthropic_api_key="YOUR_ANTHROPIC_API_KEY",
  )
  ```

* `hl.generate(...)`'s various call signatures have now been split into individual methods for clarity. The main ones are:
  * `humanloop.complete(project, model_config={...}, ...)` for a completion with the specified model config parameters.
  * `humanloop.complete_deployed(project, ...)` for a completion with the project's active deployment.

# April

## TypeScript SDK

*April 3rd, 2023*

We now have a fully typed TypeScript SDK to make working with Humanloop even easier. [https://www.npmjs.com/package/humanloop](https://www.npmjs.com/package/humanloop) You can use this with your JavaScript, TypeScript or Node projects.

**Installation**

```shell
npm i humanloop
```

**Example usage**

```typescript
import { Humanloop } from "humanloop";

const humanloop = new Humanloop({
  // Defining the base path is optional and defaults to https://api.humanloop.com/v3
  // basePath: "https://api.humanloop.com/v3",
  apiKey: "API_KEY",
});

const chatResponse = await humanloop.chat({
  project: "project_example",
  messages: [
    {
      role: "user",
      content: "Write me a song",
    },
  ],
  provider_api_keys: {
    openai: OPENAI_API_KEY,
  },
  model_config: {
    model: "gpt-4",
    temperature: 1,
  },
});

console.log(chatResponse);
```

# March

## Keyboard shortcuts and datapoint links

*March 30th, 2023*

We've added keyboard shortcuts to the datapoint viewer:

* `g` for good
* `b` for bad
* `j` / `k` for next/prev

This should help you quickly annotate data within your team.
You can also link to a specific datapoint in the URL now as well.

***

## ChatGPT support

*March 2nd, 2023*

ChatGPT is here! It's called `gpt-3.5-turbo`. Try it out today in the playground and on the generate endpoint. It's faster and 10x cheaper than `text-davinci-003`.

# February

## Faster datapoints table loading

*February 20th, 2023*

The initial datapoints table is now twice as fast to load! And it will continue to get faster.

***

## Ability to open datapoint in playground

*February 20th, 2023*

Added a way to go from the datapoint drawer to the playground with that datapoint loaded. Very convenient for trying tweaks to a model config or understanding an issue, without copy-pasting.