# Humanloop Docs

## Docs

- [Humanloop is the LLM Evals Platform for Enterprises](https://humanloop.com/docs/v5/getting-started/overview.mdx): Learn how to use Humanloop for prompt engineering, evaluation and monitoring. Comprehensive guides and tutorials for LLMOps.
- [Quickstart](https://humanloop.com/docs/v5/quickstart.mdx): Quickstart guides for evaluating and instrumenting your LLM apps.
- [Evals in code](https://humanloop.com/docs/v5/quickstart/evals-in-code.mdx): Getting up and running with Humanloop is quick and easy. This guide will explain how to set up evaluations on Humanloop and use them to iteratively improve your applications.
- [Evals in the UI](https://humanloop.com/docs/v5/quickstart/evals-in-ui.mdx): Getting up and running with Humanloop is quick and easy. This guide will explain how to set up evaluations through the Humanloop UI and use them to iteratively improve your applications.
- [Create a Prompt in the UI](https://humanloop.com/docs/v5/quickstart/create-prompt.mdx): This guide will show you how to use Humanloop to quickly create a new prompt and experiment with it.
- [Set up logging](https://humanloop.com/docs/v5/quickstart/set-up-logging.mdx): Use Humanloop to add logging to an AI project.
- [Humanloop Files](https://humanloop.com/docs/v5/explanation/files.mdx): Discover how Humanloop manages your Prompts, Tools, Datasets and other Files, with version control and collaboration to enable you to evaluate and fine-tune your models.
- [Prompts](https://humanloop.com/docs/v5/explanation/prompts.mdx): Discover how Humanloop manages prompts, with version control and rigorous evaluation for better performance.
- [Evaluators](https://humanloop.com/docs/v5/explanation/evaluators.mdx): Learn about LLM Evaluation using Evaluators. Evaluators are functions that can be used to judge the output of Prompts, Tools or other Evaluators.
- [Tools](https://humanloop.com/docs/v5/explanation/tools.mdx): Discover how Humanloop manages tools for use with large language models (LLMs), with version control and rigorous evaluation for better performance.
- [Flows](https://humanloop.com/docs/v5/explanation/flows.mdx): Humanloop Flows trace and evaluate complex AI workflows, from LLM agents to retrieval-augmented generation (RAG). By unifying all components, Flows provide the context needed to debug and iterate with confidence.
- [Datasets](https://humanloop.com/docs/v5/explanation/datasets.mdx): Discover how Humanloop manages datasets, with version control and collaboration to enable you to evaluate and fine-tune your models.
- [Logs](https://humanloop.com/docs/v5/explanation/logs.mdx): Logs contain the inputs and outputs of each time a Prompt, Tool or Evaluator is called.
- [Directories](https://humanloop.com/docs/v5/explanation/directories.mdx): Directories can be used to group together related Files. This is useful for organizing your work as part of prompt engineering and collaboration.
- [Environments](https://humanloop.com/docs/v5/explanation/environments.mdx): Deployment environments enable you to control the deployment lifecycle of your Prompts and other files between development and production environments.
- [Evaluate a RAG app](https://humanloop.com/docs/v5/tutorials/rag-evaluation.mdx): Evaluate a RAG application with Humanloop.
- [Evaluate an agent](https://humanloop.com/docs/v5/tutorials/agent-evaluation.mdx): Evaluate and improve the performance of an LLM agent.
- [Capture user feedback](https://humanloop.com/docs/v5/tutorials/capture-user-feedback.mdx): Collect feedback from your users to improve your AI product.
- [Run an Evaluation via the UI](https://humanloop.com/docs/v5/guides/evals/run-evaluation-ui.mdx): How to use Humanloop to evaluate multiple different Prompts across a Dataset.
- [Run an Evaluation via the API](https://humanloop.com/docs/v5/guides/evals/run-evaluation-api.mdx): In this guide, we will walk through how to programmatically evaluate multiple different Prompts to compare the quality and performance of each version.
- [Upload a Dataset from CSV](https://humanloop.com/docs/v5/guides/evals/upload-dataset-csv.mdx): Learn how to create Datasets in Humanloop to define fixed examples for your projects, and build up a collection of input-output pairs for evaluation and fine-tuning.
- [Create a Dataset via the API](https://humanloop.com/docs/v5/guides/evals/create-dataset-api.mdx): Learn how to create Datasets in Humanloop to define fixed examples for your projects, and build up a collection of input-output pairs for evaluation and fine-tuning.
- [Create a Dataset from existing Logs](https://humanloop.com/docs/v5/guides/evals/create-dataset-from-logs.mdx): Learn how to create Datasets in Humanloop to define fixed examples for your projects, and build up a collection of input-output pairs for evaluation and fine-tuning.
- [Set up a code Evaluator](https://humanloop.com/docs/v5/guides/evals/code-based-evaluator.mdx): Learn how to create a code Evaluator in Humanloop to assess the performance of your AI applications. This guide covers setting up an offline evaluator, writing evaluation logic, and using the debug console.
- [Set up LLM as a Judge](https://humanloop.com/docs/v5/guides/evals/llm-as-a-judge.mdx): Learn how to use LLM as a judge to check for PII in Logs.
- [Set up a Human Evaluator](https://humanloop.com/docs/v5/guides/evals/human-evaluators.mdx): Learn how to set up a Human Evaluator in Humanloop. Human Evaluators allow your subject-matter experts and end-users to provide feedback on Prompt Logs.
- [Run a Human Evaluation](https://humanloop.com/docs/v5/guides/evals/run-human-evaluation.mdx): Collect judgments from subject-matter experts (SMEs) to better understand the quality of your AI product.
- [Manage multiple reviewers](https://humanloop.com/docs/v5/guides/evals/manage-multiple-reviewers.mdx): Learn how to split the work between your SMEs.
- [Compare and Debug Prompts](https://humanloop.com/docs/v5/guides/evals/comparing-prompts.mdx): In this guide, we will walk through comparing the outputs from multiple Prompts side-by-side in the Humanloop Editor environment, using diffs to help with debugging.
- [Set up CI/CD Evaluations](https://humanloop.com/docs/v5/guides/evals/cicd-integration.mdx): Learn how to automate LLM evaluations as part of your CI/CD pipeline using Humanloop and GitHub Actions.
- [Spot-check your Logs](https://humanloop.com/docs/v5/guides/evals/spot-check-logs.mdx): Learn how to use the Humanloop Python SDK to sample a subset of your Logs and create an Evaluation Run to spot-check them.
- [Use external Evaluators](https://humanloop.com/docs/v5/guides/evals/use-external-evaluators.mdx): Integrate your existing evaluation process with Humanloop.
- [Evaluate external logs](https://humanloop.com/docs/v5/guides/evals/evaluate-external-logs.mdx): Run an Evaluation on Humanloop with your own externally generated Logs.
- [Create a Prompt](https://humanloop.com/docs/v5/guides/prompts/create-prompt.mdx): Learn how to create a Prompt in Humanloop using the UI or SDK, version it, and use it to generate responses from your AI models. Prompt management is a key part of the Humanloop platform.
- [Call a Prompt](https://humanloop.com/docs/v5/guides/prompts/call-prompt.mdx): Learn how to call your Prompts that are managed on Humanloop.
- [Log to a Prompt](https://humanloop.com/docs/v5/guides/prompts/log-to-a-prompt.mdx): Learn how to post Logs to a Prompt in Humanloop using the SDK or API.
- [Tool calling in Editor](https://humanloop.com/docs/v5/guides/prompts/tool-calling-editor.mdx): Learn how to use tool calling in your large language models and interact with it in the Humanloop Prompt Editor.
- [Re-use snippets in Prompts](https://humanloop.com/docs/v5/guides/prompts/reusable-snippets.mdx): Learn how to use the Snippet tool to manage common text snippets that you want to reuse across your different prompts.
- [Deploy to an environment](https://humanloop.com/docs/v5/guides/prompts/deploy-to-environment.mdx): Environments enable you to deploy model configurations and experiments, making them accessible via API, while also maintaining a streamlined production workflow.
- [Create a Directory](https://humanloop.com/docs/v5/guides/prompts/create-directory.mdx): Directories can be used to group together related files. This is useful for organizing your work.
- [Link a Tool to a Prompt](https://humanloop.com/docs/v5/guides/prompts/link-tool.mdx): Learn how to create a JSON Schema tool that can be reused across multiple Prompts.
- [Monitor production Logs](https://humanloop.com/docs/v5/guides/observability/monitoring.mdx): Learn how to create and use online Evaluators to observe the performance of your Prompts.
- [Capture user feedback](https://humanloop.com/docs/v5/guides/observability/capture-user-feedback.mdx): Learn how to record user feedback on your generated Prompt Logs using the Humanloop SDK.
- [Logging through API](https://humanloop.com/docs/v5/guides/observability/logging-through-api.mdx): Add logging to your AI project using the Humanloop API.
- [Invite collaborators](https://humanloop.com/docs/v5/guides/organization/invite-collaborators.mdx): Inviting people to your organization allows them to interact with your Humanloop projects.
- [Manage API keys](https://humanloop.com/docs/v5/guides/organization/manage-api-keys.mdx): How to create, share and manage your Humanloop API keys. The API keys allow you to access the Humanloop API programmatically in your app.
- [Manage Environments](https://humanloop.com/docs/v5/guides/organization/manage-deployment-environments.mdx): Environments are a tagging system for deploying Prompts. They enable you to maintain a streamlined deployment workflow and keep track of different versions of Prompts.
- [Deployment Options](https://humanloop.com/docs/v5/reference/deployment-options.mdx): Humanloop is SOC-2 compliant, offers deployment within your VPC and never trains on your data. Learn more about our hosting options.
- [Supported Models](https://humanloop.com/docs/v5/reference/models.mdx): Humanloop supports all the major large language model providers, including OpenAI, Anthropic, Google, AWS Bedrock, Azure, and more. Additionally, you can use your own custom models with the API and still benefit from the Humanloop platform.
- [Template Library](https://humanloop.com/docs/v5/template-library.mdx): Explore Humanloop’s template library. Find example evaluators and prompts for popular use cases like Agents and RAG, all ready for customization.
- [Prompt file format](https://humanloop.com/docs/v5/reference/prompt-file-format.mdx): The `.prompt` file format is a human-readable and version-control-friendly format for storing model configurations.
- [Humanloop Runtime Environment](https://humanloop.com/docs/v5/reference/python-environment.mdx): This reference provides details about the Python environment and supported packages.
- [Security and Compliance](https://humanloop.com/docs/v5/reference/security-compliance.mdx): Learn about Humanloop's commitment to security, data protection, and compliance with industry standards.
- [Data Management](https://humanloop.com/docs/v5/reference/data-management.mdx): Discover Humanloop's robust data management practices and state-of-the-art encryption methods ensuring maximum security and compliance for AI applications.
- [Access roles (RBACs)](https://humanloop.com/docs/v5/reference/access-roles.mdx): Learn about the different roles and permissions in Humanloop to help you with prompt and data management for large language models.
- [SSO and Authentication](https://humanloop.com/docs/v5/reference/sso-authentication.mdx): Learn about Single Sign-On (SSO) and authentication options for Humanloop.
- [LLMs.txt](https://humanloop.com/docs/v5/reference/llms-txt.mdx): Humanloop docs are accessible to AI tools using the llms.txt standard.
- [SDKs](https://humanloop.com/docs/v5/api-reference/sdks.md): Learn how to integrate Humanloop into your applications using our Python and TypeScript SDKs or REST API.
- [Errors](https://humanloop.com/docs/v5/api-reference/errors.md): This page provides a list of the error codes and messages you may encounter when using the Humanloop API.
- [Humanloop API](https://humanloop.com/docs/v5/api-reference.md)
- [February](https://humanloop.com/docs/v5/changelog/2025/02.mdx)
- [January](https://humanloop.com/docs/v5/changelog/2025/01.mdx)
- [December](https://humanloop.com/docs/v5/changelog/2024/12.mdx)
- [November](https://humanloop.com/docs/v5/changelog/2024/11.mdx)
- [October](https://humanloop.com/docs/v5/changelog/2024/10.mdx)
- [September](https://humanloop.com/docs/v5/changelog/2024/09.mdx)
- [August](https://humanloop.com/docs/v5/changelog/2024/08.mdx)
- [July](https://humanloop.com/docs/v5/changelog/2024/07.mdx)
- [June](https://humanloop.com/docs/v5/changelog/2024/06.mdx)
- [May](https://humanloop.com/docs/v5/changelog/2024/05.mdx)
- [April](https://humanloop.com/docs/v5/changelog/2024/04.mdx)
- [March](https://humanloop.com/docs/v5/changelog/2024/03.mdx)
- [February](https://humanloop.com/docs/v5/changelog/2024/02.mdx)
- [January](https://humanloop.com/docs/v5/changelog/2024/01.mdx)
- [December](https://humanloop.com/docs/v5/changelog/2023/12.mdx)
- [November](https://humanloop.com/docs/v5/changelog/2023/11.mdx)
- [October](https://humanloop.com/docs/v5/changelog/2023/10.mdx)
- [September](https://humanloop.com/docs/v5/changelog/2023/09.mdx)
- [August](https://humanloop.com/docs/v5/changelog/2023/08.mdx)
- [July](https://humanloop.com/docs/v5/changelog/2023/07.mdx)
- [June](https://humanloop.com/docs/v5/changelog/2023/06.mdx)
- [May](https://humanloop.com/docs/v5/changelog/2023/05.mdx)
- [April](https://humanloop.com/docs/v5/changelog/2023/04.mdx)
- [March](https://humanloop.com/docs/v5/changelog/2023/03.mdx)
- [February](https://humanloop.com/docs/v5/changelog/2023/02.mdx)
- [Overview](https://humanloop.com/docs/v4/overview.mdx): Learn how to use Humanloop for prompt engineering, evaluation and monitoring. Comprehensive guides and tutorials for LLMOps.
- [Why Humanloop?](https://humanloop.com/docs/v4/why-humanloop.mdx): Humanloop is an enterprise-grade stack for product teams building with large language models. We are SOC-2 compliant, offer self-hosting and never train on your data.
- [Quickstart Tutorial](https://humanloop.com/docs/v4/tutorials/quickstart.mdx): Getting up and running with Humanloop is quick and easy. This guide will run you through creating and managing your first Prompt in a few minutes.
- [Create your first GPT-4 App](https://humanloop.com/docs/v4/tutorials/create-your-first-gpt-4-app.mdx): In this tutorial, you’ll use Humanloop to quickly create a GPT-4 chat app. You’ll learn how to create a Prompt, call GPT-4, and log your results. You’ll also learn how to capture feedback from your end users to evaluate and improve your model.
- [ChatGPT clone with streaming](https://humanloop.com/docs/v4/tutorials/chatgpt-clone-in-nextjs.mdx): In this tutorial, you'll build a custom ChatGPT with streaming using Next.js and the Humanloop TypeScript SDK.
- [Create a Prompt](https://humanloop.com/docs/v4/guides/create-prompt.mdx): Learn how to create a Prompt in Humanloop using the UI or SDK, version it, and use it to generate responses from your AI models. Prompt management is a key part of the Humanloop platform.
- [Overview](https://humanloop.com/docs/v4/guides/generate-and-log-with-the-sdk.mdx): Learn how to generate from large language models and log the results in Humanloop, with managed and versioned prompts.
- [Generate completions](https://humanloop.com/docs/v4/guides/completion-using-the-sdk.mdx): Learn how to generate completions from a large language model and log the results in Humanloop, with managed and versioned prompts.
- [Generate chat responses](https://humanloop.com/docs/v4/guides/chat-using-the-sdk.mdx): Learn how to generate chat completions from a large language model and log the results in Humanloop, with managed and versioned prompts.
- [Capture user feedback](https://humanloop.com/docs/v4/guides/capture-user-feedback.mdx): Learn how to record user feedback on datapoints generated by your large language model using the Humanloop SDK.
- [Upload historic data](https://humanloop.com/docs/v4/guides/upload-historic-data.mdx): Learn how to upload your historic model data to an existing Humanloop project to warm-start your project.
- [Logging](https://humanloop.com/docs/v4/guides/use-your-own-model-provider.mdx): Integrating Humanloop and running an experiment when using your own models.
- [Chaining calls (Sessions)](https://humanloop.com/docs/v4/guides/logging-session-traces.mdx): Learn how to log sequences of LLM calls to Humanloop, enabling you to trace through "sessions" and troubleshoot where your LLM chain went wrong or track sequences of actions taken by your LLM agent.
- [Overview](https://humanloop.com/docs/v4/guides/evaluation/overview.mdx): Learn how to set up and use Humanloop's evaluation framework to test and track the performance of your prompts.
- [Run an evaluation](https://humanloop.com/docs/v4/guides/evaluation/evaluate-models-offline.mdx): How to evaluate your large language model use case using a dataset and an evaluator on Humanloop.
- [Set up evaluations using API](https://humanloop.com/docs/v4/guides/evaluation/evaluations-using-api.mdx): How to use Humanloop to evaluate your large language model use-case, using a dataset and an evaluator.
- [Use LLMs to evaluate logs](https://humanloop.com/docs/v4/guides/evaluation/use-llms-to-evaluate-logs.mdx): Learn how to use LLM as a judge to check for PII in Logs.
- [Self-hosted evaluations](https://humanloop.com/docs/v4/guides/evaluation/self-hosted-evaluations.mdx): Learn how to run an evaluation in your own infrastructure and post the results to Humanloop.
- [Evaluating externally generated Logs](https://humanloop.com/docs/v4/guides/evaluation/evaluating-externally-generated-logs.mdx): Learn how to use the Humanloop Python SDK to create an evaluation run and post externally generated logs.
- [Evaluating with human feedback](https://humanloop.com/docs/v4/guides/evaluation/evaluating-with-human-feedback.mdx): Learn how to set up a human evaluator to collect feedback on the output of your model.
- [Set up Monitoring](https://humanloop.com/docs/v4/guides/evaluation/monitoring.mdx): Learn how to create and use online evaluators to observe the performance of your models.
- [Overview](https://humanloop.com/docs/v4/guides/overview.mdx): Datasets are pre-defined collections of input-output pairs that you can use within Humanloop to define fixed examples for your projects.
- [Create a dataset](https://humanloop.com/docs/v4/guides/create-dataset.mdx): Learn how to create Datasets in Humanloop to define fixed examples for your projects, and build up a collection of input-output pairs for evaluation and fine-tuning.
- [Batch generate](https://humanloop.com/docs/v4/guides/batch-generate.mdx): This guide demonstrates how to run a batch generation using a large language model across all the datapoints in a dataset.
- [Overview](https://humanloop.com/docs/v4/guides/run-an-experiment.mdx): Experiments allow you to set up A/B tests between multiple different Prompts.
- [Run an experiment](https://humanloop.com/docs/v4/guides/experiments-from-the-app.mdx): Experiments allow you to set up A/B tests between multiple model configs.
- [Run experiments managing your own model](https://humanloop.com/docs/v4/guides/run-an-experiment-with-your-own-model-provider.mdx): Experiments allow you to set up A/B tests between multiple different model configs.
- [Tool Calling in Editor](https://humanloop.com/docs/v4/guides/tool-calling.mdx): Learn how to use tool calling in your large language models and interact with it in the Humanloop Playground.
- [Tool Calling with the SDK](https://humanloop.com/docs/v4/guides/create-a-tool-with-the-sdk.mdx): Learn how to use OpenAI function calling in the Humanloop Python SDK.
- [Link a JSON Schema Tool](https://humanloop.com/docs/v4/guides/link-jsonschema-tool.mdx): Learn how to create a JSON Schema tool that can be reused across multiple Prompts.
- [Use the Snippet Tool](https://humanloop.com/docs/v4/guides/snippet-tool.mdx): Learn how to use the Snippet tool to manage common text snippets that you want to reuse across your different prompts.
- [Set up semantic search (RAG)](https://humanloop.com/docs/v4/guides/set-up-semantic-search.mdx): Learn how to set up a RAG system using the Pinecone integration to enrich your prompts with relevant context from a data source of documents.
- [Fine-tune a model](https://humanloop.com/docs/v4/guides/finetune-a-model.mdx): In this guide we will demonstrate how to use Humanloop’s fine-tuning workflow to produce improved models leveraging your user feedback data.
- [Manage API keys](https://humanloop.com/docs/v4/guides/create-and-revoke-api-keys.mdx): How to create, share and manage your Humanloop API keys. The API keys allow you to access the Humanloop API programmatically in your app.
- [Invite collaborators](https://humanloop.com/docs/v4/guides/invite-collaborators.mdx): Inviting people to your organization allows them to interact with your Humanloop projects.
- [Deploy to environments](https://humanloop.com/docs/v4/guides/deploy-to-an-environment.mdx): Environments enable you to deploy model configurations and experiments, making them accessible via API, while also maintaining a streamlined production workflow.
- [Prompts](https://humanloop.com/docs/v4/prompts.mdx): Discover how Humanloop manages prompts, with version control and rigorous evaluation for better performance.
- [Tools](https://humanloop.com/docs/v4/tools.mdx): Discover how Humanloop manages tools for use with large language models (LLMs), with version control and rigorous evaluation for better performance.
- [Datasets](https://humanloop.com/docs/v4/datasets.mdx): Discover how Humanloop manages datasets, with version control and collaboration to enable you to evaluate and fine-tune your models.
- [Evaluators](https://humanloop.com/docs/v4/evaluators.mdx): Learn about LLM Evaluation using Evaluators. Evaluators are functions that can be used to judge the output of Prompts, Tools or other Evaluators.
- [Logs](https://humanloop.com/docs/v4/logs.mdx): Logs contain the inputs and outputs of each time a Prompt, Tool or Evaluator is called.
- [Environments](https://humanloop.com/docs/v4/environments.mdx): Deployment environments enable you to control the deployment lifecycle of your Prompts and other files between development and production environments.
- [Key Concepts](https://humanloop.com/docs/v4/key-concepts.mdx): Learn about the core entities and concepts in Humanloop. Understand how to use them to manage your projects and improve your models.
- [Supported Models](https://humanloop.com/docs/v4/supported-models.mdx): Humanloop supports all the major large language model providers, including OpenAI, Anthropic, Google, AWS Bedrock, Azure, and more. Additionally, you can use your own custom models with the API and still benefit from the Humanloop platform.
- [Access Roles](https://humanloop.com/docs/v4/access-roles.mdx): Learn about the different roles and permissions in Humanloop to help you with prompt and data management for large language models.
- [.prompt files](https://humanloop.com/docs/v4/prompt-file-format.mdx): The `.prompt` file format is a human-readable and version-control-friendly format for storing model configurations.
- [Postman Workspace](https://humanloop.com/docs/v4/postman-workspace.mdx): Reference our Postman Workspace for examples of how to interact with the Humanloop API directly.
- [SDKs](https://humanloop.com/docs/v4/api-reference/sdks.md): Learn how to integrate Humanloop into your applications using our Python and TypeScript SDKs or REST API.
- [Errors](https://humanloop.com/docs/v4/api-reference/errors.md): This page provides a list of the error codes and messages you may encounter when using the Humanloop API.
- [Humanloop API](https://humanloop.com/docs/v4/api-reference.md)
- [November](https://humanloop.com/docs/v4/changelog/2024/11.mdx)
- [October](https://humanloop.com/docs/v4/changelog/2024/10.mdx)
- [September](https://humanloop.com/docs/v4/changelog/2024/09.mdx)
- [August](https://humanloop.com/docs/v4/changelog/2024/08.mdx)
- [July](https://humanloop.com/docs/v4/changelog/2024/07.mdx)
- [June](https://humanloop.com/docs/v4/changelog/2024/06.mdx)
- [May](https://humanloop.com/docs/v4/changelog/2024/05.mdx)
- [April](https://humanloop.com/docs/v4/changelog/2024/04.mdx)
- [March](https://humanloop.com/docs/v4/changelog/2024/03.mdx)
- [February](https://humanloop.com/docs/v4/changelog/2024/02.mdx)
- [January](https://humanloop.com/docs/v4/changelog/2024/01.mdx)
- [December](https://humanloop.com/docs/v4/changelog/2023/12.mdx)
- [November](https://humanloop.com/docs/v4/changelog/2023/11.mdx)
- [October](https://humanloop.com/docs/v4/changelog/2023/10.mdx)
- [September](https://humanloop.com/docs/v4/changelog/2023/09.mdx)
- [August](https://humanloop.com/docs/v4/changelog/2023/08.mdx)
- [July](https://humanloop.com/docs/v4/changelog/2023/07.mdx)
- [June](https://humanloop.com/docs/v4/changelog/2023/06.mdx)
- [May](https://humanloop.com/docs/v4/changelog/2023/05.mdx)
- [April](https://humanloop.com/docs/v4/changelog/2023/04.mdx)
- [March](https://humanloop.com/docs/v4/changelog/2023/03.mdx)
- [February](https://humanloop.com/docs/v4/changelog/2023/02.mdx)

## API Docs

- Humanloop API > Prompts [Log to a Prompt](https://humanloop.com/docs/v5/api-reference/prompts/log.mdx)
- Humanloop API > Prompts [Update Prompt Log](https://humanloop.com/docs/v5/api-reference/prompts/update-log.mdx)
- Humanloop API > Prompts [Call Prompt](https://humanloop.com/docs/v5/api-reference/prompts/call.mdx)
- Humanloop API > Prompts [Call Prompt](https://humanloop.com/docs/v5/api-reference/prompts/call-stream.mdx)
- Humanloop API > Prompts [List Prompts](https://humanloop.com/docs/v5/api-reference/prompts/list.mdx)
- Humanloop API > Prompts [Upsert Prompt](https://humanloop.com/docs/v5/api-reference/prompts/upsert.mdx)
- Humanloop API > Prompts [Get Prompt](https://humanloop.com/docs/v5/api-reference/prompts/get.mdx)
- Humanloop API > Prompts [Delete Prompt](https://humanloop.com/docs/v5/api-reference/prompts/delete.mdx)
- Humanloop API > Prompts [Move Prompt](https://humanloop.com/docs/v5/api-reference/prompts/move.mdx)
- Humanloop API > Prompts [Populate Prompt template](https://humanloop.com/docs/v5/api-reference/prompts/populate-template.mdx)
- Humanloop API > Prompts [List Versions of a Prompt](https://humanloop.com/docs/v5/api-reference/prompts/list-versions.mdx)
- Humanloop API > Prompts [Commit a Prompt Version](https://humanloop.com/docs/v5/api-reference/prompts/commit.mdx)
- Humanloop API > Prompts [Delete Prompt Version](https://humanloop.com/docs/v5/api-reference/prompts/delete-prompt-version.mdx)
- Humanloop API > Prompts [Deploy Prompt](https://humanloop.com/docs/v5/api-reference/prompts/set-deployment.mdx)
- Humanloop API > Prompts [Remove Deployment](https://humanloop.com/docs/v5/api-reference/prompts/remove-deployment.mdx)
- Humanloop API > Prompts [List a Prompt's Environments](https://humanloop.com/docs/v5/api-reference/prompts/list-environments.mdx)
- Humanloop API > Prompts [Update Monitoring](https://humanloop.com/docs/v5/api-reference/prompts/update-monitoring.mdx)
- Humanloop API > Tools [Log to a Tool](https://humanloop.com/docs/v5/api-reference/tools/log.mdx)
- Humanloop API > Tools [Update Tool Log](https://humanloop.com/docs/v5/api-reference/tools/update.mdx)
- Humanloop API > Tools [List Tools](https://humanloop.com/docs/v5/api-reference/tools/list.mdx)
- Humanloop API > Tools [Upsert Tool](https://humanloop.com/docs/v5/api-reference/tools/upsert.mdx)
- Humanloop API > Tools [Get Tool](https://humanloop.com/docs/v5/api-reference/tools/get.mdx)
- Humanloop API > Tools [Delete Tool](https://humanloop.com/docs/v5/api-reference/tools/delete.mdx)
- Humanloop API > Tools [Move Tool](https://humanloop.com/docs/v5/api-reference/tools/move.mdx)
- Humanloop API > Tools [List Versions of a Tool](https://humanloop.com/docs/v5/api-reference/tools/list-versions.mdx)
- Humanloop API > Tools [Commit](https://humanloop.com/docs/v5/api-reference/tools/commit.mdx)
- Humanloop API > Tools [Delete Tool Version](https://humanloop.com/docs/v5/api-reference/tools/delete-tool-version.mdx)
- Humanloop API > Tools [Deploy Tool](https://humanloop.com/docs/v5/api-reference/tools/set-deployment.mdx)
- Humanloop API > Tools [Remove Deployment](https://humanloop.com/docs/v5/api-reference/tools/remove-deployment.mdx)
- Humanloop API > Tools [List a Tool's Environments](https://humanloop.com/docs/v5/api-reference/tools/list-environments.mdx)
- Humanloop API > Tools [Update Monitoring](https://humanloop.com/docs/v5/api-reference/tools/update-monitoring.mdx)
- Humanloop API > Datasets [List Datasets](https://humanloop.com/docs/v5/api-reference/datasets/list.mdx)
- Humanloop API > Datasets [Upsert Dataset](https://humanloop.com/docs/v5/api-reference/datasets/upsert.mdx)
- Humanloop API > Datasets [Get Dataset](https://humanloop.com/docs/v5/api-reference/datasets/get.mdx)
- Humanloop API > Datasets [Delete Dataset](https://humanloop.com/docs/v5/api-reference/datasets/delete.mdx)
- Humanloop API > Datasets [Move Dataset](https://humanloop.com/docs/v5/api-reference/datasets/move.mdx)
- Humanloop API > Datasets [List Datapoints](https://humanloop.com/docs/v5/api-reference/datasets/list-datapoints.mdx)
- Humanloop API > Datasets [List Versions of a Dataset](https://humanloop.com/docs/v5/api-reference/datasets/list-versions.mdx)
- Humanloop API > Datasets [Commit a Dataset Version](https://humanloop.com/docs/v5/api-reference/datasets/commit.mdx)
- Humanloop API > Datasets [Delete Dataset Version](https://humanloop.com/docs/v5/api-reference/datasets/delete-dataset-version.mdx)
- Humanloop API > Datasets [Upload CSV](https://humanloop.com/docs/v5/api-reference/datasets/upload-csv.mdx)
- Humanloop API > Datasets [Deploy Dataset](https://humanloop.com/docs/v5/api-reference/datasets/set-deployment.mdx)
- Humanloop API > Datasets [Remove Deployment](https://humanloop.com/docs/v5/api-reference/datasets/remove-deployment.mdx)
- Humanloop API > Datasets [List a Dataset's Environments](https://humanloop.com/docs/v5/api-reference/datasets/list-environments.mdx)
- Humanloop API > Evaluators [Submit Evaluator Judgment](https://humanloop.com/docs/v5/api-reference/evaluators/log.mdx)
- Humanloop API > Evaluators [List Evaluators](https://humanloop.com/docs/v5/api-reference/evaluators/list.mdx)
- Humanloop API > Evaluators [Upsert Evaluator](https://humanloop.com/docs/v5/api-reference/evaluators/upsert.mdx)
- Humanloop API > Evaluators [Get Evaluator](https://humanloop.com/docs/v5/api-reference/evaluators/get.mdx)
- Humanloop API > Evaluators [Delete Evaluator](https://humanloop.com/docs/v5/api-reference/evaluators/delete.mdx)
- Humanloop API > Evaluators [Move Evaluator](https://humanloop.com/docs/v5/api-reference/evaluators/move.mdx)
- Humanloop API > Evaluators [List Versions of an Evaluator](https://humanloop.com/docs/v5/api-reference/evaluators/list-versions.mdx)
- Humanloop API > Evaluators [Commit an Evaluator Version](https://humanloop.com/docs/v5/api-reference/evaluators/commit.mdx)
- Humanloop API > Evaluators [Delete Evaluator Version](https://humanloop.com/docs/v5/api-reference/evaluators/delete-evaluator-version.mdx)
- Humanloop API > Evaluators [Deploy Evaluator](https://humanloop.com/docs/v5/api-reference/evaluators/set-deployment.mdx)
- Humanloop API > Evaluators [Remove Deployment](https://humanloop.com/docs/v5/api-reference/evaluators/remove-deployment.mdx)
- Humanloop API > Evaluators [List an Evaluator's Environments](https://humanloop.com/docs/v5/api-reference/evaluators/list-environments.mdx)
- Humanloop API > Evaluators [Update Monitoring](https://humanloop.com/docs/v5/api-reference/evaluators/update-monitoring.mdx)
- Humanloop API > Flows [Log to a Flow](https://humanloop.com/docs/v5/api-reference/flows/log.mdx)
- Humanloop API > Flows [Update Flow Log](https://humanloop.com/docs/v5/api-reference/flows/update-log.mdx)
- Humanloop API > Flows [Get Flow](https://humanloop.com/docs/v5/api-reference/flows/get.mdx)
- Humanloop API > Flows [Delete Flow](https://humanloop.com/docs/v5/api-reference/flows/delete.mdx)
- Humanloop API > Flows [Move Flow](https://humanloop.com/docs/v5/api-reference/flows/move.mdx)
- Humanloop API > Flows [List Flows](https://humanloop.com/docs/v5/api-reference/flows/list.mdx)
- Humanloop API > Flows [Upsert Flow](https://humanloop.com/docs/v5/api-reference/flows/upsert.mdx)
- Humanloop API > Flows [List Versions of a Flow](https://humanloop.com/docs/v5/api-reference/flows/list-versions.mdx)
- Humanloop API > Flows [Commit a Flow Version](https://humanloop.com/docs/v5/api-reference/flows/commit.mdx)
- Humanloop API > Flows [Delete Flow Version](https://humanloop.com/docs/v5/api-reference/flows/delete-flow-version.mdx)
- Humanloop API > Flows [Deploy Flow](https://humanloop.com/docs/v5/api-reference/flows/set-deployment.mdx)
- Humanloop API > Flows [Remove Deployment](https://humanloop.com/docs/v5/api-reference/flows/remove-deployment.mdx)
- Humanloop API > Flows [List a Flow's Environments](https://humanloop.com/docs/v5/api-reference/flows/list-environments.mdx)
- Humanloop API > Flows [Update Monitoring](https://humanloop.com/docs/v5/api-reference/flows/update-monitoring.mdx)
- Humanloop API > Directories [List](https://humanloop.com/docs/v5/api-reference/directories/list.mdx)
- Humanloop API > Directories [Create](https://humanloop.com/docs/v5/api-reference/directories/create.mdx)
- Humanloop API > Directories [Get](https://humanloop.com/docs/v5/api-reference/directories/get.mdx)
- Humanloop API > Directories [Delete](https://humanloop.com/docs/v5/api-reference/directories/delete.mdx)
- Humanloop API > Directories [Update](https://humanloop.com/docs/v5/api-reference/directories/update.mdx)
- Humanloop API > Files [List Files](https://humanloop.com/docs/v5/api-reference/files/list-files.mdx)
- Humanloop API > Files [Retrieve by path](https://humanloop.com/docs/v5/api-reference/files/retrieve-by-path.mdx)
- Humanloop API > Evaluations [List Evaluations](https://humanloop.com/docs/v5/api-reference/evaluations/list.mdx)
- Humanloop API > Evaluations [Create Evaluation](https://humanloop.com/docs/v5/api-reference/evaluations/create.mdx)
- Humanloop API > Evaluations [Add Evaluators](https://humanloop.com/docs/v5/api-reference/evaluations/add-evaluators.mdx)
- Humanloop API > Evaluations [Remove Evaluator](https://humanloop.com/docs/v5/api-reference/evaluations/remove-evaluator.mdx)
- Humanloop API > Evaluations [Get Evaluation](https://humanloop.com/docs/v5/api-reference/evaluations/get.mdx)
- Humanloop API > Evaluations [Delete Evaluation](https://humanloop.com/docs/v5/api-reference/evaluations/delete.mdx)
- Humanloop API > Evaluations [List Runs for Evaluation](https://humanloop.com/docs/v5/api-reference/evaluations/list-runs-for-evaluation.mdx)
- Humanloop API > Evaluations [Create Run](https://humanloop.com/docs/v5/api-reference/evaluations/create-run.mdx)
- Humanloop API > Evaluations [Add Existing Run](https://humanloop.com/docs/v5/api-reference/evaluations/add-existing-run.mdx)
- Humanloop API > Evaluations [Remove Run](https://humanloop.com/docs/v5/api-reference/evaluations/remove-run.mdx)
- Humanloop API > Evaluations [Update Evaluation Run](https://humanloop.com/docs/v5/api-reference/evaluations/update-evaluation-run.mdx)
- Humanloop API > Evaluations [Add Logs to Run](https://humanloop.com/docs/v5/api-reference/evaluations/add-logs-to-run.mdx)
- Humanloop API > Evaluations [Get Evaluation Stats](https://humanloop.com/docs/v5/api-reference/evaluations/get-stats.mdx)
- Humanloop API > Evaluations [Get Logs for Evaluation](https://humanloop.com/docs/v5/api-reference/evaluations/get-logs.mdx)
- Humanloop API > Logs [List Logs](https://humanloop.com/docs/v5/api-reference/logs/list.mdx)
- Humanloop API > Logs [Delete Logs](https://humanloop.com/docs/v5/api-reference/logs/delete.mdx)
- Humanloop API > Logs [Get Log](https://humanloop.com/docs/v5/api-reference/logs/get.mdx)
- Humanloop API > Chats [Chat](https://humanloop.com/docs/v4/api-reference/chats/create.mdx)
- Humanloop API > Chats [Chat](https://humanloop.com/docs/v4/api-reference/chats/create-stream.mdx)
- Humanloop API > Chats [Chat Deployed](https://humanloop.com/docs/v4/api-reference/chats/create-deployed.mdx)
- Humanloop API > Chats [Chat Deployed](https://humanloop.com/docs/v4/api-reference/chats/create-deployed-stream.mdx)
- Humanloop API > Chats [Chat Model Config](https://humanloop.com/docs/v4/api-reference/chats/create-config.mdx)
- Humanloop API > Chats [Chat Model
Config](https://humanloop.com/docs/v4/api-reference/chats/create-config-stream.mdx) - Humanloop API > Chats [Create Experiment](https://humanloop.com/docs/v4/api-reference/chats/create-experiment.mdx) - Humanloop API > Chats [Create Experiment Stream](https://humanloop.com/docs/v4/api-reference/chats/create-experiment-stream.mdx) - Humanloop API > Completions [Create](https://humanloop.com/docs/v4/api-reference/completions/create.mdx) - Humanloop API > Completions [Create](https://humanloop.com/docs/v4/api-reference/completions/create-stream.mdx) - Humanloop API > Completions [Completion Deployed](https://humanloop.com/docs/v4/api-reference/completions/create-deployed.mdx) - Humanloop API > Completions [Completion Deployed](https://humanloop.com/docs/v4/api-reference/completions/create-deployed-stream.mdx) - Humanloop API > Completions [Completion Model Config](https://humanloop.com/docs/v4/api-reference/completions/create-config.mdx) - Humanloop API > Completions [Completion Model Config](https://humanloop.com/docs/v4/api-reference/completions/create-config-stream.mdx) - Humanloop API > Completions [Create Experiment](https://humanloop.com/docs/v4/api-reference/completions/create-experiment.mdx) - Humanloop API > Completions [Create Experiment Stream](https://humanloop.com/docs/v4/api-reference/completions/create-experiment-stream.mdx) - Humanloop API > Datapoints [Get](https://humanloop.com/docs/v4/api-reference/datapoints/get.mdx) - Humanloop API > Datapoints [Update](https://humanloop.com/docs/v4/api-reference/datapoints/update.mdx) - Humanloop API > Datapoints [Delete](https://humanloop.com/docs/v4/api-reference/datapoints/delete.mdx) - Humanloop API > Projects [List For Project](https://humanloop.com/docs/v4/api-reference/projects/list-datasets.mdx) - Humanloop API > Projects [List For Project](https://humanloop.com/docs/v4/api-reference/projects/list-evaluations.mdx) - Humanloop API > Projects 
[List](https://humanloop.com/docs/v4/api-reference/projects/list.mdx) - Humanloop API > Projects [Create](https://humanloop.com/docs/v4/api-reference/projects/create.mdx) - Humanloop API > Projects [Get](https://humanloop.com/docs/v4/api-reference/projects/get.mdx) - Humanloop API > Projects [Delete](https://humanloop.com/docs/v4/api-reference/projects/delete.mdx) - Humanloop API > Projects [Update](https://humanloop.com/docs/v4/api-reference/projects/update.mdx) - Humanloop API > Projects [List Configs](https://humanloop.com/docs/v4/api-reference/projects/list-configs.mdx) - Humanloop API > Projects [Create Feedback Type](https://humanloop.com/docs/v4/api-reference/projects/create-feedback-type.mdx) - Humanloop API > Projects [Update Feedback Types](https://humanloop.com/docs/v4/api-reference/projects/update-feedback-types.mdx) - Humanloop API > Projects [Export](https://humanloop.com/docs/v4/api-reference/projects/export.mdx) - Humanloop API > Projects > Active Config [Get Active Config](https://humanloop.com/docs/v4/api-reference/projects/active-config/get.mdx) - Humanloop API > Projects > Active Config [Deactivate Config](https://humanloop.com/docs/v4/api-reference/projects/active-config/deactivate.mdx) - Humanloop API > Projects > Deployed Config [List Deployed Configs](https://humanloop.com/docs/v4/api-reference/projects/deployed-config/list.mdx) - Humanloop API > Projects > Deployed Config [Deploy Config](https://humanloop.com/docs/v4/api-reference/projects/deployed-config/deploy.mdx) - Humanloop API > Projects > Deployed Config [Delete Deployed Config](https://humanloop.com/docs/v4/api-reference/projects/deployed-config/delete.mdx) - Humanloop API > Datasets [Create](https://humanloop.com/docs/v4/api-reference/datasets/create.mdx) - Humanloop API > Datasets [List ](https://humanloop.com/docs/v4/api-reference/datasets/list.mdx) - Humanloop API > Datasets [Get](https://humanloop.com/docs/v4/api-reference/datasets/get.mdx) - Humanloop API > Datasets 
[Delete](https://humanloop.com/docs/v4/api-reference/datasets/delete.mdx) - Humanloop API > Datasets [Update](https://humanloop.com/docs/v4/api-reference/datasets/update.mdx) - Humanloop API > Datasets [Datapoints](https://humanloop.com/docs/v4/api-reference/datasets/list-datapoints.mdx) - Humanloop API > Datasets [Create Datapoint](https://humanloop.com/docs/v4/api-reference/datasets/create-datapoint.mdx) - Humanloop API > Evaluations [Get](https://humanloop.com/docs/v4/api-reference/evaluations/get.mdx) - Humanloop API > Evaluations [List Datapoints](https://humanloop.com/docs/v4/api-reference/evaluations/list-datapoints.mdx) - Humanloop API > Evaluations [Create](https://humanloop.com/docs/v4/api-reference/evaluations/create.mdx) - Humanloop API > Evaluations [Log](https://humanloop.com/docs/v4/api-reference/evaluations/log.mdx) - Humanloop API > Evaluations [Result](https://humanloop.com/docs/v4/api-reference/evaluations/result.mdx) - Humanloop API > Evaluations [Update Status](https://humanloop.com/docs/v4/api-reference/evaluations/update-status.mdx) - Humanloop API > Evaluations [Add Evaluators](https://humanloop.com/docs/v4/api-reference/evaluations/add-evaluators.mdx) - Humanloop API > Evaluations [Get Evaluations](https://humanloop.com/docs/v4/api-reference/evaluations/list.mdx) - Humanloop API > Evaluators [List](https://humanloop.com/docs/v4/api-reference/evaluators/list.mdx) - Humanloop API > Evaluators [Create](https://humanloop.com/docs/v4/api-reference/evaluators/create.mdx) - Humanloop API > Evaluators [Get](https://humanloop.com/docs/v4/api-reference/evaluators/get.mdx) - Humanloop API > Evaluators [Delete](https://humanloop.com/docs/v4/api-reference/evaluators/delete.mdx) - Humanloop API > Evaluators [Update](https://humanloop.com/docs/v4/api-reference/evaluators/update.mdx) - Humanloop API > Feedback [Feedback](https://humanloop.com/docs/v4/api-reference/feedback/feedback.mdx) - Humanloop API > Logs [List 
](https://humanloop.com/docs/v4/api-reference/logs/list.mdx) - Humanloop API > Logs [Log](https://humanloop.com/docs/v4/api-reference/logs/log.mdx) - Humanloop API > Logs [Delete](https://humanloop.com/docs/v4/api-reference/logs/delete.mdx) - Humanloop API > Logs [Update By Reference](https://humanloop.com/docs/v4/api-reference/logs/update-by-ref.mdx) - Humanloop API > Logs [Get](https://humanloop.com/docs/v4/api-reference/logs/get.mdx) - Humanloop API > Logs [Update](https://humanloop.com/docs/v4/api-reference/logs/update.mdx) - Humanloop API > Model Configs [Register](https://humanloop.com/docs/v4/api-reference/model-configs/register.mdx) - Humanloop API > Model Configs [Get](https://humanloop.com/docs/v4/api-reference/model-configs/get.mdx) - Humanloop API > Model Configs [Export by ID](https://humanloop.com/docs/v4/api-reference/model-configs/export.mdx) - Humanloop API > Model Configs [Serialize](https://humanloop.com/docs/v4/api-reference/model-configs/serialize.mdx) - Humanloop API > Model Configs [Deserialize](https://humanloop.com/docs/v4/api-reference/model-configs/deserialize.mdx) - Humanloop API > Sessions [List ](https://humanloop.com/docs/v4/api-reference/sessions/list.mdx) - Humanloop API > Sessions [Create](https://humanloop.com/docs/v4/api-reference/sessions/create.mdx) - Humanloop API > Sessions [Get](https://humanloop.com/docs/v4/api-reference/sessions/get.mdx)