Humanloop provides a simple set of callable building blocks for your AI applications and avoids complex abstractions.

Prompts, Tools, Evaluators and Flows are the core building blocks of your AI features on Humanloop:

  • Prompts define a task for a large language model.

  • Tools are functions that extend your LLMs with access to external data sources and enable them to take actions.

  • Evaluators are functions that can be used to judge the output of Prompts, Tools or even other Evaluators.

  • Flows are orchestrations of Prompts, Tools and other code, enabling you to evaluate and improve your full AI pipeline.

File Properties

These core building blocks are represented as different file types within a flexible filesystem in your Humanloop organization.

All file types share the following key properties:

Can be managed in the UI or via code

You can create and manage these files in the Humanloop UI, or via the API. Product teams and their subject matter experts may prefer UI-first workflows for convenience, whereas AI teams and engineers may prefer to use the API for greater control and customisation.

Are strictly version controlled

Files have immutable versions that are uniquely determined by the parameters that characterise the system's behaviour. For example, a Prompt version is determined by the prompt template, base model and hyperparameters chosen. Within the Humanloop Editor and via the API, you can commit new versions of a file, view the history of changes and revert to a previous version.
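The idea that a version is uniquely determined by its parameters can be illustrated with a content hash. The scheme below is purely illustrative, not Humanloop's actual versioning algorithm:

```python
import hashlib
import json

def version_id(params: dict) -> str:
    """Derive a stable identifier from the parameters that
    characterise a file's behaviour (illustrative only)."""
    canonical = json.dumps(params, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

v1 = version_id({"template": "Translate to French: {{text}}",
                 "model": "gpt-4o", "temperature": 0.7})
v2 = version_id({"template": "Translate to French: {{text}}",
                 "model": "gpt-4o", "temperature": 0.2})
assert v1 != v2  # changing any hyperparameter yields a new version
```

Because the identifier is a pure function of the parameters, identical configurations always map to the same version, which is what makes versions immutable and safely shareable.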

Have a flexible runtime

All files can be called (if you use the Humanloop runtime) or logged to (where you manage the runtime yourself). For example, with Prompts, Humanloop integrates with all the major model providers. You can choose to call a Prompt, where Humanloop acts as a proxy to the model provider. Alternatively, you can manage the model calls yourself and log the results to the Prompt on Humanloop. Using the Humanloop runtime is generally the simpler option and allows you to call the file natively within the Humanloop UI, whereas owning the runtime yourself and logging gives you more fine-grained control.
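The two runtime modes can be sketched conceptually as follows. The classes below are simplified stand-ins for illustration, not the Humanloop SDK:

```python
# Conceptual sketch of the two runtime modes (stand-in classes, not
# the real Humanloop SDK).

class FakeProvider:
    """Stands in for a model provider such as OpenAI or Anthropic."""
    def complete(self, prompt: str) -> str:
        return f"completion for: {prompt}"

class FakeHumanloop:
    """Stands in for Humanloop's call/log surface."""
    def __init__(self):
        self.logs = []

    # Mode 1: Humanloop runtime -- it proxies the provider call
    # and records the log for you.
    def call_prompt(self, provider, prompt: str) -> str:
        output = provider.complete(prompt)
        self.logs.append({"prompt": prompt, "output": output})
        return output

    # Mode 2: your runtime -- you call the provider yourself,
    # then log the result to Humanloop.
    def log(self, prompt: str, output: str) -> None:
        self.logs.append({"prompt": prompt, "output": output})

hl = FakeHumanloop()
provider = FakeProvider()

out1 = hl.call_prompt(provider, "Say hi")   # proxied call
out2 = provider.complete("Say hi")          # self-managed call...
hl.log("Say hi", out2)                      # ...then log the result

assert hl.logs[0] == hl.logs[1]  # both modes produce equivalent logs
```

Either way, the Prompt on Humanloop ends up with the same record of inputs and outputs; the difference is only who executes the model call.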

Are composable with sessions

Files can be combined with other files to create more complex systems like chains and agents. For example, a Prompt can call a Tool, which can then be evaluated by an Evaluator. The orchestration of more complex systems is best done in code using the API and the full trace of execution is accessible in the Humanloop UI for debugging and evaluation purposes.
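The Prompt-calls-Tool-judged-by-Evaluator example can be sketched as nested function calls that build up a trace. The functions and trace shape below are hypothetical, chosen only to illustrate how an execution trace of a composed system might look:

```python
# Illustrative trace of a composed system: a Prompt calls a Tool,
# and an Evaluator judges the Prompt's output. These are hypothetical
# functions, not Humanloop APIs.

trace = []

def tool_lookup(city: str) -> str:
    trace.append({"file": "Tool", "input": city})
    return {"Paris": "18C"}.get(city, "unknown")

def prompt_weather(city: str) -> str:
    trace.append({"file": "Prompt", "input": city})
    temp = tool_lookup(city)  # the Prompt delegates to a Tool
    return f"The weather in {city} is {temp}."

def evaluator_not_empty(output: str) -> bool:
    trace.append({"file": "Evaluator", "input": output})
    return len(output) > 0

answer = prompt_weather("Paris")
ok = evaluator_not_empty(answer)

# The full execution trace is preserved for debugging, mirroring
# what Humanloop surfaces in its UI for a logged session.
assert [t["file"] for t in trace] == ["Prompt", "Tool", "Evaluator"]
assert ok
```

Orchestrating this in your own code while logging each step is what makes the full trace inspectable afterwards.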

Have a serialized form

All files can be exported and imported in a serialized form. For example, Prompts are serialized to our .prompt format. This provides a useful medium for more technical teams that wish to maintain the source of truth in their existing version control system, such as git.
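A serialized form is useful because it round-trips cleanly through git. The sketch below shows a hypothetical text serialization of a prompt configuration; the real .prompt schema is defined by Humanloop and may differ:

```python
import json

# Hypothetical round-trip of a prompt config to a git-friendly text
# form. The real .prompt schema may differ from this sketch.

def serialize(config: dict) -> str:
    header = json.dumps(
        {k: v for k, v in config.items() if k != "template"},
        sort_keys=True,
    )
    return f"---\n{header}\n---\n{config['template']}\n"

def deserialize(text: str) -> dict:
    _, header, template = text.split("---\n")
    config = json.loads(header)
    config["template"] = template.rstrip("\n")
    return config

cfg = {"model": "gpt-4o", "temperature": 0.7,
       "template": "Summarise: {{text}}"}
assert deserialize(serialize(cfg)) == cfg  # lossless round trip
```

Because the serialized text is deterministic, diffs in version control correspond directly to meaningful changes in the prompt's behaviour.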

Support deployments

You can tag file versions with specific environments and target these environments via the UI and API to facilitate robust deployment workflows.
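Conceptually, a deployment is a mapping from an environment tag to a file version, so promoting a version is just re-pointing the tag. The functions below are a hypothetical illustration of that model, not the Humanloop API:

```python
# Illustrative environment -> version mapping (hypothetical, not the
# Humanloop API).

deployments: dict[str, str] = {}

def deploy(environment: str, version_id: str) -> None:
    """Tag a file version with an environment."""
    deployments[environment] = version_id

def resolve(environment: str) -> str:
    """Look up which version an environment currently targets."""
    return deployments[environment]

deploy("staging", "v3")
deploy("production", "v2")   # production can lag behind staging

assert resolve("production") == "v2"
deploy("production", "v3")   # promote once v3 is validated
assert resolve("production") == "v3"
```

Callers that target an environment tag rather than a pinned version automatically pick up the promotion without any code change.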

Humanloop also has the concept of Datasets, which are used within Evaluation workflows. Datasets share all the same properties, except that they have no runtime consideration.