Introducing Templates in Humanloop

By Jordan Burgess, Cofounder and CPO

Today we’re releasing Templates in Humanloop: a library of Prompts, Evaluators, and Datasets designed to eliminate the cold-start problem and help your team accelerate time to value when developing and evaluating AI applications.

Building AI applications is a new paradigm, and getting started with the right foundations is a common hurdle for teams. We believe you shouldn’t have to begin with a blank slate, so we've curated a library of well-tested prompts, evaluators, and datasets to help you get to value faster.

Templates are launching in beta today. They include ready-to-use evaluators for chatbots, RAG, and more; curated datasets from our collaboration with Hugging Face; and RAG and agentic workflows that demonstrate best practices for building these systems.

This release lays the groundwork for a future in which the Template Library expands to include additional agentic templates, enabling teams to easily build more complex AI applications with pre-configured workflows.

The Template Library on Humanloop contains example prompts, evaluators, and datasets to help you adopt best practices.

What are Templates?

  • Prompts: examples tailored to specific use cases, such as classification and conversational agents.
  • Evaluators: ready-to-use evals to measure performance and accuracy, plus guardrails to prevent issues like PII leakage.
  • Tools: plug-and-play components for workflows like data extraction and retrieval.
  • Datasets: curated datasets powered by our collaboration with Hugging Face.
  • Flows: structured workflows to guide experimentation and iteration.

With Templates, you can focus on refining your app rather than assembling everything from scratch.
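To make the guardrail idea concrete, here is a minimal sketch of a PII-leakage check in the spirit of the Evaluators above. This is purely illustrative, not Humanloop's actual implementation: the function name, return shape, and the two regex patterns are assumptions, and a production guardrail would cover many more PII types.

```python
import re

# Hypothetical patterns for two common PII types (illustrative only).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def pii_guardrail(output: str) -> dict:
    """Report which PII types appear in a model output, plus an overall pass flag."""
    found = [name for name, pattern in PII_PATTERNS.items() if pattern.search(output)]
    return {"passed": not found, "pii_types": found}

print(pii_guardrail("Contact me at jane@example.com"))
# {'passed': False, 'pii_types': ['email']}
```

A guardrail like this runs on every model output and blocks or flags responses where `passed` is false, whereas quality evaluators typically score outputs asynchronously.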

A 'hub' of prebuilt prompts or evaluators is not useful if you can't customize them to your use case.

All of the templates are fully customizable. Simply clone a Template from our Library into your workspace, experiment with it, and adapt it to your needs.

For example:

  • Building a medical RAG app: Clone our MedicalQA Template, which includes everything you need to run the app in Humanloop, complete with built-in Evaluators to measure performance.
  • Working on an AI app in the legal domain: Test your model with curated datasets like LegalBench, available directly in our Library for seamless evaluation.
Medical QA Template

Collaboration with Hugging Face

A major bottleneck in AI development is finding the right datasets. To solve this, we’ve curated high-quality datasets from the Hugging Face Hub and made them available directly within the Humanloop platform. We’ve improved the experience to be:

  • Discoverable & searchable: Each dataset includes metadata, preview descriptions, and structured field explanations to help you find what you need quickly.
  • Ready to use: Clone datasets directly into your workspace, adapt them as needed, and use them immediately in evaluations.
  • Benchmark-ready: Leverage industry-leading datasets to compare versions and track performance as you iterate.
Hugging Face and Humanloop Collaboration
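The evaluation loop that such a dataset plugs into can be very simple. The sketch below is a generic accuracy metric over rows of input/target pairs; the `input`/`target` schema, the toy model, and the example rows are all assumptions for illustration — real benchmark datasets like LegalBench define their own schemas.

```python
from typing import Callable

def accuracy(model: Callable[[str], str], dataset: list[dict]) -> float:
    """Fraction of dataset rows where the model's answer matches the target.

    Rows are assumed to carry "input" and "target" keys (hypothetical schema).
    """
    correct = sum(model(row["input"]).strip() == row["target"] for row in dataset)
    return correct / len(dataset)

# Toy stand-in model and dataset (illustrative only).
echo_model = lambda prompt: "yes" if "contract" in prompt else "no"
rows = [
    {"input": "Is this a contract clause?", "target": "yes"},
    {"input": "Is this a poem?", "target": "no"},
]
print(accuracy(echo_model, rows))  # 1.0
```

Swapping in a curated dataset here is what turns an ad-hoc spot check into a repeatable benchmark you can track across model and prompt versions.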

Elevating AI development standards

Templates are now available in beta, giving you a faster, more structured way to build and evaluate AI applications. What you gain with Templates:

  • Faster time to value: Templates eliminate bottlenecks by providing pre-configured setups for common AI workflows.
  • Best practices built in: Each template reflects proven methods used by top AI teams.
  • Reduced complexity: Skip the upfront work of creating datasets, evaluators, and workflow configurations from scratch.
  • Clear scope & possibilities: Quickly gauge what it will take to build your app, and experiment before committing to any customization.

To get started, browse the Library section in your dashboard, select a Template, and start experimenting. See the documentation for more details.

Sign up now and explore the Template Library yourself.

About the author

Jordan Burgess
Cofounder and CPO
Jordan Burgess is the cofounder and Chief Product Officer of Humanloop. He studied Machine Learning and Engineering at Cambridge and MIT, and helped build AI products at Alexa and at Bloomsbury AI (acquired by Facebook).
𝕏: @jordnb
LinkedIn

Ready to build successful AI products?

Book a 1:1 demo for a guided tour of the platform tailored to your organization.

© 2020–2025 Humanloop, Inc.