The Humanloop platform will be sunset on September 8th, 2025. If you need to export your data, please see our Migration Guide.

Getting Started

Quickstart

Quickly evaluate your LLM apps and improve them.

If you’re technical, get started by evaluating or logging an AI application in code:

  • Create an Eval in code
  • Add logging to your existing app

Or, if you don’t want to touch code, get started by creating a Prompt or an Eval in the UI:

  • Create a Prompt in the UI
  • Create an Agent Eval in the UI
