Prompt Decorator

Auto-instrumentation for LLM provider calls

Overview

The Prompt decorator automatically instruments LLM provider calls and creates Prompt Logs on Humanloop. When applied to a function, it:

  • Creates a new Log for each LLM provider call made within the decorated function.
  • Versions the Prompt using the hyperparameters of the provider call.

Decorator Definition

```python
@hl_client.prompt(
    # Required: path on Humanloop workspace for the Prompt
    path: str
)
def function(*args, **kwargs): ...
```

The decorated function will have the same signature as the original function.

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| path | string | Yes | Path on Humanloop workspace for the Prompt |

Usage

```python
import openai

# hl_client is a configured Humanloop client
@hl_client.prompt(path="MyFeature/Process")
def process_input(text: str) -> str:
    return openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": text}],
    ).choices[0].message.content
```
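
Because the decorated function keeps the original signature, it is called exactly like the undecorated version. A minimal sketch, assuming hl_client was created earlier with Humanloop(api_key=...) from the humanloop package and that your OpenAI API key is set in the environment:

```python
# Call the decorated function as usual; the return value is unchanged.
# The OpenAI call inside it is instrumented and creates a Log under the
# "MyFeature/Process" Prompt on Humanloop.
summary = process_input("Summarise the quarterly sales report in one paragraph.")
print(summary)
```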

Behavior

Versioning

The hyperparameters of the LLM provider call are used to version the Prompt.

If the configuration changes, new Logs are created under a new version of the same Prompt, as illustrated in the sketch after the table below.

The following parameters are considered when versioning the Prompt:

| Parameter | Description |
| --- | --- |
| model | The LLM model identifier |
| endpoint | The API endpoint type |
| provider | The LLM provider (e.g., “openai”, “anthropic”) |
| max_tokens | Maximum number of tokens in the completion |
| temperature | Sampling temperature |
| top_p | Nucleus sampling parameter |
| presence_penalty | Presence penalty for token selection |
| frequency_penalty | Frequency penalty for token selection |
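
For example, two calls that differ only in temperature produce Logs under two different versions of the same Prompt. A hedged sketch, reusing the imports and hl_client from the Usage example; the path, function, and temperature values are hypothetical:

```python
@hl_client.prompt(path="MyFeature/Answer")
def answer(question: str, temperature: float) -> str:
    # The temperature sent to the provider is part of the versioned
    # configuration, so changing it results in a new Prompt version.
    return openai.chat.completions.create(
        model="gpt-4o-mini",
        temperature=temperature,
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

answer("What is a Prompt?", temperature=0.2)  # logged under one version
answer("What is a Prompt?", temperature=0.9)  # logged under a new version
```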

Log Creation

Each LLM provider call within the decorated function creates a Log with the following fields set:

| Field | Type | Description |
| --- | --- | --- |
| inputs | dict[str, Any] | Function arguments that aren’t ChatMessage arrays |
| messages | ChatMessage[] | ChatMessage arrays passed to the LLM |
| output_message | ChatMessage | LLM response with role and content |
| error | string | Error message if the LLM call fails |
| prompt_tokens | int | Number of tokens in the prompt |
| reasoning_tokens | int | Number of tokens used in reasoning |
| output_tokens | int | Number of tokens in the completion |
| finish_reason | string | Reason the LLM stopped generating |
| start_time | datetime | When the LLM call started |
| end_time | datetime | When the LLM call completed |
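
To make the split between inputs and messages concrete, consider a decorated function that takes both a plain argument and a ChatMessage array. A sketch, reusing the imports and hl_client from the Usage example; the path, function, and argument names are hypothetical:

```python
@hl_client.prompt(path="MyFeature/Reply")
def reply(tone: str, messages: list[dict]) -> str:
    # `tone` is not a ChatMessage array, so it is recorded in the Log's inputs;
    # the ChatMessage array sent to the provider is recorded in messages.
    return openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": f"Reply in a {tone} tone."}, *messages],
    ).choices[0].message.content

reply("friendly", [{"role": "user", "content": "Where is my order?"}])
```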

Error Handling

  • LLM provider errors are caught and recorded in the Log’s error field. HumanloopRuntimeError, however, is not caught and is re-raised, since it indicates incorrect SDK or decorator usage.
  • The decorated function still propagates exceptions from the LLM provider, so the caller can handle them as usual (see the sketch below).
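
Because a failed provider call is both logged with its error field set and re-raised, wrap the call if you want to handle the failure yourself. A minimal sketch using process_input from the Usage example:

```python
try:
    result = process_input("Summarise this document.")
except Exception as exc:
    # The failed call has already produced a Log with its error field set;
    # the exception still reaches the caller, so handle or re-raise it here.
    result = None
    print(f"LLM call failed: {exc}")
```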

Best Practices

  1. Each LLM provider call inside the decorated function creates its own Log. Keep the provider and hyperparameters consistent across those calls; otherwise each distinct configuration creates a separate version of the Prompt.
  2. Calling prompts.log() or prompts.call() inside the decorated function works normally and does not interact with the decorator. It does, however, indicate misuse of the decorator, since these methods are alternatives for achieving the same result.
  3. If you want to switch between providers with ease, use prompts.call() with a provider parameter instead of the decorator (see the sketch below).
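
A hedged sketch of the prompts.call() alternative, assuming it accepts path, prompt, and messages arguments as in the Humanloop SDK; the model name and values below are placeholders:

```python
# Switching providers becomes a configuration change rather than a code change.
response = hl_client.prompts.call(
    path="MyFeature/Process",
    prompt={
        "model": "claude-3-5-sonnet-latest",  # placeholder model name
        "provider": "anthropic",
        "temperature": 0.5,
    },
    messages=[{"role": "user", "content": "Summarise this document."}],
)
```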

Humanloop Prompts are more than the string passed to the LLM provider. They encapsulate LLM hyperparameters, associations to available tools, and can be templated. For more details, refer to our Prompts explanation.