Prompt Decorator
Auto-instrumentation for LLM provider calls
Overview
The Prompt decorator automatically instruments LLM provider calls and creates Prompt Logs on Humanloop. When applied to a function, it:
- Creates a new Log for each LLM provider call made within the decorated function.
- Versions the Prompt using hyperparameters of the provider call.
Decorator Definition
The decorated function will have the same signature as the original function.
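The original definition code block is not reproduced here. As an illustrative sketch only (the `prompt` decorator factory below is hypothetical, not the SDK's implementation), signature preservation can be achieved with `functools.wraps`:

```python
import functools
import inspect

def prompt(path: str):
    """Hypothetical decorator factory: wraps a function while
    preserving its name, docstring, and signature."""
    def decorator(fn):
        @functools.wraps(fn)  # copies __name__, __doc__, and sets __wrapped__
        def wrapper(*args, **kwargs):
            # Real auto-instrumentation would intercept provider calls here.
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@prompt(path="qa-bot/answer")
def answer(question: str, temperature: float = 0.7) -> str:
    return f"Answering: {question}"

# The wrapper exposes the original signature via __wrapped__.
assert inspect.signature(answer) == inspect.signature(answer.__wrapped__)
```

Because `functools.wraps` records the original function on `__wrapped__`, tools like `inspect.signature` see the original parameters rather than `(*args, **kwargs)`.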
Parameters
Usage
Behavior
Versioning
The hyperparameters of the LLM provider call are used to version the Prompt.
If the configuration changes, new Logs will be created under a new version of the same Prompt.
The following parameters are considered for versioning the Prompt:
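To illustrate the idea (this is a sketch, not the SDK's actual versioning algorithm, and the key names below are assumptions), a version identifier can be derived by hashing only the versioning-relevant hyperparameters of a provider call:

```python
import hashlib
import json

# Hypothetical set of versioning-relevant keys; the actual fields
# considered by Humanloop are defined by the SDK.
VERSIONED_KEYS = ("model", "temperature", "top_p", "max_tokens", "stop")

def version_id(call_kwargs: dict) -> str:
    """Derive a stable version hash from provider-call hyperparameters."""
    config = {k: call_kwargs.get(k) for k in VERSIONED_KEYS}
    blob = json.dumps(config, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()[:12]

# Non-versioned inputs (e.g. messages) do not affect the version...
v1 = version_id({"model": "gpt-4o", "temperature": 0.2})
v2 = version_id({"model": "gpt-4o", "temperature": 0.2,
                 "messages": [{"role": "user", "content": "hi"}]})
assert v1 == v2
# ...but changing a hyperparameter yields a new version.
assert version_id({"model": "gpt-4o", "temperature": 0.9}) != v1
```

This matches the behavior above: calls with identical hyperparameters share a version, while a configuration change produces Logs under a new version of the same Prompt.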
Log Creation
Each LLM provider call within the decorated function creates a Log with the following fields set:
Error Handling
- LLM provider errors are caught and logged in the Log's `error` field. However, `HumanloopRuntimeError` is not caught and is re-raised: it indicates incorrect SDK or decorator usage.
- The decorated function propagates exceptions from the LLM provider.
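The two rules above can be sketched as follows (the `logged_call` helper is illustrative; only the `HumanloopRuntimeError` name comes from the docs): provider errors are recorded on the log and still propagated, while `HumanloopRuntimeError` bypasses capture entirely.

```python
class HumanloopRuntimeError(Exception):
    """Stand-in for the SDK's exception type (name taken from the docs)."""

def logged_call(provider_call, log: dict):
    try:
        log["output"] = provider_call()
        return log["output"]
    except HumanloopRuntimeError:
        raise  # SDK/decorator misuse: never swallowed, never logged here
    except Exception as exc:
        log["error"] = str(exc)  # captured in the Log's error field...
        raise                    # ...and still propagated to the caller

log: dict = {}

def flaky_provider():
    raise ValueError("rate limit exceeded")

try:
    logged_call(flaky_provider, log)
except ValueError:
    pass  # the provider error reaches the caller

assert log["error"] == "rate limit exceeded"
```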
Best Practices
- Multiple Logs will be created if you make multiple provider calls inside the decorated function. To avoid confusion, do not mix providers or hyperparameters across these calls, as each distinct configuration creates a new version of the Prompt.
- Calling `prompts.log()` or `prompts.call()` inside the decorated function works normally, with no interaction with the decorator. However, this indicates a misuse of the decorator, as they are alternatives for achieving the same result.
- If you want to switch between providers easily, use `prompts.call()` with a `provider` parameter instead of the decorator.
Related Documentation
Humanloop Prompts are more than the string passed to the LLM provider. They encapsulate LLM hyperparameters, associations to available tools, and can be templated. For more details, refer to our Prompts explanation.