Flow Decorator

Automatic tracing for AI features

Overview

The Flow decorator creates and manages traces for AI features. When applied to a function, it:

  • Creates a new trace on function invocation.
  • Adds all logging calls made in the function’s execution context to the trace.
  • Completes the trace when the function exits.

Decorator Definition

```python
@hl_client.flow(
    # Required: path on Humanloop workspace for the Flow
    path: str,
    # Optional: metadata for versioning the Flow
    attributes: dict[str, Any] = None
)
def function(*args, **kwargs): ...
```

The decorated function has the same signature as the original function.

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `path` | string | Yes | Path on Humanloop workspace for the Flow |
| `attributes` | object | No | Key-value object for versioning the Flow |

Behavior

The decorated function creates a Flow Log when called. All Logs created inside the decorated function are added to its trace.

The Flow Log’s fields are populated as follows:

| Field | Type | Description |
| --- | --- | --- |
| `inputs` | object | Function arguments that aren't ChatMessage arrays |
| `messages` | array | ChatMessage arrays passed as arguments |
| `output_message` | ChatMessage | Return value if it's a ChatMessage-like object |
| `output` | string | Stringified return value if not a ChatMessage-like object |
| `error` | string | Error message if the function throws or the return value can't be serialized |

If the decorated function returns a ChatMessage object, the output_message field is populated. Otherwise, the output field is populated with the stringified return value.

If user code throws an exception, the error field is populated with the exception message and the function returns None.
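The routing between output_message and output described above can be sketched as follows. This is a simplified illustration of the documented behavior, not the SDK's actual implementation; in particular, it assumes a ChatMessage-like object is a mapping with "role" and "content" keys, and that stringification is JSON serialization:

```python
import json


def is_chat_message_like(value) -> bool:
    # Assumption: a ChatMessage-like object is a mapping with "role" and "content"
    return isinstance(value, dict) and "role" in value and "content" in value


def build_output_fields(return_value) -> dict:
    """Populate a Flow Log's output fields from a function's return value."""
    if is_chat_message_like(return_value):
        return {"output_message": return_value, "output": None, "error": None}
    try:
        # Non-ChatMessage return values are stringified into `output`
        return {"output_message": None, "output": json.dumps(return_value), "error": None}
    except TypeError as exc:
        # A return value that can't be serialized populates `error` instead
        return {"output_message": None, "output": None, "error": str(exc)}
```

For example, returning `{"role": "assistant", "content": "hi"}` populates output_message, while returning `[1, 2, 3]` populates output with its stringified form.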

Usage

Tracing Decorators

Logs created by other Humanloop decorators or SDK calls are added to the trace.

```python
@hl_client.prompt(path="MyFeature/Call LLM")
def call_llm(messages: list[ChatMessage]):
    return openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages
    ).choices[0].message.content

@hl_client.flow(path="MyFeature/Process")
def process_input(inputs: list[str]) -> list[str]:
    # Logs created by the Prompt decorator are added to the trace
    return [
        call_llm([{"role": "user", "content": text}])
        for text in inputs
    ]
```

Tracing SDK Calls

Logs created through the Humanloop SDK are added to the trace.

```python
@hl_client.flow(path="MyFeature/Process")
def process_input(text: str) -> str:
    # Created Log is added to the trace
    llm_output = hl_client.prompts.call(
        path="MyFeature/Transform",
        messages=[{"role": "user", "content": text}]
    ).logs[0].output_message.content

    transformed_output = transform(llm_output)
    # Created Log is added to the trace
    hl_client.tools.log(
        path="MyFeature/Transform",
        tool={"function": TRANSFORM_JSON_SCHEMA},
        inputs={"text": text},
        output=transformed_output
    )

    return transformed_output
```

Constraints

Nested Traces

  • Calling flows.log() inside a decorated function raises a HumanloopRuntimeError.
  • To create nested traces, call another flow-decorated function.
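The nesting rule above can be modeled with a toy trace stack. This is a simplified sketch of the documented semantics (each flow call opens a trace, logs attach to the innermost open trace), not the SDK's internals; toy_flow, toy_log, and the stack are hypothetical stand-ins:

```python
import functools

trace_stack: list[dict] = []   # open traces; the innermost is last
completed: list[dict] = []     # traces that have finished

def toy_flow(path: str):
    """Hypothetical stand-in for @hl_client.flow, modeling trace creation only."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            trace = {"path": path, "logs": []}
            trace_stack.append(trace)          # new trace on invocation
            try:
                return fn(*args, **kwargs)
            finally:
                completed.append(trace_stack.pop())  # trace completes on exit
        return wrapper
    return decorator

def toy_log(message: str):
    # Logs attach to the trace of the innermost enclosing flow
    trace_stack[-1]["logs"].append(message)

@toy_flow(path="Outer")
def outer():
    toy_log("outer work")
    inner()  # calling another flow-decorated function creates a nested trace

@toy_flow(path="Inner")
def inner():
    toy_log("inner work")
```

Running `outer()` completes the "Inner" trace first with its own log, then the "Outer" trace with only the outer log, mirroring how nested flow-decorated calls produce nested traces rather than one flat trace.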

Trace management

  • The decorator traces the Logs created in its scope, including other Humanloop decorators or SDK calls.
  • Passing a trace_parent_id argument to an SDK logging call inside the decorated function is ignored and emits a warning; the Log is added to the decorated function's trace.

Error Handling

  • User code exceptions are caught and logged inside the Flow Log’s error field. The decorated function returns None on exception.
  • HumanloopRuntimeError exceptions indicate incorrect decorator or SDK usage and are re-raised instead of being logged under error.
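The two error-handling paths can be sketched with a simplified decorator. This illustrates the rules above and is not the SDK's code; flow_like and last_log are hypothetical, and HumanloopRuntimeError is defined here only as a stand-in:

```python
import functools

class HumanloopRuntimeError(Exception):
    """Stand-in for the SDK's usage-error exception."""

def flow_like(fn):
    """Toy decorator modeling the Flow decorator's error handling."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        log = {"error": None}
        try:
            return fn(*args, **kwargs)
        except HumanloopRuntimeError:
            raise                       # usage errors are re-raised, not logged
        except Exception as exc:
            log["error"] = str(exc)     # user exceptions populate the error field
            return None                 # and the decorated function returns None
        finally:
            wrapper.last_log = log      # expose the log for inspection
    return wrapper

@flow_like
def may_fail(should_fail: bool):
    if should_fail:
        raise ValueError("boom")
    return "ok"
```

Here `may_fail(False)` returns "ok" with an empty error field, while `may_fail(True)` returns None and records "boom" in the log's error field.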

An explanation of Flows and their role in the Humanloop platform can be found in our Flows documentation.