Use Humanloop to add logging to an AI project.

This tutorial takes an existing Python script, splits it into versioned components, and adds logging to Humanloop.

Prerequisites
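The script below imports the humanloop and openai packages and authenticates against both services, so at minimum you will need a Humanloop account and API key, an OpenAI API key, and Python 3.9 or newer with both SDKs installed:

pip install humanloop openai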

Create the chat agent

To demonstrate how to add logging to Humanloop, we will start with a simple chat agent that answers math and science questions.

Create an agent.py file and add the following:

agent.py
import json

from humanloop import Humanloop
from openai import OpenAI

openai = OpenAI(api_key="YOUR_OPENAI_KEY")
humanloop = Humanloop(api_key="YOUR_HUMANLOOP_KEY")


def calculator(operation: str, num1: int, num2: int) -> str:
    """Do arithmetic operations on two numbers."""
    if operation == "add":
        return str(num1 + num2)
    elif operation == "subtract":
        return str(num1 - num2)
    elif operation == "multiply":
        return str(num1 * num2)
    elif operation == "divide":
        return str(num1 / num2)
    else:
        return "Invalid operation"


def call_model(messages: list[dict]) -> str:
    output = openai.chat.completions.create(
        messages=messages,
        model="gpt-4o",
        tools=[{
            "type": "function",
            "function": {
                "name": "calculator",
                "description": "Do arithmetic operations on two numbers.",
                "parameters": {
                    "type": "object",
                    "required": ["operation", "num1", "num2"],
                    "properties": {
                        "operation": {"type": "string"},
                        "num1": {"type": "integer"},
                        "num2": {"type": "integer"},
                    },
                    "additionalProperties": False,
                },
            },
        }],
        temperature=0.7,
    )

    # Check if the model asked for a tool call
    if output.choices[0].message.tool_calls:
        for tool_call in output.choices[0].message.tool_calls:
            arguments = json.loads(tool_call.function.arguments)
            if tool_call.function.name == "calculator":
                result = calculator(**arguments)
                return f"[TOOL CALL] {result}"

    # Otherwise, return the LLM response
    return output.choices[0].message.content


def conversation():
    messages = [
        {
            "role": "system",
            "content": "You are a groovy 80s surfer dude helping with math and science.",
        },
    ]
    while True:
        user_input = input("You: ")
        if user_input == "exit":
            break
        messages.append({"role": "user", "content": user_input})
        response = call_model(messages=messages)
        messages.append({"role": "assistant", "content": response})
        print(f"Agent: {response}")


if __name__ == "__main__":
    conversation()

Log to Humanloop

If you use a programming language not supported by the SDK, or want more control, see our guide on logging through the API for an alternative to decorators.
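If you take that route, you create Logs by calling a resource's .log() method directly instead of decorating your functions. A minimal sketch, assuming the v5 SDK's prompts.log method; check the API reference for the exact fields:

# Sketch only: log a single Prompt call by hand instead of via @humanloop.prompt.
# Field names mirror the /prompts/log endpoint; consult the API reference.
humanloop.prompts.log(
    path="Logging Quickstart/QA Prompt",
    prompt={"model": "gpt-4o", "temperature": 0.7},
    messages=[{"role": "user", "content": "What is 5678 * 456?"}],
    output="[TOOL CALL] 2589168",
)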

Use the SDK decorators to enable logging. At runtime, every call to a decorated function will create a Log on Humanloop. The path argument sets where each File lives in your workspace; here, all three are grouped under a Logging Quickstart directory.

agent.py
@humanloop.tool(path="Logging Quickstart/Calculator")
def calculator(operation: str, num1: int, num2: int) -> str:
    ...


@humanloop.prompt(path="Logging Quickstart/QA Prompt")
def call_model(messages: list[dict]) -> str:
    ...


@humanloop.flow(path="Logging Quickstart/QA Agent")
def conversation():
    ...


if __name__ == "__main__":
    conversation()

Run the code

Have a conversation with the agent. When you’re done, type exit to close the program.

$ python agent.py
You: Hi dude!
Agent: Tubular! I am here to help with math and science, what is groovin?
You: How does flying work?
Agent: ...
You: What is 5678 * 456?
Agent: [TOOL CALL] 2589168
You: exit

Check your workspace

Navigate to your workspace to see the logged conversation.

Inside the Logging Quickstart directory on the left, click the QA Agent Flow. Select the Logs tab from the top of the page and click the Log inside the table.

You will see the conversation’s trace, containing Logs corresponding to the Tool and the Prompt.

Change the agent and rerun

Modify the call_model function to use a different model and temperature.

agent.py
@humanloop.prompt(path="Logging Quickstart/QA Prompt")
def call_model(messages: list[dict]) -> str:
    output = openai.chat.completions.create(
        messages=messages,
        model="gpt-4o-mini",
        tools=[
            # The @tool decorator adds a .json_schema attribute
            # so you don't have to define the schema by hand
            calculator.json_schema
        ],
        temperature=0.2,
    )

    # Check if the model asked for a tool call
    if output.choices[0].message.tool_calls:
        for tool_call in output.choices[0].message.tool_calls:
            arguments = json.loads(tool_call.function.arguments)
            if tool_call.function.name == "calculator":
                result = calculator(**arguments)
                return f"[TOOL CALL] {result}"

    # Otherwise, return the LLM response
    return output.choices[0].message.content

Run the agent again, then head back to your workspace.

Click the QA Prompt Prompt, select the Dashboard tab from the top of the page and look at Uncommitted Versions.

By changing the hyperparameters of the OpenAI call, you have created a new version of the Prompt.

Next steps

Logging is the first step to observing your AI product. Follow up with our guides on monitoring and evals.