
Tool Calling with the SDK

In this guide, we will demonstrate how to take advantage of OpenAI function calling with our Python SDK.

The Humanloop SDK provides an easy way for you to integrate the functionality of OpenAI function calling, which we refer to as JSON Schema tools, into your existing projects. Tools follow the same universal JSON Schema syntax definition as OpenAI function calling. In this guide, we’ll walk you through the process of using tools with the Humanloop SDK via the chat endpoint.
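For reference, a tool definition is a JSON object with a name, a description, and a parameters field holding a standard JSON Schema for the arguments. Here is a minimal sketch, previewing the shape of the get_current_weather tool we build later in this guide:

tool = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        # Standard JSON Schema describing the function's arguments
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA",
            },
        },
        "required": ["location"],
    },
}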


Creating a Tool

Prerequisites

  • A Humanloop account - you can create one by going to our sign up page.
  • Python installed - you can download and install Python by following the steps on the Python download page.
Using other model providers

This guide assumes you’re using OpenAI. Only specific OpenAI models, such as gpt-4 and gpt-3.5-turbo-1106, support function calling.

Install and initialize the SDK

The SDK requires Python 3.8 or greater.

Install the Humanloop SDK: If you haven’t done so already, you’ll need to install the Humanloop SDK in your Python environment. You can do this using pip:

pip install humanloop

Note: this guide was built with Humanloop==0.5.18.
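If you want to match that exact version, you can pin it when installing:

pip install humanloop==0.5.18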

Then import the SDK in your script:

from humanloop import Humanloop

Initialize the SDK: Initialize the Humanloop SDK with your API key:

from humanloop import Humanloop

hl = Humanloop(api_key="<YOUR_HUMANLOOP_API_KEY>")
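If you’d rather not hardcode the key, a minimal sketch reading it from an environment variable instead (the HUMANLOOP_API_KEY variable name here is our own choice, not mandated by the SDK):

import os

from humanloop import Humanloop

# Read the API key from the environment rather than hardcoding it
hl = Humanloop(api_key=os.environ["HUMANLOOP_API_KEY"])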

Create a chat with the tool: We’ll start with the general chat endpoint format.

from humanloop import Humanloop

hl = Humanloop(api_key="<YOUR_HUMANLOOP_API_KEY>")


def run_conversation():
    # Step 1: send the conversation and available functions to GPT
    messages = [{"role": "user", "content": "What's the weather like in Boston?"}]

    # TODO - Add tools definition here

    response = hl.chat(
        project="Assistant",
        model_config={"model": "gpt-4", "max_tokens": 100},
        messages=messages,
    )
    response = response.data[0]

Define the tool: Define a tool using the universal JSON Schema syntax. Let’s assume we’ve defined a get_current_weather tool, which returns the current weather for a specified location. We’ll add it via a "tools": tools field in the model config. We’ve also defined a dummy get_current_weather function at the top. This can be replaced by your own function to fetch real values; for now we’re hardcoding it to return a random temperature and cloudy for this example.

from humanloop import Humanloop
import random
import json

hl = Humanloop(api_key="<YOUR_HUMANLOOP_API_KEY>")


def get_current_weather(location, unit):
    # Your own function call logic
    # We will return dummy values in this example

    # Generate a random temperature between 0 and 20
    temperature = random.randint(0, 20)

    return {"temperature": temperature, "other": "cloudy"}


def run_conversation():
    # Step 1: send the conversation and available functions to GPT
    messages = [
        {
            "role": "user",
            "content": "What's the weather like in both Boston AND London tonight?",
        }
    ]
    tools = [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    ]

    response = hl.chat(
        project="Assistant",
        model_config={"model": "gpt-3.5-turbo-1106", "tools": tools, "max_tokens": 100},
        messages=messages,
    )
    response = response.body
    output_message = response["data"][0]["output_message"]

    # Remove the deprecated tool_call field (not necessary for SDK rc versions > 0.6)
    del output_message["tool_call"]

    # Add the output message from the previous chat to the messages
    messages.append(output_message)

    # TODO - Add assistant response logic

Check assistant response

The code above makes the call to OpenAI with the tool attached, but it does nothing to handle the assistant response. When the model decides to call a tool, the response will contain a tool_calls field. Fetch that value and pass the arguments to your own function, as shown below. Replace the TODO - Add assistant response logic in your code from above with the following. The latest OpenAI models, gpt-4-1106-preview and gpt-3.5-turbo-1106, can return multiple tool calls, so below we loop through the tool_calls and populate the response accordingly.

    # Step 2: check if GPT wanted to call a tool
    if output_message.get("tool_calls"):
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_current_weather": get_current_weather,
        }

        for tool_call in output_message["tool_calls"]:
            function_name = tool_call["function"]["name"]
            function_args = json.loads(tool_call["function"]["arguments"])
            function_to_call = available_functions[function_name]
            function_response = function_to_call(
                location=function_args.get("location"),
                unit=function_args.get("unit"),
            )

            # TODO - return the tool response back to OpenAI
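As the comment in the snippet notes, the model-generated arguments are not guaranteed to be valid JSON, and the model could in principle name a function you haven’t registered. A minimal defensive sketch of the loop body (the skip-and-fallback behaviour here is our own suggestion, not part of the SDK):

for tool_call in output_message["tool_calls"]:
    function_name = tool_call["function"]["name"]
    function_to_call = available_functions.get(function_name)
    if function_to_call is None:
        # The model requested a function we didn't define; skip it
        continue
    try:
        function_args = json.loads(tool_call["function"]["arguments"])
    except json.JSONDecodeError:
        # Malformed arguments from the model; fall back to empty args
        function_args = {}
    function_response = function_to_call(
        location=function_args.get("location"),
        unit=function_args.get("unit"),
    )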

Return the tool response

We can then return the tool response to OpenAI. For each tool call, append a message with the tool role that contains the JSON-serialised function response and the ID of the tool call it answers, then send the updated messages back to the model.

    # Step 2: check if GPT wanted to call a tool
    if output_message.get("tool_calls"):
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_current_weather": get_current_weather,
        }

        for tool_call in output_message["tool_calls"]:
            function_name = tool_call["function"]["name"]
            function_args = json.loads(tool_call["function"]["arguments"])
            function_to_call = available_functions[function_name]
            function_response = function_to_call(
                location=function_args.get("location"),
                unit=function_args.get("unit"),
            )

            # Step 4: send the response back to the model per function call
            messages.append(
                {
                    "role": "tool",
                    "content": json.dumps(function_response),
                    "tool_call_id": tool_call["id"],
                }
            )

        second_response = hl.chat(
            project="Assistant",
            model_config={
                "model": "gpt-3.5-turbo-1106",
                "tools": tools,
                "max_tokens": 500,
            },
            messages=messages,
        )
        return second_response

Review assistant response

The assistant should respond with a message that incorporates the parameters you provided, for example: The current weather in Boston is 22 degrees and cloudy. The above can be run by adding the Python handling logic at the bottom of your file:

if __name__ == "__main__":
    response = run_conversation()
    response = response.data[0].output
    # Print to console the response from OpenAI with the formatted message
    print(response)
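Note that run_conversation only returns a value when the model actually calls a tool; otherwise it falls off the end and returns None. A small guard against that case (our own addition, not in the original script):

if __name__ == "__main__":
    response = run_conversation()
    if response is None:
        print("The model did not call a tool.")
    else:
        # Print to console the response from OpenAI with the formatted message
        print(response.data[0].output)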

The full code from this example can be seen below:

from humanloop import Humanloop
import random
import json

hl = Humanloop(
    api_key="<YOUR_HUMANLOOP_API_KEY>",
)


def get_current_weather(location, unit):
    # Your own function call logic
    # We will return dummy values in this example

    # Generate a random temperature between 0 and 20
    temperature = random.randint(0, 20)

    return {"temperature": temperature, "other": "cloudy"}


def run_conversation():
    # Step 1: send the conversation and available functions to GPT
    messages = [
        {
            "role": "user",
            "content": "What's the weather like in both Boston AND London tonight?",
        }
    ]
    tools = [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    ]

    response = hl.chat(
        project="Assistant",
        model_config={"model": "gpt-3.5-turbo-1106", "tools": tools, "max_tokens": 100},
        messages=messages,
    )
    response = response.body
    output_message = response["data"][0]["output_message"]

    # Remove the deprecated tool_call field (not necessary for SDK rc versions > 0.6)
    del output_message["tool_call"]

    # Add the output message from the previous chat to the messages
    messages.append(output_message)

    # Step 2: check if GPT wanted to call a tool
    if output_message.get("tool_calls"):
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_current_weather": get_current_weather,
        }

        for tool_call in output_message["tool_calls"]:
            function_name = tool_call["function"]["name"]
            function_args = json.loads(tool_call["function"]["arguments"])
            function_to_call = available_functions[function_name]
            function_response = function_to_call(
                location=function_args.get("location"),
                unit=function_args.get("unit"),
            )

            # Step 4: send the response back to the model per function call
            messages.append(
                {
                    "role": "tool",
                    "content": json.dumps(function_response),
                    "tool_call_id": tool_call["id"],
                }
            )

        second_response = hl.chat(
            project="Assistant",
            model_config={
                "model": "gpt-3.5-turbo-1106",
                "tools": tools,
                "max_tokens": 500,
            },
            messages=messages,
        )
        return second_response


if __name__ == "__main__":
    response = run_conversation()
    response = response.data[0].output
    # Print to console the response from OpenAI with the formatted message
    print(response)