July 3, 2023

Introducing Tools

Today we’re announcing Tools as part of Humanloop.

Tools allow you to connect an LLM to any API and to an array of data sources, giving it extra capabilities and access to private data. Under your organization settings on Humanloop, you can now configure and manage tools in one central place.

Read more on our blog and see an example of setting up a tool for semantic search.

OpenAI functions API

We’ve updated our APIs to support OpenAI function calling.

OpenAI functions are now supported as tools on Humanloop. This allows you to pass tool definitions as part of the model configuration when calling our chat and log endpoints. For the latest OpenAI models, gpt-3.5-turbo-0613 and gpt-4-0613, the model can then choose to output a JSON object containing arguments to call these tools.

This unlocks more reliable structured output from the model and makes it easier to build useful agents.
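
For example, you can force the model to call a specific function by naming it in the function_call parameter, which effectively turns the model into a structured-data extractor. A minimal sketch (the extract_person schema here is our own illustration, not part of the API):

import openai

# Illustrative schema: pull a name and age out of free text
functions = [
    {
        "name": "extract_person",
        "description": "Extract a person's details from the text",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "age": {"type": "integer"},
            },
            "required": ["name"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "Alice is 29 and lives in Boston."}],
    functions=functions,
    # naming a specific function forces the model to call it,
    # so the reply always contains structured arguments
    function_call={"name": "extract_person"},
)
arguments = response["choices"][0]["message"]["function_call"]["arguments"]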

Recap on OpenAI functions

As described in the OpenAI documentation, the basic steps for using functions are:

  1. Call one of the models gpt-3.5-turbo-0613 or gpt-4-0613 with a user query and a set of function definitions described using the universal JSON Schema syntax.
  2. The model can then choose to call one of the functions provided. If it does, a stringified JSON object adhering to your JSON Schema definition will be returned.
  3. You can then parse the string into JSON in your code and call the chosen function with the provided arguments (NB: the model may hallucinate or return invalid JSON, so be sure to handle these scenarios in your code; see the sketch after the example below).
  4. Finally, call the model again, appending the function response as a new message. The model can then use this information to respond to the original user query.

OpenAI provide a simple example in their docs for a get_current_weather function, which we show below and then adapt to use with Humanloop:

import openai
import json


# Example dummy function hard coded to return the same weather
# In production, this could be your backend API or an external API
def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    weather_info = {
        "location": location,
        "temperature": "72",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }
    return json.dumps(weather_info)


def run_conversation():
    # Step 1: send the conversation and available functions to GPT
    messages = [{"role": "user", "content": "What's the weather like in Boston?"}]
    functions = [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    ]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        functions=functions,
        function_call="auto",  # auto is default, but we'll be explicit
    )
    response_message = response["choices"][0]["message"]

    # Step 2: check if GPT wanted to call a function
    if response_message.get("function_call"):
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_current_weather": get_current_weather,
        }  # only one function in this example, but you can have multiple
        function_name = response_message["function_call"]["name"]
        function_to_call = available_functions[function_name]
        function_args = json.loads(response_message["function_call"]["arguments"])
        function_response = function_to_call(
            location=function_args.get("location"),
            unit=function_args.get("unit"),
        )

        # Step 4: send the info on the function call and function response to GPT
        messages.append(response_message)  # extend conversation with assistant's reply
        messages.append(
            {
                "role": "function",
                "name": function_name,
                "content": function_response,
            }
        )  # extend conversation with function response
        second_response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            messages=messages,
        )  # get a new response from GPT where it can see the function response
        return second_response


print(run_conversation())
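
As the comment in step 3 notes, the arguments string is generated by the model and may not parse or validate cleanly. A minimal defensive sketch (safe_parse_arguments is our own helper name, not part of the OpenAI library):

import json


def safe_parse_arguments(arguments: str, required=("location",)):
    """Parse model-generated arguments, returning None if invalid or incomplete."""
    try:
        args = json.loads(arguments)
    except json.JSONDecodeError:
        return None
    # the model can hallucinate, omit, or mistype keys,
    # so check the ones your function relies on
    if not isinstance(args, dict) or any(key not in args for key in required):
        return None
    return args

On a None result you might re-prompt the model or fall back to a default, rather than letting one malformed reply crash the conversation loop.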

Using with Humanloop tools

OpenAI functions are treated as tools on Humanloop. Conveniently, tools follow the same universal JSON Schema definition as OpenAI functions.

We’ve expanded the definition of our model configuration to also include tool definitions. Historically, the model config has been made up of the chat template, the choice of base model, and any hyper-parameters that change the behaviour of the model.

In the case of OpenAI’s gpt-3.5-turbo-0613 and gpt-4-0613 models, any tools defined as part of the model config are passed through as functions for the model to use.
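
Putting that together, a model config with a tool might look like the following sketch. The field names follow the prose above and the examples below, but treat them as illustrative; check the API reference for the exact schema:

model_config = {
    "model": "gpt-3.5-turbo-0613",  # choice of base model
    "temperature": 0.7,  # hyper-parameters that change the model's behaviour
    "chat_template": [  # our guess at the key for the chat template
        {"role": "system", "content": "You are a helpful weather assistant."}
    ],
    "tools": [  # tool definitions, in the same JSON Schema form as OpenAI functions
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        }
    ],
}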

You can now specify these tools when using the Humanloop chat endpoint (as a replacement for OpenAI’s ChatCompletion), or when using the Humanloop log endpoint alongside your existing OpenAI calls:

Chat endpoint

Here we show how to update the run_conversation() function from the OpenAI example to use the Humanloop chat endpoint with tools instead:

import json

from humanloop import Humanloop

hl = Humanloop(
    # get your API key here: https://app.humanloop.com/account/api-keys
    api_key="YOUR_API_KEY",
)


def run_conversation():
    # Step 1: send the conversation and available functions to GPT
    messages = [{"role": "user", "content": "What's the weather like in Boston?"}]
    # functions are referred to as tools on Humanloop, but follow the same schema
    tools = [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    ]
    response = hl.chat(
        project="Assistant",
        model_config={
            "model": "gpt-3.5-turbo-0613",
            "tools": tools,
        },
        messages=messages,
    )
    response = response.body.data[0]

    # Step 2: check if GPT wanted to call a tool
    if response.get("tool_call"):
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_current_weather": get_current_weather,  # defined in the OpenAI example above
        }  # only one function in this example, but you can have multiple
        tool_call = response["tool_call"]
        function_name = tool_call["name"]
        function_to_call = available_functions[function_name]
        function_args = json.loads(tool_call["arguments"])
        function_response = function_to_call(
            location=function_args.get("location"),
            unit=function_args.get("unit"),
        )

        # Step 4: send the response back to the model
        messages.append(
            {"role": "assistant", "content": None, "tool_call": tool_call}
        )  # extend conversation with the assistant's tool call
        messages.append(
            {
                "role": "tool",
                "name": function_name,
                "content": function_response,
            }
        )  # extend conversation with the tool response
        second_response = hl.chat(
            project="Assistant",
            model_config={
                "model": "gpt-3.5-turbo-0613",
                "tools": tools,
            },
            messages=messages,
        )
        return second_response

After running this snippet, the model config recorded on your project in Humanloop will track which tools were provided to the model, and the logged datapoints will include details of the tool call for you to inspect.

Log endpoint

Alternatively, you can use the explicit Humanloop log endpoint alongside your existing OpenAI calls to achieve the same result:

import json

import openai
from humanloop import Humanloop

hl = Humanloop(
    # get your API key here: https://app.humanloop.com/account/api-keys
    api_key="YOUR_API_KEY",
)


def run_conversation():
    # Step 1: send the conversation and available functions to GPT
    messages = [{"role": "user", "content": "What's the weather like in Boston?"}]
    functions = [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    ]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        functions=functions,
        function_call="auto",  # auto is default, but we'll be explicit
    )
    response_message = response["choices"][0]["message"]

    # log the result to Humanloop
    log_response = hl.log(
        project="Assistant",
        model_config={
            "model": "gpt-3.5-turbo-0613",
            "tools": functions,  # functions are recorded as tools on Humanloop
        },
        messages=messages,
        tool_call=response_message.get("function_call"),
    )

    # Step 2: check if GPT wanted to call a function
    if response_message.get("function_call"):
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_current_weather": get_current_weather,  # defined in the OpenAI example above
        }  # only one function in this example, but you can have multiple
        function_name = response_message["function_call"]["name"]
        function_to_call = available_functions[function_name]
        function_args = json.loads(response_message["function_call"]["arguments"])
        function_response = function_to_call(
            location=function_args.get("location"),
            unit=function_args.get("unit"),
        )

        # Step 4: send the info on the function call and function response to GPT
        messages.append(response_message)  # extend conversation with assistant's reply
        messages.append(
            {
                "role": "function",
                "name": function_name,
                "content": function_response,
            }
        )  # extend conversation with function response
        second_response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            messages=messages,
        )  # get a new response from GPT where it can see the function response

        log_response = hl.log(
            project="Assistant",
            model_config={
                "model": "gpt-3.5-turbo-0613",
                "tools": functions,
            },
            messages=messages,
            output=second_response["choices"][0]["message"]["content"],
        )
        return second_response


print(run_conversation())

Coming soon

Support for defining tools in the playground!