July 3, 2023
Introducing Tools
Today we’re announcing Tools as a part of Humanloop.
Tools allow you to connect an LLM to any API and to an array of data sources to give it extra capabilities and access to private data. Under your organization settings on Humanloop you can now configure and manage tools in a central place.
Read more on our blog and see an example of setting up a tool for semantic search.
OpenAI functions API
We’ve updated our APIs to support OpenAI function calling.
OpenAI functions are now supported as tools on Humanloop. This allows you to pass tool definitions as part of the model configuration when calling our chat and log endpoints. For the latest OpenAI models, gpt-3.5-turbo-0613 and gpt-4-0613, the model can then choose to output a JSON object containing arguments to call these tools.
This makes it easier to get reliable, structured data back from the model and to build useful agents.
Recap on OpenAI functions
As described in the OpenAI documentation, the basic steps for using functions are:
- Call one of the models gpt-3.5-turbo-0613 or gpt-4-0613 with a user query and a set of function definitions described using the universal JSON Schema syntax.
- The model can then choose to call one of the functions provided. If it does, a stringified JSON object adhering to your JSON Schema definition will be returned.
- You can then parse the string into JSON in your code and call the chosen function with the provided arguments (NB: the model may hallucinate arguments or return invalid JSON, so be sure to handle these scenarios in your code).
- Finally, call the model again, appending the function response as a new message. The model can then use this information to respond to the original user query.
OpenAI have provided a simple example in their docs for a get_current_weather function, which we will show how to adapt for use with Humanloop:
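A condensed sketch of that example is below, assuming the 2023-era openai Python SDK (v0.27) and its openai.ChatCompletion interface; see OpenAI's docs for the full version:

```python
import json
import openai

def get_current_weather(location, unit="fahrenheit"):
    # Hardcoded example function returning mock weather data
    return json.dumps(
        {"location": location, "temperature": "72", "unit": unit, "forecast": ["sunny", "windy"]}
    )

def run_conversation():
    # 1. Call the model with the user query and the function definition (JSON Schema)
    messages = [{"role": "user", "content": "What's the weather like in Boston?"}]
    functions = [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    ]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        functions=functions,
        function_call="auto",
    )
    message = response["choices"][0]["message"]

    # 2. If the model chose to call the function, parse the stringified JSON arguments
    if message.get("function_call"):
        args = json.loads(message["function_call"]["arguments"])
        function_response = get_current_weather(
            location=args.get("location"), unit=args.get("unit")
        )

        # 3. Append the function response as a new message and call the model again
        messages.append(message)
        messages.append(
            {"role": "function", "name": "get_current_weather", "content": function_response}
        )
        return openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=messages)

print(run_conversation())
```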
Using with Humanloop tools
OpenAI functions are treated as tools on Humanloop. Tools conveniently follow the same universal JSON Schema definition as OpenAI functions.
We’ve expanded the definition of our model configuration to also include tool definitions. Historically, the model config has been made up of the chat template, the choice of base model, and any hyperparameters that change the behaviour of the model.
In the case of OpenAI's gpt-3.5-turbo-0613 and gpt-4-0613 models, any tools defined as part of the model config are passed through as functions for the model to use.
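As an illustration, a model config with a tool attached might look roughly like the sketch below; the field names are indicative only, so consult the Humanloop API reference for the exact schema:

```python
# Illustrative shape of a model config that includes a tool definition.
# Field names are indicative; see the Humanloop API reference for the exact schema.
model_config = {
    "model": "gpt-3.5-turbo-0613",
    "chat_template": [
        {"role": "system", "content": "You are a helpful weather assistant."}
    ],
    "temperature": 0.7,
    "tools": [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    ],
}
```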
You can now specify these tools when using the Humanloop chat endpoint (as a replacement for OpenAI’s ChatCompletion), or when using the Humanloop log endpoint in addition to the OpenAI calls:
Chat endpoint
We show here how to update the run_conversation() method from the OpenAI example to instead use the Humanloop chat endpoint with tools:
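A minimal sketch of that update is below, assuming the Humanloop Python SDK's chat endpoint accepts a project name, messages, and a model_config containing the tool definitions; the exact parameter and response field names may differ by SDK version, so check the chat endpoint reference:

```python
from humanloop import Humanloop

# Assumed initialisation of the Humanloop Python SDK with your API key
humanloop = Humanloop(api_key="YOUR_HUMANLOOP_API_KEY")

def run_conversation():
    messages = [{"role": "user", "content": "What's the weather like in Boston?"}]
    # The tool definition lives on the model config rather than being passed
    # to OpenAI as a separate `functions` argument.
    model_config = {
        "model": "gpt-3.5-turbo-0613",
        "tools": [
            {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            }
        ],
    }
    # Call the Humanloop chat endpoint in place of openai.ChatCompletion.create.
    # Parameter names here are indicative; see the chat endpoint reference.
    response = humanloop.chat(
        project="Assistant",
        model_config=model_config,
        messages=messages,
    )
    # If the model chose to call the tool, the returned assistant message will
    # contain stringified JSON arguments, exactly as with the raw OpenAI API;
    # parse them and continue the conversation as in the earlier example.
    return response
```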
After running this snippet, the model configuration recorded on your project in Humanloop will track which tools were provided to the model, and the logged datapoints will include details of the tool call for you to inspect.
Log endpoint
Alternatively, you can use the explicit Humanloop log endpoint alongside your existing OpenAI calls to achieve the same result:
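A sketch of the logging approach is below; again, the parameter names on the log call are indicative, so check the log endpoint reference for the exact fields:

```python
import openai
from humanloop import Humanloop

humanloop = Humanloop(api_key="YOUR_HUMANLOOP_API_KEY")

messages = [{"role": "user", "content": "What's the weather like in Boston?"}]
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

# Call OpenAI directly, exactly as before
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    function_call="auto",
)
message = response["choices"][0]["message"]

# Log the call to Humanloop, including the tool definitions on the config so the
# model configuration is tracked against your project. Parameter names here are
# indicative; see the log endpoint reference for the exact fields.
humanloop.log(
    project="Assistant",
    config={
        "model": "gpt-3.5-turbo-0613",
        "tools": functions,
    },
    messages=messages,
    output=message.get("content") or str(message.get("function_call")),
)
```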
Coming soon
Support for defining tools in the playground!