June 20, 2023
Improved Python SDK streaming response
We’ve improved our Python SDK’s streaming response to contain the datapoint ID. Using the ID, you can now provide feedback to datapoints created through streaming.
The `humanloop.chat_stream()` and `humanloop.complete_stream()` methods now yield a dictionary with `output` and `id`.
Install the updated SDK with
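```
pip install --upgrade humanloop
```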
Example snippet
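Below is a minimal sketch of consuming the stream. The client construction, the `chat_stream()` parameters, and the `feedback()` call are illustrative assumptions; only the `output` and `id` keys on each yielded dictionary are guaranteed by this change.

```python
from humanloop import Humanloop

humanloop = Humanloop(api_key="YOUR_HUMANLOOP_API_KEY")

# Each yielded chunk is a dictionary containing the generated text
# ("output") and the ID of the datapoint being created ("id").
datapoint_id = None
for chunk in humanloop.chat_stream(
    project="my-project",  # assumed parameter name
    messages=[{"role": "user", "content": "Tell me a joke."}],
):
    print(chunk["output"], end="", flush=True)
    datapoint_id = chunk["id"]

# With the datapoint ID you can now attach feedback to the streamed
# generation (method and parameter names assumed for illustration).
humanloop.feedback(type="rating", value="good", data_id=datapoint_id)
```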
OpenAI Azure support
We’ve just added support for Azure deployments of OpenAI models to Humanloop!
This update adds the ability to target Microsoft Azure deployments of OpenAI models in the playground and in your projects. To set this up, visit your organization’s settings.
Enabling Azure OpenAI for your organization
As a prerequisite, you will need to already be set up with the Azure OpenAI Service. See the Azure OpenAI docs for more details. At the time of writing, access is granted by application only.
Click the Setup button and provide your Azure OpenAI endpoint and API key.
Your endpoint can be found in the Keys & Endpoint section when examining your resource from the Azure portal. Alternatively, you can find the value in Azure OpenAI Studio > Playground > Code View. An example endpoint is: docs-test-001.openai.azure.com.
Your API keys can also be found in the Keys & Endpoint section when examining your resource from the Azure portal. You can use either KEY1 or KEY2.
Working with Azure OpenAI models
Once you’ve successfully enabled Azure OpenAI for your organization, you’ll be able to access it through the playground and in your projects in exactly the same way as your existing OpenAI and/or Anthropic models.
REST API and Python / TypeScript support
As with other model providers, once you’ve set up an Azure OpenAI-backed model config, you can call it with the Humanloop REST API or our SDKs.
In the `model_config.model` field, provide the name of the model that you deployed from the Azure portal (see the note below for important naming conventions when setting up your deployment in the Azure portal). The request will use the stored organization-level key and endpoint you configured above, unless you override them on a per-request basis by passing both the endpoint and API key in the `provider_api_keys` field, as shown in the sketch below.
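Here is a rough sketch of such a per-request override using the Python SDK. The exact shape of the `provider_api_keys` payload and the other parameter names are assumptions; consult the API reference for the authoritative schema.

```python
from humanloop import Humanloop

humanloop = Humanloop(api_key="YOUR_HUMANLOOP_API_KEY")

response = humanloop.chat(
    project="my-project",
    # "model" is the name you gave your deployment in the Azure portal.
    model_config={
        "model": "gpt-35-turbo",
        "max_tokens": 500,
        "temperature": 0.7,
    },
    messages=[{"role": "user", "content": "Hello!"}],
    # Optional per-request override; omit this to use the organization-level
    # endpoint and key configured above. Field names here are assumed.
    provider_api_keys={
        "openai_azure": "YOUR_AZURE_OPENAI_KEY",
        "openai_azure_endpoint": "https://docs-test-001.openai.azure.com",
    },
)
print(response)
```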
Note: Naming Model Deployments
When you deploy a model through the Azure portal, you’ll have the ability to give your deployment a unique name. For instance, if you deploy an instance of `gpt-35-turbo` in your OpenAI Service, you may choose to give it an arbitrary name like `my-orgs-llm-model`.
To use all Humanloop features with your Azure model deployment, you must ensure that your deployments are named either with an unmodified base model name like `gpt-35-turbo`, or with the base model name preceded by a custom prefix, like `my-org-gpt-35-turbo`. If your model deployments use arbitrary names that do not end in a base model name, certain features, such as setting `max_tokens=-1` in your model configs, may fail to work as expected.