August 5, 2024

OpenAI Structured Outputs

OpenAI have introduced Structured Outputs functionality to their API.

This feature allows the model to more reliably adhere to user-defined JSON schemas for use cases like information extraction, data validation, and more.

We’ve extended our /chat (in v4) and prompt/call (in v5) endpoints to support this feature. There are two ways to trigger Structured Outputs in the API:

  1. Tool Calling: When defining a tool as part of your Prompt definition, you can now include a strict=true flag. The model will then output JSON data that adheres to the tool's parameters schema, as in the example below.
1""" Example using our v5 API. """
2from humanloop import Humanloop
3
4client = Humanloop(
5 api_key="YOUR_API_KEY",
6)
7
8client.prompts.call(
9 path="person-extractor",
10 prompt={
11 "model": "gpt-4o",
12 "template": [
13 {
14 "role": "system",
15 "content": "You are an information extractor.",
16 },
17 ],
18 "tools": [
19 {
20 "name": "extract_person_object",
21 "description": "Extracts a person object from a user message.",
22 # New parameter to enable structured outputs
23 "strict": True,
24 "parameters": {
25 "type": "object",
26 "properties": {
27 "name": {
28 "type": "string",
29 "name": "Full name",
30 "description": "Full name of the person",
31 },
32 "address": {
33 "type": "string",
34 "name": "Full address",
35 "description": "Full address of the person",
36 },
37 "job": {
38 "type": "string",
39 "name": "Job",
40 "description": "The job of the person",
41 }
42 },
43 # These fields need to be defined in strict mode
44 "required": ["name", "address", "job"],
45 "additionalProperties": False,
46 },
47 }
48 ],
49 },
50 messages=[
51 {
52 "role": "user",
53 "content": "Hey! I'm Jacob Martial, I live on 123c Victoria street, Toronto and I'm a software engineer at Humanloop.",
54 },
55 ],
56 stream=False,
57)
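
If you already model the extracted person with Pydantic, you can generate the tool's parameters schema from the model rather than writing it out by hand. A minimal sketch, assuming Pydantic v2 (not taken from the changelog above):

from pydantic import BaseModel


class Person(BaseModel):
    name: str
    address: str
    job: str


# model_json_schema() returns an object schema with "properties" and a
# "required" list covering every field of the model.
parameters = Person.model_json_schema()
# Strict mode additionally requires additionalProperties to be disallowed.
parameters["additionalProperties"] = False

tool = {
    "name": "extract_person_object",
    "description": "Extracts a person object from a user message.",
    "strict": True,
    "parameters": parameters,
}

Pydantic also emits title keywords in the generated schema; if the provider's strict-mode validation rejects extra keywords, strip them before sending.
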
  2. Response Format: We've expanded the response_format request parameter with a json_schema option, which takes an optional json_schema field where you can pass in the schema you wish the model to adhere to.
client.prompts.call(
    path="person-extractor",
    prompt={
        "model": "gpt-4o",
        "template": [
            {
                "role": "system",
                "content": "You are an information extractor.",
            },
        ],
        # New parameter to enable structured outputs
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "person_object",
                "strict": True,
                "schema": {
                    "type": "object",
                    "properties": {
                        "name": {
                            "type": "string",
                            "name": "Full name",
                            "description": "Full name of the person",
                        },
                        "address": {
                            "type": "string",
                            "name": "Full address",
                            "description": "Full address of the person",
                        },
                        "job": {
                            "type": "string",
                            "name": "Job",
                            "description": "The job of the person",
                        },
                    },
                    "required": ["name", "address", "job"],
                    "additionalProperties": False,
                },
            },
        },
    },
    messages=[
        {
            "role": "user",
            "content": "Hey! I'm Jacob Martial, I live on 123c Victoria street, Toronto and I'm a software engineer at Humanloop.",
        },
    ],
    stream=False,
)
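
Once the call returns, the model's reply is a JSON string that conforms to the schema, so it can be parsed directly. A minimal sketch (the logs and output attribute names are assumptions about the SDK's response object, not taken from the changelog):

import json

# `response` stands for the value returned by the client.prompts.call above.
# The `logs` / `output` attribute names are assumptions about the response
# shape; adjust them if the object returned by the SDK differs.
person = json.loads(response.logs[0].output)
print(person["name"], person["address"], person["job"])
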

This new response format functionality is only supported by the latest OpenAI model snapshots: gpt-4o-2024-08-06 and gpt-4o-mini-2024-07-18.
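
If the generic gpt-4o alias used in the examples resolves to an earlier snapshot in your workspace, pinning the model explicitly avoids compatibility errors. A minimal sketch of just the model field (an illustration, not part of the changelog):

prompt = {
    # Pin a snapshot that supports the json_schema response format.
    "model": "gpt-4o-2024-08-06",  # or "gpt-4o-mini-2024-07-18"
    # ... remaining Prompt fields as in the examples above
}
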

We will also be exposing this functionality in our Editor UI soon!