Tutorials

ChatGPT clone with streaming

In this tutorial, you'll build a ChatGPT clone with streaming responses using Next.js and the Humanloop TypeScript SDK.

At the end of this tutorial, you’ll have built a simple ChatGPT-style interface using Humanloop as the backend to manage interactions with your model provider, track user engagement and experiment with model configuration.

If you just want to leap in, the complete repo for this project is available on GitHub here.


Step 1: Create a new Prompt in Humanloop

First, create a Prompt with the name chat-tutorial-ts, then go to the Editor tab on the left.

Model Provider API keys

If this is your first time using the Prompt Editor, you’ll be prompted to enter an OpenAI API key, which you can create from your OpenAI account.

The Prompt Editor is an interactive environment where you can experiment with prompt templates to create a model which will be accessible via the Humanloop SDK.

Let’s try to create a chess tutor. Paste the following system message into the Chat template box on the left-hand side.

You are a chess grandmaster, who is also a friendly and helpful chess instructor.
Play a game of chess with the user. Make your own moves in reply to the student.
Explain succinctly why you made that move. Make your moves in algebraic notation.

In the Parameters section above, select gpt-4 as the model. Click Commit and enter a commit message such as “GPT-4 Grandmaster”.

Navigate back to the Dashboard tab in the sidebar. Your new Prompt Version is visible in the table at the bottom of the Prompt dashboard.

Step 2: Set up a Next.js application

Now, let’s turn to building out a simple Next.js application. We’ll use the Humanloop TypeScript SDK to provide programmatic access to the model we just created.

Run npx create-next-app@latest to create a fresh Next.js project. Accept all the default config options in the setup wizard (which include using TypeScript, Tailwind, and the Next.js app router), then run npm run dev to fire up the development server.

Next, run npm i humanloop to install the Humanloop SDK in your project.
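
For reference, the terminal steps look something like this (the project name my-chess-tutor is just an illustrative placeholder):

npx create-next-app@latest my-chess-tutor   # accept the default options
cd my-chess-tutor
npm i humanloop
npm run dev   # dev server runs at http://localhost:3000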

Replace the contents of app/page.tsx with the following. This code stubs out the basic React components and state management we need for a chat interface.

page.tsx
1"use client";
2
3import { ChatMessageWithToolCall } from "humanloop";
4import * as React from "react";
5
6const { useState } = React;
7
8export default function Home() {
9 const [messages, setMessages] = useState<ChatMessage[]>([]);
10 const [inputValue, setInputValue] = useState("");
11
12 const onSend = async () => {
13 const userMessage: ChatMessageWithToolCall = {
14 role: "user",
15 content: inputValue,
16 };
17
18 setInputValue("");
19
20 const newMessages = [...messages, userMessage];
21
22 setMessages(newMessages);
23
24 // REPLACE ME LATER
25 const res = "I'm not a language model. I'm just a string. 😞";
26 // END REPLACE ME
27
28 const assistantMessage: ChatMessageWithToolCall = {
29 role: "assistant",
30 content: res,
31 };
32
33 setMessages([...newMessages, assistantMessage]);
34 };
35
36 const handleKeyDown = (e: React.KeyboardEvent<HTMLInputElement>) => {
37 if (e.key === "Enter") {
38 onSend();
39 }
40 };
41
42 return (
43 <main className="flex flex-col items-center min-h-screen p-8 md:p-24">
44 <h1 className="text-2xl font-bold leading-7 text-gray-900 dark:text-gray-200 sm:truncate sm:text-3xl sm:tracking-tight">
45 Chess Tutor
46 </h1>
47 <div className="flex-col w-full mt-8">
48 {messages.map((msg, idx) => (
49 <MessageRow key={idx} msg={msg}></MessageRow>
50 ))}
51
52 <div className="flex w-full">
53 <div className="min-w-[70px] uppercase text-xs text-gray-500 dark:text-gray-300 pt-2">
54 User
55 </div>
56 <input
57 className="w-full px-4 py-1 mr-3 leading-tight text-gray-700 break-words bg-transparent border-none appearance-none dark:text-gray-200 flex-grow-1 focus:outline-none"
58 type="text"
59 placeholder="Type your message here..."
60 aria-label="Prompt"
61 value={inputValue}
62 onChange={(e) => setInputValue(e.target.value)}
63 onKeyDown={(e) => handleKeyDown(e)}
64 ></input>
65 <button
66 className="px-3 font-medium text-gray-500 uppercase border border-gray-300 rounded dark:border-gray-100 dark:text-gray-200 hover:border-blue-500 hover:text-blue-500"
67 onClick={() => onSend()}
68 >
69 Send
70 </button>
71 </div>
72 </div>
73 </main>
74 );
75}
76
77interface MessageRowProps {
78 msg: ChatMessageWithToolCall;
79}
80
81const MessageRow: React.FC<MessageRowProps> = ({ msg }) => {
82 return (
83 <div className="flex pb-4 mb-4 border-b border-gray-300">
84 <div className="min-w-[80px] uppercase text-xs text-gray-500 leading-tight pt-1">
85 {msg.role}
86 </div>
87 <div className="pl-4 whitespace-pre-line">{msg.content as string}</div>
88 </div>
89 );
90};

We shouldn’t call the Humanloop SDK from the client’s browser as this would require giving out the Humanloop API key, which you should not do! Instead, we’ll create a simple backend API route in Next.js which can perform the Humanloop requests on the Node server and proxy these back to the client.

Create a file containing the code below at app/api/chat/route.ts. This will automatically create an API route at /api/chat. In the call to the Humanloop SDK, you’ll need to pass the project name you created in step 1.

app/api/chat/route.ts
import { Humanloop, ChatMessageWithToolCall } from "humanloop";

if (!process.env.HUMANLOOP_API_KEY) {
  throw Error(
    "no Humanloop API key provided; add one to your .env.local file with: `HUMANLOOP_API_KEY=...`"
  );
}

const humanloop = new Humanloop({
  basePath: "https://api.humanloop.com/v4",
  apiKey: process.env.HUMANLOOP_API_KEY,
});

export async function POST(req: Request): Promise<Response> {
  const messages: ChatMessageWithToolCall[] =
    (await req.json()) as ChatMessageWithToolCall[];
  console.log(messages);

  const response = await humanloop.chatDeployed({
    project: "chat-tutorial-ts",
    messages,
  });

  return new Response(JSON.stringify(response.data.data[0].output));
}

In this code, we’re calling humanloop.chatDeployed. This function targets whichever model config is actively deployed on your project; in this case, it should be the one we set up in step 1. Other related functions in the SDK reference (such as humanloop.chat) allow you to target a specific model config rather than the actively deployed one, or even to specify the model config directly in the function call.
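
For illustration only, targeting the same Prompt with an inline model config might look something like the sketch below. This is a hedged sketch rather than the tutorial's method; the model_config fields shown are assumptions, so check the SDK reference for the exact parameter shape.

// Sketch: humanloop.chat accepts a model config directly instead of
// using the deployed version. The model_config fields here are
// assumptions; see the SDK reference for the authoritative shape.
const response = await humanloop.chat({
  project: "chat-tutorial-ts",
  messages,
  model_config: {
    model: "gpt-4",
    temperature: 0.7,
  },
});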

When we receive a response from Humanloop, we strip out just the text of the chat response and send this back to the client via a Response object (see Next.js - Route Handler docs). The Humanloop SDK response contains much more data besides the raw text, which you can inspect by logging to the console.
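
For example, while developing you can temporarily log the full payload in the route handler before returning the response:

// Inspect the full Humanloop response, not just the output text.
console.log(JSON.stringify(response.data, null, 2));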

For the above to work, you’ll need to ensure that you have a .env.local file at the root of your project directory with your Humanloop API key. You can generate a Humanloop API key by clicking your name in the bottom left and selecting API keys. This environment variable will only be available on the Next.js server, not on the client (see Next.js - Environment Variables).

.env.local
HUMANLOOP_API_KEY=...

Now, modify page.tsx to use a fetch request against the new API route.

page.tsx
const onSend = async () => {
  // ... the start of onSend is unchanged ...

  setMessages(newMessages);

  // REPLACE ME NOW
  const response = await fetch("/api/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify(newMessages),
  });

  const res = await response.json();
  // END REPLACE ME
};

You should now find that your application works as expected. When you send a message from the client, a GPT-4 response appears beneath it (after a delay).

Back in your Humanloop Prompt dashboard you should see Logs being recorded as clients interact with your model.

Step 3: Streaming tokens

(Note: requires Node version 18+).

You may notice that model responses can take a while to appear on screen. Currently, our Next.js API route blocks while the entire response is generated, before finally sending the whole thing back to the client browser in one go. For longer generations, this can take some time, particularly with larger models like GPT-4. Other model config settings can impact this too.

To provide a better user experience, we can deal with this latency by streaming tokens back to the client as they are generated and have them display eagerly on the page. The Humanloop SDK wraps the model providers’ streaming functionality so that we can achieve this. Let’s incorporate streaming tokens into our app next.

Edit the API route at app/api/chat/route.ts to look like the following. Notice that we have switched to the humanloop.chatDeployedStream function, which streams Server-Sent Events back as new tokens arrive from the model provider.

app/api/chat/route.ts
import { Humanloop, ChatMessageWithToolCall } from "humanloop";

if (!process.env.HUMANLOOP_API_KEY) {
  throw Error(
    "no Humanloop API key provided; add one to your .env.local file with: `HUMANLOOP_API_KEY=...`"
  );
}

const humanloop = new Humanloop({
  basePath: "https://api.humanloop.com/v4",
  apiKey: process.env.HUMANLOOP_API_KEY,
});

export async function POST(req: Request): Promise<Response> {
  const messages: ChatMessageWithToolCall[] =
    (await req.json()) as ChatMessageWithToolCall[];

  const response = await humanloop.chatDeployedStream({
    project: "chat-tutorial-ts",
    messages,
  });

  return new Response(response.data);
}

Now, modify the onSend function in page.tsx to the following. This reads the response body in chunks, updating the UI each time a new chunk arrives. A single chunk can contain one or more concatenated JSON objects, so we split them apart before parsing out the token text.

app/page.tsx
const onSend = async () => {
  const userMessage: ChatMessageWithToolCall = {
    role: "user",
    content: inputValue,
  };

  setInputValue("");

  const newMessages: ChatMessageWithToolCall[] = [
    ...messages,
    userMessage,
    { role: "assistant", content: "" }, // empty placeholder for the streamed response
  ];

  setMessages(newMessages);

  const response = await fetch("/api/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
    // send the history without the empty placeholder we just appended
    body: JSON.stringify(newMessages.slice(0, -1)),
  });

  if (!response.body) throw Error();

  const decoder = new TextDecoder();
  const reader = response.body.getReader();
  let done = false;
  while (!done) {
    const chunk = await reader.read();
    const value = chunk.value;
    done = chunk.done;
    const val = decoder.decode(value);
    // A chunk may contain several concatenated JSON objects ("{...}{...}"),
    // so split them apart and restore the braces lost in the split.
    const jsonChunks = val
      .split("}{")
      .map(
        (s) => (s.startsWith("{") ? "" : "{") + s + (s.endsWith("}") ? "" : "}")
      );
    const tokens = jsonChunks.map((s) => JSON.parse(s).output).join("");

    // Append the new tokens to the assistant message at the end of the list.
    setMessages((messages) => {
      const updatedLastMessage = messages.slice(-1)[0];

      return [
        ...messages.slice(0, -1),
        {
          ...updatedLastMessage,
          content: (updatedLastMessage.content as string) + tokens,
        },
      ];
    });
  }
};

You should now find that tokens stream onto the screen as soon as they are available.

Step 4: Add Feedback buttons

We’ll now add feedback buttons to the Assistant chat messages, and submit feedback on those Logs via the Humanloop API whenever the user clicks the buttons.

Modify page.tsx to include an id for each message in React state. Note that only assistant messages will have ids; user messages will have null.

page.tsx
// A new type which also includes the Humanloop data_id for a message generated by the model.
interface ChatListItem {
  id: string | null; // null for user messages, string for assistant messages
  message: ChatMessageWithToolCall;
}

export default function Home() {
  const [chatListItems, setChatListItems] = useState<ChatListItem[]>([]); // <- updated to use the new type
  ...

Modify the onSend function to look like this:

page.tsx
const onSend = async () => {
  const userMessage: ChatMessageWithToolCall = {
    role: "user",
    content: inputValue,
  };

  setInputValue("");

  const newItems: ChatListItem[] = [
    // <- modified to use the new list type
    ...chatListItems,
    { message: userMessage, id: null },
    { message: { role: "assistant", content: "" }, id: null },
  ];

  setChatListItems(newItems);

  const response = await fetch("/api/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
    // slice off the final message, which is the currently empty placeholder for the assistant response
    body: JSON.stringify(newItems.slice(0, -1).map((item) => item.message)),
  });

  if (!response.body) throw Error();

  const decoder = new TextDecoder();
  const reader = response.body.getReader();
  let done = false;
  while (!done) {
    const chunk = await reader.read();
    const value = chunk.value;
    done = chunk.done;
    const val = decoder.decode(value);
    // Split apart any concatenated JSON objects in this chunk and
    // restore the braces lost in the split.
    const jsonChunks = val
      .split("}{")
      .map(
        (s) => (s.startsWith("{") ? "" : "{") + s + (s.endsWith("}") ? "" : "}")
      );
    const tokens = jsonChunks.map((s) => JSON.parse(s).output).join("");
    const id = JSON.parse(jsonChunks[0]).id; // <- extract the data id from the streaming response

    setChatListItems((chatListItems) => {
      const lastItem = chatListItems.slice(-1)[0];
      const updatedId = id || lastItem.id; // <- use the id from this chunk if present, otherwise keep the existing one
      return [
        ...chatListItems.slice(0, -1),
        {
          ...lastItem,
          message: {
            ...lastItem.message,
            content: (lastItem.message.content as string) + tokens,
          },
          id: updatedId, // <- include the id when we update state
        },
      ];
    });
  }
};

Now, modify the MessageRow component to become a ChatItemRow component which knows about the id.

page.tsx
interface ChatItemRowProps {
  item: ChatListItem;
}

const ChatItemRow: React.FC<ChatItemRowProps> = ({ item }) => {
  const onFeedback = async (feedback: string) => {
    await fetch("/api/feedback", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ id: item.id, value: feedback }),
    });
  };

  return (
    <div className="flex pb-4 mb-4 border-b border-gray-300">
      <div className="min-w-[80px] uppercase text-xs text-gray-500 dark:text-gray-300 leading-tight pt-1">
        {item.message.role}
      </div>
      <div className="pl-4 whitespace-pre-line">
        {item.message.content as string}
      </div>
      <div className="grow" />
      <div className="text-xs">
        {item.id !== null && (
          <div className="flex gap-2">
            <button
              className="p-1 bg-gray-100 border-gray-600 rounded hover:bg-gray-200 border-1"
              onClick={() => onFeedback("good")}
            >
              👍
            </button>
            <button
              className="p-1 bg-gray-100 border-gray-600 rounded hover:bg-gray-200 border-1"
              onClick={() => onFeedback("bad")}
            >
              👎
            </button>
          </div>
        )}
      </div>
    </div>
  );
};

And finally for page.tsx, modify the rendering of the message history to use the new component:

page.tsx
// OLD
// {messages.map((msg, idx) => (
//   <MessageRow key={idx} msg={msg}></MessageRow>
// ))}

// NEW
{chatListItems.map((item, idx) => (
  <ChatItemRow key={idx} item={item}></ChatItemRow>
))}

Next, we need to create a Next.js API route for submitting feedback, similar to the one we had for making a /chat request. Create a new file at the path app/api/feedback/route.ts with the following code:

app/api/feedback/route.ts
import { Humanloop } from "humanloop";

if (!process.env.HUMANLOOP_API_KEY) {
  throw Error(
    "no Humanloop API key provided; add one to your .env.local file with: `HUMANLOOP_API_KEY=...`"
  );
}

const humanloop = new Humanloop({
  basePath: "https://api.humanloop.com/v4",
  apiKey: process.env.HUMANLOOP_API_KEY,
});

interface FeedbackRequest {
  id: string;
  value: string;
}

export async function POST(req: Request): Promise<Response> {
  const feedbackRequest: FeedbackRequest = await req.json();

  await humanloop.feedback({
    type: "rating",
    data_id: feedbackRequest.id,
    value: feedbackRequest.value,
  });

  return new Response();
}

This code simply proxies the feedback request through the Next.js server. You should now see feedback buttons on the relevant rows in chat.

Chat interface with feedback buttons.

When you click one of these feedback buttons and visit the Prompt in Humanloop, you should see the feedback recorded against the corresponding Log.

Conclusion

Congratulations! You’ve now built a working chat interface and used Humanloop to handle interaction with the model provider and log chats. You used a system message (which is invisible to your end user) to make GPT-4 behave like a chess tutor. You also added a way for your app’s users to provide feedback which you can track in Humanloop to help improve your models.

Now that you’ve seen how to create a simple Humanloop project and build a chat interface on top of it, try visiting the Humanloop project dashboard to view the logs and iterate on your model configs. You can also create experiments to learn which model configs perform best with your users. To learn more about these topics, take a look at our guides below.

All the code for this project is available on GitHub.