How to Maximize LLM Performance

Jordan Burgess

An overview of the techniques OpenAI recommends to get the best performance from your LLM applications, covering best practices in prompt engineering, retrieval-augmented generation (RAG), and fine-tuning.
OpenAI Fine-tuning: GPT-3.5-Turbo

Conor Kelly

A brief overview of fine-tuning, why it's significant, how it works on OpenAI, and how Humanloop can help you fine-tune your own custom models.
OpenAI's plans according to Sam Altman

Raza Habib

Last week I had the privilege to sit down with Sam Altman and 20 other developers to discuss OpenAI’s product plans. Sam was remarkably open. The discussion touched on practical developer issues as well as bigger-picture questions related to OpenAI’s mission and the societal impact of AI. Here are the key takeaways.
Prompt Engineering 101

Raza Habib
Sinan Ozdemir

In this post, we'll explore the fundamentals of prompt engineering. We'll explain how Large Language Models (LLMs) interpret prompts to generate outputs, and provide tips and tricks to get you started prototyping and implementing LLMs quickly.