How to Maximize LLM Performance
An overview of the techniques that OpenAI recommends to get the best performance from your LLM applications, covering best practices in prompt engineering, retrieval-augmented generation (RAG), and fine-tuning.
