Humanloop partners with Stability AI to build the first open-source InstructGPT

Raza Habib
  • Humanloop is partnering with Carper AI, a Stability AI company, to build and release a 70 billion parameter open-source large language model (LLM) that can follow human instructions.
  • This will be the first open-source LLM trained using Reinforcement Learning from Human Feedback (RLHF), a technique aimed at improving the safety and usability of LLMs.
  • Humanloop will provide expertise and software for adapting LLMs directly from human feedback to customise them for specific tasks.

At Humanloop, we want to make working with AI as natural as instructing a colleague. This is why we’re excited to be partnering with Stability AI to build and release the first open-source large language model that can follow human instructions.

We’re joining forces with Carper AI (a subsidiary of Stability AI focused on reinforcement learning and preference learning), Scale, and Hugging Face to democratize “instruction-tuning” of large language models.

Large language models have demonstrated extraordinary capabilities and pushed the frontier of AI. They enable better search, writing assistants, code generation and even generalist assistants that automate tasks. Compared to traditional supervised machine learning, they do not need large labeled datasets to be adapted for new tasks. Instead, most large language models are trained on the simple task of next word prediction on very large unlabeled datasets.
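To make that objective concrete, here is a minimal sketch of next-word prediction as a training loss in PyTorch. The random tensors stand in for a real model and corpus; this is an illustration of the idea, not anyone's actual training code:

```python
import torch
import torch.nn.functional as F

batch, seq, vocab = 4, 16, 100
tokens = torch.randint(0, vocab, (batch, seq))   # token ids from an unlabeled corpus
logits = torch.randn(batch, seq, vocab)          # stand-in for a model's predictions
# Every position is trained to predict the token that comes next:
# positions 0..seq-2 predict tokens 1..seq-1.
loss = F.cross_entropy(logits[:, :-1].reshape(-1, vocab),
                       tokens[:, 1:].reshape(-1))
```

No human labels appear anywhere: the "label" for each position is simply the next token of the raw text.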

Unfortunately, LLMs trained by next word prediction are difficult to use, often produce factually inaccurate or offensive output, and can be used in harmful applications. A partial solution is to take a language model trained in the usual way and adjust it afterwards to produce more socially acceptable and honest content. This works by repeatedly prompting a language model with an instruction, gathering feedback from humans on its outputs and adjusting the model’s parameters in the direction of better predicted human feedback.
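Here is a minimal, illustrative sketch of that loop in PyTorch. It is not the project’s training code: the toy models, shapes, and the bare REINFORCE update are our own simplifications (production RLHF systems, including Carper AI’s open-source trlX library, use PPO and far larger models), but the two steps shown are the ones just described: fit a reward model to human comparisons, then adjust the language model toward outputs with higher predicted reward.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM = 100, 32

class TinyLM(nn.Module):
    """Stand-in for the pretrained language model (the policy being tuned)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):                   # tokens: (batch, seq)
        return self.head(self.embed(tokens))     # logits: (batch, seq, vocab)

class RewardModel(nn.Module):
    """Scores a whole sequence; trained to agree with human preferences."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.score = nn.Linear(DIM, 1)

    def forward(self, tokens):
        return self.score(self.embed(tokens).mean(dim=1)).squeeze(-1)

policy, reward_model = TinyLM(), RewardModel()
rm_opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)
pi_opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

# 1) Fit the reward model on human comparisons: for the same prompt,
#    annotators preferred `chosen` over `rejected`.
chosen = torch.randint(0, VOCAB, (8, 16))        # placeholder labelled data
rejected = torch.randint(0, VOCAB, (8, 16))
rm_loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
rm_opt.zero_grad()
rm_loss.backward()
rm_opt.step()

# 2) Sample responses from the policy and nudge its parameters toward
#    outputs the reward model scores highly (a bare policy-gradient step).
prompt = torch.randint(0, VOCAB, (8, 4))
tokens, logps = prompt, []
for _ in range(12):                              # sample 12 response tokens
    dist = torch.distributions.Categorical(logits=policy(tokens)[:, -1, :])
    next_tok = dist.sample()
    logps.append(dist.log_prob(next_tok))
    tokens = torch.cat([tokens, next_tok.unsqueeze(1)], dim=1)

with torch.no_grad():
    rewards = reward_model(tokens)               # predicted human feedback
pg_loss = -(torch.stack(logps, dim=1).sum(dim=1) * rewards).mean()
pi_opt.zero_grad()
pg_loss.backward()
pi_opt.step()
```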

Reinforcement Learning from Human Feedback (RLHF) has been shown to make models considerably more aligned and easier to use. The technique has been used by OpenAI, DeepMind, and Anthropic to produce LLMs that follow instructions or act as helpful assistants. In prior work, OpenAI found that outputs from models trained with RLHF were preferred to those from 100x larger models trained without human feedback.

However, keeping these models gated limits their value to academics, hobbyists, and industry alike. We see a future where RLHF-tuned models are applied and adapted to every domain and task, unlocking huge amounts of real-world value.

There have been open-source releases of large language models before, but this is the first attempt to create an open model trained with RLHF. We view RLHF training as an extremely important step in making LLMs useful and safe to deploy in public settings. The risks of LLMs are well documented, ranging from spreading misinformation to reinforcing social biases. Compared to standard language models, training with RLHF dramatically reduces these risks while at the same time increasing the model’s usefulness.

The resources and technical expertise required to build a large language model of this scale and complexity are huge. The model, with 70 billion parameters, will be trained from scratch on a Chinchilla-optimal dataset on the Stability AI supercomputer. Carper AI is partnering with Humanloop and Scale to collect and apply the human feedback data that will be used to improve the underlying language model that Carper trains. Humanloop are experts in adapting LLMs from human feedback and Scale are leaders in data annotation. Hugging Face will host the final trained model and make it generally accessible.
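For a rough sense of what “Chinchilla-optimal” implies: the Chinchilla scaling results suggest a compute-optimal budget of roughly 20 training tokens per parameter, which for a 70-billion-parameter model works out to on the order of 1.4 trillion tokens. This is a back-of-envelope estimate on our part, not the project’s confirmed training budget:

```python
# Back-of-envelope Chinchilla-optimal token budget (illustrative estimate only).
params = 70e9                  # 70 billion parameters
tokens_per_param = 20          # approximate Chinchilla-optimal ratio
print(f"~{params * tokens_per_param / 1e12:.1f} trillion tokens")  # ~1.4
```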

We expect the release of this model to spur both research and innovation. It will enable many new applications and companies, and allow us to deepen our understanding of state-of-the-art AI systems.

About the author

Raza Habib, Cofounder and CEO
Twitter: @RazRazcle
Raza is the CEO and Cofounder at Humanloop. He was inspired to work on AI as “the most transformative technology in our lifetimes” after studying under Prof David MacKay while doing Physics at Cambridge. Raza was the founding engineer of Monolith AI, applying AI to mechanical engineering, and has built speech systems at Google AI. He has a PhD in Machine Learning from UCL.