Model Context Protocol (MCP) Explained. A guide on the Model Context Protocol (MCP), explaining what MCP is, how it works, its architecture, as well as the benefits and challenges you should know. By Conor Kelly.
What is LLM Observability and Monitoring? Learn how LLM monitoring works, why it's important, as well as best practices and important metrics you should know. By Conor Kelly.
5 LLM Evaluation Tools You Should Know in 2025. A guide on the LLM evaluation platforms you should know in 2025. By Conor Kelly.
What is Prompt Management? Learn what prompt management is, how it works, its benefits and challenges, and how to set it up. By Conor Kelly.
Top 5 Vector Databases in 2025. A guide on the top vector databases to use for AI agents and applications in 2025. By Conor Kelly.
AI Is Blurring the Line Between PMs and Engineers. AI is blurring the line between PMs and engineers; now PMs and domain experts can drive product creation through prompt engineering. By Raza Habib.
Structured Outputs: Everything You Should Know. Explore Structured Outputs, understand how it works, and how to get started on OpenAI, Gemini, and Humanloop. By Conor Kelly.
Introducing Templates in Humanloop. Jumpstart your AI app development with Humanloop Templates: pre-built workflows, curated datasets, and ready-to-use evaluators designed to accelerate your path to production. By Jordan Burgess.
LLM Benchmarks: Understanding Language Model Performance. Learn about key LLM benchmarks, why they should be prioritised for specific tasks, and what metrics should be used to compare LLM performance. By Conor Kelly.
8 Retrieval Augmented Generation (RAG) Architectures You Should Know in 2025. Explore the 8 most popular Retrieval Augmented Generation (RAG) architectures, understand their workflows and use cases for building generative AI applications. By Conor Kelly.