A user-centric architecture for portable, interoperable preference management in AI systems.
Visit the HCP website for an interactive guide

Part of our Loyal Agents collaboration with the Consumer Reports Innovation Lab

The Human Context Protocol, or HCP, specifies how LLM clients (such as ChatGPT) connect to LLM Memory Managers.

Examples of LLM memory managers include conversation buffers (simple full or windowed conversation history), vector databases (such as Redis) used to store and retrieve relevant information, specialized memory systems (such as A-MEM, HiAgent, and MIRIX) with features like hierarchical memory and multi-agent structures, and framework-specific managers (such as LangGraph) that handle memory within an application. Other approaches use tags in the LLM's output to trigger memory operations.
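
As a rough illustration of the tag-based approach, here is a minimal sketch in Python. The tag format, the MemoryManager class, and apply_memory_tags are hypothetical names chosen for illustration, not part of any of the systems named above:

```python
import re

# Hypothetical tag format: the model emits <memory op="add">...</memory>
# blocks in its output, which the application intercepts and turns into
# memory-manager calls before showing the reply to the user.
MEMORY_TAG = re.compile(r'<memory op="(add|delete)">(.*?)</memory>', re.DOTALL)

class MemoryManager:
    """Minimal stand-in for whichever memory backend the application uses."""
    def __init__(self):
        self.items: list[str] = []

    def add(self, text: str) -> None:
        self.items.append(text.strip())

    def delete(self, text: str) -> None:
        self.items = [m for m in self.items if m != text.strip()]

def apply_memory_tags(llm_output: str, memory: MemoryManager) -> str:
    """Execute tag-triggered memory operations and strip the tags from the reply."""
    for op, content in MEMORY_TAG.findall(llm_output):
        if op == "add":
            memory.add(content)
        else:
            memory.delete(content)
    return MEMORY_TAG.sub("", llm_output).strip()

memory = MemoryManager()
reply = apply_memory_tags(
    'Noted! <memory op="add">User prefers metric units</memory>', memory
)
print(reply)         # Noted!
print(memory.items)  # ['User prefers metric units']
```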

The job of the HCP Server, or PCR, is to facilitate communication and data transfer between LLM clients and LLM Memory Managers, without depending on how either side is implemented. In short, HCP defines the contracts for preference search and preference update, and provides a way for preferences to be used portably across contexts. The HCP server exposes three high-level tools covering the three complementary tasks a language model needs in order to understand, use, and update your preferences: schema definition, preference search, and preference update.
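
The sketch below illustrates what those three tool contracts might look like from a client's point of view. The class, method, and field names are assumptions made for illustration, not the HCP specification or its wire format:

```python
from dataclasses import dataclass, field

@dataclass
class Preference:
    key: str           # e.g. "dietary.restrictions"
    value: str         # e.g. "vegetarian"
    context: str = ""  # optional scope, e.g. "food ordering"

@dataclass
class HCPServer:
    schema: dict[str, str] = field(default_factory=dict)   # key -> description
    preferences: list[Preference] = field(default_factory=list)

    def get_schema(self) -> dict[str, str]:
        """Tool 1: schema definition -- tell the model what kinds of
        preferences exist and how they are structured."""
        return self.schema

    def search_preferences(self, query: str) -> list[Preference]:
        """Tool 2: preference search -- return preferences relevant to the
        current context (a naive substring match stands in for real retrieval)."""
        q = query.lower()
        return [p for p in self.preferences
                if q in p.key.lower() or q in p.value.lower() or q in p.context.lower()]

    def update_preference(self, pref: Preference) -> None:
        """Tool 3: preference update -- record a new or changed preference
        so any client can reuse it portably."""
        self.preferences = [p for p in self.preferences if p.key != pref.key]
        self.preferences.append(pref)

server = HCPServer(schema={"dietary.restrictions": "Food the user avoids"})
server.update_preference(Preference("dietary.restrictions", "vegetarian", "food ordering"))
print(server.search_preferences("dietary"))
```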

This project aims to build a reference implementation of the HCP Server, or PCR, and use that server to demonstrate a handful of use cases.

Visit our Loyal Agents page
Along with a working prototype to ground discussion, we consider adoption dynamics, market incentives, and high-stakes use cases, and outline novel paths via the HCP toward trustworthy personalization in the human-AI economy.

Robust AI Personalization Controls: The Human Context Protocol

Anand Shah, Tobin South, Talfan Evans, Hannah Rose Kirk, Jiaxin Pei, Andrew Trask, E. Glen Weyl, and Michiel Bakker

Read publication