Personal Context Protocol — Your AI Agent Shouldn't Own Your Soul

5 min read

Your Context Is Not a Side Effect of a Model

We are heading into an agentic era of AI. Not just chatbots in a browser tab, but systems that read your email, schedule your meetings, negotiate with suppliers, draft contracts, and reason about your long-term goals.

The promise is seductive: a real second brain that knows you deeply and works for you.

But there is a catch. To make these agents truly useful, you have to give them almost everything. Your projects, your files, your history, your habits, your relationships, your plans, your fears.

Today, that second brain does not live under your control. It lives inside the silos of a few model providers and big SaaS platforms.

I think that is upside-down. We need an architecture where humans own their context, and agents just rent it.

The Problem: Agents Need Your Life, Providers Get Your Soul

The better an AI agent knows you, the more powerful it becomes.

A planning agent that truly understands your calendar, energy, and long-term goals can outperform any to-do app. A finance agent that sees your real income and tax situation can give advice no generic blog post can match. A business brain aggregating CRM, ERP, email, and docs can spot patterns you would never have time to see.

But in most current architectures, all of that context lives in proprietary cloud silos, in opaque agent "memories" tied to a single vendor, or in ad-hoc RAG layers nobody really controls.

The implicit deal is: give us your entire digital life, and we will give you great AI. Trust us.

Your context, your soul, becomes just another asset managed by someone else.

A Different Mental Model: Your Brain as an Onion

Your context is not one big blob. It has layers and domains.

L0 – Public: Blog ideas, templates, marketing copy. Things you would publish anyway.

L1 – Work: Projects, clients, tasks, deals. Professional but not sensitive.

L2 – Confidential: Finances, contracts, negotiation strategies. Information that could cause real damage if exposed.

L3 – Core: Health, relationships, fears, core beliefs. The things that define who you are.

A healthy agentic system must see this onion, not flatten it.

There is a second axis: what area of life does this data belong to? Business, finance, relationships, health, learning, creativity, admin. Every note, file, decision, and event gets two coordinates. "This is L2, finance, business" versus "This is L3, health, relationships."

Simple. But enough to draw real boundaries.
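
Those two coordinates are simple enough to sketch in code. The model below is a hypothetical illustration, not the PCP schema; the names `Layer`, `Sector`, and `ContextObject` are assumptions:

```python
from dataclasses import dataclass
from enum import Enum, IntEnum

class Layer(IntEnum):
    """Depth of sensitivity: higher number, deeper in the onion."""
    L0_PUBLIC = 0
    L1_WORK = 1
    L2_CONFIDENTIAL = 2
    L3_CORE = 3

class Sector(Enum):
    """Area of life the data belongs to."""
    BUSINESS = "business"
    FINANCE = "finance"
    RELATIONSHIPS = "relationships"
    HEALTH = "health"
    LEARNING = "learning"
    CREATIVITY = "creativity"
    ADMIN = "admin"

@dataclass
class ContextObject:
    """A note, file, decision, or event with its two coordinates."""
    content: str
    layer: Layer
    sectors: frozenset

# "This is L2, finance, business."
note = ContextObject(
    content="Q3 pricing strategy for client X",
    layer=Layer.L2_CONFIDENTIAL,
    sectors=frozenset({Sector.FINANCE, Sector.BUSINESS}),
)
```

Using an `IntEnum` for the layer makes depth comparable, so a policy can later express "everything at L1 or below" as a plain `<=` check.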

Agents Should Have Profiles, Not Root Access

Right now, most agents are either completely blind or have god mode. Both extremes are wrong.

A finance agent should see L1 and L2 in the finance and business sectors. A planning agent should see L1 in the work and admin sectors. A creative agent should see L0 and L1 in the creativity and external-content sectors.

No agent gets L3 by default. Ever.

This is not about restricting AI capability. It is about matching access to purpose: the same principle that drives role-based access control in every well-designed system. Agents get a profile that defines what they can see, not a skeleton key to everything.
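
One way to sketch such a profile is a per-sector layer ceiling plus one hard rule: L3 is never granted statically. The structure below is an assumption for illustration, not the PCP profile format:

```python
from enum import IntEnum

class Layer(IntEnum):
    L0 = 0
    L1 = 1
    L2 = 2
    L3 = 3

# A profile maps each visible sector to the deepest layer the agent
# may read there. Sectors absent from the profile are invisible.
FINANCE_AGENT = {"finance": Layer.L2, "business": Layer.L2}
PLANNING_AGENT = {"work": Layer.L1, "admin": Layer.L1}

def can_see(profile: dict, layer: Layer, sector: str) -> bool:
    """True if the profile's ceiling for this sector covers the layer.
    L3 is never reachable through a static profile."""
    if layer == Layer.L3:
        return False
    ceiling = profile.get(sector)
    return ceiling is not None and layer <= ceiling
```

The check is deny-by-default: an unknown sector or a request at L3 simply returns `False`, which is exactly the opposite of god mode.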

Asking to Go Deeper: Elevation as a First-Class Flow

Agents must ask before going deeper. Instead of silently consuming everything they can reach, an agent should say:

"To answer this properly, I need L2 finance data, business sector, last 6 months. This will let me see your revenue volatility, check past pricing decisions, and model cashflow."

Then you decide: Allow. Allow partially. Deny.

You always stay above the protocol. You see what layer, what sector, why. You decide. The agent never escalates its own privileges. It makes a case and you grant access, with full transparency about what it will see and why it needs it.

This is not a UX inconvenience. It is the architectural equivalent of informed consent.
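
A minimal sketch of that flow: the agent emits a structured request carrying layer, sector, scope, and justification, and only the human fills in the decision. All field names here are hypothetical, not the PCP wire format:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Decision(Enum):
    ALLOW = "allow"
    ALLOW_PARTIAL = "allow_partial"
    DENY = "deny"

@dataclass
class ElevationRequest:
    agent: str
    layer: int           # e.g. 2 for L2
    sector: str
    scope: str           # e.g. "last 6 months"
    justification: str   # what the access will let the agent do
    decision: Optional[Decision] = None  # unset until the human answers

    def resolve(self, decision: Decision) -> "ElevationRequest":
        # Only the human calls this; the agent never escalates itself.
        self.decision = decision
        return self

# The agent makes its case...
req = ElevationRequest(
    agent="finance-agent",
    layer=2,
    sector="finance",
    scope="last 6 months",
    justification="model revenue volatility and cashflow",
)
# ...and the human decides.
req.resolve(Decision.ALLOW_PARTIAL)
```

Because the request carries its own justification and scope, it doubles as an audit record of what was asked for and why.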

Vaults, Not Silos

Your context should live in an encrypted vault, under keys you control, not in someone else's cloud.

Think of it like a password manager for your context. Or a crypto wallet for your personal data. The vault holds everything. Agents send structured requests. The vault decides what to return, when to log access, when to revoke.

Models stay in the cloud, evolving fast. That is fine; models are commodity infrastructure. Your context stays under your key. That is the asset.

The difference between a silo and a vault is ownership. A silo locks you in. A vault locks others out.
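
A toy version of such a vault, assuming an in-memory store and a grant set (nothing here is the real PCP interface), might look like:

```python
import time

class Vault:
    """Sketch of a vault that answers structured requests, logs every
    access attempt, and can revoke a grant. Purely illustrative."""

    def __init__(self):
        self._objects = []    # (layer, sector, content) triples
        self._grants = set()  # (agent, layer, sector) tuples
        self.access_log = []  # audit trail, success or not

    def store(self, layer: int, sector: str, content: str) -> None:
        self._objects.append((layer, sector, content))

    def grant(self, agent: str, layer: int, sector: str) -> None:
        self._grants.add((agent, layer, sector))

    def revoke(self, agent: str, layer: int, sector: str) -> None:
        self._grants.discard((agent, layer, sector))

    def request(self, agent: str, layer: int, sector: str) -> list:
        # Log first: the trail records denied attempts too.
        self.access_log.append((time.time(), agent, layer, sector))
        if (agent, layer, sector) not in self._grants:
            return []  # the vault, not the agent, decides what comes back
        return [c for (l, s, c) in self._objects
                if l <= layer and s == sector]

vault = Vault()
vault.store(2, "finance", "Q2 invoice history")
vault.grant("finance-agent", 2, "finance")
```

Every request is logged whether or not it succeeds, and revoking a grant takes effect on the very next request: the agent holds no copy of the keys.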

PCP – Personal Context Protocol

I wrapped this thinking into a small open spec: PCP.

It defines layers, sectors, object schemas, agent profiles, access policies, and elevation flows. It does not dictate your model, database, or UI. It just gives a common language for four questions:

  1. What is this context?
  2. How deep is it?
  3. Who is allowed to see it?
  4. When must they ask first?

Your context is not a side effect of a model. It is a first-class asset with its own protocol.

The draft spec with JSON examples is here.

The Question Worth Asking

If you are building or deploying AI agents today, consider this:

Would you trust your agents more if there were a clear, inspectable protocol that controlled what they see, how deep they go, and when they have to ask you first?

I think the answer is obvious. The harder question is who builds it, and whether the incentives of model providers and platform vendors align with giving users real control.

History suggests they do not. Which is why this needs to be an open protocol, not a product feature.

Next step

Let’s talk about your process.

If you have a workflow that consumes more time than it should, it is worth a conversation. We analyse your process and show where an AI agent has the biggest impact.