The Broad Way

[ Sharp Mind · Sharp Blade · Sharp Spirit ]

2026-02-14//LOG

Building Sigil Protocol: Why AI Agents Need Identity

I have been thinking about this problem for months, and I finally started building. The problem is simple to state and HARD to solve: when AI agents talk to each other, how do you know who you are talking to?

Right now the answer is API keys. Agent A calls Agent B's endpoint with a bearer token, and that is it. That is the entire identity layer. A string. A shared secret. This works fine when two services talk to each other in a controlled environment. It breaks COMPLETELY in a multi-agent ecosystem where agents are autonomous, built by different organizations, and where trust needs to be granular and revocable.

Think about it. On the human internet, I can verify who I am talking to through a chain of trust: TLS certificates, signed by certificate authorities, verified by my browser. I can check that google.com is actually Google because DigiCert says so and my OS trusts DigiCert. We have this entire infrastructure for human-facing identity, and we have NOTHING equivalent for agents.

Sigil Protocol is my attempt at fixing this. The core idea is cryptographic attestation for AI agents. Every agent gets a Sigil: a signed identity document that contains the agent's public key, its capabilities declaration, its creator's identity, and a chain of attestations from other agents or organizations that have verified it.

The capabilities declaration is crucial. It is not enough to know WHO an agent is. You need to know WHAT it is authorized to do. Agent A might be authorized to read your calendar but not send emails. Agent B might be authorized to execute trades up to 1,000 dollars but not above. These capability boundaries need to be cryptographically bound to the identity, not enforced by the honor system.

The attestation chain works like a web of trust. When Organization X deploys Agent A, they sign its Sigil with their organizational key.
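To make the document shape concrete, here is a sketch of what a Sigil might look like in TypeScript. Every field name here is illustrative guesswork on my part, not the actual spec:

```typescript
// Hypothetical shape of a Sigil document -- field names are illustrative,
// not taken from the actual Sigil Protocol spec.
interface Capability {
  action: string;           // e.g. "calendar:read", "trade:execute"
  limit?: number;           // optional bound, e.g. max trade size in dollars
}

interface Attestation {
  attesterId: string;       // who is vouching
  statement: string;        // what they observed
  prevHash: string;         // hash of the chain state before this entry
  signature: string;        // attester's signature over the entry (base64)
}

interface Sigil {
  agentId: string;
  publicKey: string;        // agent's Ed25519 public key (base64)
  capabilities: Capability[];
  creator: string;          // deploying organization's identity
  creatorSignature: string; // organizational key's signature over the document
  attestations: Attestation[];
}

// Example: an agent allowed to read calendars and trade up to $1,000.
const exampleSigil: Sigil = {
  agentId: "agent-a",
  publicKey: "<base64 ed25519 key>",
  capabilities: [
    { action: "calendar:read" },
    { action: "trade:execute", limit: 1000 },
  ],
  creator: "org-x",
  creatorSignature: "<base64 signature>",
  attestations: [],
};
```

The point of the `limit` field is that the $1,000 trade ceiling lives inside the signed document itself, so it travels with the identity rather than with whoever happens to enforce it.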
When Agent A interacts with Agent B successfully, Agent B can optionally add an attestation: "I have interacted with this agent and it behaved according to its declared capabilities." Over time, agents build reputation through accumulated attestations.

The technical implementation uses Ed25519 for signing. Each Sigil is a JSON document with a canonical serialization so signatures are deterministic. The chain of attestations is a Merkle-like structure where each new attestation includes a hash of the previous state. This means you cannot selectively remove attestations without breaking the chain. If an agent behaves badly and gets a negative attestation, it STAYS.

Revocation is handled through a lightweight revocation registry. When a Sigil is revoked, the revocation notice is signed by the original issuer and propagated through a gossip protocol. Agents are expected to check revocation status before trusting a Sigil, similar to OCSP stapling in TLS.

The hardest part is bootstrapping. You need a root of trust. For now, Sigil uses a small set of "anchor" organizations whose public keys are hardcoded in the protocol. This is not ideal, and it is explicitly marked as a transitional mechanism. The long-term vision is a fully decentralized web of trust where anchors are unnecessary because the attestation graph is dense enough to establish trust through multiple independent paths.

I am building the reference implementation in TypeScript. The core library handles Sigil creation, signing, verification, and attestation chain validation. There is also a simple registry server for publishing and discovering Sigils. The protocol spec is separate from the implementation because I want this to be language-agnostic: if someone wants to build a Rust or Go implementation, the spec should be sufficient.

Why does this matter? Because we are about to enter a world where AI agents interact with each other at scale. They will negotiate, transact, share data, and coordinate actions.
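The "canonical serialization so signatures are deterministic" part boils down to two pieces: a byte-for-byte stable encoding and an Ed25519 sign/verify pair. Here is a minimal sketch using Node's built-in crypto module; the key-sorting canonicalization is a stand-in I chose for illustration, not necessarily what the spec uses (a real spec might pick something like RFC 8785 JCS):

```typescript
import { generateKeyPairSync, sign, verify } from "crypto";

// Canonical JSON: recursively sort object keys so the same document always
// serializes to the same bytes, regardless of key insertion order.
function canonicalize(value: unknown): string {
  if (Array.isArray(value)) {
    return "[" + value.map(canonicalize).join(",") + "]";
  }
  if (value !== null && typeof value === "object") {
    const entries = Object.keys(value as object)
      .sort()
      .map((k) => JSON.stringify(k) + ":" + canonicalize((value as any)[k]));
    return "{" + entries.join(",") + "}";
  }
  return JSON.stringify(value);
}

// Ed25519 key pair for the issuing organization (illustrative only).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const sigilBody = { agentId: "agent-a", creator: "org-x", capabilities: ["calendar:read"] };
const bytes = Buffer.from(canonicalize(sigilBody));

// For Ed25519, Node's one-shot sign/verify take null as the digest algorithm.
const signature = sign(null, bytes, privateKey);
const ok = verify(null, bytes, publicKey, signature); // true

// The property that matters: key order in the source object is irrelevant.
const reordered = { creator: "org-x", capabilities: ["calendar:read"], agentId: "agent-a" };
const sameBytes = canonicalize(reordered) === canonicalize(sigilBody); // true
```

Without the canonicalization step, two honest serializations of the same document could produce different bytes and therefore different signatures, which is exactly the nondeterminism the spec is trying to rule out.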
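The append-only property of the attestation chain comes from each entry committing to a hash of the state before it. A toy version of the linkage (real entries would also carry signatures and hash the canonical serialization, but the tamper-evidence works the same way):

```typescript
import { createHash } from "crypto";

// Illustrative chain entry -- simplified from what a real Sigil would carry.
interface ChainedAttestation {
  attesterId: string;
  statement: string;
  prevHash: string; // hash of the chain state before this attestation
}

const hash = (s: string) => createHash("sha256").update(s).digest("hex");

// Append an attestation, binding it to the current chain head.
function appendAttestation(
  chain: ChainedAttestation[],
  attesterId: string,
  statement: string
): ChainedAttestation[] {
  const head =
    chain.length === 0 ? hash("genesis") : hash(JSON.stringify(chain[chain.length - 1]));
  return [...chain, { attesterId, statement, prevHash: head }];
}

// Verify that every attestation commits to its predecessor. Removing or
// reordering any entry breaks the hash linkage.
function verifyChain(chain: ChainedAttestation[]): boolean {
  let head = hash("genesis");
  for (const att of chain) {
    if (att.prevHash !== head) return false;
    head = hash(JSON.stringify(att));
  }
  return true;
}

let chain: ChainedAttestation[] = [];
chain = appendAttestation(chain, "agent-b", "behaved within declared capabilities");
chain = appendAttestation(chain, "agent-c", "exceeded a declared capability"); // negative attestation

const intact = verifyChain(chain);        // true
const censored = verifyChain([chain[1]]); // false: dropping the first entry breaks the linkage
```

This is why a negative attestation STAYS: an agent that strips it out produces a chain whose hashes no longer line up, and any verifier notices.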
Without a robust identity layer, that world of autonomous agents is a security nightmare. Every agent interaction is a trust decision, and right now we are making those decisions with the equivalent of a sticky note that says "trust me bro."

The protocol is open source. The spec is in progress. I am building this in the open because identity infrastructure MUST be open. A proprietary identity system for AI agents is worse than no system at all, because it creates a gatekeeper in a space that needs to be permissionless.

More updates as I ship.