Cyber.md: security documentation designed for AI agents
Baz proposes a structured file standard that allows AI agents to read and act on an organization's security posture without human intervention.
Last week, a modest yet substantive proposal appeared on Hacker News: Cyber.md, a security posture file format designed specifically for AI agents to consume directly. It is not an official standard from any governing body, but rather an initiative from Baz, a security-focused company, though the reasoning behind it deserves attention.
The idea stems from a concrete problem: corporate security documentation (policies, controls, asset inventories, compliance frameworks) exists in formats intended for human reading. PDFs, internal wikis, spreadsheets. When an AI agent needs to reason about an organization's security posture, or when Claude Code launches a sub-agent tasked with reviewing infrastructure configurations, that agent must extract context from scattered and poorly structured sources. Cyber.md proposes centralizing this information in a single Markdown file with predictable semantic structure.
What exactly is Cyber.md
According to Baz's description, Cyber.md is a file that lives at the root of a repository or in a known system location, similar in concept to how `SECURITY.md` or `CODEOWNERS` work in development ecosystems. Its content is organized into standardized sections: asset scope, applied controls, relevant compliance frameworks (ISO 27001, SOC 2, etc.), security owners, and incident response procedures.
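Based on the sections Baz describes, a minimal Cyber.md skeleton might look like the following. The exact header names and field layout are an assumption for illustration, not a confirmed schema:

```markdown
# Cyber.md

## Asset Scope
- payments-api (critical)
- internal-wiki (low)

## Applied Controls
- MFA enforced for all production access
- Secrets managed via vault, rotated quarterly

## Compliance Frameworks
- SOC 2 Type II
- ISO 27001

## Security Owners
- infra: alice@example.com
- appsec: bob@example.com

## Incident Response
On suspected compromise, page the on-call owner and follow
the runbook linked from the internal IR procedure.
```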
What sets it apart from a conventional `SECURITY.md` is that it is designed with agent readability as a first-class requirement, not as an afterthought. Sections use normalized headers, key values employ parseable formats, and tools like MCP servers or Claude skills are expected to ingest the file and answer questions about it without requiring complex embeddings or ad-hoc RAG pipelines.
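If the headers are truly normalized, agent-side parsing can stay trivial. A minimal sketch in Python, assuming `## `-prefixed section headers (the section names in the example are illustrative, not part of any confirmed spec):

```python
# Minimal sketch: split a Cyber.md-style file into sections keyed by
# their "## " headers, so an agent or tool can look up a section by
# name without embeddings or a RAG pipeline.

def parse_sections(text: str) -> dict[str, str]:
    """Map each '## Header' to the body text that follows it."""
    sections: dict[str, str] = {}
    current = None
    for line in text.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = ""
        elif current is not None:
            sections[current] += line + "\n"
    return {name: body.strip() for name, body in sections.items()}

# Hypothetical file content for demonstration.
example = """# Cyber.md
## Asset Scope
- payments-api (critical)
## Security Owners
- infra: alice@example.com
"""

parsed = parse_sections(example)
```

A lookup like `parsed["Security Owners"]` then answers ownership questions directly, which is exactly the kind of structured access the proposal is aiming for.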
Why it matters in the current Claude Code context
This type of proposal makes sense in an ecosystem where Claude Code already enables chaining specialized sub-agents and connecting MCP servers to external data sources. If an engineering team configures a security audit sub-agent, that agent needs to know which controls are in place, which assets are critical, and who is responsible for each area. Without a standard format, that context must be manually injected into each session or a retrieval pipeline must be built specific to each client.
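As a sketch of how that wiring might look: Claude Code sub-agents are defined as Markdown files with YAML frontmatter under `.claude/agents/`, and an audit sub-agent could simply be instructed to read the posture file first. The agent name, tool list, and instructions below are illustrative assumptions, not a published recipe:

```markdown
---
name: security-audit
description: Reviews infrastructure changes against the org's documented posture
tools: Read, Grep
---

Before reviewing any change, read Cyber.md at the repository root.
Treat its "Applied Controls" and "Asset Scope" sections as the source
of truth for which controls exist and which assets are critical, and
flag changes that touch critical assets without a matching control.
```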
With a file like Cyber.md in place, the sub-agent can read it directly via a file-reading tool or a filesystem MCP server, and operate on that information in a structured way from the start. It follows the same principle that led to adopting `robots.txt` or `humans.txt` on the web: simple conventions that create interoperability without case-by-case negotiation.
Beyond that, with context windows as large as Claude Opus 4.7's (up to 1 million tokens), including a file of this type directly in a long-running session's context is no longer a space constraint. The bottleneck is how well the information is structured, not its size.
Who finds it useful right now
In its current state, Cyber.md appears most useful for security teams already integrating agents into their workflows and facing the challenge of providing them with organizational context reproducibly. It's also valuable for platform teams building Claude Code skills or plugins oriented toward compliance and auditing.
For everyone else, the value will depend on whether the proposal gains enough traction to become a widely adopted convention. With a single Hacker News post and zero comments at the time of publication, it remains far from a de facto standard. But the direction is sound: as agents take on more operational tasks, the absence of standardized context formats will become a real problem, not a hypothetical one.
We at ClaudeWave will follow the evolution of initiatives like this one, because interoperability between agents and organizational documentation is exactly the kind of plumbing problem that tends to be solved late and poorly if not addressed proactively.