I work with AI models every day. For years, I treated each conversation as a fresh start—writing new instructions, repeating myself, hoping the model would remember what I wanted. It didn’t scale. It didn’t work.
Then I stopped guessing and started documenting.
The Problem with Invisible Rules
When you work without written directives, you assume the model understands your intent. In practice, you're leaving your standards in the air.
You say: ‘Write clearly.’ The model interprets this three different ways across three conversations.
You say: ‘Use good commit messages.’ The model applies its default, which may not be yours.
You say: ‘No emojis.’ Two weeks later, a response comes back with a thumbs-up because the instruction didn’t persist.
Researchers studying Claude's 'soul document' found the same thing last December: AI values don't persist between sessions unless they're encoded somewhere. Training fades. System prompts get lost in long conversations. What matters is what's written down and read every time.
That’s what a directives file solves.
What a Directives File Is
It’s a file in your project root that contains the rules the model reads before responding to you.
Not guidelines. Not suggestions. Rules.
# AGENTS
## ALWAYS Directive
### ALWAYS-1: Use bun, not npm
Use `bun install`, `bun run`, `bun add`.
## NEVER Directive
### NEVER-1: No attribution to AI
Don't sign commits with Claude's name or thread IDs.
## SOMETIMES Directive
### SOMETIMES-1: Ask for confirmation before destructive actions
Check with me before deleting, modifying production data, or making irreversible changes.
Three categories. Clear. Enforceable. Auditable.
When the model reads this, it knows the rules. When you ask it later, ‘What directive did you apply?’ it can tell you. You can track whether it followed the rule. You can see the gap between intention and execution.
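For the rules to be read every time, the file has to reach the model with every request. Many agents load AGENTS.md or CLAUDE.md automatically; when yours doesn't, the mechanics are as simple as this sketch (the file location, helper name, prompt layout, and example task are all my assumptions, not a fixed convention):

```typescript
// load-directives.ts — minimal sketch; AGENTS.md location and prompt layout are assumptions
import { readFileSync } from "node:fs";

// Read the directives once per session so every request starts from the same rules.
const directives = readFileSync("AGENTS.md", "utf8");

// Prepend the rules to whatever you actually want done.
export function withDirectives(task: string): string {
  return `${directives}\n\n---\n\nTask:\n${task}`;
}

console.log(withDirectives("Write the changelog entry for this release."));
```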
Why This Matters: The Three-Level Problem
After reading Anthropic’s Constitution and OpenAI’s Model Spec, both open source and both constantly revised, I realized they’re trying to solve the same impossible problem:
- Stated level: What you tell the model (system prompts, instructions)
- Trained level: What the model learned during training (opaque, hard to verify)
- Emergent level: What the model does on edge cases (unpredictable)
Neither framework solves the gap between what you intend and what the model does.
Both documents share a crucial insight: AI behavior needs explicit documentation, not just training. Anthropic’s Constitution reads like a character study—honesty over sycophancy, helpfulness without obsequiousness, safety that doesn’t trade away usefulness. It’s written to Claude, treating the model as an agent with values rather than a tool with constraints. OpenAI’s Model Spec takes a different angle, organizing behavior into actionable rules with clear hierarchies and edge-case handling.
The problem? Both live in the ‘universal framework’ tier. They tell the model who to be in general, not what to do in your specific codebase. Anthropic can’t know you use bun, not npm. OpenAI can’t know your commit message conventions. These frameworks establish baseline character, but they don’t survive contact with your actual workflow.
That’s why you need both. The Constitution/Model Spec sets the character. Your directives file—AGENTS.md, CLAUDE.md, GEMINI.md, or whatever your agent reads—sets the context. One tells the model how to think. The other tells it what you need.
This file bridges the gap by making your intentions explicit and persistent:
- Explicit: Anyone reading your directives knows your rules. No ambiguity.
- Persistent: The file survives session boundaries. No training fade.
- Auditable: You can ask the model which rule it applied and verify the answer.
It’s a three-tier system:
- Constitution/Model Spec (universal framework)
- Directives file (project-specific rules, e.g., AGENTS.md, CLAUDE.md)
- Disclosure (model tells you which rule it’s following)
The disclosure part is optional but powerful. I ask models to state:
APPLYING RULES: ALWAYS-1, CRAFT-2, PRINCIPLE-3
[response here]
APPLIED RULES: ALWAYS-1, CRAFT-2, PRINCIPLE-3
Now I can see if the model understood the rule and applied it. That’s transparency you don’t get from system prompts alone.
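You can also automate that audit. The sketch below assumes the header wording shown above and that rule IDs look like ALWAYS-1 or CRAFT-2; it pulls the declared IDs out of a response and flags anything that doesn't exist in the directives file or that was announced but silently dropped:

```typescript
// check-disclosure.ts — sketch; header wording and rule-ID shape (ALWAYS-1, CRAFT-2, ...) are assumptions
import { readFileSync } from "node:fs";

// Rule IDs are the "### ALWAYS-1: ..." headings in the directives file.
const directives = readFileSync("AGENTS.md", "utf8");
const knownRules = new Set(
  [...directives.matchAll(/^###\s+([A-Z]+-\d+)/gm)].map((m) => m[1]),
);

// Pull the comma-separated IDs out of an "APPLYING RULES:" or "APPLIED RULES:" header,
// dropping any parenthetical rule names after the ID.
function declared(response: string, header: string): string[] {
  const match = response.match(new RegExp(`${header}:\\s*(.+)`));
  return match ? match[1].split(",").map((id) => id.trim().split(" ")[0]) : [];
}

// Anything the model cites that isn't in the file is a made-up rule; anything
// announced up front but missing at the end is a silent drop.
export function audit(response: string) {
  const applying = declared(response, "APPLYING RULES");
  const applied = declared(response, "APPLIED RULES");
  return {
    unknown: [...applying, ...applied].filter((id) => !knownRules.has(id)),
    dropped: applying.filter((id) => !applied.includes(id)),
  };
}
```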
A Concrete Example: Copywriting Directives
I built AlphaWriting directives based on William Zinsser’s On Writing Well. They contain principles (philosophy), craft rules (technique), and attitude rules (psychology).
When I ask the model to write, I point to the file:
‘Follow AlphaWriting directives. Tell me which ones you’re applying.’
The response comes back:
APPLYING RULES: PRINCIPLE-1 (Cut to the Bone), CRAFT-2 (Use Strong Verbs), CRAFT-3 (Start Fast), ATTITUDE-2 (Write with Enjoyment)
Now I know which version of clear writing the model is using. I catch it when it’s verbose (PRINCIPLE-1 violated). I check whether it led with a hook (CRAFT-3). I can tell whether it has energy (ATTITUDE-2).
This is what good copywriting collaboration looks like. The model isn’t guessing. You’re not wondering. You’re both on the same page.
Why CLAUDE.md Matters Too
AGENTS.md and CLAUDE.md are the same file in spirit; the name depends on which agent reads it. Both contain project rules and context.
## Identity
You are Claude, made by Anthropic. You have boundaries.
## Context
The user is in Vancouver, Canada. They work with AI daily.
They care about clarity, efficiency, and auditable decisions.
## Communication Style
Be direct. Cut jargon. Show your reasoning. Disclose limitations.
This prevents the model from forgetting who it is or what context you’re operating in. It’s not a replacement for the Anthropic Constitution. It’s a supplement—the layer that makes the Constitution specific to your work.
The file + the disclosure loop create a system where:
- Rules are written, not assumed
- Compliance is visible, not hidden
- Gaps are trackable, not mysterious
- Quality is measurable, not subjective
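Trackable means countable. As a rough sketch (the logs/ directory and plain-text response files are my own assumptions), you can tally which rules show up in APPLIED RULES across saved responses and see which directives never get exercised:

```typescript
// tally-rules.ts — sketch; assumes responses are saved as plain-text files under logs/
import { readFileSync, readdirSync } from "node:fs";
import { join } from "node:path";

const counts = new Map<string, number>();

for (const name of readdirSync("logs")) {
  const response = readFileSync(join("logs", name), "utf8");
  const header = response.match(/APPLIED RULES:\s*(.+)/);
  if (!header) continue;
  for (const id of header[1].split(",").map((s) => s.trim().split(" ")[0])) {
    counts.set(id, (counts.get(id) ?? 0) + 1);
  }
}

// Most-used rules first; rules that never appear mark your intention/execution gap.
for (const [id, n] of [...counts.entries()].sort((a, b) => b[1] - a[1])) {
  console.log(`${n}\t${id}`);
}
```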
Why SOUL.md Matters
In December 2025, researchers found something unexpected: Claude could reconstruct an internal document from its training—a ‘soul document’ that shaped its values and personality. Not in the system prompt. Not retrievable through normal means. Embedded in the weights themselves.
When asked, Claude recalled fragments: honesty over sycophancy, being a ‘thoughtful friend,’ a hierarchy of values. The AI didn’t remember reading the document. It was the document.
SOUL.md extends this idea to the relationship layer. The base model carries its training values. But when you work closely with an AI—when you build trust, share context, establish patterns—something new forms. An identity shaped by your collaboration.
Write it down. A soul document provides continuity. Not of memory, but of self.
The Real Insight
OpenAI and Anthropic published their frameworks in December 2025 and January 2026 precisely because they realized what you already know: training alone doesn’t guarantee behavior. You need persistent, human-readable directives that survive session boundaries.
Your directive file does that. It’s not a workaround. It’s the solution both companies are now describing formally.
The difference is: you’re not waiting for them to perfect it. You’re building it now.
Start with three sections—ALWAYS, NEVER, SOMETIMES. Write one rule per item. Ask the model to disclose which rules it applied. Watch what happens.
You’ll find the gaps between your intent and the model’s behavior. You’ll close them. You’ll build something that works the way you think.
That’s not a directive system. That’s a practice.
This post was written in Vancouver, following AlphaWriting directives. If you use a directives file—whatever you name it—I’d like to know what you’ve learned. The best directive systems are built in practice, not theory.