Introducing Portable Knowledge Cards: Reference Files Any AI Agent Can Read
Portable knowledge cards encode best practices into files any AI agent can read. They bridge the gap between learning and production.
The Graduation Cliff
You just finished an eight-week course on AI agent engineering. You learned prompt design principles, tool selection frameworks, multi-agent coordination patterns, and production testing strategies. You took notes. You passed the assessments. You got the certificate.
Six months later, you are building a RAG pipeline at work. You know you learned something about chunking strategies — that chunk size was important, that there was a sweet spot. But the details are gone. Was it 200 tokens or 2000? Was overlap 10% or 20%? Did you test with semantic chunking or fixed-size?
So you do what everyone does: you Google it. You read three blog posts that contradict each other. You watch a 45-minute YouTube video to extract two minutes of useful information. You end up guessing.
This is the graduation cliff. The gap between what you learned and what you can apply when it matters. Every course has this problem. We decided to solve it.
What Are Portable Knowledge Cards?
A portable knowledge card is a structured reference file designed to be read by both humans and AI agents. Each card encodes a specific best practice in a three-part format:
**What** — the principle or technique, stated clearly
**Why** — the reasoning behind it, so you (or your agent) can judge when it applies
**Apply By** — concrete actions, code patterns, or decision criteria
That third part is what makes knowledge cards different from documentation. Documentation describes. Knowledge cards prescribe. They do not say "chunking is the process of splitting documents." They say "keep chunks between 200 and 800 tokens, measure empirically with your data, use semantic boundaries when available, and here is the code to do it."
Internally, our students know these as RAILS — Reference-Applied Intelligence Learning Standards. That is the name they use inside the curriculum, and it reflects the metaphor: like train tracks keep a train on course, these cards keep your AI agents on course. But the concept is portable. You do not need our platform to use them.
The Problem They Solve (Beyond Forgetting)
The graduation cliff is real, but it is actually the smaller problem. The bigger problem is this: how do you transfer best practices to an AI agent?
Think about it. You hire a junior developer and they ramp up by reading your team's style guide, attending code reviews, and absorbing institutional knowledge through osmosis. Over six months, they internalize the team's standards.
You cannot do that with an AI agent. An agent does not have six months. It does not attend code reviews. It needs to know your standards right now, in the context window, every single time it runs.
This is where portable knowledge cards shine. They are formatted so you can paste them directly into an AI agent's system prompt or instructions file, and the agent will follow the best practices automatically. The structure is deliberate — the What/Why/Apply By format maps cleanly to how language models process instructions:
**What** gives the model the concept name and scope
**Why** gives the model the reasoning to generalize appropriately
**Apply By** gives the model concrete, actionable steps
When you hand a knowledge card to Claude, GPT-4, or any capable model, it does not just read it — it applies it. The card becomes part of the agent's working knowledge for that session.
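In practice, "pasting a card into the system prompt" is just string assembly. A minimal sketch, assuming cards are stored as markdown files (the function name and file layout here are illustrative, not part of the curriculum):

```python
from pathlib import Path

def build_system_prompt(base_instructions: str, card_paths: list[str]) -> str:
    """Concatenate knowledge cards onto base agent instructions."""
    sections = [base_instructions]
    for path in card_paths:
        card = Path(path).read_text(encoding="utf-8")
        # Label each card so the model can tell where one ends and the next begins
        sections.append(f"## Knowledge Card: {Path(path).stem}\n\n{card}")
    return "\n\n".join(sections)
```

The resulting string goes wherever your agent framework accepts system-level instructions, for example the `system` parameter of a chat-completion call.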
A Real Knowledge Card: Chunking Strategy
Let me show you an actual knowledge card from Course 2 (RAG & Knowledge Systems). This one addresses the #1 lever in RAG pipeline quality:
Knowledge Card: Chunking Strategy
What
Chunk size is the single biggest lever in RAG quality. The sweet spot
is 200-800 tokens per chunk. This matters more than your choice of
embedding model, vector database, or LLM.
Why
Too small (< 100 tokens): precise embedding signal, but fragments
meaning and loses context. Retrieval finds pieces but not answers.
Too large (> 1500 tokens): preserves context, but dilutes the
embedding signal with noise. Retrieval returns documents where the
relevant sentence is buried in irrelevant text.
The 200-800 range balances embedding precision with context
preservation. Empirical data across production systems:
| Chunk Size | Recall@5 | Context Quality |
|-------------|----------|-----------------|
| 50 tokens | 65% | Low |
| 200 tokens | 82% | Good |
| 500 tokens | 88% | Very good |
| 800 tokens | 85% | Good |
| 1500 tokens | 72% | Mixed |
| 3000 tokens | 58% | Poor |
Apply By
1. Start with 500-token chunks as your baseline
2. Use semantic boundaries (paragraphs, sections) when available
3. Add 10-15% overlap between chunks to prevent boundary information loss
4. MEASURE: embed your chunks, run representative queries, check if
the relevant chunk ranks in top-5 results
5. Adjust chunk size based on measurement, not intuition
6. Re-measure after every change to the corpus
Code pattern for fixed-size chunking with boundary awareness:
def chunk_document(text, target_size=500, overlap=75):
    # Note: operates on characters as a rough proxy for tokens
    chunks = []
    start = 0
    while start < len(text):
        end = start + target_size
        # Snap to the nearest sentence boundary before the cut point
        if end < len(text):
            boundary = text[start:end].rfind('. ')
            if boundary > target_size * 0.5:
                end = start + boundary + 2
        chunks.append(text[start:end].strip())
        start = end - overlap
    return chunks
Notice what this card does that documentation does not. It does not explain what chunking is — you already know that. It tells you the specific numbers, the specific trade-offs, and the specific code to implement right now. And when you paste this into an AI agent's instructions, that agent will chunk documents at 500 tokens with overlap and measure the results. No additional prompting required.
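The card's MEASURE step (run representative queries, check whether the relevant chunk ranks in the top-5) reduces to a small metric function. A minimal sketch, assuming you already have ranked retrieval results per query and one known-relevant chunk id per query:

```python
def recall_at_k(results: dict[str, list[str]],
                relevant: dict[str, str],
                k: int = 5) -> float:
    """Fraction of queries whose known-relevant chunk appears in the top-k results.

    results:  query -> ranked list of chunk ids (best first)
    relevant: query -> the chunk id that should be retrieved
    """
    hits = sum(1 for query, chunk_id in relevant.items()
               if chunk_id in results[query][:k])
    return hits / len(relevant)
```

Run this after every change to chunk size or the corpus, per steps 4-6 of the card; the embedding and retrieval that produce `results` are whatever stack you already use.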
Why This Is Different from Documentation
I want to draw this distinction sharply because it matters.
Documentation is descriptive. It explains how a system works, what parameters are available, what the API expects. Documentation answers "what can I do?"
Knowledge cards are prescriptive. They encode decisions that have already been made based on experience. They answer "what should I do?"
Here is the same concept in both formats:
Documentation style:
> "The `chunk_size` parameter controls how many tokens each chunk contains. Valid values are 1 to 8192. Larger chunks provide more context but may reduce retrieval precision. Smaller chunks are more precise but may lack context."
Knowledge card style:
> "Set `chunk_size` to 500. Measure Recall@5 on representative queries. If below 80%, try 300 and 800 and keep whichever scores higher. Do not go below 200 or above 1000 without a measured reason."
The documentation tells you about the knob. The knowledge card tells you where to set it and how to verify it is right. Both are useful. But when you are building a production system at 2 AM and you need to make a decision, the knowledge card is the one that saves you.
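The knowledge-card prescription can be followed mechanically. A minimal sketch, where `measure_recall_at_5` is a stand-in for whatever evaluation harness you run against your own corpus (the function and its thresholds mirror the card, not a library API):

```python
def pick_chunk_size(measure_recall_at_5,
                    baseline: int = 500,
                    alternatives: tuple[int, ...] = (300, 800),
                    threshold: float = 0.80) -> int:
    """Keep the baseline if it measures well enough;
    otherwise try the alternatives and keep the best scorer."""
    scores = {baseline: measure_recall_at_5(baseline)}
    if scores[baseline] >= threshold:
        return baseline
    for size in alternatives:
        scores[size] = measure_recall_at_5(size)
    return max(scores, key=scores.get)
```

The point is not this particular loop; it is that a prescriptive card translates directly into executable decision logic, where descriptive documentation would leave you staring at a valid range of 1 to 8192.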
The Portability Factor
Here is where knowledge cards get genuinely exciting for your career.
When you finish the Agentic Context Programming curriculum, you take your knowledge cards with you. Not metaphorically — literally. They are markdown files. You copy them. They are yours.
Starting a new job where the team is building a RAG pipeline? Paste the chunking card and the evaluation card into your AI coding assistant's project instructions. Now every code suggestion your assistant makes will follow the best practices you learned.
Building your own agent system? Drop the production testing card into your agent's context. It will automatically test structure, properties, and behavior instead of exact output — because the card tells it to.
Reviewing a colleague's agent architecture? Reference the coordination patterns card. It lays out when to use orchestrator vs. blackboard vs. pipeline patterns, with decision criteria, not just descriptions.
This portability is the point. Course knowledge usually lives in a notebook that you never reopen. Knowledge cards live wherever your AI agents live — in system prompts, in `.claude/` directories, in project instruction files. They are active, not archived.
Available in English and Spanish
Our student body spans the Americas, and best practices do not have a language barrier. Every knowledge card in the curriculum is available in both English and Spanish. The Spanish versions are not translations — they are written natively, with terminology that matches how AI engineering is discussed in Spanish-speaking technical communities.
This matters because when a student in Buenos Aires or Barcelona pastes a knowledge card into their AI agent's instructions, the agent should respond with the same quality and precision as it would with the English version. We tested this. It does.
The Full Knowledge Card Library
The curriculum includes eleven knowledge cards spanning the full stack of AI agent engineering:
1. Core Principles — The 5 prompt engineering principles that apply everywhere
2. Tool Selection — Decision tree for choosing the right tool for each task
3. MCP Design — Model Context Protocol server design patterns
4. Agent Loops — Think-Act-Observe and when to break the loop
5. Multi-Agent Coordination — Orchestrator, blackboard, and pipeline patterns
6. Dynamic Spawning — When and how to create agents at runtime
7. Production Testing — Three-tier testing for non-deterministic systems
8. Custom Agent Development — Building agents with the Anthropic API
9. Architecture Selection — Choosing the right architecture for your problem
10. Cognitive Architectures — ReAct, Tree of Thought, Chain of Thought, Self-Refinement
11. Quality Standards — Seven production quality rails for shipping with confidence
Each card follows the same What/Why/Apply By format. Each is tested with real AI agents to verify that pasting the card into instructions actually produces the described behavior. Each is something you keep forever.
Building Your Own Knowledge Cards
The format is open. There is nothing proprietary about What/Why/Apply By. Here is how to create your own:
Step 1: Identify a decision you make repeatedly. Not a concept — a decision. "How big should my chunks be?" is a decision. "What is chunking?" is a concept.
Step 2: Write the What in one sentence. If you need more than two sentences, you are combining multiple cards.
Step 3: Write the Why as the reasoning a smart colleague would need to understand your decision. Include trade-offs and data if you have it.
Step 4: Write the Apply By as a numbered list of specific actions. Include code patterns, threshold values, and verification steps. If someone (or some agent) follows these steps exactly, they should get a good result.
Step 5: Test it. Paste the card into an AI agent's instructions. Give the agent a relevant task. Does it follow the card? If not, the Apply By section needs to be more specific.
That fifth step is what separates knowledge cards from well-intentioned notes. If an AI agent cannot follow it, a stressed engineer at 2 AM probably cannot either.
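Before running the agent test in Step 5, a quick structural check catches malformed cards early. A minimal sketch, assuming cards are markdown files whose `What`, `Why`, and `Apply By` sections appear as their own heading lines (the section names come from the format above; the validator itself is illustrative):

```python
import re

REQUIRED_SECTIONS = ("What", "Why", "Apply By")

def validate_card(card_text: str) -> list[str]:
    """Return a list of structural problems; empty means the card looks well-formed."""
    problems = []
    for section in REQUIRED_SECTIONS:
        # Match the section name on its own line, with or without markdown '#' prefixes
        if not re.search(rf"^#*\s*{re.escape(section)}\s*$", card_text, re.MULTILINE):
            problems.append(f"missing section: {section}")
    # Apply By should contain numbered, concrete steps
    if "Apply By" in card_text and not re.search(r"^\s*\d+\.", card_text, re.MULTILINE):
        problems.append("Apply By has no numbered steps")
    return problems
```

A check like this only verifies structure; Step 5's agent test is still what verifies that the card actually changes behavior.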
Want the full library of portable knowledge cards? They are included in the Agentic Context Programming curriculum — all eleven cards, in English and Spanish, ready to paste into any AI agent you work with.
[Explore the Knowledge Card Library]