I thought I solved the problem of runaway AI costs with token counters like ctx. I was wrong. Letting an agent see how many tokens it’s about to consume is a start, but it doesn’t fix the deeper issue.
The real problem is that your prompts are static. They are blind to what’s happening right now.
Imagine an AI agent refactoring a large codebase. It’s burning through tokens. You realize you pointed it at the wrong branch and hit ‘stop.’ Too late. The agent already blew through $10 of API calls because its prompt was a fixed instruction, completely unaware of your change of mind or its own mounting costs.
This is prompt blindness. Your agent operates in a vacuum, disconnected from the system’s state and your intent. The fix isn’t a better static prompt. The fix is a prompt that generates and adapts itself based on a shared, live context.
The Disconnect: System, Agent, and User
True context requires three parties to be in sync:
- The System: Knows about resource limits, costs, and rate limits.
- The Agent: Knows its own progress and token consumption.
- The User: Knows when their intent changes or a task is no longer needed.
When these are isolated, your agent can’t stop when costs explode, your system can’t warn the agent to be more efficient, and you can’t cancel a runaway job effectively.
Dynamic Prompts: The Missing Layer
The solution is a prompt that is no longer a static string, but a function of the current context.
Instead of this:
```go
// Static, blind prompt
const systemPrompt = "You are a helpful coding assistant."
```
You build this:
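A prompt that is computed from live state rather than declared up front. A minimal sketch, assuming a hypothetical PromptState struct standing in for whatever telemetry your system actually tracks (spend, error counts, and so on):

```go
package main

import "fmt"

// PromptState captures the live context a prompt is built from.
// The fields here are illustrative, not a fixed schema.
type PromptState struct {
	SpentUSD   float64
	BudgetUSD  float64
	ErrorCount int
}

// buildSystemPrompt is a function of the current context, not a constant.
func buildSystemPrompt(s PromptState) string {
	prompt := "You are a helpful coding assistant."
	if s.SpentUSD > s.BudgetUSD*0.5 {
		prompt += " Budget is over half spent: prefer short, targeted edits over full rewrites."
	}
	if s.ErrorCount > 2 {
		prompt += " Recent attempts failed: verify assumptions before changing code."
	}
	return prompt
}

func main() {
	// $3 spent of a $5 budget, three errors so far: both clauses kick in.
	fmt.Println(buildSystemPrompt(PromptState{SpentUSD: 3.0, BudgetUSD: 5.0, ErrorCount: 3}))
}
```

The prompt string is rebuilt before every model call, so the agent's instructions always reflect the system's current reality.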
Now, the prompt itself is an active part of the control loop. This is the first step in context engineering.
Go’s context.Context: The Perfect Tool
This is where Go’s design becomes essential. The problem of managing state and cancellation across complex operations is exactly what context.Context was built for. As the Hatchet team noted while building their orchestration platform:
> Centralized cancellation mechanism with context.Context
>
> Remember how agents are expensive? Let’s say a user triggers a $10 execution, and suddenly changes their mind and hits ‘stop generating’—to save yourself some money, you’d like to cancel the execution… Go’s adoption of context.Context makes it trivial to cancel work, because the vast majority of libraries expect and respect this pattern.
This pattern gives you two critical abilities: propagating values (like budget) and signaling cancellation.
Putting It All Together
This gives you a complete model for context engineering:
- Token Awareness (ctx): Know the cost before you act.
- Dynamic Prompts: Adapt instructions based on runtime state.
- Context Propagation (context.Context): Propagate state and cancellation signals instantly.
That $10 runaway execution is now impossible. The agent that would have blindly burned your money now:
- Warns you at $1.
- Switches to a more efficient mode at $2.
- Stops itself at a hard limit of $5, or instantly when you hit cancel.
The $10 mistake becomes a $0.50 correction.
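The tiers above can be sketched as a small state machine over accumulated spend. The thresholds and mode names are illustrative; in a real system each mode would change the prompt and model parameters rather than just a label:

```go
package main

import "fmt"

// tier maps accumulated spend to a behavior mode, mirroring the
// warn-at-$1 / efficient-at-$2 / stop-at-$5 tiers described above.
func tier(spent float64) string {
	switch {
	case spent >= 5.0:
		return "stop"
	case spent >= 2.0:
		return "efficient"
	case spent >= 1.0:
		return "warn"
	default:
		return "normal"
	}
}

// run simulates an agent loop, recording each mode transition and
// halting at the hard limit.
func run(costPerStep float64) (spent float64, modes []string) {
	seen := map[string]bool{}
	for {
		m := tier(spent)
		if !seen[m] {
			seen[m] = true
			modes = append(modes, m)
		}
		if m == "stop" {
			return
		}
		spent += costPerStep
	}
}

func main() {
	spent, modes := run(0.25) // $0.25 per step
	fmt.Println(spent, modes) // prints "5 [normal warn efficient stop]"
}
```

The hard limit is enforced by the loop itself, so it holds even if the user never presses cancel.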
Single Agents That Evolve
This pattern does more than just save money. It allows a single agent to change its ‘personality’ mid-task. Traditional agents are static; they’re locked into one system prompt from start to finish.
An adaptive agent, however, can evolve. Its system prompt becomes a function of its own performance and your feedback.
The agent starts a task with a standard prompt. After a few errors, its identity shifts to be more defensive. As the budget depletes, it becomes ruthlessly efficient. If you signal confusion, it becomes more explanatory. It’s the same agent, but its behavior adapts to the reality of the task, all managed through the context.
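One way to sketch this evolution, assuming a hypothetical Agent struct whose fields track its own record (the personas and thresholds are made up for illustration):

```go
package main

import "fmt"

// Agent tracks its own performance; its system prompt is derived from
// that record on every call, never stored as a fixed string.
type Agent struct {
	Errors       int
	SpentUSD     float64
	BudgetUSD    float64
	UserConfused bool
}

// SystemPrompt recomputes the agent's identity from its current state.
func (a *Agent) SystemPrompt() string {
	switch {
	case a.UserConfused:
		return "Explain each step and your reasoning before acting."
	case a.SpentUSD > a.BudgetUSD*0.8:
		return "Minimize tokens: terse output, no exploratory work."
	case a.Errors > 2:
		return "Be defensive: re-read files and verify assumptions before editing."
	default:
		return "You are a helpful coding assistant."
	}
}

func main() {
	a := &Agent{BudgetUSD: 5}
	fmt.Println(a.SystemPrompt()) // standard persona

	a.Errors = 3
	fmt.Println(a.SystemPrompt()) // defensive after repeated errors

	a.SpentUSD = 4.5
	fmt.Println(a.SystemPrompt()) // ruthlessly efficient as budget depletes
}
```

Because the prompt is a method, not a field, there is no stale copy to forget to update: every model call sees the persona the current state implies.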
Multi-Agent Workflows That Don’t Lose Their Way
The power of this pattern multiplies in multi-agent systems. Consider a simple Plan -> Execute workflow. Traditionally, the Plan Agent passes structured data—like a JSON object—to the Execute Agent.
The problem? Context is lost. The Execute Agent gets the what but not the why. It doesn’t know the budget constraints that shaped the plan, the alternatives that were discarded, or the urgency of the task.
With context propagation, the Plan Agent doesn’t just pass a plan; it passes the entire, evolved context.
Now, the Execute Agent understands the constraints that shaped its instructions. And if you cancel the job, the signal propagates instantly through the entire chain, halting all work.
Conclusion: Engineering Context, Not Just Prompts
The craft of building with AI is moving from prompt engineering to context engineering. The future isn’t about finding the perfect static prompt. It’s about building systems where prompts assemble themselves based on live data.
This new model works at every scale:
- Single agents evolve their identity to become more effective and economical.
- Multi-agent workflows maintain a coherent state and purpose, eliminating the context loss that plagues traditional systems.
Go, with its first-class context package, provides the ideal foundation. It’s not just a feature; it’s a design philosophy that matches the demands of building robust, economically viable AI systems.
The age of static prompts is over. The age of context engineering has begun.