
Beyond the Prompt: Engineering Context with Go

DRAFT
Published: Aug 25, 2025
Toronto, Canada

I thought I had solved the problem of runaway AI costs with token counters like ctx. I was wrong. Letting an agent see how many tokens it’s about to consume is a start, but it doesn’t fix the deeper issue.

The real problem is that your prompts are static. They are blind to what’s happening right now.

Imagine an AI agent refactoring a large codebase. It’s burning through tokens. You realize you pointed it at the wrong branch and hit ‘stop.’ Too late. The agent already blew through $10 of API calls because its prompt was a fixed instruction, completely unaware of your change of mind or its own mounting costs.

This is prompt blindness. Your agent operates in a vacuum, disconnected from the system’s state and your intent. The fix isn’t a better static prompt. The fix is a prompt that generates and adapts itself based on a shared, live context.

The Disconnect: System, Agent, and User

True context requires three parties to be in sync:

  1. The System: Knows about resource limits, costs, and rate limits.
  2. The Agent: Knows its own progress and token consumption.
  3. The User: Knows when their intent changes or a task is no longer needed.

When these are isolated, your agent can’t stop when costs explode, your system can’t warn the agent to be more efficient, and you can’t cancel a runaway job effectively.
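One way to make that three-way synchronization concrete, as a sketch only (SharedState and its methods are invented names, not from any library), is a single mutex-guarded struct that all three parties read and write:

```go
package main

import (
	"fmt"
	"sync"
)

// SharedState is a hypothetical meeting point for the three parties.
type SharedState struct {
	mu sync.Mutex

	MaxCostUSD float64 // System: resource limit
	TokensUsed int     // Agent: live progress
	CostUSD    float64 // Agent: spend so far
	Cancelled  bool    // User: changed intent
}

// RecordUsage is called by the agent after each model call.
func (s *SharedState) RecordUsage(tokens int, costUSD float64) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.TokensUsed += tokens
	s.CostUSD += costUSD
}

// Cancel records that the user no longer wants the task.
func (s *SharedState) Cancel() {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.Cancelled = true
}

// ShouldStop is true when the system limit is hit or the user cancelled.
func (s *SharedState) ShouldStop() bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.Cancelled || s.CostUSD >= s.MaxCostUSD
}

func main() {
	state := &SharedState{MaxCostUSD: 10.0}
	state.RecordUsage(5000, 2.5)
	fmt.Println(state.ShouldStop()) // false: budget healthy, user on board
	state.Cancel()
	fmt.Println(state.ShouldStop()) // true: user changed their mind
}
```

With a shared checkpoint like this, the agent polls ShouldStop between model calls instead of running blind.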

Dynamic Prompts: The Missing Layer

The solution is a prompt that is no longer a static string, but a function of the current context.

Instead of this:

// Static, blind prompt
const systemPrompt = "You are a helpful coding assistant."

You build this:

// Dynamic, context-aware prompt
func generateSystemPrompt(ctx context.Context) string {
    // Comma-ok assertions avoid a panic if a value was never set
    budget, _ := ctx.Value("token_budget").(int)
    elapsed, _ := ctx.Value("elapsed_tokens").(int)
    remaining := budget - elapsed
    
    if remaining < 1000 {
        return fmt.Sprintf("CRITICAL: Only %d tokens remaining. Summarize and conclude now.", remaining)
    }
    
    if costSoFar, ok := ctx.Value("cost").(float64); ok && costSoFar > 5.0 {
        return fmt.Sprintf("WARNING: Spent $%.2f. Optimize for efficiency.", costSoFar)
    }
    
    return "You are a helpful coding assistant. Budget is healthy."
}
(Snippet abridged; add a package declaration and imports to run it.)

Now, the prompt itself is an active part of the control loop. This is the first step in context engineering.

Go’s context.Context: The Perfect Tool

This is where Go’s design becomes essential. The problem of managing state and cancellation across complex operations is exactly what context.Context was built for. As the Hatchet team noted while building their orchestration platform:

Centralized cancellation mechanism with context.Context

Remember how agents are expensive? Let’s say a user triggers a $10 execution, and suddenly changes their mind and hits ‘stop generating’—to save yourself some money, you’d like to cancel the execution… Go’s adoption of context.Context makes it trivial to cancel work, because the vast majority of libraries expect and respect this pattern.

This pattern gives you two critical abilities: propagating values (like budget) and signaling cancellation.

func executeAgent(ctx context.Context, task string) error {
    // Create a context that can be cancelled
    ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
    defer cancel()
    
    // Add budget and cost data to the context
    // (string keys shown for brevity; production code should use an
    // unexported key type to avoid collisions)
    ctx = context.WithValue(ctx, "max_cost", 10.0)
    ctx = context.WithValue(ctx, "token_budget", 50000)
    
    // Listen for cancellation from a user or another process
    go func() {
        <-ctx.Done()
        log.Printf("Agent cancelled: %v", ctx.Err())
        // All downstream operations will halt
    }()
    
    // Generate a prompt that reflects the live context
    systemPrompt := generateSystemPrompt(ctx)
    
    // Execute with full context awareness
    return agent.Execute(ctx, systemPrompt, task)
}
(Snippet abridged; agent.Execute stands in for your model-calling code.)

Putting It All Together

This gives you a complete model for context engineering:

  1. Token Awareness (ctx): Know the cost before you act.
  2. Dynamic Prompts: Adapt instructions based on runtime state.
  3. Context Propagation (context.Context): Propagate state and cancellation signals instantly.

That $10 runaway execution is now impossible. The agent that would have blindly burned your money now:

  • Warns you at $1.
  • Switches to a more efficient mode at $2.
  • Stops itself at a hard limit of $5, or instantly when you hit cancel.

The $10 mistake becomes a $0.50 correction.
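The tiered behaviour above can be written as a small, pure policy function. The $1/$2/$5 thresholds and the action strings are illustrative choices, not part of any API:

```go
package main

import "fmt"

// costPolicy maps spend so far to an action. The $1/$2/$5 tiers are
// illustrative thresholds, not library constants.
func costPolicy(spentUSD float64, userCancelled bool) string {
	switch {
	case userCancelled:
		return "stop: user cancelled"
	case spentUSD >= 5.0:
		return "stop: hard limit reached"
	case spentUSD >= 2.0:
		return "switch: efficient mode"
	case spentUSD >= 1.0:
		return "warn: costs rising"
	default:
		return "continue"
	}
}

func main() {
	for _, spent := range []float64{0.5, 1.5, 3.0, 6.0} {
		fmt.Printf("$%.2f -> %s\n", spent, costPolicy(spent, false))
	}
	// User cancellation wins regardless of spend.
	fmt.Println(costPolicy(0.5, true))
}
```

Because the policy is a pure function of spend and user intent, it is trivial to unit-test before wiring it into a real agent loop.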

Single Agents That Evolve

This pattern does more than just save money. It allows a single agent to change its ‘personality’ mid-task. Traditional agents are static; they’re locked into one system prompt from start to finish.

An adaptive agent, however, can evolve. Its system prompt becomes a function of its own performance and your feedback.

type AgentState struct {
    Budget        float64
    TokensUsed    int
    ErrorCount    int
    UserSentiment string // e.g., "confused", "satisfied"
}

func generateSystemPrompt(ctx context.Context, state *AgentState) string {
    basePrompt := "You are a coding assistant."
    
    // Adapt to budget pressure
    // (Budget is in dollars; this assumes roughly 100 tokens per dollar)
    budgetRatio := float64(state.TokensUsed) / (state.Budget * 100)
    if budgetRatio > 0.9 {
        return basePrompt + " CRITICAL: Budget nearly gone. Generate a minimal solution now."
    }
    
    // Adapt to repeated errors
    if state.ErrorCount > 3 {
        return basePrompt + " DEFENSIVE: Multiple errors detected. Prioritize robust, simple code."
    }
    
    // Adapt to user confusion
    if state.UserSentiment == "confused" {
        return basePrompt + " EXPLANATORY: User is confused. Provide detailed explanations."
    }
    
    return basePrompt + " Proceed with standard approach."
}
(Snippet abridged; add a package declaration and imports to run it.)

The agent starts a task with a standard prompt. After a few errors, its identity shifts to be more defensive. As the budget depletes, it becomes ruthlessly efficient. If you signal confusion, it becomes more explanatory. It’s the same agent, but its behavior adapts to the reality of the task, all managed through the context.

Multi-Agent Workflows That Don’t Lose Their Way

The power of this pattern multiplies in multi-agent systems. Consider a simple Plan -> Execute workflow. Traditionally, the Plan Agent passes structured data—like a JSON object—to the Execute Agent.

The problem? Context is lost. The Execute Agent gets the what but not the why. It doesn’t know the budget constraints that shaped the plan, the alternatives that were discarded, or the urgency of the task.

With context propagation, the Plan Agent doesn’t just pass a plan; it passes the entire, evolved context.

func PlanAgent(ctx context.Context, task string) context.Context {
    // ... planning logic ...
    plan := generatePlan(ctx) // Plan is generated with cost awareness
    
    // Read the remaining budget that earlier stages stored in the context
    budgetRemaining, _ := ctx.Value("budget_remaining").(float64)
    
    // Create an evolved prompt for the next agent
    executionPrompt := fmt.Sprintf(`
        EXECUTION PHASE
        Plan: %s
        Budget Remaining: $%.2f
        Execute this plan. The planning phase prioritized cost-efficiency.
    `, plan, budgetRemaining)
    
    // Pass the new prompt and the rest of the context forward
    ctx = context.WithValue(ctx, "execution_prompt", executionPrompt)
    return ctx
}

func ExecuteAgent(ctx context.Context) error {
    // Inherits full context: budget, history, cancellation, and the evolved prompt
    prompt, ok := ctx.Value("execution_prompt").(string)
    if !ok {
        return errors.New("execution_prompt missing from context")
    }
    return executeWithPrompt(ctx, prompt)
}

func RunWorkflow(ctx context.Context, task string) error {
    ctx, cancel := context.WithCancel(ctx)
    defer cancel()
    
    // A single cancel() call will stop any agent, in any phase
    go func() {
        if userHitsCancel() {
            cancel()
        }
    }()
    
    // Plan phase creates and enriches the context
    ctx = PlanAgent(ctx, task)
    
    // Execute phase inherits the rich context
    return ExecuteAgent(ctx)
}
(Snippet abridged; generatePlan, executeWithPrompt, and userHitsCancel stand in for your own implementations.)

Now, the Execute Agent understands the constraints that shaped its instructions. And if you cancel the job, the signal propagates instantly through the entire chain, halting all work.

Conclusion: Engineering Context, Not Just Prompts

The craft of building with AI is moving from prompt engineering to context engineering. The future isn’t about finding the perfect static prompt. It’s about building systems where prompts assemble themselves based on live data.

This new model works at every scale:

  • Single agents evolve their identity to become more effective and economical.
  • Multi-agent workflows maintain a coherent state and purpose, eliminating the context loss that plagues traditional systems.

Go, with its first-class context package, provides the ideal foundation. It’s not just a feature; it’s a design philosophy that matches the demands of building robust, economically viable AI systems.

The age of static prompts is over. The age of context engineering has begun.

Content Attribution: 50% by Alpha, 25% by Claude Opus 4.1, 25% by Gemini 2.5 Pro
  • 50% by Alpha: Provided the core insight connecting dynamic prompts, context engineering, and Go's context.Context pattern. Integrated references from Hatchet and connected to broader AI economics themes.
  • 25% by Claude Opus 4.1: Assisted with structuring the narrative and developing code examples.
  • 25% by Gemini 2.5 Pro: Assisted with drafting and refining technical explanations.