How Did Agent Come To Mean The Opposite of Agent?

Published: Mar 29, 2025
Updated: Apr 5, 2025
Punta Cana, Dominican Republic

(Tap, tap, tap) Is this thing on?

Look, we need to talk about the word ‘Agent’ in AI. Much like our friends in the web world watched ‘REST’ devolve into meaning ‘JSON slapped over HTTP’, we’re seeing ‘AI Agent’ get stretched thinner than cheap plastic wrap until it barely means anything at all. Often, it means the opposite of what it should.

Remember the HTMX essay, ‘How Did REST Come To Mean The Opposite of REST?’ Roy Fielding got rightly frustrated watching his carefully defined architectural style for hypermedia systems get co-opted to describe basic RPC calls. We’re seeing the same pattern play out with Agentic AI.

‘Intelligence is not enough. An agent that senses and acts in the world must have goals and act in a way that is expected to achieve those goals… An agent should be autonomous—it should learn what it can to compensate for partial or incorrect prior knowledge.’

— Stuart Russell & Peter Norvig, Artificial Intelligence: A Modern Approach (1995)

The Historical Context of Agency in AI

The concept of ‘agent’ in AI isn’t new—it dates back to the earliest days of the field. In the 1950s, researchers like John McCarthy and Marvin Minsky were already discussing autonomous systems that could perceive and act independently. By the 1990s, the field of agent-oriented software engineering emerged with frameworks for building systems with genuine autonomy.

In 2000, computer scientist Michael Wooldridge defined intelligent agents as systems capable of reactive, proactive, and social behaviors. None of these pioneers envisioned ‘agent’ meaning ‘a thing that selects which if-statement to execute.’

What changed? The LLM explosion happened, and suddenly everyone needed their language model to sound more impressive than ‘a chatbot with API access.’ So they started calling every glorified switch statement an ‘agent.’

What An ‘Agent’ Should Mean

When we talk about an agent in AI, we’re supposed to be talking about something with, well, agency. Think about it:

  1. Autonomy: It makes decisions and takes actions towards a goal without needing step-by-step instructions for every little thing.
  2. Goal-Directed: It has an objective and works proactively to achieve it.
  3. Perception & Action: It takes in information (perceives) and does things in its environment (acts) using tools, APIs, etc.
  4. Planning & Reasoning: It can figure out how to achieve its goal, breaking down complex tasks into steps, maybe even trying different approaches.
  5. Adaptation: Ideally, it learns or adjusts its strategy based on feedback or changing circumstances.

Think of a competent human assistant. You give them a goal (‘Organize a team offsite for next quarter’), and they figure out the venues, catering, scheduling, etc., using various tools (email, calendar, booking sites) and reasoning along the way. They don’t need you to tell them ‘Now click the ‘Check Availability’ button.’
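
To make that less abstract, here is a minimal sketch of an interface built around those five properties. The class and method names (AgentInterface, perceive, plan, act, adapt, pursue) are purely illustrative, not a standard API:

# Illustrative sketch only -- maps the five properties above onto methods
from abc import ABC, abstractmethod

class AgentInterface(ABC):
    @abstractmethod
    def perceive(self, environment):
        """Take in information about the current situation (Perception)."""

    @abstractmethod
    def plan(self, goal, state):
        """Break the goal into steps and pick an approach (Planning & Reasoning)."""

    @abstractmethod
    def act(self, step):
        """Do something in the environment via tools or APIs (Action)."""

    @abstractmethod
    def adapt(self, feedback):
        """Adjust strategy based on feedback or changed circumstances (Adaptation)."""

    def pursue(self, goal, environment):
        # Autonomy + goal-direction: given only the goal, the agent drives itself
        state = self.perceive(environment)
        for step in self.plan(goal, state):
            feedback = self.act(step)
            self.adapt(feedback)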

The Agent Imposters Spectrum

What gets labeled an ‘AI Agent’ in practice exists along a spectrum of autonomy, from ‘basically none’ to ‘somewhat autonomous in narrow contexts.’ Let’s call this what it is—the Agent Imposters Spectrum:

| Function Selector | Tool Dispatcher | Scripted Workflow | True Agent |
|---|---|---|---|
| Single action | Tool selection | Predefined steps | Goal autonomy |
| No planning | No adaptation | Fixed sequence | Dynamic planning |
| No state memory | Basic memory | Limited choice | Adaptation |
| Most Common | Common | Less Common but Growing | Rare (for now) |

Let’s examine these imposters:

1. Function Selectors (NOT Agents)

This is the most common AI ‘agent’ implementation:

# This is NOT an agent
def weather_function(location):
    return get_weather_data(location)

def stocks_function(ticker):
    return get_stock_price(ticker)

available_functions = {
    "weather": weather_function,
    "stocks": stocks_function
}

def process_user_request(user_input):
    # Ask LLM to select which function to call
    function_name = llm.select_function(user_input, available_functions)
    # Execute that one function
    return available_functions[function_name](extract_args(user_input))

It’s an LLM that takes a prompt and decides which single predefined function or API to call. ‘User wants the weather? Call get_weather(city).’ There’s no planning, no sequence of actions, no real autonomy beyond picking from a menu you gave it. It’s a slightly smarter router, not an agent.

I agree that this pattern is wildly useful! But calling it an ‘agent’ is like calling my toaster a ‘heat orchestration system.’

2. Tool Dispatchers (Tool-Using Chatbots)

A slight step up:

# A chatbot with tools, not a true agent
def handle_conversation(user_message, conversation_history):
    # Detect when tools might be needed
    if llm.should_use_tool(user_message):
        tool_name = llm.select_tool(user_message, available_tools)
        tool_result = execute_tool(tool_name, user_message)
        response = llm.generate_response(user_message, tool_result)
    else:
        response = llm.generate_response(user_message)
    
    conversation_history.append((user_message, response))
    return response

A conversational interface that can trigger a specific tool based on keywords or intent detection. Again, usually single-step, predefined actions. Useful? Sure. Agentic? Barely.

3. Hardcoded Workflows with LLM Steps

# A rigid workflow with LLM components
def travel_booking_workflow(destination, dates):
    # Fixed sequence of steps
    flights = search_flights(destination, dates)
    flight_summary = llm.summarize(flights)  # LLM used in one step
    
    hotels = search_hotels(destination, dates)
    hotel_summary = llm.summarize(hotels)  # LLM used in another step
    
    recommendation = llm.generate_recommendation(flight_summary, hotel_summary)
    
    return {
        "flights": flights[:5],  # Always return top 5 flights
        "hotels": hotels[:3],    # Always return top 3 hotels
        "recommendation": recommendation
    }

The system follows a rigid, developer-defined if-this-then-that sequence, but one or two steps involve calling an LLM (e.g., ‘Summarize this document’). The LLM isn’t directing the process; it’s just a tool within a non-autonomous process. We call this an Agentic Workflow, and it’s incredibly useful, but it’s not an autonomous agent making its own decisions.

4. True Agents (Finally, Something That Deserves The Name)

# A simplified true agent architecture
class Agent:
    def __init__(self, goal):
        self.goal = goal
        self.memory = AgentMemory()
        self.tools = load_available_tools()
        self.planning_system = PlanningSystem()
    
    def pursue_goal(self):
        # Generate a plan to achieve the goal
        plan = self.planning_system.create_plan(self.goal, self.memory)
        
        while not self.goal_achieved() and not self.should_abandon_goal():
            # Select next action based on plan and current state
            next_action = plan.next_action(self.memory.current_state)
            
            # Execute action
            result = self.execute_action(next_action)
            
            # Update memory with results
            self.memory.update(next_action, result)
            
            # Adapt plan if needed based on new information
            if not plan.is_still_viable(self.memory):
                plan = self.planning_system.revise_plan(
                    self.goal, self.memory, plan
                )
        
        return self.generate_final_report()

A system that actually plans, adapts, and pursues goals with meaningful autonomy. It can chain multiple actions together, adapt when circumstances change, and make decisions about the best way to achieve its objectives.
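
For contrast with the imposters above, here is roughly how such a class might be used, reusing the offsite example from earlier. This is hypothetical usage of the sketch above, not a specific framework:

# Hypothetical usage of the Agent sketch above -- no particular framework implied
agent = Agent(goal="Organize a team offsite for next quarter")

# We hand over only the goal; planning, tool use, and revision happen inside
report = agent.pursue_goal()
print(report)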

See the disconnect? We’re calling systems ‘agents’ when they lack the core defining features: autonomy and dynamic planning/reasoning. They’re often just executing scripts or predefined flowcharts where an LLM is one component.

Why This Sloppiness Matters

Now, I admit: there’s a good chance I sound like a pedantic jerk here. But hear me out—this terminology confusion isn’t just academic navel-gazing:

1. Misaligned Expectations

‘The greatest sources of our suffering are the lies we tell ourselves.’

— Elvin Semrad

Customers or stakeholders hear ‘AI Agent’ and expect something that can handle novelty and complexity autonomously. They get a brittle workflow that breaks if anything unexpected happens. Cue disappointment and the inevitable ‘AI winter’ whispers.

In a recent enterprise implementation, we saw a team spend $2 million on an ‘agent-based’ customer service system that was actually just a collection of function-calling endpoints. When presented with even slightly novel customer problems, it simply… stopped working. Why? Because it wasn’t an agent—it had no capacity to reason about unfamiliar scenarios.

2. Evaluating Systems Becomes Impossible

How do you compare a truly autonomous planning agent with a simple function-calling workflow if they’re both just called ‘agents’? You can’t benchmark or choose the right tool if the terminology is meaningless.

3. Hindering Progress

If we pretend simple workflows are agents, we might stop pushing towards building actual autonomous, reasoning systems because we think we’re already there. The hard problems get glossed over.

4. Architecture & Design

Designing a predictable, reliable workflow is fundamentally different from designing a system that needs to plan and adapt under uncertainty. Using the wrong term leads to using the wrong design patterns and tools.

Real agentic systems require:

  • Monitoring frameworks for detecting aberrant behavior
  • Fallback mechanisms when planning fails
  • Explicit guardrails for autonomous decision-making
  • Transparent reasoning audit trails

None of these exist in a basic function-calling system.
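
As a rough, partial sketch of what a couple of those requirements might look like in code: ActionGuardrail, AuditTrail, and the agent methods fallback and explain are invented for illustration, not part of any real framework.

# Partial sketch: guardrails, a fallback path, and an audit trail around
# an agent's actions. Every name here is hypothetical.
import datetime

class ActionGuardrail:
    def __init__(self, allowed_actions, max_cost_per_action=50.0):
        self.allowed_actions = allowed_actions
        self.max_cost_per_action = max_cost_per_action

    def check(self, action):
        # Explicit guardrails: refuse anything outside the approved envelope
        if action.name not in self.allowed_actions:
            raise PermissionError(f"Action '{action.name}' is not permitted")
        if action.estimated_cost > self.max_cost_per_action:
            raise PermissionError(f"Action '{action.name}' exceeds the cost limit")

class AuditTrail:
    def __init__(self):
        self.entries = []

    def record(self, action, reasoning, result):
        # Transparent audit trail: log what was done and why
        self.entries.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "action": action.name,
            "reasoning": reasoning,
            "result": result,
        })

def execute_with_oversight(agent, action, guardrail, audit):
    guardrail.check(action)                    # guardrail before execution
    try:
        result = agent.execute_action(action)
    except Exception as exc:
        result = agent.fallback(action, exc)   # fallback when the plan fails
    audit.record(action, agent.explain(action), result)  # explain() is assumed
    return result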

I Agree: The Simpler Systems Are Incredibly Useful

Let me be clear (and this is where most critiques fall short): I’m not saying simpler systems are bad. In fact, they’re often exactly what you need.

Function-calling systems are predictable, maintainable, and sufficient for many use cases. Tool-using chatbots provide an intuitive interface for users. Hardcoded workflows with LLM steps can deliver reliable outcomes with just the right amount of intelligence.

For many business applications, true agents would be overkill and potentially risky. The simpler solutions shine by being:

  • Predictable: You know exactly what they’ll do in every scenario
  • Testable: You can verify all possible paths
  • Explainable: You can trace exactly why decisions were made
  • Cost-Effective: They require fewer computational resources

The problem isn’t that these simpler systems exist—it’s that mislabeling them obscures their actual capabilities and limitations.

The Architectural Mismatch

When we build a web service, we carefully consider whether a monolith, microservices, or serverless architecture best fits our needs. We don’t just slap the trendiest label on whatever we build.

The same discipline should apply to AI systems. Different architectural approaches have different strengths:

| Architecture | Strengths | Weaknesses | Best For |
|---|---|---|---|
| Function Selection | Simple, predictable, easy to test | No autonomy, brittle to novel inputs | Well-defined tasks with clear categorization |
| Tool Dispatch | Intuitive UI, flexible conversation | Limited planning, mostly reactive | Support chatbots, simple assistants |
| Scripted Workflows | Reliable processes, quality control | Rigid, limited adaptation | Business processes with clear steps |
| True Agents | Autonomy, adaptation, novel problem-solving | Complex, harder to control | Open-ended tasks, creative problem-solving |

How Did We Get Here?

Same old story, really.

  • Hype: ‘Agent’ sounds futuristic and powerful. Marketing loves it.
  • Ease of Implementation: Building robust, autonomous planning agents is hard. Building simple function-calling or workflow systems is comparatively easy. We naturally gravitate towards what we can build now.
  • Focus on the LLM: Much focus is on the raw capability of the LLM (the ‘brain’), ignoring the equally important system architecture (the ‘body’ and decision-making structure) that defines agency.
  • Lack of Clear Definitions: The field is new, and terms are still fluid.

And let’s be honest: in a space where funding and attention flow to the most impressive-sounding technology, there’s enormous pressure to label everything as the most advanced-sounding term available. ‘AI Agent’ sounds a lot sexier than ‘function selection system.’

The Cost of Agent Inflation: A Case Study

Here’s a concrete example: A financial services company built what they called an ‘AI Agent’ for customer support. In reality, it was a function selector that could route queries to 12 predefined API calls.

  • Development cost: $375,000
  • Monthly hosting: $8,200
  • Customer satisfaction: Initially high, then plummeted

Why? Customers would ask slight variations of supported questions, and the system would fail completely. Having been told this was an ‘agent,’ customers expected it to ‘figure things out’ like a human would. The mismatch between expectations and reality damaged trust.

After rebranding as a ‘Smart Support Tool’ with clearly communicated capabilities, satisfaction rose again—even though the underlying technology hadn’t changed.

Let’s Be Clearer: A Proposed Taxonomy

This isn’t about gatekeeping. It’s about clarity for builders and users. All of these approaches are valuable:

  • Function Selectors: Systems that choose a single API or function to call based on input. No planning, no chaining.

  • Tool Dispatchers: Conversational systems that can invoke tools as needed but lack multi-step planning.

  • Agentic Workflows: Predefined processes where LLMs enhance specific steps. They follow your script.

  • True Agents: Systems with autonomy for planning, reasoning, and adaptation in pursuit of goals. They write their own script (within constraints).

We need all of these. But we need to call them what they are.

Implications for Builders: Choosing the Right Architecture

When building AI systems, ask yourself:

  1. How much autonomy is actually needed? Most use cases need less than you think.
  2. What are the risks of unexpected behavior? Higher risk = less autonomy.
  3. How well-defined is the task domain? Well-defined = simpler architecture.
  4. What’s the cost of failure vs. the cost of human oversight? This is your true ROI calculation.

In practice, a hybrid approach often works best: use function selection for critical, well-defined tasks; use more agentic approaches for creative, exploratory ones.
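
Here is a minimal sketch of that hybrid idea, reusing process_user_request and the Agent class from the examples above; classify_request, WELL_DEFINED_TASKS, and request_human_review are hypothetical placeholders:

# Hypothetical hybrid routing: well-defined requests use simple function
# selection; open-ended ones go to a planning agent with a human in the loop.
def handle_request(user_input):
    category = classify_request(user_input)  # could be rules or an LLM call

    if category in WELL_DEFINED_TASKS:
        # Critical, well-defined tasks: predictable, testable function selection
        return process_user_request(user_input)

    # Creative or exploratory tasks: give the goal to an agent,
    # but review its output before anything irreversible happens
    agent = Agent(goal=user_input)
    proposal = agent.pursue_goal()
    return request_human_review(proposal)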

Conclusion

So, next time you see ‘AI Agent,’ ask yourself: Does it plan? Does it adapt? Is it truly autonomous in achieving its goal? Or is it following a largely predetermined path?

Let’s reserve ‘Agent’ for systems that exhibit genuine agency. For the others, let’s use more precise terms like ‘LLM-powered workflow,’ ‘tool-using chatbot,’ or ‘function-calling endpoint.’ It might not sound as sexy, but it’ll lead to better engineering, clearer communication, and ultimately, more useful AI systems.

Because when everything is an agent, nothing is.

  Let an Agentic AI Expert Review Your Code

I hope you found this article helpful. If you want to take your agentic AI to the next level, consider booking a consultation or subscribing to premium content.

Content Attribution: 100% by Alpha
  • 100% by Alpha: Core concepts, initial draft, overall structure, and all final text, as the draft and final versions are identical.
  • Note: Attribution analysis performed by google:gemini-2.5-pro-exp-03-25.