Code assertions, specifically safety assertions, empower developers to build more robust and reliable systems. They act as guardrails, ensuring that ‘bad things never happen’ within the software’s execution.
This focus differs from liveness assertions, which verify that ‘good things eventually happen.’ Liveness is a separate, important topic for another time. Here, we concentrate on safety assertions: preventing invalid states, data corruption, and unexpected crashes.
What does this mean in practice? Let’s examine the Toyota unintended acceleration case.
In 2014, Toyota paid $1.2 billion over unintended-acceleration issues. NASA reviewed its electronic throttle control software, applying established principles like ‘The Power of 10: Rules for Developing Safety-Critical Code.’ These rules emphasize practices essential for safety, including:
- Simple Control Flow: Avoid complex constructs like goto.
- Fixed Loop Bounds: Prevent runaway code.
- Restricted Heap Use: Limit dynamic memory allocation after initialization.
- Concise Functions: Keep functions short and understandable.
- Runtime Assertions: Use at least two assertions per function to check assumptions (see the sketch after this list).
- Minimal Data Scope: Limit data visibility.
- Check Return Values: Handle function results explicitly.
- Limited Preprocessing: Use the preprocessor mainly for includes and simple macros.
- Restricted Pointer Use: Limit dereferences and avoid function pointers.
- Address All Warnings: Compile with high warning levels and fix all issues.
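To make rule #5 concrete, here is a minimal sketch in Python of a function carrying at least two assertions: one set checking its inputs, one checking its result. The original rules target C and the function, names, and ranges here are purely illustrative, not Toyota’s code.

```python
def scale_throttle(pedal_position: float, gain: float) -> float:
    """Map a pedal position (0.0-1.0) to a throttle command (0.0-1.0).

    Hypothetical example; names and ranges are illustrative only.
    """
    # Precondition assertions: reject invalid inputs immediately.
    assert 0.0 <= pedal_position <= 1.0, f"pedal out of range: {pedal_position}"
    assert 0.0 < gain <= 1.0, f"gain out of range: {gain}"

    command = pedal_position * gain

    # Postcondition assertion: the result must stay within the valid range.
    assert 0.0 <= command <= 1.0, f"throttle command out of range: {command}"
    return command
```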
NASA’s study uncovered thousands of violations in Toyota’s code.

Consider the violations related to rule #5 (Assertions):
- 502 ‘Unchecked parameter dereference’
- 425 ‘Parameter not checked before use as an index’
- 326 ‘Parameter not checked before dereferencing’
These represent instances where function inputs were used without validation. Summing these assertion-related violations (502 + 425 + 326 = 1,253) reveals they constituted approximately 13% of the total 9,603 violations found.
This 13% signifies numerous potential failure points where unchecked parameters could lead to crashes, incorrect calculations, or unpredictable behavior—critical failures in automotive systems. As safety experts emphasize, assertions effectively downgrade potentially catastrophic correctness bugs into more manageable, detectable failures (like controlled crashes). Many severe software failures stem from improper parameter handling, precisely what these assertions target.
Proper use of assertions could have significantly hardened Toyota’s software, directly addressing 13% of the identified violations and strengthening the system against undefined behavior.
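As a small, hypothetical illustration of these violation categories (in Python rather than the original C), the sketch below shows how an unchecked index silently returns the wrong data, while an assertion turns the same mistake into an immediate, attributable failure.

```python
SENSOR_CALIBRATION = [0.98, 1.00, 1.02, 1.05]  # hypothetical lookup table

def calibration_unchecked(channel: int) -> float:
    # 'Parameter not checked before use as an index': a negative channel
    # silently wraps around in Python (and reads out of bounds in C).
    return SENSOR_CALIBRATION[channel]

def calibration_checked(channel: int) -> float:
    # The assertion converts a latent correctness bug into a controlled, visible failure.
    assert 0 <= channel < len(SENSOR_CALIBRATION), f"invalid channel: {channel}"
    return SENSOR_CALIBRATION[channel]
```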
How do assertions benefit software development broadly?
- Early Bug Detection: Catch errors during development and testing, not in production.
- Failure Isolation: Contain problems within specific modules, preventing system-wide cascades.
- Controlled Failures: Turn silent data corruption into explicit, detectable crashes.
- Executable Documentation: Enforce design contracts and assumptions directly in the code.
- Defense in Depth: Create multiple validation checkpoints throughout the codebase.
This diagram illustrates the difference:
```mermaid
flowchart TD
    A[Software Without Assertions] --> B[Unchecked Parameters]
    B --> C{Runtime Condition}
    C -->|Normal Conditions| D[Silent Corruption]
    C -->|Edge Case| E[Parameter Misuse]
    D --> F[Corrupted System State]
    E --> F
    F --> G[Unpredictable System Behavior]
    G --> H[Safety-Critical Function Failure]
    H --> I[Potential Accident]
    A2[Software With Assertions] --> B2[Parameter Validation]
    B2 --> C2{Parameter Valid?}
    C2 -->|Yes| D2[Verified System Operation]
    C2 -->|No| E2[Controlled Crash/Warning]
    E2 --> F2[Bug Detected and Fixed]
    D2 --> G2[System Integrity Maintained]
    F2 --> G2
    G2 --> H2[Safety Functions Protected]
    H2 --> I2[User Safety Preserved]
    style A fill:#f9d,stroke:#333
    style A2 fill:#9df,stroke:#333
    style I fill:#f55,stroke:#333
    style I2 fill:#5f5,stroke:#333
```
Now, how does this apply to agentic systems?
Agentic software, often built around the probabilistic and non-deterministic nature of Large Language Models (LLMs), benefits immensely from safety assertions. While the core intelligence might be unpredictable, the surrounding code that prepares inputs, processes outputs, triggers actions, and manages state can and should be made robust.
Assertions allow developers to define invariants and check critical assumptions despite the LLM’s variability. They provide essential guardrails (sketched in code after this list):
- Input Validation: Ensure data passed to the LLM or agentic components meets strict criteria.
- Output Parsing Checks: Verify the structure and content of the LLM’s response before acting on it.
- State Management: Assert that the agent’s internal state remains consistent and valid between steps.
- Action Constraints: Check that proposed actions are safe and permissible before execution.
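Here is a minimal sketch of these guardrails, assuming a hypothetical call_llm helper that returns raw text and a fixed allow-list of tool names; all names are illustrative and not tied to any specific framework’s API.

```python
import json

ALLOWED_ACTIONS = {"search", "summarize", "send_email"}  # hypothetical allow-list

def run_step(call_llm, prompt: str, state: dict) -> dict:
    # Input validation: never send empty or oversized prompts to the model.
    assert prompt.strip(), "empty prompt"
    assert len(prompt) < 8_000, "prompt exceeds budget"

    raw = call_llm(prompt)  # hypothetical LLM call returning raw text

    # Output parsing checks: insist on well-formed JSON with the expected fields.
    response = json.loads(raw)
    assert "action" in response and "arguments" in response, f"malformed response: {response}"

    # Action constraints: only explicitly allow-listed actions may execute.
    action = response["action"]
    assert action in ALLOWED_ACTIONS, f"disallowed action: {action}"

    # State management: the step counter must only move forward.
    new_state = {**state, "step": state.get("step", 0) + 1}
    assert new_state["step"] > state.get("step", 0), "state did not advance"
    return {"state": new_state, "action": action, "arguments": response["arguments"]}
```

In production you might convert some of these assertions into explicit error handling, but the invariants they express stay the same.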
By embedding assertions, developers can build safer, more predictable agentic solutions. Errors are caught early, leading to controlled failures rather than silent corruption or dangerous emergent behavior. This practice elevates agentic development, enabling the creation of more reliable and trustworthy AI systems. Assertions become crucial tools for managing the inherent uncertainty of AI components within a structured, safety-conscious framework.
Connections to Constraint Solving
Assertions play a role similar to constraints in solver-based approaches: both declare invariants and reject invalid states. For a clear exposition of this mindset, see Hillel Wayne’s ‘Many Hard Leetcode Problems are Easy Constraint Problems’ (https://buttondown.com/hillelwayne/archive/many-hard-leetcode-problems-are-easy-constraint/). Constraint solvers discard states that violate constraints; assertions panic when invariants are broken. In both cases, you model correctness declaratively and fail fast, reducing complexity and avoiding brittle, edge‑case‑heavy code paths.
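As a toy illustration of that parallel (the scheduling domain and helper names here are invented for this sketch), the same non-overlap invariant can drive a solver-style filter over candidate states or an assertion that fails fast the moment the invariant breaks.

```python
from itertools import combinations

def no_overlap(slots):
    """Invariant: no two (start, end) slots overlap."""
    return all(a_end <= b_start or b_end <= a_start
               for (a_start, a_end), (b_start, b_end) in combinations(slots, 2))

# Constraint-solving mindset: discard candidates that violate the invariant.
candidates = [
    [(9, 10), (10, 11)],   # valid
    [(9, 11), (10, 12)],   # overlapping -> discarded
]
valid = [c for c in candidates if no_overlap(c)]

# Assertion mindset: reject an invalid state as soon as it appears.
def book(schedule, slot):
    schedule = schedule + [slot]
    assert no_overlap(schedule), f"overlapping booking: {slot}"
    return schedule
```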
I hope you found this article helpful. If you want to take your agentic AI to the next level, consider booking a consultation or subscribing to premium content.