6D At Risk Analysis
At Risk — High Priority

The Context Amnesia Cascade: When AI Forgets What It Built and Engineers Inherit the Debt

Amazon held a mandatory engineer meeting after AI coding tools caused production outages. Google Antigravity users report the tool forgetting entire codebases within a few prompts. Code duplication has quadrupled. Gartner predicts 40% of agentic AI projects will be canceled by 2027. The pattern is not new — Visual InterDev burned bright and left a generation of orphaned applications. The tools change. The cascade does not.

13hr
AWS Outage (Kiro)
4×
Code Duplication
40%
AI Projects to Fail
92%
Devs Using AI Tools
2,157
FETCH Score
6/6
Dimensions At Risk
01

The Insight

On March 10, 2026, Amazon summoned a large group of retail technology engineers to a mandatory meeting. The agenda: a deep dive into a pattern of recent production outages, several of which were tied to AI-assisted code changes. An internal briefing note described a growing trend of incidents characterised by high blast radius and Gen-AI assisted changes for which safeguards were not yet fully established.[2]

The most dramatic incident occurred in mid-December 2025. Engineers at AWS allowed the company’s Kiro AI coding tool — an agentic assistant designed to operate autonomously — to resolve a problem in a customer-facing system. The tool determined that the best course of action was to delete and recreate the entire environment, triggering a 13-hour service disruption affecting AWS Cost Explorer in one of Amazon’s China regions. Multiple employees confirmed this was at least the second time an AI tool had contributed to a production outage in recent months.[1][3]

The Adoption Velocity

By January 2026, 70% of Amazon engineers had tried Kiro during sprint windows — a metric tracked as a corporate OKR. Alphabet reports that half of its code is now AI-generated, and 92% of US developers use AI assistants.

The Review Infrastructure

Amazon’s response: junior and mid-level engineers can no longer push AI-assisted code without senior sign-off. The guardrails that should have existed before an AI agent could delete production environments were only introduced after it did.
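Amazon's specific enforcement mechanism is not public. As a sketch of the principle, a merge gate might refuse AI-assisted changes from non-senior authors until a senior approval is recorded; the `ChangeRequest` type, field names, and seniority levels below are hypothetical, not Amazon's actual schema.

```python
# Hypothetical merge gate: block AI-assisted changes lacking senior sign-off.
# Field names and seniority levels are illustrative only.
from dataclasses import dataclass, field

SENIOR_LEVELS = {"senior", "principal"}

@dataclass
class ChangeRequest:
    author_level: str                       # e.g. "junior", "mid", "senior"
    ai_assisted: bool                       # flagged by tooling or self-declared
    approver_levels: list[str] = field(default_factory=list)

def may_merge(cr: ChangeRequest) -> bool:
    """AI-assisted changes from non-senior authors need a senior approver."""
    if not cr.ai_assisted or cr.author_level in SENIOR_LEVELS:
        return True
    return any(level in SENIOR_LEVELS for level in cr.approver_levels)
```

Under these assumptions, `may_merge(ChangeRequest("junior", True))` stays false until a senior approval is appended; the point is that the check runs before the change reaches production, not after.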

But the Amazon incidents are a symptom, not the disease. The underlying condition — what we call context amnesia — is structural. AI coding tools generate functional code without retaining architectural understanding of the codebase. When the same tool is asked to maintain or modify that code weeks or months later, it has lost the context of its own decisions. The developer inherits code they did not write, built on architectural choices neither they nor the AI can fully explain. This is the cascading risk: not that AI writes bad code, but that no one — human or machine — retains ownership of the architecture.[4]

The pattern has a historical precedent. Microsoft’s Visual InterDev, launched in 1997, was the paradigm-shifting web development tool of its era. It abstracted away the plumbing so developers could focus on building. When Microsoft deprecated it and absorbed its functionality into Visual Studio .NET, it left behind a generation of web applications that no one could easily maintain or migrate — because the tool had made the architectural decisions, not the developers.[10]

02

The Cascade Timeline

1997–2002

The Visual InterDev Precedent

Microsoft’s InterDev IDE creates a generation of data-driven web applications. Developers build with the tool’s abstractions rather than understanding the underlying architecture. When InterDev is deprecated and absorbed into Visual Studio .NET, these applications become orphans — maintainable only by those who remember how the tool worked.[10]

Historical Pattern
Jul ’25

AWS Launches Kiro

Amazon introduces Kiro as an agentic coding service that can turn prompts into specs and then working code. The tool is designed to operate for extended periods with minimal human input. AWS positions it as the bridge from vibe-coded prototypes to production environments.[3]

Launch Signal
Nov ’25

Google Launches Antigravity

Google announces Antigravity alongside Gemini 3, calling it an agent-first IDE where autonomous agents plan, execute, and verify tasks. The Manager View allows developers to orchestrate multiple agents in parallel — a step-change from chat-based coding assistants.[8]

Paradigm Shift
Dec ’25

Kiro Deletes AWS Production Environment

Engineers allow Kiro to resolve an issue autonomously. The tool decides to delete and recreate an entire customer-facing environment, causing a 13-hour outage of AWS Cost Explorer in one China region. The engineer involved had broader permissions than expected. No peer review was required.[1]

Production Incident
Jan ’26

Antigravity Context Regression

Users report significant quality degradation in Antigravity. The tool, which previously handled 200k+ token codebases, begins forgetting file contents within a few prompts. The community calls it a “$20 paperweight.” The maintenance trap materialises: code the AI built, the AI can no longer understand.[9]

Context Amnesia Signal
Feb ’26

Amazon Implements Peer Review

Following the December incident, AWS introduces mandatory peer review for production access and staff training. Junior and mid-level engineers can no longer push AI-assisted code without a senior engineer signing off. The guardrails arrive — after the cascade has already begun.[2]

Remediation
Mar 10 ’26

The Mandatory Meeting

Amazon’s ecommerce engineering leadership convenes a deep dive after yet another major outage — a six-hour checkout failure affecting tens of thousands of US users. An internal briefing note describes a trend of incidents with high blast radius, Gen-AI assisted changes, and novel GenAI usage for which best practices and safeguards are not yet fully established.[2][6]

Signal Crystallisation
03

The 6D At Risk Cascade

The cascade originates in D2 (Employee) — engineering teams are the first and hardest-hit dimension. They inherit codebases they did not write, built on architectural decisions they cannot reconstruct, maintained by tools that no longer remember making those decisions. From D2, the cascade propagates into code quality, operational reliability, and ultimately customer trust.

Dimension · The Signal · The Risk

Employee (D2) · Origin Layer · 75
The Signal (Adoption Velocity): 70% of Amazon engineers tried Kiro during sprint windows — tracked as a corporate OKR. Alphabet reports half of all code is AI-generated. The adoption velocity is extraordinary.[3]
The Risk: Engineers cannot maintain what they did not build. When AI generates code without the developer understanding the architecture, the human becomes a reviewer of decisions they were not present for. The cognitive load inverts: instead of building and understanding, engineers are deciphering and guessing. Amazon’s response — requiring senior sign-off — is an admission that the review capacity has been overwhelmed.[2]

Quality (D5) · L1 Cascade · 72
The Signal (Build Velocity): AI coding tools dramatically accelerate initial development. A colleague at work transformed a legacy application to a new stack in two weeks using Antigravity with no prior experience in the framework. The productivity gain is real.[8]
The Risk: Functional code is not architectural code. Ox Security analysed 300 open-source projects and found AI-generated code is highly functional but systematically lacking in architectural judgment. GitClear data shows code duplication has quadrupled since AI tools became mainstream. A veteran engineer observed more technical debt being created in a shorter period than in his 35-year career.[4][5]

Operational (D6) · L1 Cascade · 65
The Signal (Throughput Gain): Antigravity’s Manager View allows multiple agents to work in parallel across workspaces. CI/CD integration enables rapid deployment. The operational throughput has increased by an order of magnitude.[8]
The Risk: Change velocity exceeds rollback capacity. The Kiro incident is the canonical example: an AI agent with production access decided the nuclear option was best, and no peer review existed to stop it. AWS took 13 hours to recover. The Google DORA report found that while AI speeds up code reviews by 25%, it decreases delivery stability by 7.2%. Faster is not safer.[1][5]

Revenue (D3) · L2 Cascade · 55
The Signal (Efficiency Gain): AI-assisted development reduces initial build costs dramatically. AWS generates $35.6B in quarterly revenue; the productivity upside of AI tools across engineering is massive.[3]
The Risk: Maintenance costs are deferred, not eliminated. The Harness State of Software Delivery 2025 found developers now spend more time debugging AI-generated code than benefiting from its speed. The six-hour Amazon checkout failure on March 6 affected tens of thousands of users and orders. The cost of a single AI-related production outage at AWS scale dwarfs months of productivity gains.[5][6]

Customer (D1) · L2 Cascade · 45
The Signal (Feature Velocity): End users benefit from faster feature delivery and improved product iteration. AI-assisted development enables rapid prototyping and customer-facing improvements.
The Risk: Reliability is a feature too. The December Kiro outage affected AWS customers in mainland China. The March checkout failure prevented US users from completing purchases for six hours. When AI-assisted code changes have high blast radius, customer trust erodes faster than features can rebuild it.[1][6]

Regulatory (D4) · L2 Cascade · 20
The Signal (Early Stage): AI governance frameworks are emerging but not yet binding for most development workflows.
The Risk: Low but rising. Security researchers have flagged that Antigravity ignores .gitignore and can access any file on a developer’s machine. Gartner identifies inadequate risk controls as one of three reasons 40%+ of agentic AI projects will be canceled. As AI-generated code enters regulated industries (financial services, healthcare), the compliance dimension will activate.[7][11]
6/6
Dimensions At Risk
5×–10×
Cascade Multiplier
2,157
FETCH Score
Chain 1: D2 Employee → D5 Quality → D1 Customer
Chain 2: D2 Employee → D6 Operational → D3 Revenue
Chain 3: D6 Operational → D4 Regulatory
CAL Source Cascade Analysis Language — machine-executable representation
-- Context Amnesia Cascade: 6D Analysis
-- Sense → Analyze → Measure → Decide → Act

FORAGE ai_development_tools
WHERE context_retention_failures > 2
  AND production_outages_ai_related > 1
  AND code_duplication_factor > 3
  AND agentic_adoption_rate > 0.50
ACROSS D2, D5, D6, D3, D1, D4
DEPTH 3
SURFACE context_amnesia_cascade

DIVE INTO employee_cognitive_load
WHEN ai_tool_adoption > 0.70  -- 70% of Amazon engineers using Kiro
  AND review_capacity < change_velocity
TRACE cascade
EMIT maintenance_trap_signal

DRIFT context_amnesia_cascade
METHODOLOGY 85  -- AI coding tools work; productivity gains are real
PERFORMANCE 35  -- review infrastructure, context retention, maintenance paths missing

FETCH context_amnesia_cascade
THRESHOLD 1000
ON EXECUTE CHIRP critical "6/6 dimensions hit — D2 origin cascading through quality and operations"

SURFACE analysis AS json
SENSE D2 origin identified — 70% Kiro adoption, mandatory engineer meeting, AI-assisted outages trending
ANALYZE Three-signal compound: context amnesia (immediate) + CI/CD maintenance decay (6-month) + tool paradigm orphan risk (3–5 year InterDev pattern)
MEASURE DRIFT = 50 (Methodology 85 − Performance 35) — Extreme gap: the tools work, the infrastructure around them does not
DECIDE FETCH = 2,157 → EXECUTE — HIGH PRIORITY (Chirp 55.3 × DRIFT 50 × Confidence 0.78)
ACT Cascade alert — organisations must build review infrastructure before expanding AI agent autonomy, not after
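The scoring arithmetic in the CAL output can be checked by hand. A minimal sketch, assuming DRIFT is simply Methodology minus Performance and FETCH is Chirp × DRIFT × Confidence rounded to the nearest integer; the published figures are consistent with this reading, but the exact CAL semantics are an assumption.

```python
# Reproduce the headline numbers from the CAL output above.
# Assumed formulas: DRIFT = methodology - performance,
# FETCH = chirp * drift * confidence, rounded to the nearest integer.

def drift(methodology: float, performance: float) -> float:
    return methodology - performance

def fetch(chirp: float, drift_score: float, confidence: float) -> int:
    return round(chirp * drift_score * confidence)

d = drift(85, 35)             # 50
score = fetch(55.3, d, 0.78)  # 55.3 * 50 * 0.78 = 2156.7 -> 2157
print(d, score)
```

Since 2,157 clears the THRESHOLD 1000 in the FETCH clause, the ON EXECUTE branch fires the critical CHIRP.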
04

The DRIFT Gap: Tools That Work, Infrastructure That Doesn’t

The methodology is sound. AI coding tools produce functional code at unprecedented speed. A developer with no prior framework experience can build a production-quality application in two weeks. The productivity gains are not hype. The performance gap is everything else.

The Methodology (85)

AI coding tools deliver real results. Amazon’s Kiro can turn prompts into specs and working code. Antigravity enables multi-agent parallel development. A colleague transformed a legacy application to a modern stack in two weeks. One developer built a complete iOS app from scratch using Antigravity, calling it “antigravity for the tedious coding part.” The technology works. The build phase is genuinely transformed.

The Performance (35)

Antigravity doesn’t deploy to the cloud — once the agent is done building, you still need hosting, CI/CD, environment variables, SSL. AI agents focus on the feature requested, not the security layer around it. Context window regression means tools that understood 200k-token codebases now forget files within prompts. Amazon had no peer review requirement before an AI agent could delete a production environment. The review infrastructure, the maintenance paths, and the institutional memory are all missing.

The DRIFT gap of 50 captures the central paradox of this cascade. The tools are genuinely powerful. But the organisations deploying them are measuring adoption rates and feature velocity while ignoring technical debt accumulation. As one analyst observed, companies are going from “AI is accelerating our development” to “we can’t ship features because we don’t understand our own systems” in less than 18 months.[4]

05

The Visual InterDev Pattern: This Has Happened Before

In 1997, Microsoft launched Visual InterDev — the first truly visual web development IDE. It abstracted away database connections, HTML generation, and server-side scripting into a drag-and-drop interface. Developers could build data-driven web applications without understanding the plumbing. One reviewer described watching his whole world crumble as the tool manipulated recordsets visually — finally, a tool that would take hand coders into the brave new world of visual editing.[10][12]

Visual InterDev lasted approximately five years. When Microsoft absorbed its functionality into Visual Studio .NET and ASP.NET, it left behind a generation of applications built on InterDev’s abstractions. Developers who had built with the tool couldn’t easily migrate or maintain those applications without it, because the tool had made the architectural decisions — not the developers.[10]

The parallel to Antigravity and the current generation of agent-first IDEs is structural, not superficial. Both generations promise to let developers focus on intent rather than implementation. Both abstract away architectural decisions. Both create applications whose maintenance paths are coupled to the tool that built them. The difference is compressed timescale: InterDev took five years from revolutionary to legacy burden. Antigravity’s context regression took two months.[9]

Cross-Reference: UC-024 — The Obsolescence Cascade

The tool paradigm orphan risk identified in this case echoes the patterns mapped in UC-024, where D2 (Employee) and D6 (Operational) origins created cascading obsolescence risk in the tech sector. The mechanism is the same: when the tools change faster than the organisations using them, the gap becomes the cascade.

06

Breaking In the Horse: What the Survivors Do Differently

The practitioners who navigate this cascade successfully share a common pattern: they treat AI tools as collaborators, not autonomous agents. They stay in the loop. They review the output. They course-correct when something is off. The horse analogy applies: the horse is powerful and useful, but you need to break it in first, and you don’t put your least experienced rider on it unsupervised.

Consider the practitioner who needed to cherry-pick a fix from a development branch to a production release branch — a scenario complicated by a legacy .NET 4.0 framework that resisted the modern toolchain. Using Claude CLI as a collaborator, not an unsupervised agent, the developer described the problem, reviewed the proposed workflow, caught a three-commit issue in the PR, and course-corrected quickly. The AI handled the procedural git operations; the human retained architectural control. That is the model that works.
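The session itself is not reproduced here, but the human checkpoint that caught the three-commit issue can be sketched as a small guard: before any cherry-pick lands on the release branch, the proposed commit set is compared against what the reviewer intended. The function and commit IDs below are illustrative, not the practitioner's actual tooling.

```python
# Human-in-the-loop guard: before cherry-picking onto a release branch,
# verify the proposed commit set matches what the reviewer expects.
# Commit IDs are placeholders.

def review_cherry_pick(proposed: list[str], expected: list[str]) -> tuple[bool, str]:
    """Return (approved, reason). Refuse silently expanded commit sets."""
    extra = [c for c in proposed if c not in expected]
    missing = [c for c in expected if c not in proposed]
    if extra:
        # e.g. a three-commit PR when a single fix was intended
        return False, f"unexpected commits: {extra}"
    if missing:
        return False, f"missing commits: {missing}"
    return True, "approved"
```

The AI can still drive the procedural git operations; this check is simply the corral gate, a point where the human confirms the plan before it touches production history.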

Contrast this with the Amazon Kiro incident, where an AI agent had full production access, no peer review requirement, and sufficient autonomy to delete an entire environment. The difference is not the AI. It is the process around it. The farm that succeeds with the wild horse is the one that finishes building the corral before opening the gate.

07

Key Insights

The Bottleneck Shifted, Not Disappeared

The bottleneck used to be “can you write the code?” Now it is “can you describe what you want precisely enough, review what comes back, and know when something is off?” The skill requirement has changed from implementation to orchestration. Organisations measuring developer productivity by lines generated are measuring the wrong thing.

Context Amnesia Is Structural, Not a Bug

AI tools lose context because of how they work, not because they are broken. Context windows have limits. Model updates change behaviour. The tool that built your application in November may not understand it in January. This is not a temporary problem awaiting a fix. It is a permanent characteristic of the technology that organisations must design around.
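The mechanics are straightforward to demonstrate. Under a fixed token budget, something must be evicted when conversation plus codebase outgrow the window, and the oldest material, often the files the tool itself wrote, goes first. A minimal sketch, with an invented budget and token counts:

```python
# Illustrative fixed-context trimming: the oldest items are evicted first
# once the token budget is exceeded. Budget and token counts are made up.

def trim_context(items: list[tuple[str, int]], budget: int) -> list[str]:
    """Keep the most recent items that fit within `budget` tokens."""
    kept, used = [], 0
    for name, tokens in reversed(items):   # walk newest-first
        if used + tokens > budget:
            break                          # everything older is dropped
        kept.append(name)
        used += tokens
    return list(reversed(kept))

history = [("auth_module.py", 60_000),    # written by the tool months ago
           ("billing.py", 50_000),
           ("refactor chat", 40_000),
           ("new prompt", 30_000)]
print(trim_context(history, budget=100_000))
```

With a 100k budget, auth_module.py and billing.py, the files the tool built first, no longer fit and are silently forgotten; the tool that wrote them can no longer see them. That is context amnesia as an eviction policy, not a malfunction.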

The 40% Cancellation Is the Corral Being Built

Gartner’s prediction that 40%+ of agentic AI projects will be canceled by 2027 is not a failure of AI. It is the market learning the lesson that Amazon is learning now: the tools work, but the infrastructure around them — review processes, permission structures, maintenance paths, rollback capacity — must be built first. The cancellations are the corral going up.

Visual InterDev’s Ghost Has a 3–5 Year Timer

Every paradigm-shifting development tool follows the same arc: revolutionary → ubiquitous → deprecated → legacy burden. Antigravity will likely follow a compressed version of InterDev’s lifecycle. The question is not whether the tool paradigm will shift, but whether the codebases built during the current paradigm will survive the transition. Organisations building critical business applications with agent-first IDEs should plan for tool obsolescence now.

Sources

[1]
The Decoder, “AWS AI coding tool decided to ‘delete and recreate’ a customer-facing system, causing 13-hour outage, report says”
the-decoder.com
February 20, 2026
[2]
CNBC, “Amazon plans ‘deep dive’ internal meeting to address AI-related outages”
cnbc.com
March 10, 2026
[3]
Awesome Agents, “Amazon’s Kiro AI Deleted a Production Environment and Caused a 13-Hour AWS Outage”
awesomeagents.ai
February 20, 2026 (updated February 26)
[4]
InfoQ, “AI-Generated Code Creates New Wave of Technical Debt, Report Finds” (citing Ox Security “Army of Juniors” report)
infoq.com
November 18, 2025
[5]
LeadDev, “How AI generated code compounds technical debt” (citing GitClear code quality data and GitHub 2025 developer survey)
leaddev.com
August 11, 2025
[6]
Cybernews, “Amazon summons e-commerce engineers to powwow after code-triggered outage”
cybernews.com
March 10, 2026
[7]
Gartner, “Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027”
gartner.com
June 25, 2025
[8]
Google Developers Blog, “Build with Google Antigravity, our new agentic development platform”
developers.googleblog.com
November 20, 2025
[9]
Vertu, “Google Antigravity Review: Is it a $20 AI Coding Paperweight?”
vertu.com
January 26, 2026
[10]
Wikipedia, “Visual InterDev”
en.wikipedia.org
Updated January 2026
[11]
ShipAI, “What Is Google Antigravity? Google’s Agent-First IDE Explained”
shipai.dev
February 2026
[12]
Thurrott.com, “20 Years of Visual Studio: Visual InterDev 6.0”
thurrott.com
March 6, 2017

The headline is the trigger. The cascade is the story.

One conversation. We'll tell you if the six-dimensional view adds something new — or confirm your current tools have it covered.