I Tracked Every AI Suggestion for a Week — Here's What I Actually Shipped

Source: DEV Community
Last week I ran an experiment: I logged every AI-generated code suggestion I received and tracked which ones made it to production unchanged, which ones needed edits, and which ones I threw away entirely. The results surprised me.

## The Setup

- Duration: 5 working days
- Tools: Claude and GPT for code generation, Copilot for autocomplete
- Project: a medium-sized TypeScript backend (REST API, ~40 endpoints)
- Tracking: a simple markdown file, one entry per suggestion

## The Numbers

| Category | Count | Percentage |
| --- | --- | --- |
| Shipped unchanged | 12 | 18% |
| Shipped with edits | 31 | 47% |
| Thrown away | 23 | 35% |
| **Total suggestions** | **66** | **100%** |

Only 18% of AI suggestions shipped without changes. Almost half needed editing. And over a third were useless.

## What Got Shipped Unchanged

The 12 suggestions that shipped as-is had something in common: they were small and well-specified.

- Unit tests for pure functions (given a clear function signature)
- Type definitions from a schema description
- Utility functions with obvious behavior (slugify, debounce, d
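For anyone who wants to run the same experiment, the tallying is trivial to script. Here's a minimal sketch in TypeScript that turns a markdown log into the counts and percentages above — the entry format (`- [shipped] ...`, `- [edited] ...`, `- [discarded] ...`) is my own invention for illustration; the post doesn't prescribe one.

```typescript
// Tally a markdown log where each suggestion is one bullet tagged with
// its outcome, e.g.:  - [edited] pagination helper for GET /users
// The tag names here are hypothetical, not a standard format.

type Category = "shipped" | "edited" | "discarded";

function tally(log: string): Record<Category, number> {
  const counts: Record<Category, number> = { shipped: 0, edited: 0, discarded: 0 };
  for (const line of log.split("\n")) {
    // Match lines like "- [shipped] ..." and bump the right counter.
    const match = line.match(/^- \[(shipped|edited|discarded)\]/);
    if (match) counts[match[1] as Category] += 1;
  }
  return counts;
}

function percentages(counts: Record<Category, number>): Record<Category, string> {
  const total = Object.values(counts).reduce((a, b) => a + b, 0);
  const out = {} as Record<Category, string>;
  for (const key of Object.keys(counts) as Category[]) {
    out[key] = total ? `${((counts[key] / total) * 100).toFixed(0)}%` : "0%";
  }
  return out;
}
```

One entry per suggestion, appended as you go, is the whole point: the log costs seconds per entry, and at the end of the week the numbers fall out for free.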